[
{
    "msg_contents": "Hi, all\r\n\r\nOn the website https://wiki.postgresql.org/wiki/Todo#libpq\r\nI found that the libpq module has this TODO item:\r\n-------------------------------------------------------------------------------\r\nPrevent PQfnumber() from lowercasing unquoted column names\r\nPQfnumber() should never have been doing lowercasing, but historically it has so we need a way to prevent it\r\n\r\n-------------------------------------------------------------------------------\r\nI am interested in this one. Has it been fixed yet?\r\nIf not, I am willing to do so.\r\nIn that case, could anyone tell me in detail what this function is supposed to do?\r\nI will try to fix it~\r\n\r\n\r\n\r\n--\r\nBest Regards\r\n-----------------------------------------------------\r\nWu Fei\r\nDX3\r\nNanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China\r\nTEL : +86+25-86630566-9356\r\nCOINS: 7998-9356\r\nFAX: +86+25-83317685\r\nMAIL:wufei.fnst@cn.fujitsu.com\r\nhttp://www.fujitsu.com/cn/fnst/\r\n---------------------------------------------------",
"msg_date": "Fri, 15 Mar 2019 03:47:05 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Willing to fix a TODO case in libpq module"
}
]
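The TODO discussed in the thread above concerns PQfnumber()'s case folding: an unquoted column name is downcased before matching (like an unquoted SQL identifier), while a name wrapped in double quotes is matched case-sensitively with the quotes stripped. A minimal Python sketch of that matching rule (an illustration of the behavior the TODO wants to make optional, not actual libpq code; the function name, `field_names` list, and -1 not-found result are modeling choices that mirror PQfnumber's convention):

```python
def pq_fnumber(field_names, column_name):
    """Illustrative model of libpq's PQfnumber() name matching.

    An unquoted name is folded to lower case; a double-quoted name is
    matched exactly with the quotes stripped.  Returns the column
    index, or -1 if there is no match.
    """
    if len(column_name) >= 2 and column_name.startswith('"') and column_name.endswith('"'):
        target = column_name[1:-1]       # quoted: match case-sensitively
    else:
        target = column_name.lower()     # unquoted: downcase first
    for i, name in enumerate(field_names):
        if name == target:
            return i
    return -1
```

The TODO, then, amounts to providing a way to skip the downcasing step for unquoted names.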
[
{
    "msg_contents": "Hi\n\nI propose variants of the functions in the subject line without a specified separator. In the first case the string is transformed into an array of characters; in the second case, the array of characters is transformed back into a string.\n\nComments, notes?\n\nRegards\n\nPavel",
"msg_date": "Fri, 15 Mar 2019 05:04:02 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "string_to_array, array_to_string function without separator"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 05:04:02AM +0100, Pavel Stehule wrote:\n> Hi\n> \n> I propose mentioned functions without specified separator. In this case the\n> string is transformed to array of chars, in second case, the array of chars\n> is transformed back to string.\n> \n> Comments, notes?\n\nWhatever optimizations you have in mind for this, could they also work\nfor string_to_array() and array_to_string() when they get an empty\nstring handed to them?\n\nAs to naming, some languages use explode/implode.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 15 Mar 2019 15:03:02 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
    "msg_contents": "On Fri, 15 Mar 2019 at 15:03, David Fetter <david@fetter.org> wrote:\n\n> On Fri, Mar 15, 2019 at 05:04:02AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > I propose mentioned functions without specified separator. In this case\n> the\n> > string is transformed to array of chars, in second case, the array of\n> chars\n> > is transformed back to string.\n> >\n> > Comments, notes?\n>\n> Whatever optimizations you have in mind for this, could they also work\n> for string_to_array() and array_to_string() when they get an empty\n> string handed to them?\n>\n\nMy idea is to use string_to_array('AHOJ') --> {A,H,O,J}\n\nEmpty input means an empty result --> {}\n\n\n>\n> As to naming, some languages use explode/implode.\n>\n\nIt could be, but since we already have string_to_array, I think that is a good name.\n\n\n\n> Best,\n> David.\n> --\n> David Fetter <david(at)fetter(dot)org> http://fetter.org/\n> Phone: +1 415 235 3778\n>\n> Remember to vote!\n> Consider donating to Postgres: http://www.postgresql.org/about/donate\n>",
"msg_date": "Fri, 15 Mar 2019 16:46:43 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
"msg_contents": "On 3/15/19 11:46 AM, Pavel Stehule wrote:\n> pá 15. 3. 2019 v 15:03 odesílatel David Fetter <david@fetter.org> napsal:\n>> Whatever optimizations you have in mind for this, could they also work\n>> for string_to_array() and array_to_string() when they get an empty\n>> string handed to them?\n> \n> my idea is use string_to_array('AHOJ') --> {A,H,O,J}\n> \n> empty input means empty result --> {}\n\nI thought the question was maybe about an empty /delimiter/ string.\n\nIt seems that string_to_array already has this behavior if NULL is\npassed as the delimiter:\n\n> select string_to_array('AHOJ', null);\n string_to_array\n-----------------\n {A,H,O,J}\n\nand array_to_string has the proposed behavior if passed an\nempty string as the delimiter (as one would naturally expect)\n... but not null for a delimiter (that just makes the result null).\n\nSo the proposal seems roughly equivalent to making string_to_array's\nsecond parameter optional default null, and array_to_string's second\nparameter optional default ''.\n\nDoes that sound right?\n\nRegards,\n-Chap\n\n",
"msg_date": "Fri, 15 Mar 2019 11:59:01 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
    "msg_contents": "On Fri, 15 Mar 2019 at 16:59, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 3/15/19 11:46 AM, Pavel Stehule wrote:\n> > pá 15. 3. 2019 v 15:03 odesílatel David Fetter <david@fetter.org>\n> napsal:\n> >> Whatever optimizations you have in mind for this, could they also work\n> >> for string_to_array() and array_to_string() when they get an empty\n> >> string handed to them?\n> >\n> > my idea is use string_to_array('AHOJ') --> {A,H,O,J}\n> >\n> > empty input means empty result --> {}\n>\n> I thought the question was maybe about an empty /delimiter/ string.\n>\n> It seems that string_to_array already has this behavior if NULL is\n> passed as the delimiter:\n>\n> > select string_to_array('AHOJ', null);\n> string_to_array\n> -----------------\n> {A,H,O,J}\n>\n> and array_to_string has the proposed behavior if passed an\n> empty string as the delimiter (as one would naturally expect)\n> ... but not null for a delimiter (that just makes the result null).\n>\n> So the proposal seems roughly equivalent to making string_to_array's\n> second parameter optional default null, and array_to_string's second\n> parameter optional default ''.\n>\n> Does that sound right?\n>\n\nYes.\n\nPavel\n\n\n> Regards,\n> -Chap\n>",
"msg_date": "Fri, 15 Mar 2019 17:00:54 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> So the proposal seems roughly equivalent to making string_to_array's\n> second parameter optional default null, and array_to_string's second\n> parameter optional default ''.\n\nIn that case why bother? It'll just create a cross-version compatibility\nhazard for next-to-no keystroke savings. If the cases were so common\nthat they could be argued to be sane \"default\" behavior, I might feel\ndifferently --- but if you were asked in a vacuum what the default\ndelimiters ought to be, I don't think you'd say \"no delimiter\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Mar 2019 12:15:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
    "msg_contents": "On Fri, 15 Mar 2019 at 17:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Chapman Flack <chap@anastigmatix.net> writes:\n> > So the proposal seems roughly equivalent to making string_to_array's\n> > second parameter optional default null, and array_to_string's second\n> > parameter optional default ''.\n>\n> In that case why bother? It'll just create a cross-version compatibility\n> hazard for next-to-no keystroke savings. If the cases were so common\n> that they could be argued to be sane \"default\" behavior, I might feel\n> differently --- but if you were asked in a vacuum what the default\n> delimiters ought to be, I don't think you'd say \"no delimiter\".\n>\n\nMy motivation is the following: sometimes I need to convert a string to an array\nof characters. Using NULL as the separator is possible, but it is not intuitive.\nWhen you use the string_to_array function without a separator, only one semantic\nis possible - separation into characters.\n\nI understand that there is a possible collision with the other meaning a missing\nparameter could have, namely a default value. But in this case that semantic is\nnot practical.\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>",
"msg_date": "Fri, 15 Mar 2019 17:26:22 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
"msg_contents": "On 3/15/19 12:15 PM, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> So the proposal seems roughly equivalent to making string_to_array's\n>> second parameter optional default null, and array_to_string's second\n>> parameter optional default ''.\n> \n> In that case why bother? It'll just create a cross-version compatibility\n> hazard for next-to-no keystroke savings. If the cases were so common\n> that they could be argued to be sane \"default\" behavior, I might feel\n> differently --- but if you were asked in a vacuum what the default\n> delimiters ought to be, I don't think you'd say \"no delimiter\".\n\nOne could go further and argue that the non-optional arguments improve\nclarity: a reader seeing the explicit NULL or '' argument gets a strong\nclue what's intended, who in the optional-argument case might end up\nthinking \"must go look up what this function's default delimiter is\".\n\n-Chap\n\n",
"msg_date": "Fri, 15 Mar 2019 12:31:21 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
"msg_contents": "On 3/15/19 12:26 PM, Pavel Stehule wrote:\n> you use string_to_array function without separator, then only one possible\n> semantic is there - separation by chars.\n\nOther languages can and do specify other semantics for the\nseparator-omitted case: often (as in Python) it means to split\naround \"runs of one or more characters the platform considers white\nspace\", as a convenience, given that it's a fairly commonly wanted\nmeaning but can be tedious to spell out as an explicit separator.\n\nI admit I think a separator of '' would be more clear than null,\nso if I were designing string_to_array in a green field, I think\nI would swap the meanings of null and '' as the delimiter: null\nwould mean \"don't really split anything\", and '' would mean \"split\neverywhere you can find '' in the string\", that is, everywhere.\n\nBut the current behavior is already established....\n\nRegards,\n-Chap\n\n",
"msg_date": "Fri, 15 Mar 2019 12:54:26 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
    "msg_contents": "On Fri, 15 Mar 2019 at 17:54, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 3/15/19 12:26 PM, Pavel Stehule wrote:\n> > you use string_to_array function without separator, then only one\n> possible\n> > semantic is there - separation by chars.\n>\n> Other languages can and do specify other semantics for the\n> separator-omitted case: often (as in Python) it means to split\n> around \"runs of one or more characters the platform considers white\n> space\", as a convenience, given that it's a fairly commonly wanted\n> meaning but can be tedious to spell out as an explicit separator.\n>\n\nFor this proposal, \"char\" != byte:\n\nresult[n] = substring(str FROM n FOR 1)\n\n\n> I admit I think a separator of '' would be more clear than null,\n> so if I were designing string_to_array in a green field, I think\n> I would swap the meanings of null and '' as the delimiter: null\n> would mean \"don't really split anything\", and '' would mean \"split\n> everywhere you can find '' in the string\", that is, everywhere.\n>\n> But the current behavior is already established....\n>\n\nYes.\n\nPavel\n\n>\n> Regards,\n> -Chap\n>",
"msg_date": "Fri, 15 Mar 2019 17:59:06 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
"msg_contents": "On 3/15/19 12:59 PM, Pavel Stehule wrote:\n> for this proposal \"char\" != byte\n> \n> result[n] = substring(str FROM n FOR 1)\n\nI think that's what string_to_array(..., null) already does:\n\nSHOW server_encoding;\nserver_encoding\nUTF8\n\nWITH\n t0(s) AS (SELECT text 'verlorn ist daz slüzzelîn'),\n t1(a) AS (SELECT string_to_array(s, null) FROM t0)\nSELECT\n char_length(s), octet_length(convert_to(s, 'UTF8')),\n array_length(a,1), a\nFROM\n t0, t1;\n\nchar_length|octet_length|array_length|a\n25|27|25|{v,e,r,l,o,r,n,\" \",i,s,t,\" \",d,a,z,\" \",s,l,ü,z,z,e,l,î,n}\n\n\nRegards,\n-Chap\n\n",
"msg_date": "Fri, 15 Mar 2019 13:29:54 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
    "msg_contents": "On Fri, 15 Mar 2019 at 18:30, Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 3/15/19 12:59 PM, Pavel Stehule wrote:\n> > for this proposal \"char\" != byte\n> >\n> > result[n] = substring(str FROM n FOR 1)\n>\n> I think that's what string_to_array(..., null) already does:\n>\n\nSure. My proposal is more or less just dropping the null parameter.\n\n\n\n> SHOW server_encoding;\n> server_encoding\n> UTF8\n>\n> WITH\n> t0(s) AS (SELECT text 'verlorn ist daz slüzzelîn'),\n> t1(a) AS (SELECT string_to_array(s, null) FROM t0)\n> SELECT\n> char_length(s), octet_length(convert_to(s, 'UTF8')),\n> array_length(a,1), a\n> FROM\n> t0, t1;\n>\n> char_length|octet_length|array_length|a\n> 25|27|25|{v,e,r,l,o,r,n,\" \",i,s,t,\" \",d,a,z,\" \",s,l,ü,z,z,e,l,î,n}\n>\n>\n> Regards,\n> -Chap\n>",
"msg_date": "Fri, 15 Mar 2019 19:19:38 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 12:31:21PM -0400, Chapman Flack wrote:\n> On 3/15/19 12:15 PM, Tom Lane wrote:\n> > Chapman Flack <chap@anastigmatix.net> writes:\n> >> So the proposal seems roughly equivalent to making string_to_array's\n> >> second parameter optional default null, and array_to_string's second\n> >> parameter optional default ''.\n> > \n> > In that case why bother? It'll just create a cross-version compatibility\n> > hazard for next-to-no keystroke savings. If the cases were so common\n> > that they could be argued to be sane \"default\" behavior, I might feel\n> > differently --- but if you were asked in a vacuum what the default\n> > delimiters ought to be, I don't think you'd say \"no delimiter\".\n> \n> One could go further and argue that the non-optional arguments improve\n> clarity: a reader seeing the explicit NULL or '' argument gets a strong\n> clue what's intended, who in the optional-argument case might end up\n> thinking \"must go look up what this function's default delimiter is\".\n\nGoing to look up the function's behavior would be much more fun if\nthere were comments on these functions explaining things. I'll draft\nup a patch for some of that.\n\nIn a similar vein, I haven't been able to come up with hazards of\nnaming function parameters in some document-ish way. What did I miss?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sat, 16 Mar 2019 02:18:23 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: string_to_array, array_to_string function without separator"
}
]
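As Chapman Flack summarizes in the thread above, the proposal is roughly equivalent to making string_to_array's second parameter optional with default NULL, and array_to_string's second parameter optional with default ''. A Python sketch of those proposed defaults (illustration only; Python's split/join stand in for the SQL functions, and the no-delimiter calls model the proposal, not current PostgreSQL syntax):

```python
def string_to_array(s, delim=None):
    # Proposed default: with no delimiter (PostgreSQL's NULL delimiter),
    # split the string into its individual characters.
    if delim is None:
        return list(s)
    return s.split(delim)

def array_to_string(a, delim=""):
    # Proposed default: with no delimiter (PostgreSQL's '' delimiter),
    # simply concatenate the elements.
    return delim.join(a)
```

So string_to_array('AHOJ') would give {A,H,O,J}, an empty input would give an empty array, and the two calls round-trip, matching Pavel's examples.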
[
{
"msg_contents": "Hi,\n\nI think ts_vector is a typo for tsvector.\n\nregards,\nSho Kato",
"msg_date": "Fri, 15 Mar 2019 04:37:03 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in test code comments"
},
{
    "msg_contents": "Hello.\n\nAt Fri, 15 Mar 2019 04:37:03 +0000, \"Kato, Sho\" <kato-sho@jp.fujitsu.com> wrote in <25C1C6B2E7BE044889E4FE8643A58BA963E1D03D@G01JPEXMBKW03>\n> Hi,\n> \n> I think ts_vector is a typo for tsvector.\n\n> --- ts_vector corner cases\n> +-- tsvector corner cases\n> select to_tsvector('\"\"'::json);\n\nYeah, it is surely a typo, but it should be to_tsvector, not tsvector. See the block just below.\n\n> -- json_to_tsvector corner cases\n> select json_to_tsvector('\"\"'::json, '\"all\"');\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 15 Mar 2019 14:24:29 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in test code comments"
},
{
"msg_contents": "At Friday, March 15, 2019 2:24 PM, Kyotaro HORIGUCHI wrote\n> Yeah, surely it is typo, but not for tsvector but to_tsvector. See the\n> block just below.\n> \n> > -- json_to_tsvector corner cases\n> > select json_to_tsvector('\"\"'::json, '\"all\"');\n\nOops, thank you for your advice.\nI fixed it.\n\nRegards, \nSho Kato\n\n> -----Original Message-----\n> From: Kyotaro HORIGUCHI [mailto:horiguchi.kyotaro@lab.ntt.co.jp]\n> Sent: Friday, March 15, 2019 2:24 PM\n> To: Kato, Sho/加藤 翔 <kato-sho@jp.fujitsu.com>\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: Fix typo in test code comments\n> \n> Hello.\n> \n> At Fri, 15 Mar 2019 04:37:03 +0000, \"Kato, Sho\" <kato-sho@jp.fujitsu.com>\n> wrote in <25C1C6B2E7BE044889E4FE8643A58BA963E1D03D@G01JPEXMBKW03>\n> > Hi,\n> >\n> > I think ts_vector is a typo for tsvector.\n> \n> > --- ts_vector corner cases\n> > +-- tsvector corner cases\n> > select to_tsvector('\"\"'::json);\n> \n> Yeah, surely it is typo, but not for tsvector but to_tsvector. See the\n> block just below.\n> \n> > -- json_to_tsvector corner cases\n> > select json_to_tsvector('\"\"'::json, '\"all\"');\n> \n> \n> regards.\n> \n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n> \n>",
"msg_date": "Fri, 15 Mar 2019 05:49:47 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fix typo in test code comments"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 02:24:29PM +0900, Kyotaro HORIGUCHI wrote:\n> Yeah, surely it is typo, but not for tsvector but\n> to_tsvector. See the block just below.\n\nYes, I agree with Horiguchi-san here that this refers to the function\ncall, and not the data type. Everybody agrees?\n--\nMichael",
"msg_date": "Fri, 15 Mar 2019 14:50:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in test code comments"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 05:49:47AM +0000, Kato, Sho wrote:\n> Oops, thank you for your advice.\n> I fixed it.\n\nCommitted.\n--\nMichael",
"msg_date": "Fri, 15 Mar 2019 16:23:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in test code comments"
},
{
"msg_contents": "> Committed.\n\nThanks!\n\nRegards,\nsho kato\n> -----Original Message-----\n> From: Michael Paquier [mailto:michael@paquier.xyz]\n> Sent: Friday, March 15, 2019 4:24 PM\n> To: Kato, Sho/加藤 翔 <kato-sho@jp.fujitsu.com>\n> Cc: 'Kyotaro HORIGUCHI' <horiguchi.kyotaro@lab.ntt.co.jp>;\n> pgsql-hackers@postgresql.org\n> Subject: Re: Fix typo in test code comments\n> \n> On Fri, Mar 15, 2019 at 05:49:47AM +0000, Kato, Sho wrote:\n> > Oops, thank you for your advice.\n> > I fixed it.\n> \n> Committed.\n> --\n> Michael\n\n\n",
"msg_date": "Fri, 15 Mar 2019 08:16:25 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fix typo in test code comments"
}
]
[
{
"msg_contents": "Hello hackers,\n\nWhat exactly is a savepointLevel?\n\nThey seem to have been there for 15 years[1], diligently copied from\nparent transactions to children, fastidiously checked to avoid crossing\na level on rollback or release, but does anything ever change the level\nfrom its initial value? I'm drawing a blank[2].\n\nRegards,\n-Chap\n\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blame;f=src/backend/access/transam/xact.c;h=fd5d6b5;hb=90cb9c3#l93\n\n[2]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=search&h=0516c61b&st=grep&s=savepointLevel\n\n",
"msg_date": "Fri, 15 Mar 2019 01:18:28 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "What is a savepointLevel ?"
},
{
"msg_contents": "On Fri, 15 Mar 2019 at 18:18, Chapman Flack <chap@anastigmatix.net> wrote:\n> What exactly is a savepointLevel?\n>\n> They seem to have been there for 15 years[1], diligently copied from\n> parent transactions to children, fastidiously checked to avoid crossing\n> a level on rollback or release, but does anything ever change the level\n> from its initial value? I'm drawing a blank[2].\n\nI had a look too, checking for uses where savepointLevel might be set\nas part of a struct initialisation. I can't find any.\n\nThere's some discussion about it in July 2004.\n\nhttps://www.postgresql.org/message-id/flat/Pine.LNX.4.58.0407101609080.4563%40linuxworld.com.au#dad1807aaa73de2be7070a1bc54d0f6b\n\nhttps://www.postgresql.org/message-id/flat/5902.1090695230%40sss.pgh.pa.us#53d8db46b7f452acd19ec89fcb023e71\n\nAdding the field was committed on the 27th.\n\n(I'm very ignorant on the following.)\n\nIt looks like the point of savepoint levels is to distinguish between\nsavepoints created in the top transaction level versus those created\nin nested function calls, and to stop you from trying to\nrelease/rollback to a savepoint belonging to the outer scope. But I\ndon't think we support savepoints from inside functions of any kind.\nVarious PLs use BeginInternalSubTransaction and they handle the\nrolling back/releasing internally.\n\nSo the savepointLevel variable, and the two error checks that use it,\nlook a bit unused. If SAVEPOINT commands were supported in functions,\nyou'd want to increment savepointLevel when you made a subtransaction\non entering the function.\n\nDoes that sound approximately right?\n\nEdmund\n\n\n\n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blame;f=src/backend/access/transam/xact.c;h=fd5d6b5;hb=90cb9c3#l93\n>\n> [2]\n> https://git.postgresql.org/gitweb/?p=postgresql.git&a=search&h=0516c61b&st=grep&s=savepointLevel\n\n",
"msg_date": "Sun, 17 Mar 2019 02:29:24 +1300",
"msg_from": "Edmund Horner <ejrh00@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: What is a savepointLevel ?"
},
{
"msg_contents": "Edmund Horner <ejrh00@gmail.com> writes:\n> On Fri, 15 Mar 2019 at 18:18, Chapman Flack <chap@anastigmatix.net> wrote:\n>> What exactly is a savepointLevel?\n>> \n>> They seem to have been there for 15 years[1], diligently copied from\n>> parent transactions to children, fastidiously checked to avoid crossing\n>> a level on rollback or release, but does anything ever change the level\n>> from its initial value? I'm drawing a blank[2].\n\n> I had a look too, checking for uses where savepointLevel might be set\n> as part of a struct initialisation. I can't find any.\n\nYeah, I think that the field's basically been there for future use\nsince day one. The SQL spec discusses savepoint levels, but as far\nas I could find in some desultory searching, the only way to actually\nchange to a new savepoint level is to enter a function or procedure that\nhas the NEW SAVEPOINT LEVEL property, which is syntax we don't have.\n\nEven though the code's dead today, I'm disinclined to remove it;\nnow that we have procedures, the need to treat savepoint levels\nas a real feature might be closer upon us than it has been. It\ndoesn't look like it's costing any significant amount of cycles\nor code anyhow.\n\n(On the other hand, maybe this is something we'd never implement.\nAttaching savepoint level control to the callee, rather than the\ncaller, seems pretty weird to me. AFAICS the point of a new\nsavepoint level would be to prevent the function from messing\nwith savepoints of the outer level, so I'd think what you'd want\nis syntax whereby the caller can protect itself against the\ncallee doing that.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 13:33:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What is a savepointLevel ?"
}
]
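The checks discussed in this thread can be pictured with a small toy model (my own simplification for illustration, not code from xact.c): each savepoint inherits the current savepointLevel, and rolling back to a savepoint refuses to cross a level boundary. The `enter_new_level` step is the hypothetical NEW SAVEPOINT LEVEL entry point, which, as the thread notes, nothing in PostgreSQL currently invokes:

```python
class Savepoint:
    def __init__(self, name, level):
        self.name = name
        self.level = level  # copied from the current savepointLevel

class Transaction:
    def __init__(self):
        self.savepoint_level = 0   # never changes in today's PostgreSQL
        self.savepoints = []       # open savepoints, innermost last

    def savepoint(self, name):
        # Children inherit the current savepointLevel, as xact.c
        # copies it from parent to child.
        self.savepoints.append(Savepoint(name, self.savepoint_level))

    def enter_new_level(self):
        # Hypothetical NEW SAVEPOINT LEVEL entry; no current caller.
        self.savepoint_level += 1

    def rollback_to(self, name):
        for sp in reversed(self.savepoints):
            if sp.name == name:
                if sp.level != self.savepoint_level:
                    # The level check the thread describes.
                    raise RuntimeError("savepoint of wrong transaction level")
                return sp
        raise RuntimeError("no such savepoint")
```

With the level fixed at its initial value, the error branch is dead code, which is exactly the situation the thread observes in the real implementation.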
[
{
    "msg_contents": "Hi all,\n\nWe are facing an issue with special characters. We are trying to insert records into a remote Postgres server, and our application is not able to do this because of errors.\nIt seems that the issue is caused by special characters used in one of the fields of a row.\n\nRegards\nTarkeshwar",
"msg_date": "Fri, 15 Mar 2019 05:19:48 +0000",
"msg_from": "M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>",
"msg_from_op": true,
"msg_subject": "Facing issue in using special characters"
},
{
"msg_contents": "On Thursday, March 14, 2019, M Tarkeshwar Rao <m.tarkeshwar.rao@ericsson.com>\nwrote:\n>\n> Facing issue in using special characters. We are trying to insert records\n> to a remote Postgres Server and our application not able to perform this\n> because of errors.\n>\n> It seems that issue is because of the special characters that has been\n> used in one of the field of a row.\n>\n\nEmailing -general ONLY is both sufficient and polite. Providing more\ndetail, and ideally an example, is necessary.\n\nDavid J.",
"msg_date": "Thu, 14 Mar 2019 23:33:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "This is not an issue for \"hackers\" nor \"performance\" in fact even for \n\"general\" it isn't really an issue.\n\n\"Special characters\" is actually nonsense.\n\nWhen people complain about \"special characters\" they haven't thought \nthings through.\n\nIf you are unwilling to think things through and go step by step to make \nsure you know what you are doing, then you will not get it and really \nnobody can help you.\n\nIn my professional experience, people who complain about \"special \ncharacters\" need to be cut loose or be given a chance (if they are \nestablished employees who carry some weight). If a contractor complains \nabout \"special characters\" they need to be fired.\n\nUnderstand charsets -- character set, code point, and encoding. Then \nunderstand how encoding and string literals and \"escape sequences\" in \nstring literals might work.\n\nKnow that UNICODE today is the one standard, and there is no more need \nto do code table switch. There is nothing special about a Hebrew alef or \na greek lower case alpha or a latin A. Nor a hyphen and en-dash or an \nem-dash. All these characters are in the UNICODE. Yes, there are some \nJapanese who claim that they don't like that their Chinese character \nversions are put together with simplified reform Chinese font. But \nthat's a font issue, not a character code issue.\n\n7 bit ASCII is the first page of UNICODE, even in the UTF-8 encoding.\n\nISO Latin 1, or the Windoze 123 whatever special table of ISO Latin 1 \nhas the same code points as UNICODE pages 0 and 1, but not compatible \nwith UTF-8 coding because of the way UTF-8 uses the 8th bit.\n\nBut none of this is likely your problem.\n\nYour problem is about string literals in SQL for examples. About the \nconfiguration of your database (I always use initdb with --locale C and \n--encoding UTF-8). Use UTF-8 in the database. Then all your issues are \nabout string literals in SQL and in JAVA and JSON and XML or whatever \nyou are using.\n\nYou have to do the right thing. If you produce any representation, \nwhether that is XML or JSON or SQL or URL query parameters, or a CSV \nfile, or anything at all, you need to escape your string values properly.\n\nThis question with no detail didn't deserve such a thorough answer, but \nit's my soap box. I do not accept people complaining about \"special \ncharacters\". My own people get that same sermon from me when they make \nthat mistake.\n\n-Gunther\n\nOn 3/15/2019 1:19, M Tarkeshwar Rao wrote:\n>\n> Hi all,\n>\n> Facing issue in using special characters. We are trying to insert \n> records to a remote Postgres Server and our application not able to \n> perform this because of errors.\n>\n> It seems that issue is because of the special characters that has been \n> used in one of the field of a row.\n>\n> Regards\n>\n> Tarkeshwar\n>",
"msg_date": "Fri, 15 Mar 2019 11:59:48 -0400",
"msg_from": "Gunther <raj@gusw.net>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "On 3/15/19 11:59 AM, Gunther wrote:\n> This is not an issue for \"hackers\" nor \"performance\" in fact even for\n> \"general\" it isn't really an issue.\n\nAs long as it's already been posted, may as well make it something\nhelpful to find in the archive.\n\n> Understand charsets -- character set, code point, and encoding. Then\n> understand how encoding and string literals and \"escape sequences\" in\n> string literals might work.\n\nGood advice for sure.\n\n> Know that UNICODE today is the one standard, and there is no more need\n\nI wasn't sure from the question whether the original poster was in\na position to choose the encoding of the database. Lots of things are\neasier if it can be set to UTF-8 these days, but perhaps it's a legacy\nsituation.\n\nMaybe a good start would be to go do\n\n SHOW server_encoding;\n SHOW client_encoding;\n\nand then hit the internet and look up what that encoding (or those\nencodings, if different) can and can't represent, and go from there.\n\nIt's worth knowing that, when the server encoding isn't UTF-8,\nPostgreSQL will have the obvious limitations entailed by that,\nbut also some non-obvious ones that may be surprising, e.g. [1].\n\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmobUp8Q-wcjaKvV%3DsbDcziJoUUvBCB8m%2B_xhgOV4DjiA1A%40mail.gmail.com\n\n",
"msg_date": "Fri, 15 Mar 2019 15:26:50 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "Many of us have faced character encoding issues because we are not in control of our input sources and made the common assumption that UTF-8 covers everything.\r\n\r\nIn my lab, as an example, some of our social media posts have included ZawGyi Burmese character sets rather than Unicode Burmese. (Because Myanmar developed technology In a closed to the world environment, they made up their own non-standard character set which is very common still in Mobile phones.). We had fully tested the app with Unicode Burmese, but honestly didn’t know ZawGyi was even a thing that we would see in our dataset. We’ve also had problems with non-Unicode word separators in Arabic.\r\n\r\nWhat we’ve found to be helpful is to view the troubling code in a hex editor and determine what non-standard characters may be causing the problem.\r\n\r\nIt may be some data conversion is necessary before insertion. But the first step is knowing WHICH characters are causing the issue.",
"msg_date": "Sun, 17 Mar 2019 15:01:40 +0000",
"msg_from": "\"Warner, Gary, Jr\" <gar@uab.edu>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
},
{
"msg_contents": "On 2019-03-17 15:01:40 +0000, Warner, Gary, Jr wrote:\n> Many of us have faced character encoding issues because we are not in control\n> of our input sources and made the common assumption that UTF-8 covers\n> everything.\n\nUTF-8 covers \"everything\" in the sense that there is a round-trip from\neach character in every commonly-used charset/encoding to Unicode and\nback.\n\nThe actual code may of course be different. For example, the € sign is\n0xA4 in iso-8859-15, but U+20AC in Unicode. So you need an\nencoding/decoding step.\n\nAnd \"commonly-used\" means just that. Unicode covers a lot of character\nsets, but it can't cover every character set ever invented (I invented\nmy own character sets when I was sixteen. Nobody except me ever used\nthem and they have long succumbed to bit rot).\n\n> In my lab, as an example, some of our social media posts have included ZawGyi\n> Burmese character sets rather than Unicode Burmese. (Because Myanmar developed\n> technology In a closed to the world environment, they made up their own\n> non-standard character set which is very common still in Mobile phones.).\n\nI'd be surprised if there was a character set which is \"very common in\nMobile phones\", even in a relatively poor country like Myanmar. Does\nZawGyi actually include characters which aren't in Unicode are are they\njust encoded differently?\n\n hp\n\n-- \n _ | Peter J. Holzer | we build much bigger, better disasters now\n|_|_) | | because we have much more sophisticated\n| | | hjp@hjp.at | management tools.\n__/ | http://www.hjp.at/ | -- Ross Anderson <https://www.edge.org/>",
"msg_date": "Mon, 18 Mar 2019 22:19:23 +0100",
"msg_from": "\"Peter J. Holzer\" <hjp-pgsql@hjp.at>",
"msg_from_op": false,
"msg_subject": "Re: Facing issue in using special characters"
}
] |
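Editor's note: Peter Holzer's point in the thread above — that UTF-8 round-trips commonly used charsets even though the actual code differs (the € sign is the single byte 0xA4 in iso-8859-15 but code point U+20AC in Unicode) — can be checked with a short Python sketch. The codec names are Python's standard encoding aliases; nothing here is specific to PostgreSQL.

```python
# Round-trip of the Euro sign between two encodings, as discussed above.
euro = "\u20ac"  # the € character, U+20AC

latin9 = euro.encode("iso-8859-15")  # a single byte, 0xA4, in Latin-9
utf8 = euro.encode("utf-8")          # three bytes in UTF-8

assert latin9 == b"\xa4"
assert utf8 == b"\xe2\x82\xac"

# The byte sequences differ per encoding, but decoding each one back
# yields the same Unicode character: the round-trip is lossless.
assert latin9.decode("iso-8859-15") == utf8.decode("utf-8") == euro
```

This is why an explicit encode/decode step is needed whenever data crosses between a legacy charset and a UTF-8 database: the bytes are not interchangeable even when the characters are.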
[
{
"msg_contents": "Hi,all\r\n\r\nOn website: https://wiki.postgresql.org/wiki/Todo#libpq\r\nI found that in libpq module,there is a todo case:\r\n-------------------------------------------------------------------------------\r\nPrevent PQfnumber() from lowercasing unquoted column names\r\nPQfnumber() should never have been doing lowercasing, but historically it has so we need a way to prevent it\r\n\r\n-------------------------------------------------------------------------------\r\nI am interested in this one. So ,Had it be fixed?\r\nIf not, I am willing to do so.\r\nIn that way ,could anyone tell me the detail features of this function it supported to be?\r\nI will try to fix it~\r\n\r\n\r\n--\r\nBest Regards\r\n-----------------------------------------------------\r\nWu Fei\r\nDevelopment Department II\r\nSoftware Division III\r\nNanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China\r\nTEL : +86+25-86630566-9356\r\nCOINS: 7998-9356\r\nFAX: +86+25-83317685\r\nMAIL:wufei.fnst@cn.fujitsu.com\r\nhttp://www.fujitsu.com/cn/fnst/\r\n---------------------------------------------------",
"msg_date": "Fri, 15 Mar 2019 08:50:49 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Willing to fix a TODO case in libpq module"
},
{
"msg_contents": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com> writes:\n> On website: https://wiki.postgresql.org/wiki/Todo#libpq\n> I found that in libpq module,there is a todo case:\n> -------------------------------------------------------------------------------\n> Prevent PQfnumber() from lowercasing unquoted column names\n> PQfnumber() should never have been doing lowercasing, but historically it has so we need a way to prevent it\n\n> -------------------------------------------------------------------------------\n> I am interested in this one. So ,Had it be fixed?\n\nHmm, I think this item might be obsolete. The existing definition that\nPQfnumber performs quote-stripping and down-casing is a bit overcomplicated,\nbut it's not impossible to work with. It's pretty hard to call it a bug,\nso I don't think we'd consider actually changing the function's behavior.\n\nMaybe there's room for a second function with a different name that\njust looks for an exact match to the input string. But I've heard\nfew if any requests for that. The use-case for PQfnumber is pretty\nnarrow to begin with --- I suspect most apps just hard-wire the expected\ncolumn number --- so the demand for a marginally-more-efficient version\nwould be even narrower.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 13:43:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a TODO case in libpq module"
}
] |
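Editor's note: the behavior Tom Lane describes above — PQfnumber() strips double quotes and matches quoted names verbatim, while down-casing unquoted names like an SQL identifier — can be sketched in a few lines. This is a hypothetical Python illustration of the matching rule only (`pqfnumber_match` is an invented name, not a libpq function, and real PQfnumber returns a column index, not a boolean).

```python
def pqfnumber_match(spec: str, column: str) -> bool:
    """Sketch of PQfnumber()'s name matching: a double-quoted spec is
    compared verbatim with the quotes stripped; an unquoted spec is
    folded to lower case first, like an unquoted SQL identifier."""
    if len(spec) >= 2 and spec.startswith('"') and spec.endswith('"'):
        return spec[1:-1] == column
    return spec.lower() == column

# Unquoted names are down-cased, so mixed case still matches a
# lower-case result column name...
assert pqfnumber_match("BINGO", "bingo")
# ...while quoting preserves case exactly.
assert pqfnumber_match('"BINGO"', "BINGO")
assert not pqfnumber_match('"BINGO"', "bingo")
```

The TODO item asked for a way to opt out of the down-casing branch; Tom's reply argues the existing rule, while overcomplicated, is workable, so at most a second exact-match function would be warranted.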
[
{
"msg_contents": "Hello\n\nPer discussion started here: https://www.postgresql.org/message-id/CA%2BTgmoZWSLUjVcc9KBSVvbn%3DU5QRgW1O-MgUX0y5CnLZOA2qyQ%40mail.gmail.com\n\nWe have INFO ereport messages in alter table attach partition like this:\n> partition constraint for table \\\"%s\\\" is implied by existing constraints\n\nPersonally I like this message and not want remove it.\nBut recently my colleague noticed that INFO level is written to stderr by psql. For example, simple command\n\n> psql -c \"alter table measurement attach partition measurement_y2006m04 for values from ('2006-04-01') to ('2006-05-01');\"\n\ncan produce stderr output like error, but this is expected behavior from successful execution.\n\nAnd INFO level always sent to client regardless of client_min_messages as clearly documented in src/include/utils/elog.h\n\nSo now I am +1 to idea of change error level for this messages. I attach patch to lower such ereport to DEBUG1 level\n\nthanks\n\nPS: possible we can change level to NOTICE but I doubt we will choose this way\n\nregards, Sergei",
"msg_date": "Fri, 15 Mar 2019 12:55:36 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 12:55:36PM +0300, Sergei Kornilov wrote:\n> We have INFO ereport messages in alter table attach partition like this:\n> > partition constraint for table \\\"%s\\\" is implied by existing constraints\n> \n> So now I am +1 to idea of change error level for this messages. I attach patch to lower such ereport to DEBUG1 level\n\n+1\n\nI reviewed existing logging behavior and now I agree.\n\nAlso, I wondered if it was worth considering a way to configure logging which\nscales better than boolean GUCs:\n\nlog_duration\nlog_checkpoints\nlog_(dis)connections\nlog_lock_waits\nlog_replication_commands\n..plus a bunch more developer ones:\nhttps://www.postgresql.org/docs/current/runtime-config-developer.html\n\nI'm (very tentatively) thinking of a string GUC which is split on whitespace\nand is parsed into a bitmap which is consulted instead of the existing vars, as\nin: if (logging_bits & LOG_CHECKPOINTS) ... which could be either an enum or\n#define.. If there's an entry in logging_bits which isn't recognized, I guess\nit'd be logged at NOTICE or WARNING.\n\nI'd also request this be conditional promoted from DEBUG1 to LOG depending a\nnew logging_bit for LOG_PARALLEL_WORKER:\n|starting background worker process \"parallel worker for PID...\nWhen written to csvlog (and when using log_min_error_severity=notice or\nsimilar), that would help answering questions like: \"what queries are my\nmax_parallel_workers(_per_process) being used for (at the possible exclusion of\nother queries)?\". I noticed on our servers that a query running possibly every\n~10sec had been using parallel query, which not only hurt that query, but also\nmeant that works may have been unavailable for report queries which could have\nbenefited from their use. It'd be nice to know if there were other such issues.\n\nJustin\n\n",
"msg_date": "Sat, 16 Mar 2019 07:24:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hello\n\nThis change is discussed as open item for pg12. Seems we have nor objections nor agreement. I attached updated version due merge conflict.\n\n> Per discussion started here: https://www.postgresql.org/message-id/CA%2BTgmoZWSLUjVcc9KBSVvbn%3DU5QRgW1O-MgUX0y5CnLZOA2qyQ%40mail.gmail.com\n\nregards, Sergei",
"msg_date": "Mon, 01 Jul 2019 15:17:46 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 12:17 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> This change is discussed as open item for pg12. Seems we have nor objections nor agreement. I attached updated version due merge conflict.\n>\n> > Per discussion started here: https://www.postgresql.org/message-id/CA%2BTgmoZWSLUjVcc9KBSVvbn%3DU5QRgW1O-MgUX0y5CnLZOA2qyQ%40mail.gmail.com\n\nI took the liberty of setting this to \"Ready for Committer\" to see if\nwe can get a decision one way or another and clear both a Commitfest\nitem and a PG12 Open Item. No committer is signed up, but it looks\nlike Amit L wrote the messages in question, Robert committed them, and\nDavid made arguments for AND against on the referenced thread, so I'm\nCCing them, and retreating to a safe distance.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 15 Jul 2019 11:46:14 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Mon, 15 Jul 2019 at 11:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Jul 2, 2019 at 12:17 AM Sergei Kornilov <sk@zsrv.org> wrote:\n> > This change is discussed as open item for pg12. Seems we have nor objections nor agreement. I attached updated version due merge conflict.\n> >\n> > > Per discussion started here: https://www.postgresql.org/message-id/CA%2BTgmoZWSLUjVcc9KBSVvbn%3DU5QRgW1O-MgUX0y5CnLZOA2qyQ%40mail.gmail.com\n>\n> I took the liberty of setting this to \"Ready for Committer\" to see if\n> we can get a decision one way or another and clear both a Commitfest\n> item and a PG12 Open Item. No committer is signed up, but it looks\n> like Amit L wrote the messages in question, Robert committed them, and\n> David made arguments for AND against on the referenced thread, so I'm\n> CCing them, and retreating to a safe distance.\n\nI think the only argument against it was around lack of ability to\ntest if the constraint was used to verify no row breaks the partition\nbound during the ATTACH PARTITION.\n\nDoes anyone feel strongly that we need to the test to confirm that the\nconstraint was used for this?\n\nIf nobody feels so strongly about that then I say we can just push\nthis. It seems something that's unlikely to get broken, but then you\ncould probably say that for most things our tests test for.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 15 Jul 2019 14:03:25 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Sun, Jul 14, 2019 at 7:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... and retreating to a safe distance.\n\nIs that measure in, like, light-years?\n\nI vote for changing it to NOTICE instead of DEBUG1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 15 Jul 2019 11:13:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On 2019-Jul-15, David Rowley wrote:\n\n> I think the only argument against it was around lack of ability to\n> test if the constraint was used to verify no row breaks the partition\n> bound during the ATTACH PARTITION.\n\nWould it work to set client_min_messages to DEBUG1 for the duration of\nthe test, or does that have too much unrelated noise?\n\n> Does anyone feel strongly that we need to the test to confirm that the\n> constraint was used for this?\n\nWell, IME if we don't test it, we're sure to break it in the future.\nThe only questions are 1) when, 2) how long till we notice, 3) how\ndifficult is it to fix at that point. I think breakage is easily\nnoticed by users, and a fix is unlikely to require hard measures such as\nABI breaks or catversion bumps. I'd like more than zero tests, but it\ndoesn't seem *that* severe.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 15 Jul 2019 12:07:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Tue, 16 Jul 2019 at 03:13, Robert Haas <robertmhaas@gmail.com> wrote:\n> I vote for changing it to NOTICE instead of DEBUG1.\n\nWell, there are certainly other DDL commands that spit out NOTICES.\n\npostgres=# create table z (a int);\nCREATE TABLE\npostgres=# create table x (a int) inherits(z);\nNOTICE: merging column \"a\" with inherited definition\nCREATE TABLE\n\nHowever, we did get rid of a few of those a while back. In 9.2 we used to have:\n\npostgres=# create table a (a int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"a_pkey\" for table \"a\"\n\nI'm pretty keen for consistency. Having ATTACH PARTITION spit out an\nINFO and merge attributes a NOTICE, and SET NOT NULL just a DEBUG1 is\npretty far from consistent. I wouldn't object to making them all\nNOTICE. I've only seen complaints about the INFO one.\n\nWould anyone complain if we made them all INFO?\n\nIf we do that should we backpatch the change into PG12. SET NOT NULL\nusing a constraint was new there.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 16 Jul 2019 13:15:38 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Would anyone complain if we made them all INFO?\n\nThat would be remarkably horrid, because that makes them unsuppressable.\n\nI'm generally for having these be less in-your-face, not more so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 21:26:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 10:15 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> On Tue, 16 Jul 2019 at 03:13, Robert Haas <robertmhaas@gmail.com> wrote:\n> > I vote for changing it to NOTICE instead of DEBUG1.\n>\n> Well, there are certainly other DDL commands that spit out NOTICES.\n>\n> postgres=# create table z (a int);\n> CREATE TABLE\n> postgres=# create table x (a int) inherits(z);\n> NOTICE: merging column \"a\" with inherited definition\n> CREATE TABLE\n>\n> However, we did get rid of a few of those a while back. In 9.2 we used to have:\n>\n> postgres=# create table a (a int primary key);\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> \"a_pkey\" for table \"a\"\n>\n> I'm pretty keen for consistency. Having ATTACH PARTITION spit out an\n> INFO and merge attributes a NOTICE, and SET NOT NULL just a DEBUG1 is\n> pretty far from consistent. I wouldn't object to making them all\n> NOTICE. I've only seen complaints about the INFO one.\n\nFwiw, I'm leaning toward NOTICE for all. It's helpful for users to\nknow a certain action was taken.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 16 Jul 2019 11:40:22 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hello\n\nHere is two patches with NOTICE ereport: one for partitions operations and one for \"set not null\" (for consistency)\n\nregards, Sergei",
"msg_date": "Tue, 16 Jul 2019 14:19:43 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On 2019-Jul-15, David Rowley wrote:\n\n> I think the only argument against it was around lack of ability to\n> test if the constraint was used to verify no row breaks the partition\n> bound during the ATTACH PARTITION.\n\nWould it work to set client_min_messages to DEBUG1 for the duration of\nthe test, or does that have too much unrelated noise?\n\n> Does anyone feel strongly that we need to the test to confirm that the\n> constraint was used for this?\n\nWell, IME if we don't test it, we're sure to break it in the future.\nThe only questions are 1) when, 2) how long till we notice, 3) how\ndifficult is it to fix at that point. I think breakage is easily\nnoticed by users, and a fix is unlikely to require hard measures such as\nABI breaks or catversion bumps. I'd like more than zero tests, but it\ndoesn't seem *that* severe.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 13:56:27 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jul-15, David Rowley wrote:\n>> I think the only argument against it was around lack of ability to\n>> test if the constraint was used to verify no row breaks the partition\n>> bound during the ATTACH PARTITION.\n\n> Would it work to set client_min_messages to DEBUG1 for the duration of\n> the test, or does that have too much unrelated noise?\n\nIt's not awful. I tried inserting \"set client_min_messages = debug1\"\ninto alter_table.sql, and got the attached diffs. Evidently we\ncould not keep it on throughout that test script, because of the\nvariable OIDs in some of the toast table names. But in the areas\nwhere we're currently emitting INFO messages, we could have it on\nand not have any other noise except some \"verifying table\" messages,\nwhich actually seem like a good thing for this test.\n\nSo right at the moment my vote is to downgrade all of these to DEBUG1\nand fix the test-coverage complaint by adjusting client_min_messages\nas needed in the test scripts.\n\nA potential objection is that this'd constrain people's ability to add\nDEBUG1 messages in code reachable from ALTER TABLE --- but we can\ncross that bridge when we come to it.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 17 Jul 2019 14:20:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hi\n\n> It's not awful. I tried inserting \"set client_min_messages = debug1\"\n> into alter_table.sql\n\nWe already did this in March. And this change was reverted in 5655565c077c53b6e9b4b9bfcdf96439cf3af065 because this will not work on buildfarm animals with log_statement = 'all'\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 17 Jul 2019 21:44:47 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n>> It's not awful. I tried inserting \"set client_min_messages = debug1\"\n>> into alter_table.sql\n\n> We already did this in March. And this change was reverted in 5655565c077c53b6e9b4b9bfcdf96439cf3af065 because this will not work on buildfarm animals with log_statement = 'all'\n\nOh :-(.\n\nSeems like maybe what we need is to transpose the tests at issue into\na TAP test? That could grep for the messages we care about and disregard\nother ones.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 15:01:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On Thu, 18 Jul 2019 at 07:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Seems like maybe what we need is to transpose the tests at issue into\n> a TAP test? That could grep for the messages we care about and disregard\n> other ones.\n\nThat seems like a good idea. I guess that's a vote in favour of\nhaving DEBUG1 for ATTACH PARTITION and SET NOT NULL too?\n\nI don't know my way around the tap tests that well, but I started to\nlook at this and ended up a bit stuck on where the test should be\nlocated. I see src/test/modules/brin has some brin related tests, so\nI thought that src/test/modules/alter_table might be the spot, but\nafter looking at src/test/README I see it mentions that only tests\nthat are themselves an extension should be located within:\n\nmodules/\n Extensions used only or mainly for test purposes, generally not suitable\n for installing in production databases\n\nThere are a few others in the same situation as brin; commit_ts,\nsnapshot_too_old, unsafe_tests. I see unsafe_tests does mention the\nlack of module in the README file.\n\nIs there a better place to do the alter_table ones? Or are the above\nones in there because there's no better place?\n\nAlso, if I'm not wrong, the votes so far appear to be:\n\nNOTICE: Robert, Amit\nDEBUG1: Tom, Alvaro (I'm entirely basing this on the fact that they\nmentioned possible ways to test with DEBUG1)\n\nI'll be happy with DEBUG1 if we can get tests to test it.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 14:22:03 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On 2019-Jul-23, David Rowley wrote:\n\n> Also, if I'm not wrong, the votes so far appear to be:\n> \n> NOTICE: Robert, Amit\n> DEBUG1: Tom, Alvaro (I'm entirely basing this on the fact that they\n> mentioned possible ways to test with DEBUG1)\n> \n> I'll be happy with DEBUG1 if we can get tests to test it.\n\nWell, I think the user doesn't *care* to see a message about the\noptimization. They just want the command to be fast. *We* (developers)\nwant the message in order to ensure the command remains fast. So some\nDEBUG level seems the right thing.\n\nAnother way to reach the same conclusion is to think about the \"building\nindex ... serially\" messages, which are pretty much in the same\ncategory and are using DEBUG1. (I do think the TOAST ones are just\nnoise though, and since they disrupt potential testing with\nclient_min_messages=debug1, another way to go about this is to reduce\nthose to DEBUG2 or just elide them.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 12:35:59 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On 2019-Jul-23, David Rowley wrote:\n\n> I don't know my way around the tap tests that well, but I started to\n> look at this and ended up a bit stuck on where the test should be\n> located. I see src/test/modules/brin has some brin related tests, so\n> I thought that src/test/modules/alter_table might be the spot, but\n> after looking at src/test/README I see it mentions that only tests\n> that are themselves an extension should be located within:\n> \n> modules/\n> Extensions used only or mainly for test purposes, generally not suitable\n> for installing in production databases\n> \n> There are a few others in the same situation as brin; commit_ts,\n> snapshot_too_old, unsafe_tests. I see unsafe_tests does mention the\n> lack of module in the README file.\n\nThe readme in src/test/modules says \"extensions or libraries\", and I see\nno reason to think that a TAP test would be totally out of place there.\nI think the alter_table/ subdir is a perfect place.\n\nSergei, can we enlist you to submit a patch for this? Namely reduce the\nlog level to DEBUG1 and add a TAP test in src/test/modules/alter_table/\nthat verifies that the message is or isn't emitted, as appropriate.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 14 Aug 2019 17:20:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hi\n\n> Sergei, can we enlist you to submit a patch for this? Namely reduce the\n> log level to DEBUG1 and add a TAP test in src/test/modules/alter_table/\n> that verifies that the message is or isn't emitted, as appropriate.\n\nYes, will do. Probably in few days.\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 15 Aug 2019 17:48:55 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hello\n\n> Sergei, can we enlist you to submit a patch for this? Namely reduce the\n> log level to DEBUG1 and add a TAP test in src/test/modules/alter_table/\n> that verifies that the message is or isn't emitted, as appropriate.\n\nI created this patch.\nI test for message existence. I also check the message \"verifying table\" (generated at DEBUG1 from ATRewriteTable). So with manually damaged logic in NotNullImpliedByRelConstraints or ConstraintImpliedByRelConstraint, \"make check\" may work but fails on the new test during \"make check-world\". As we want.\n\nregards, Sergei",
"msg_date": "Tue, 20 Aug 2019 18:20:10 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hello\n\nI noticed the appveyor build on windows is not happy:\n\n> perl buildsetup.pl\n> Could not determine contrib module type for alter_table\n> at buildsetup.pl line 38.\n\nBut I have no idea why. I can't check on windows. Possibly I missed some change while adding the new module to the tree. Will check. Please let me know if the root cause of such an error is known.\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/26831382\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.53294\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 21 Aug 2019 23:01:25 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n> I noticed appveyor build on windows is not happy:\n>> perl buildsetup.pl\n>> Could not determine contrib module type for alter_table\n>> at buildsetup.pl line 38.\n\n> But I have no idea why. I can't check on windows. Possible I miss some change while adding new module to tree. Will check. Please let me know if root of such error is known.\n\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/builds/26831382\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.53294\n\ngrep is your friend: that message is coming out of Mkvcbuild.pm's\nAddContrib. (No idea why perl is fingering someplace else.) Apparently\nyou need to have one of MODULE_big, MODULES, or PROGRAM defined, unless\nyou add the module to @contrib_excludes to keep it from being built.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Aug 2019 16:20:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hello\n\nThank you! It seems the most appropriate option for this test is to change @contrib_excludes\nDone in attached patch, will check appveyor reaction.\n\nregards, Sergei",
"msg_date": "Sun, 25 Aug 2019 12:42:24 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n> Thank you! It seems the most appropriate option for this test is to change @contrib_excludes\n> Done in attached patch, will check appveyor reaction.\n\nAppveyor seems happy, so I took a look through this. There's little\nto say about 0001: if we're going to drop the elevel, that's what\nresults. 0002 is slightly more interesting. I did not like calling\nthe module \"alter_table\": we should encourage any future similar needs\nto add more tests in this same directory, not invent whole new ones.\nI went with \"test_misc\", but that's still open for bikeshedding of course.\nI did a minor amount of cleanup in the test script, including running\nit through pgperltidy, but no substantive changes. Also added a README.\n\nI think there are basically two objections that might be raised to\ncommitting this:\n\n1. Making this a src/test/modules/ subdirectory, when there is no\nactual extension module in it, is a triumph of expediency over\ngood file-tree structure. If there were no other constraints\nI'd want to call it src/test/misc/ or src/test/tap/ or something\nlike that. The expediency angle is that if we do that, the\nbuildfarm client script will need changes to know about it.\nIs it better to go with the long-term view and accept that we\nwon't have full buildfarm coverage right away?\n\n2. It seems kind of expensive and redundant to duplicate all these\ntest cases from the core tests. On my machine the new test script\ntook close to 2.5 seconds as-submitted. I was able to knock that\ndown to 2.1 by the expedient of combining adjacent psql invocations\nthat we didn't need to examine the results of. But it still is\nadding a noticeable amount of time to check-world, which takes only\ncirca 100s overall (with parallelism). Should we think about\ndeleting some of these test cases from the core tests?\n\n(An argument not to do so is that the test conditions are a bit\ndifferent: since the TAP test starts a new session for each\nquery, it fails to exercise carry-over of relcache entries,\nwhich might possibly be interesting in this area.)\n\nOr, of course, we could forget the whole thing and switch the output\nlevel for these messages to NOTICE instead. I'm not for that, but\nnow that we see what it'll cost us to have them better hidden, we can\nat least have an informed debate.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 03 Sep 2019 14:42:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "I wrote:\n> Or, of course, we could forget the whole thing and switch the output\n> level for these messages to NOTICE instead. I'm not for that, but\n> now that we see what it'll cost us to have them better hidden, we can\n> at least have an informed debate.\n> Thoughts?\n\nHearing no comments, I've pushed that patch, and marked the v12\nopen item closed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Sep 2019 19:10:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "On 2019-Sep-07, Tom Lane wrote:\n\n> I wrote:\n> > Or, of course, we could forget the whole thing and switch the output\n> > level for these messages to NOTICE instead. I'm not for that, but\n> > now that we see what it'll cost us to have them better hidden, we can\n> > at least have an informed debate.\n> > Thoughts?\n> \n> Hearing no comments, I've pushed that patch, and marked the v12\n> open item closed.\n\nI've marked https://commitfest.postgresql.org/24/2076/ committed also.\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 7 Sep 2019 19:45:55 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org> writes:\n> I've marked https://commitfest.postgresql.org/24/2076/ committed also.\n\nYeah, I just remembered about doing that, and saw you'd beat me to it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Sep 2019 19:49:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
},
{
"msg_contents": "Hello\n\n> Hearing no comments, I've pushed that patch, and marked the v12\n> open item closed.\n\nThank you!\n\nregards, Sergei\n\n\n",
"msg_date": "Sun, 08 Sep 2019 12:31:04 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": true,
"msg_subject": "Re: Change ereport level for QueuePartitionConstraintValidation"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nFor a project of ours we need GTIN14 data type support. The isn \nextension already supports EAN13, and a '0' prefixed EAN13 is a valid \nGTIN14. The leftmost \"new\" 14th digit is a packaging level indicator \nwhich we need (= using EAN13 and faking a leading 0 in output doesn't \ncut it).\n\nLooking at the code I saw every format that isn-extension supports is \nstored as an EAN13. Theoretically that can be changed to be GTIN14, but \nthat would mean quite a lot of rewrite I feared, so I chose to code only \nGTIN14 I/O separately to not interfere with any existing conversion \nmagic. This yields an easier to understand patch and doesn't touch \nexisting functionality. However it introduces redundancy to a certain \nextent.\n\nFind my patch attached. Please let me know if there are things that need \nchanges, I'll do my best to get GTIN support into postgresql.\n\nthanks in advance\n mike",
"msg_date": "Fri, 15 Mar 2019 17:01:49 +0100",
"msg_from": "Michael Kefeder <mike@multiwave.ch>",
"msg_from_op": true,
"msg_subject": "GTIN14 support for contrib/isn"
},
{
"msg_contents": "Michael Kefeder <mike@multiwave.ch> writes:\n> For a project of ours we need GTIN14 data type support.\n\nHm, what is that and where would a reviewer find the specification for it?\n\n> Looking at the code I saw every format that isn-extension supports is \n> stored as an EAN13. Theoretically that can be changed to be GTIN14, but \n> that would mean quite a lot of rewrite I feared, so I chose to code only \n> GTIN14 I/O separetely to not interfere with any existing conversion \n> magic. This yields an easier to understand patch and doesn't touch \n> existing functionality. However it introduces redundancy to a certain \n> extent.\n\nYeah, you certainly don't get to change the on-disk format of the existing\ntypes, unfortunately. Not sure what the least messy way of dealing with\nthat is. I guess we do want this to be part of contrib/isn rather than\nan independent module, if there are sane datatype conversions with the\nexisting isn types.\n\n> Find my patch attached. Please let me know if there are things that need \n> changes, I'll do my best to get GTIN support into postgresql.\n\nWell, two comments immediately:\n\n* where's the documentation changes?\n\n* simply editing the .sql file in-place is not acceptable; that breaks\nthe versioning conventions for extensions, and leaves users with no\neasy upgrade path. What you need to do is create a version upgrade\nscript that adds the new objects. For examples look for other recent\npatches that have added features to contrib modules, eg\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=eb6f29141bed9dc95cb473614c30f470ef980705\n\nAlso, I'm afraid you've pretty much missed the deadline to get this\ninto PG v12; we've already got more timely-submitted patches than\nwe're likely to be able to finish reviewing. Please add it to the\nfirst v13 commit fest,\n\nhttps://commitfest.postgresql.org/23/\n\nso that we don't forget about it when the time does come to look at it.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Mar 2019 12:27:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GTIN14 support for contrib/isn"
},
{
"msg_contents": "\nAm 15.03.19 um 17:27 schrieb Tom Lane:\n> Michael Kefeder <mike@multiwave.ch> writes:\n>> For a project of ours we need GTIN14 data type support.\n> \n> Hm, what is that and where would a reviewer find the specification for it?\n> \nspecs are from GS1 here https://www.gs1.org/standards/id-keys/gtin\nside-note EAN13 is actually called GTIN-13 now. Wikipedia has a quick \noverview https://en.wikipedia.org/wiki/Global_Trade_Item_Number\n\n>> Looking at the code I saw every format that isn-extension supports is\n>> stored as an EAN13. Theoretically that can be changed to be GTIN14, but\n>> that would mean quite a lot of rewrite I feared, so I chose to code only\n>> GTIN14 I/O separetely to not interfere with any existing conversion\n>> magic. This yields an easier to understand patch and doesn't touch\n>> existing functionality. However it introduces redundancy to a certain\n>> extent.\n> \n> Yeah, you certainly don't get to change the on-disk format of the existing\n> types, unfortunately. Not sure what the least messy way of dealing with\n> that is. I guess we do want this to be part of contrib/isn rather than\n> an independent module, if there are sane datatype conversions with the\n> existing isn types.\n> \nthe on-disk format does not change (it would support even longer codes \nit's just an integer where one bit is used for valid/invalid flag, did \nnot touch that at all). Putting GTIN14 in isn makes sense I find and is \nback/forward compatible.\n\n>> Find my patch attached. Please let me know if there are things that need\n>> changes, I'll do my best to get GTIN support into postgresql.\n> \n> Well, two comments immediately:\n> \n> * where's the documentation changes?\n> \n> * simply editing the .sql file in-place is not acceptable; that breaks\n> the versioning conventions for extensions, and leaves users with no\n> easy upgrade path. What you need to do is create a version upgrade\n> script that adds the new objects. For examples look for other recent\n> patches that have added features to contrib modules, eg\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=eb6f29141bed9dc95cb473614c30f470ef980705\n> \n> Also, I'm afraid you've pretty much missed the deadline to get this\n> into PG v12; we've already got more timely-submitted patches than\n> we're likely to be able to finish reviewing. Please add it to the\n> first v13 commit fest,\n> \n> https://commitfest.postgresql.org/23/\n> \n> so that we don't forget about it when the time does come to look at it.\n> \n> \t\t\tregards, tom lane\n> \n\nthanks for the feedback! will do mentioned documentation changes and \ncreate a separate upgrade sql file. Making it into v13 is fine by me.\n\nbr\n mike\n\n",
"msg_date": "Fri, 15 Mar 2019 17:42:26 +0100",
"msg_from": "Michael Kefeder <mike@multiwave.ch>",
"msg_from_op": true,
"msg_subject": "Re: GTIN14 support for contrib/isn"
},
{
"msg_contents": "čt 8. 6. 2023 v 17:20 odesílatel Michael Kefeder <mike@multiwave.ch> napsal:\n>\n>\n> Am 15.03.19 um 17:27 schrieb Tom Lane:\n> > Michael Kefeder <mike@multiwave.ch> writes:\n> >> For a project of ours we need GTIN14 data type support.\n> >\n> > Hm, what is that and where would a reviewer find the specification for it?\n> >\n> specs are from GS1 here https://www.gs1.org/standards/id-keys/gtin\n> side-note EAN13 is actually called GTIN-13 now. Wikipedia has a quick\n> overview https://en.wikipedia.org/wiki/Global_Trade_Item_Number\n>\n> >> Looking at the code I saw every format that isn-extension supports is\n> >> stored as an EAN13. Theoretically that can be changed to be GTIN14, but\n> >> that would mean quite a lot of rewrite I feared, so I chose to code only\n> >> GTIN14 I/O separetely to not interfere with any existing conversion\n> >> magic. This yields an easier to understand patch and doesn't touch\n> >> existing functionality. However it introduces redundancy to a certain\n> >> extent.\n> >\n> > Yeah, you certainly don't get to change the on-disk format of the existing\n> > types, unfortunately. Not sure what the least messy way of dealing with\n> > that is. I guess we do want this to be part of contrib/isn rather than\n> > an independent module, if there are sane datatype conversions with the\n> > existing isn types.\n> >\n> the on-disk format does not change (it would support even longer codes\n> it's just an integer where one bit is used for valid/invalid flag, did\n> not touch that at all). Putting GTIN14 in isn makes sense I find and is\n> back/forward compatible.\n>\n> >> Find my patch attached. Please let me know if there are things that need\n> >> changes, I'll do my best to get GTIN support into postgresql.\n> >\n> > Well, two comments immediately:\n> >\n> > * where's the documentation changes?\n> >\n> > * simply editing the .sql file in-place is not acceptable; that breaks\n> > the versioning conventions for extensions, and leaves users with no\n> > easy upgrade path. What you need to do is create a version upgrade\n> > script that adds the new objects. For examples look for other recent\n> > patches that have added features to contrib modules, eg\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=eb6f29141bed9dc95cb473614c30f470ef980705\n> >\n> > Also, I'm afraid you've pretty much missed the deadline to get this\n> > into PG v12; we've already got more timely-submitted patches than\n> > we're likely to be able to finish reviewing. Please add it to the\n> > first v13 commit fest,\n> >\n> > https://commitfest.postgresql.org/23/\n> >\n> > so that we don't forget about it when the time does come to look at it.\n> >\n> > regards, tom lane\n> >\n>\n> thanks for the feedback! will do mentioned documentation changes and\n> create a separate upgrade sql file. Making it into v13 is fine by me.\n\nHello!\n\nIf I understand it well, this patch wasn't finished and submitted\nafter this discussion. If there is still interest, I can try to polish\nthe patch, rebase and submit. I'm interested in GTIN14 support.\n\n> br\n> mike\n>\n>\n>\n\n\n",
"msg_date": "Thu, 8 Jun 2023 17:23:36 +0200",
"msg_from": "Josef Šimánek <josef.simanek@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GTIN14 support for contrib/isn"
},
{
"msg_contents": "\n------- Original Message -------\nOn Thursday, June 8th, 2023 at 5:23 PM, Josef Šimánek <josef.simanek@gmail.com> wrote:\n\n\n> čt 8. 6. 2023 v 17:20 odesílatel Michael Kefeder mike@multiwave.ch napsal:\n> \n> > Am 15.03.19 um 17:27 schrieb Tom Lane:\n> > \n> > > Michael Kefeder mike@multiwave.ch writes:\n> > > \n> > > > For a project of ours we need GTIN14 data type support.\n> > > \n> > > Hm, what is that and where would a reviewer find the specification for it?\n> > \n> > specs are from GS1 here https://www.gs1.org/standards/id-keys/gtin\n> > side-note EAN13 is actually called GTIN-13 now. Wikipedia has a quick\n> > overview https://en.wikipedia.org/wiki/Global_Trade_Item_Number\n> > \n> > > > Looking at the code I saw every format that isn-extension supports is\n> > > > stored as an EAN13. Theoretically that can be changed to be GTIN14, but\n> > > > that would mean quite a lot of rewrite I feared, so I chose to code only\n> > > > GTIN14 I/O separetely to not interfere with any existing conversion\n> > > > magic. This yields an easier to understand patch and doesn't touch\n> > > > existing functionality. However it introduces redundancy to a certain\n> > > > extent.\n> > > \n> > > Yeah, you certainly don't get to change the on-disk format of the existing\n> > > types, unfortunately. Not sure what the least messy way of dealing with\n> > > that is. I guess we do want this to be part of contrib/isn rather than\n> > > an independent module, if there are sane datatype conversions with the\n> > > existing isn types.\n> > \n> > the on-disk format does not change (it would support even longer codes\n> > it's just an integer where one bit is used for valid/invalid flag, did\n> > not touch that at all). Putting GTIN14 in isn makes sense I find and is\n> > back/forward compatible.\n> > \n> > > > Find my patch attached. Please let me know if there are things that need\n> > > > changes, I'll do my best to get GTIN support into postgresql.\n> > > \n> > > Well, two comments immediately:\n> > > \n> > > * where's the documentation changes?\n> > > \n> > > * simply editing the .sql file in-place is not acceptable; that breaks\n> > > the versioning conventions for extensions, and leaves users with no\n> > > easy upgrade path. What you need to do is create a version upgrade\n> > > script that adds the new objects. For examples look for other recent\n> > > patches that have added features to contrib modules, eg\n> > > \n> > > https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=eb6f29141bed9dc95cb473614c30f470ef980705\n> > > \n> > > Also, I'm afraid you've pretty much missed the deadline to get this\n> > > into PG v12; we've already got more timely-submitted patches than\n> > > we're likely to be able to finish reviewing. Please add it to the\n> > > first v13 commit fest,\n> > > \n> > > https://commitfest.postgresql.org/23/\n> > > \n> > > so that we don't forget about it when the time does come to look at it.\n> > > \n> > > regards, tom lane\n> > \n> > thanks for the feedback! will do mentioned documentation changes and\n> > create a separate upgrade sql file. Making it into v13 is fine by me.\n> \n> \n> Hello!\n> \n> If I understand it well, this patch wasn't finished and submitted\n> after this discussion. If there is still interest, I can try to polish\n> the patch, rebase and submit. I'm interested in GTIN14 support.\n> \n\nHello Josef,\n\nFrom my side you can finish the patch. Sorry that I didn't follow up on it, the company completely switched product line and then I forgot about it because we no longer needed it.\n\nbr\n mike\n\n\n",
"msg_date": "Fri, 09 Jun 2023 07:58:04 +0000",
"msg_from": "Michael Kefeder <mike@multiwave.ch>",
"msg_from_op": true,
"msg_subject": "Re: GTIN14 support for contrib/isn"
}
] |
[
{
"msg_contents": "Hello, I am inquiring into the program Parallel access. How do I eile it from my device/ data completetly. I don't believe ir is reliable... a broken laptop might have had the harddrive taken by a \"friend\" and accessed my account through this avenue. I am also wondering what you know about phone numbers... I had +61 0487653571- but just broke my phone. I will get a new number today and need to change it for my verification apps. is there a way to make sure this sim number cannot be replicated. I realise all my phone numbers have dissappeared over the past are of everything and see that I am struggling- some might be inhibiting the process. Hence why it has taken a year to get anywhere. MY new email I will use id : saint.ae.surety@gmail.com after I phase out this sight. Getting my device to flow has not been easy. IF I thought that you needed access to help that would be ok... it doesn look lilebNew phone number to be updated soon. Please help, Ae",
"msg_date": "Sat, 16 Mar 2019 03:24:24 +1100",
"msg_from": "emmjadea <emmjadea@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inquiries"
}
] |
[
{
"msg_contents": "Hey pg developers,\n\nDo you think if we can add queryId into the pg_stat_get_activity function\nand ultimately expose it in the view? It would make it easier to track \"similar\"\nqueries' performance over time.\n\nThanks a lot!\nYun",
"msg_date": "Fri, 15 Mar 2019 14:54:19 -0700",
"msg_from": "Yun Li <liyunjuanyong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Yun Li <liyunjuanyong@gmail.com> writes:\n> Do you think if we can add queryId into the pg_stat_get_activity function\n> and ultimatly expose it in the view? It would be easier to track \"similar\"\n> query's performance over time easier.\n\nNo, we're not likely to do that, because it would mean (1) baking one\nsingle definition of \"query ID\" into the core system and (2) paying\nthe cost to calculate that ID all the time.\n\npg_stat_statements has a notion of query ID, but that notion might be\nquite inappropriate for other usages, which is why it's an extension\nand not core.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Mar 2019 21:50:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 15, 2019 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yun Li <liyunjuanyong@gmail.com> writes:\n> > Do you think if we can add queryId into the pg_stat_get_activity function\n> > and ultimatly expose it in the view? It would be easier to track \"similar\"\n> > query's performance over time easier.\n>\n> No, we're not likely to do that, because it would mean (1) baking one\n> single definition of \"query ID\" into the core system and (2) paying\n> the cost to calculate that ID all the time.\n>\n> pg_stat_statements has a notion of query ID, but that notion might be\n> quite inappropriate for other usages, which is why it's an extension\n> and not core.\n\nHaving written an extension that also wanted a query ID, I disagree\nwith this position. There's only one query ID field available, and\nyou can't use two extensions that care about query ID unless they\ncompute it the same way, and replicating all the code that computes\nthe query ID into each new extension that wants one sucks. I think we\nshould actually bite the bullet and move all of that code into core,\nand then just let extensions say whether they care about it getting\nset.\n\nAlso, I think this is now the third independent request to expose\nquery ID in pg_stat_statements. I think we should give the people\nwhat they want.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Sat, 16 Mar 2019 10:32:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 15, 2019 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> pg_stat_statements has a notion of query ID, but that notion might be\n>> quite inappropriate for other usages, which is why it's an extension\n>> and not core.\n\n> Having written an extension that also wanted a query ID, I disagree\n> with this position.\n\n[ shrug... ] The fact remains that pg_stat_statements's definition is\npretty lame. There's a lot of judgment calls in which query fields\nit chooses to examine or ignore, and there's been no attempt at all\nto make the ID PG-version-independent, and I rather doubt that it's\nplatform-independent either. Nor will the IDs survive a dump/reload\neven on the same server, since object OIDs will likely change.\n\nThese things are OK, or at least mostly tolerable, for pg_stat_statements'\nusage ... but I don't think it's a good idea to have the core code\ndictating that definition to all extensions. Right now, if you have\nan extension that needs some other query-ID definition, you can do it,\nyou just can't run that extension alongside pg_stat_statements.\nBut you'll be out of luck if the core code starts filling that field.\n\nI'd be happier about having the core code compute a query ID if we\nhad a definition that was not so obviously slapped together.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 12:20:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sat, Mar 16, 2019 at 5:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Mar 15, 2019 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> pg_stat_statements has a notion of query ID, but that notion might be\n> >> quite inappropriate for other usages, which is why it's an extension\n> >> and not core.\n>\n> > Having written an extension that also wanted a query ID, I disagree\n> > with this position.\n>\n> [ shrug... ] The fact remains that pg_stat_statements's definition is\n> pretty lame. There's a lot of judgment calls in which query fields\n> it chooses to examine or ignore, and there's been no attempt at all\n> to make the ID PG-version-independent, and I rather doubt that it's\n> platform-independent either. Nor will the IDs survive a dump/reload\n> even on the same server, since object OIDs will likely change.\n>\n> These things are OK, or at least mostly tolerable, for pg_stat_statements'\n> usage ... but I don't think it's a good idea to have the core code\n> dictating that definition to all extensions. Right now, if you have\n> an extension that needs some other query-ID definition, you can do it,\n> you just can't run that extension alongside pg_stat_statements.\n> But you'll be out of luck if the core code starts filling that field.\n>\n> I'd be happier about having the core code compute a query ID if we\n> had a definition that was not so obviously slapped together.\n\nBut the queryId itself is stored in core. Exposing it in\npg_stat_activity or log_line_prefix would still allow users to choose\nthe implementation of their choice, or none. 
That seems like a\ndifferent complaint from asking pgss integration in core to have all\nits metrics available by default (or at least without a restart).\n\nMaybe we could add a GUC for pg_stat_statements to choose whether it\nshould set the queryid itself and not, if anyone wants to have its\nmetrics but with different queryid semantics?\n\n",
"msg_date": "Sat, 16 Mar 2019 19:02:52 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hello,\n\nThis is available in https://github.com/legrandlegrand/pg_stat_sql_plans\nextension with a specific function\npgssp_backend_queryid(pid) that permits to join pg_stat_activity with\npg_stat_sql_plans (that is similar to pg_stat_statements) and also permits\nto collect samples of wait events per query id.\n\nThis extension computes its own queryid based on a normalized query text\n(that doesn't change after table\ndrop/create).\n\nMaybe that queryid calculation should stay in a dedicated extension,\npermiting to users to choose their queryid definition.\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Sat, 16 Mar 2019 11:21:13 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Thanks a lot for really good points!! I did not expected I will get this\nmany points of view. :P\n\nI have identical experience with Robert when other extension calculate the\nid different as PGSS, PGSS will overwritten that id when it is on. But Tom\ngot a point that if we centralize the logic that pgss has, then other\nextension will have no way to change it unless we have some new config to\ntoggle pointed out by Julien. Also Tom got the concern about the current\nPGSS jumble query logic is not bullet proof and may take time then impact\nthe perf.\n\nLet's take one step back. Since queryId is stored in core as Julien pointed\nout, can we just add that global to the pg_stat_get_activity and ultimately\nexposed in pg_stat_activity view? Then no matter whether PGSS is on or\noff, or however the customer extensions are updating that filed, we expose\nthat field in that view then enable user to leverage that id to join with\npgss or their extension. Will this sounds a good idea?\n\nThanks again,\nYun\n\nOn Sat, Mar 16, 2019 at 11:01 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Sat, Mar 16, 2019 at 5:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Fri, Mar 15, 2019 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> pg_stat_statements has a notion of query ID, but that notion might be\n> > >> quite inappropriate for other usages, which is why it's an extension\n> > >> and not core.\n> >\n> > > Having written an extension that also wanted a query ID, I disagree\n> > > with this position.\n> >\n> > [ shrug... ] The fact remains that pg_stat_statements's definition is\n> > pretty lame. There's a lot of judgment calls in which query fields\n> > it chooses to examine or ignore, and there's been no attempt at all\n> > to make the ID PG-version-independent, and I rather doubt that it's\n> > platform-independent either. 
Nor will the IDs survive a dump/reload\n> > even on the same server, since object OIDs will likely change.\n> >\n> > These things are OK, or at least mostly tolerable, for\n> pg_stat_statements'\n> > usage ... but I don't think it's a good idea to have the core code\n> > dictating that definition to all extensions. Right now, if you have\n> > an extension that needs some other query-ID definition, you can do it,\n> > you just can't run that extension alongside pg_stat_statements.\n> > But you'll be out of luck if the core code starts filling that field.\n> >\n> > I'd be happier about having the core code compute a query ID if we\n> > had a definition that was not so obviously slapped together.\n>\n> But the queryId itself is stored in core. Exposing it in\n> pg_stat_activity or log_line_prefix would still allow users to choose\n> the implementation of their choice, or none. That seems like a\n> different complaint from asking pgss integration in core to have all\n> its metrics available by default (or at least without a restart).\n>\n> Maybe we could add a GUC for pg_stat_statements to choose whether it\n> should set the queryid itself and not, if anyone wants to have its\n> metrics but with different queryid semantics?\n>\n\nThanks a lot for really good points!! I did not expected I will get this many points of view. :PI have identical experience with Robert when other extension calculate the id different as PGSS, PGSS will overwritten that id when it is on. But Tom got a point that if we centralize the logic that pgss has, then other extension will have no way to change it unless we have some new config to toggle pointed out by Julien. Also Tom got the concern about the current PGSS jumble query logic is not bullet proof and may take time then impact the perf.Let's take one step back. Since queryId is stored in core as Julien pointed out, can we just add that global to the pg_stat_get_activity and ultimately exposed in pg_stat_activity view? 
Then no matter whether PGSS is on or off, or however the customer extensions are updating that filed, we expose that field in that view then enable user to leverage that id to join with pgss or their extension. Will this sounds a good idea?Thanks again,YunOn Sat, Mar 16, 2019 at 11:01 AM Julien Rouhaud <rjuju123@gmail.com> wrote:On Sat, Mar 16, 2019 at 5:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Mar 15, 2019 at 9:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> pg_stat_statements has a notion of query ID, but that notion might be\n> >> quite inappropriate for other usages, which is why it's an extension\n> >> and not core.\n>\n> > Having written an extension that also wanted a query ID, I disagree\n> > with this position.\n>\n> [ shrug... ] The fact remains that pg_stat_statements's definition is\n> pretty lame. There's a lot of judgment calls in which query fields\n> it chooses to examine or ignore, and there's been no attempt at all\n> to make the ID PG-version-independent, and I rather doubt that it's\n> platform-independent either. Nor will the IDs survive a dump/reload\n> even on the same server, since object OIDs will likely change.\n>\n> These things are OK, or at least mostly tolerable, for pg_stat_statements'\n> usage ... but I don't think it's a good idea to have the core code\n> dictating that definition to all extensions. Right now, if you have\n> an extension that needs some other query-ID definition, you can do it,\n> you just can't run that extension alongside pg_stat_statements.\n> But you'll be out of luck if the core code starts filling that field.\n>\n> I'd be happier about having the core code compute a query ID if we\n> had a definition that was not so obviously slapped together.\n\nBut the queryId itself is stored in core. Exposing it in\npg_stat_activity or log_line_prefix would still allow users to choose\nthe implementation of their choice, or none. 
That seems like a\ndifferent complaint from asking pgss integration in core to have all\nits metrics available by default (or at least without a restart).\n\nMaybe we could add a GUC for pg_stat_statements to choose whether it\nshould set the queryid itself and not, if anyone wants to have its\nmetrics but with different queryid semantics?",
"msg_date": "Mon, 18 Mar 2019 10:23:43 -0700",
"msg_from": "Yun Li <liyunjuanyong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hello\n\nOn Sat, Mar 16, 2019 at 7:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Also, I think this is now the third independent request to expose\n> query ID in pg_stat_statements. I think we should give the people\n> what they want.\n>\n\nCount me as the 4th.\n\nThis would be a very important feature for automated query analysis.\npg_stat_statements lacks query examples, and the only way to get them is\nfrom the logs.\nWhere we don't have queryid as well. So people end up either doing it\nmanually or writing\nyet another set of nasty regular expressions.\n\nRouting query analysis s a crucial for any large project. If there are\nchances to implement\nqueryid for pg_stat_activity (or anything that will allow to automate query\nanalysis)\nin Postgres 12 or later -- this would be a great news and huge support for\nengineers.\nSame level as recently implemented sampling for statement logging.\n\nBy the way, if queryid goes to the core someday, I'm sure it is worth to\nconsider using\nit in logs as well.\n\nThanks,\nNik\n\nHelloOn Sat, Mar 16, 2019 at 7:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\nAlso, I think this is now the third independent request to expose\nquery ID in pg_stat_statements. I think we should give the people\nwhat they want.Count me as the 4th.This would be a very important feature for automated query analysis.pg_stat_statements lacks query examples, and the only way to get them is from the logs.Where we don't have queryid as well. So people end up either doing it manually or writingyet another set of nasty regular expressions.Routing query analysis s a crucial for any large project. 
If there are chances to implementqueryid for pg_stat_activity (or anything that will allow to automate query analysis)in Postgres 12 or later -- this would be a great news and huge support for engineers.Same level as recently implemented sampling for statement logging.By the way, if queryid goes to the core someday, I'm sure it is worth to consider usingit in logs as well.Thanks,Nik",
"msg_date": "Mon, 18 Mar 2019 11:24:19 -0700",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 6:23 PM Yun Li <liyunjuanyong@gmail.com> wrote:\n>\n> Let's take one step back. Since queryId is stored in core as Julien pointed out, can we just add that global to the pg_stat_get_activity and ultimately exposed in pg_stat_activity view? Then no matter whether PGSS is on or off, or however the customer extensions are updating that filed, we expose that field in that view then enable user to leverage that id to join with pgss or their extension. Will this sounds a good idea?\n\nI'd greatly welcome expose queryid exposure in pg_stat_activity, and\nalso in log_line_prefix. I'm afraid that it's too late for pg12\ninclusion, but I'll be happy to provide a patch for that for pg13.\n\n",
"msg_date": "Mon, 18 Mar 2019 19:33:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On 3/16/19 5:32 PM, Robert Haas wrote:\n\n> There's only one query ID field available, and\n> you can't use two extensions that care about query ID unless they\n> compute it the same way, and replicating all the code that computes\n> the query ID into each new extension that wants one sucks. I think we\n> should actually bite the bullet and move all of that code into core,\n> and then just let extensions say whether they care about it getting\n> set.\n\n\n+1.\n\nBut I think that enough to integrate into core the query normalization \nroutine and store generalized query strings (from which the queryId is \nproduced) in shared memory (for example, hashtable that maps queryId to \nthe text representation of generalized query). And activate \nnormalization routine and filling the table of generalized queries by \nspecified GUC.\n\nThis allows to unbind extensions that require queryId from using \npg_stat_statements and consider such computing of queryId as canonical.\n\n\n-- \nRegards,\nMaksim Milyutin\n\n\n",
"msg_date": "Tue, 19 Mar 2019 16:45:17 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 2:45 PM Maksim Milyutin <milyutinma@gmail.com> wrote:\n>\n> But I think that enough to integrate into core the query normalization\n> routine and store generalized query strings (from which the queryId is\n> produced) in shared memory (for example, hashtable that maps queryId to\n> the text representation of generalized query).\n\nThat's more or less how pg_stat_statements was previously behaving,\nand it had too many problems. Current implementation, with an\nexternal file, is a better alternative.\n\n> And activate\n> normalization routine and filling the table of generalized queries by\n> specified GUC.\n>\n> This allows to unbind extensions that require queryId from using\n> pg_stat_statements and consider such computing of queryId as canonical.\n\nThe problem I see with this approach is that if you want a different\nimplementation, you'll have to reimplement the in-core normalised\nqueries saving and retrieval, but with a different set of SQL-visible\nfunctions. I don't think that's it's acceptable, unless we add a\nspecific hook for query normalisation and queryid computing. But it\nisn't ideal either, as it would be a total mess if someone changes the\nimplementation without resetting the previously saved normalised\nqueries.\n\n",
"msg_date": "Tue, 19 Mar 2019 15:43:31 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 7:33 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Mar 18, 2019 at 6:23 PM Yun Li <liyunjuanyong@gmail.com> wrote:\n> >\n> > Let's take one step back. Since queryId is stored in core as Julien pointed out, can we just add that global to the pg_stat_get_activity and ultimately exposed in pg_stat_activity view? Then no matter whether PGSS is on or off, or however the customer extensions are updating that filed, we expose that field in that view then enable user to leverage that id to join with pgss or their extension. Will this sounds a good idea?\n>\n> I'd greatly welcome expose queryid exposure in pg_stat_activity, and\n> also in log_line_prefix. I'm afraid that it's too late for pg12\n> inclusion, but I'll be happy to provide a patch for that for pg13.\n\nHere's a prototype patch for queryid exposure in pg_stat_activity and\nlog_line prefix.",
"msg_date": "Tue, 19 Mar 2019 15:51:37 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "The queryId depends on oids, so it is not stable enough for some purposes. \nFor example, to create a SQL identifier that survives across a server\nupgrade, or that can be shipped to another database, the queryId isn't\nusable. \n\nThe apg_plan_mgmt extensions keeps both its own stable SQL identifier as\nwell as the queryId, so it can be used to join to pg_stat_statements if\ndesired. If we were to standardize on one SQL identifier, it should be\nstable enough to survive a major version upgrade or to be the same in\ndifferent databases.\n\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Tue, 19 Mar 2019 10:23:55 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 1:24 PM Jim Finnerty <jfinnert@amazon.com> wrote:\n> The queryId depends on oids, so it is not stable enough for some purposes.\n> For example, to create a SQL identifier that survives across a server\n> upgrade, or that can be shipped to another database, the queryId isn't\n> usable.\n>\n> The apg_plan_mgmt extensions keeps both its own stable SQL identifier as\n> well as the queryId, so it can be used to join to pg_stat_statements if\n> desired. If we were to standardize on one SQL identifier, it should be\n> stable enough to survive a major version upgrade or to be the same in\n> different databases.\n\nIf Amazon would like to open-source its (AIUI) proprietary technology\nfor computing query IDs and propose it for inclusion in PostgreSQL,\ncool, but I think that is a separate question from whether people\nwould like more convenient access to the query ID technology that we\nhave today. I think it's 100% clear that they would like that, even\nas things stand, and therefore it does not make sense to block that\nbehind Amazon deciding to share what it already has or somebody else\ntrying to reimplement it.\n\nIf we need to have a space for both a core-standard query ID and\nanother query ID that is available for extension use, adding one more\nfield to struct Query, so we can have both coreQueryId and\nextensionQueryId or whatever, would be easy to do. It appears that\nthere's more use case than I would have guessed for custom query IDs.\nOn the other hand, it also appears that a lot of people would be very,\nvery happy to just be able to see the query ID field that already\nexists, both in pg_stat_statements in pg_stat_activity, and we\nshouldn't throw up unnecessary impediments in the way of making that\nhappen, at least IMHO.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Mar 2019 15:00:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Great, \nthank you Julien !\n\nWould it make sense to add it in auto explain ?\nI don't know for explain itself, but maybe ...\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Tue, 19 Mar 2019 12:38:05 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 8:38 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Would it make sense to add it in auto explain ?\n> I don't know for explain itself, but maybe ...\n\nI'd think that people interested in getting the queryid in the logs\nwould configure the log_line_prefix to display it consistently rather\nthan having it in only a subset of cases, so that's probably not\nreally needed.\n\n",
"msg_date": "Wed, 20 Mar 2019 00:52:44 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi Jim, Robert,\n\nAs this is a distinct subject from adding QueryId to pg_stat_activity,\nwould it be possible to continue the discussion \"new QueryId definition\" \n(for postgres open source software) here:\n\nhttps://www.postgresql.org/message-id/1553029215728-0.post@n3.nabble.com\n\nThanks in advance.\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 12:21:30 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": ">> Would it make sense to add it in auto explain ?\n>> I don't know for explain itself, but maybe ...\n\n> I'd think that people interested in getting the queryid in the logs\n> would configure the log_line_prefix to display it consistently rather\n> than having it in only a subset of cases, so that's probably not\n> really needed.\n\nOk.\nShoudn't you add this to commitfest ?\n\n\n\n\n\n\n\n\n\n\n\n\n>> Would it make sense to add it in auto explain ?\n>> I don't know for explain itself, but maybe ...\n\n> I'd think that people interested in getting the queryid in the logs\n> would configure the log_line_prefix to display it consistently rather\n> than having it in only a subset of cases, so that's probably not\n> really needed.\n\nOk.\nShoudn't you add this to commitfest ?",
"msg_date": "Mon, 25 Mar 2019 11:36:48 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 12:36 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> >> Would it make sense to add it in auto explain ?\n> >> I don't know for explain itself, but maybe ...\n>\n> > I'd think that people interested in getting the queryid in the logs\n> > would configure the log_line_prefix to display it consistently rather\n> > than having it in only a subset of cases, so that's probably not\n> > really needed.\n>\n> Ok.\n> Shoudn't you add this to commitfest ?\n\nI added it last week, see https://commitfest.postgresql.org/23/2069/\n\n",
"msg_date": "Mon, 25 Mar 2019 12:43:13 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": ">> Shoudn't you add this to commitfest ?\n\n> I added it last week, see https://commitfest.postgresql.org/23/2069/\n\nOups, sorry for the noise\n\n\n\n\n\n\n\n>> Shoudn't you add this to commitfest ?\n\n\n\n> I added it last week, see \nhttps://commitfest.postgresql.org/23/2069/\n\nOups, sorry for the noise",
"msg_date": "Mon, 25 Mar 2019 11:49:42 +0000",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 3:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Mar 18, 2019 at 7:33 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Mar 18, 2019 at 6:23 PM Yun Li <liyunjuanyong@gmail.com> wrote:\n> > >\n> > > Let's take one step back. Since queryId is stored in core as Julien pointed out, can we just add that global to the pg_stat_get_activity and ultimately exposed in pg_stat_activity view? Then no matter whether PGSS is on or off, or however the customer extensions are updating that filed, we expose that field in that view then enable user to leverage that id to join with pgss or their extension. Will this sounds a good idea?\n> >\n> > I'd greatly welcome expose queryid exposure in pg_stat_activity, and\n> > also in log_line_prefix. I'm afraid that it's too late for pg12\n> > inclusion, but I'll be happy to provide a patch for that for pg13.\n>\n> Here's a prototype patch for queryid exposure in pg_stat_activity and\n> log_line prefix.\n\nPatch doesn't apply anymore, PFA rebased v2.",
"msg_date": "Fri, 28 Jun 2019 16:39:15 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 4:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Mar 19, 2019 at 3:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Mon, Mar 18, 2019 at 7:33 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 18, 2019 at 6:23 PM Yun Li <liyunjuanyong@gmail.com> wrote:\n> > > >\n> > > > Let's take one step back. Since queryId is stored in core as Julien pointed out, can we just add that global to the pg_stat_get_activity and ultimately exposed in pg_stat_activity view? Then no matter whether PGSS is on or off, or however the customer extensions are updating that filed, we expose that field in that view then enable user to leverage that id to join with pgss or their extension. Will this sounds a good idea?\n> > >\n> > > I'd greatly welcome expose queryid exposure in pg_stat_activity, and\n> > > also in log_line_prefix. I'm afraid that it's too late for pg12\n> > > inclusion, but I'll be happy to provide a patch for that for pg13.\n> >\n> > Here's a prototype patch for queryid exposure in pg_stat_activity and\n> > log_line prefix.\n>\n> Patch doesn't apply anymore, PFA rebased v2.\n\nSorry, I missed the new pg_stat_gssapi view.",
"msg_date": "Fri, 28 Jun 2019 17:46:24 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 12:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On the other hand, it also appears that a lot of people would be very,\n> very happy to just be able to see the query ID field that already\n> exists, both in pg_stat_statements in pg_stat_activity, and we\n> shouldn't throw up unnecessary impediments in the way of making that\n> happen, at least IMHO.\n\n+1.\n\npg_stat_statements will already lose all the statistics that it\naggregated in the event of a hard crash. The trade-off that the query\njumbling logic makes is not a bad one, all things considered.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 28 Jun 2019 11:46:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 12:38 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n> Would it make sense to add it in auto explain ?\n> I don't know for explain itself, but maybe ...\n\nI think that it should appear in EXPLAIN. pg_stat_statements already\ncannot have a query hash of zero, so it might be okay to display it\nonly when its value is non-zero.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 28 Jun 2019 11:49:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "What reason to use pg_atomic_uint64?\r\nIn docs:\r\noccured - > occurred",
"msg_date": "Wed, 31 Jul 2019 08:54:18 +0000",
"msg_from": "Evgeny Efimkin <efimkin@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hello,\n\nOn Wed, Jul 31, 2019 at 10:55 AM Evgeny Efimkin <efimkin@yandex-team.ru> wrote:\n>\n> What reason to use pg_atomic_uint64?\n\nThe queryid is read and written without holding any lock on the PGPROC\nentry, so the pg_atomic_uint64 will guarantee that we get a consistent\nvalue in pg_stat_get_activity(). Other reads shouldn't be a problem\nas far as I remember.\n\n> In docs:\n> occured - > occurred\n\nThanks! I fixed it on my local branch.\n\n\n",
"msg_date": "Wed, 31 Jul 2019 23:51:40 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-31 23:51:40 +0200, Julien Rouhaud wrote:\n> On Wed, Jul 31, 2019 at 10:55 AM Evgeny Efimkin <efimkin@yandex-team.ru> wrote:\n> > What reason to use pg_atomic_uint64?\n> \n> The queryid is read and written without holding any lock on the PGPROC\n> entry, so the pg_atomic_uint64 will guarantee that we get a consistent\n> value in pg_stat_get_activity(). Other reads shouldn't be a problem\n> as far as I remember.\n\nHm, I don't think that's necessary in this case. That's what the\nst_changecount protocol is trying to ensure, no?\n\n\t/*\n\t * To avoid locking overhead, we use the following protocol: a backend\n\t * increments st_changecount before modifying its entry, and again after\n\t * finishing a modification. A would-be reader should note the value of\n\t * st_changecount, copy the entry into private memory, then check\n\t * st_changecount again. If the value hasn't changed, and if it's even,\n\t * the copy is valid; otherwise start over. This makes updates cheap\n\t * while reads are potentially expensive, but that's the tradeoff we want.\n\t *\n\t * The above protocol needs memory barriers to ensure that the apparent\n\t * order of execution is as it desires. Otherwise, for example, the CPU\n\t * might rearrange the code so that st_changecount is incremented twice\n\t * before the modification on a machine with weak memory ordering. Hence,\n\t * use the macros defined below for manipulating st_changecount, rather\n\t * than touching it directly.\n\t */\n\tint\t\t\tst_changecount;\n\n\nAnd if it were necessary, why wouldn't any of the other fields in\nPgBackendStatus need it? There's plenty of other fields written to\nwithout a lock, and several of those are also 8 bytes (so it's not a\ncase of assuming that 8 byte reads might not be atomic, but for byte\nreads are).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2019 14:59:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 11:59 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-07-31 23:51:40 +0200, Julien Rouhaud wrote:\n> > On Wed, Jul 31, 2019 at 10:55 AM Evgeny Efimkin <efimkin@yandex-team.ru> wrote:\n> > > What reason to use pg_atomic_uint64?\n> >\n> > The queryid is read and written without holding any lock on the PGPROC\n> > entry, so the pg_atomic_uint64 will guarantee that we get a consistent\n> > value in pg_stat_get_activity(). Other reads shouldn't be a problem\n> > as far as I remember.\n>\n> Hm, I don't think that's necessary in this case. That's what the\n> st_changecount protocol is trying to ensure, no?\n>\n> /*\n> * To avoid locking overhead, we use the following protocol: a backend\n> * increments st_changecount before modifying its entry, and again after\n> * finishing a modification. A would-be reader should note the value of\n> * st_changecount, copy the entry into private memory, then check\n> * st_changecount again. If the value hasn't changed, and if it's even,\n> * the copy is valid; otherwise start over. This makes updates cheap\n> * while reads are potentially expensive, but that's the tradeoff we want.\n> *\n> * The above protocol needs memory barriers to ensure that the apparent\n> * order of execution is as it desires. Otherwise, for example, the CPU\n> * might rearrange the code so that st_changecount is incremented twice\n> * before the modification on a machine with weak memory ordering. Hence,\n> * use the macros defined below for manipulating st_changecount, rather\n> * than touching it directly.\n> */\n> int st_changecount;\n>\n>\n> And if it were necessary, why wouldn't any of the other fields in\n> PgBackendStatus need it? 
There's plenty of other fields written to\n> without a lock, and several of those are also 8 bytes (so it's not a\n> case of assuming that 8 byte reads might not be atomic, but for byte\n> reads are).\n\nThis patch is actually storing the queryid in PGPROC, not in\nPgBackendStatus, thus the need for an atomic. I used PGPROC because\nthe value needs to be available in log_line_prefix() and spi.c, so\npgstat.c / PgBackendStatus didn't seem like the best interface in that\ncase. Is widening PGPROC is too expensive for this purpose?\n\n\n",
"msg_date": "Thu, 1 Aug 2019 08:45:45 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 2:46 AM Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n> This patch is actually storing the queryid in PGPROC, not in\n> PgBackendStatus, thus the need for an atomic. I used PGPROC because\n> the value needs to be available in log_line_prefix() and spi.c, so\n> pgstat.c / PgBackendStatus didn't seem like the best interface in that\n> case. Is widening PGPROC is too expensive for this purpose?\n\nI doubt it.\n\nHowever, I think that the fact that this patch adds 15 new calls to\npg_atomic_write_u64(&MyProc->queryId, ...) is probably not a good\nsign. It seems like we ought to be able to centralize it better than\nthat.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 1 Aug 2019 14:20:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 08:45:45 +0200, Julien Rouhaud wrote:\n> On Wed, Jul 31, 2019 at 11:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > And if it were necessary, why wouldn't any of the other fields in\n> > PgBackendStatus need it? There's plenty of other fields written to\n> > without a lock, and several of those are also 8 bytes (so it's not a\n> > case of assuming that 8 byte reads might not be atomic, but for byte\n> > reads are).\n> \n> This patch is actually storing the queryid in PGPROC, not in\n> PgBackendStatus, thus the need for an atomic. I used PGPROC because\n> the value needs to be available in log_line_prefix() and spi.c, so\n> pgstat.c / PgBackendStatus didn't seem like the best interface in that\n> case.\n\nHm. I'm not convinced that really is the case? You can just access\nMyBEentry, and read and update it? I mean, we do so at a frequency\nroughtly as high as high as the new queryid updates for things like\npgstat_report_activity(). Reading the value of your own backend you'd\nnot need to follow the changecount algorithm, I think, because it's only\nupdated from the current backend. If reading were a problem, you\ntrivially just could have a cache in a local variable, to avoid\naccessing shared memory.\n\n\n> Is widening PGPROC is too expensive for this purpose?\n\nWell, I'm mostly not a fan of putting even more in there, because it's\npretty hard to understand already. To me it architecturally status\ninformation doesn't belong there (In fact, I'm somewhat unhappy that\nwait_event_info etc in there, but that's at least commonly updated at\nthe same time as other fields in PGPROC).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 11:36:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2019-08-01 14:20:46 -0400, Robert Haas wrote:\n> However, I think that the fact that this patch adds 15 new calls to\n> pg_atomic_write_u64(&MyProc->queryId, ...) is probably not a good\n> sign. It seems like we ought to be able to centralize it better than\n> that.\n\n+1\n\n\n",
"msg_date": "Thu, 1 Aug 2019 11:36:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 8:36 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-08-01 08:45:45 +0200, Julien Rouhaud wrote:\n> > On Wed, Jul 31, 2019 at 11:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > > And if it were necessary, why wouldn't any of the other fields in\n> > > PgBackendStatus need it? There's plenty of other fields written to\n> > > without a lock, and several of those are also 8 bytes (so it's not a\n> > > case of assuming that 8 byte reads might not be atomic, but for byte\n> > > reads are).\n> >\n> > This patch is actually storing the queryid in PGPROC, not in\n> > PgBackendStatus, thus the need for an atomic. I used PGPROC because\n> > the value needs to be available in log_line_prefix() and spi.c, so\n> > pgstat.c / PgBackendStatus didn't seem like the best interface in that\n> > case.\n>\n> Hm. I'm not convinced that really is the case? You can just access\n> MyBEentry, and read and update it?\n\nSure, but it requires extra wrapper functions, and the st_changecount\ndance when writing the new value.\n\n> I mean, we do so at a frequency\n> roughtly as high as high as the new queryid updates for things like\n> pgstat_report_activity().\n\npgstat_report_activity() is only called for top-level statement. For\nthe queryid we need to track it down to all nested statements, which\ncould be way higher. But pgstat_progress_update_param() is called way\nmore than that.\n\n> Reading the value of your own backend you'd\n> not need to follow the changecount algorithm, I think, because it's only\n> updated from the current backend. If reading were a problem, you\n> trivially just could have a cache in a local variable, to avoid\n> accessing shared memory.\n\nYes definitely, except for pgstat_get_activity(), all reads are\nbackend local and should be totally safe to read as is.\n\n\n",
"msg_date": "Thu, 1 Aug 2019 22:42:23 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 8:36 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-08-01 14:20:46 -0400, Robert Haas wrote:\n> > However, I think that the fact that this patch adds 15 new calls to\n> > pg_atomic_write_u64(&MyProc->queryId, ...) is probably not a good\n> > sign. It seems like we ought to be able to centralize it better than\n> > that.\n>\n> +1\n\nUnfortunately I didn't find a better way to do that. Since you can\nhave nested execution, I don't see how to avoid adding extra code in\nevery parts of query execution.\n\n\n",
"msg_date": "Thu, 1 Aug 2019 22:49:48 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 22:42:23 +0200, Julien Rouhaud wrote:\n> On Thu, Aug 1, 2019 at 8:36 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-08-01 08:45:45 +0200, Julien Rouhaud wrote:\n> > > On Wed, Jul 31, 2019 at 11:59 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > And if it were necessary, why wouldn't any of the other fields in\n> > > > PgBackendStatus need it? There's plenty of other fields written to\n> > > > without a lock, and several of those are also 8 bytes (so it's not a\n> > > > case of assuming that 8 byte reads might not be atomic, but for byte\n> > > > reads are).\n> > >\n> > > This patch is actually storing the queryid in PGPROC, not in\n> > > PgBackendStatus, thus the need for an atomic. I used PGPROC because\n> > > the value needs to be available in log_line_prefix() and spi.c, so\n> > > pgstat.c / PgBackendStatus didn't seem like the best interface in that\n> > > case.\n> >\n> > Hm. I'm not convinced that really is the case? You can just access\n> > MyBEentry, and read and update it?\n> \n> Sure, but it requires extra wrapper functions, and the st_changecount\n> dance when writing the new value.\n\nSo? You need a wrapper function anyway, there's no way we're going to\nadd all those separate pg_atomic_write* calls directly.\n\n\n> > I mean, we do so at a frequency\n> > roughtly as high as high as the new queryid updates for things like\n> > pgstat_report_activity().\n> \n> pgstat_report_activity() is only called for top-level statement. For\n> the queryid we need to track it down to all nested statements, which\n> could be way higher.\n\nCompared to the overhead of executing a separate query the cost of\nsingle function call containing a MyBEentry update of an 8byte value\nseems almost guaranteed to be immeasurable. 
The executor startup alone\nis several orders of magnitude more expensive.\n\nI also think this proposed column should probably respect\nthe track_activities GUC.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 13:51:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 22:49:48 +0200, Julien Rouhaud wrote:\n> On Thu, Aug 1, 2019 at 8:36 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2019-08-01 14:20:46 -0400, Robert Haas wrote:\n> > > However, I think that the fact that this patch adds 15 new calls to\n> > > pg_atomic_write_u64(&MyProc->queryId, ...) is probably not a good\n> > > sign. It seems like we ought to be able to centralize it better than\n> > > that.\n> >\n> > +1\n> \n> Unfortunately I didn't find a better way to do that. Since you can\n> have nested execution, I don't see how to avoid adding extra code in\n> every parts of query execution.\n\nAt least my +1 is not primarily about the number of sites that need to\nhandle queryid changes, but that they all need to know about the way the\nqueryid is stored. Including how atomicity etc is handled. That\nknowledge should be in one or two places, not more. In a file where that\nknowledge makes sense.\n\nI'm *also* concerned about the number of places, as that makes it likely\nthat some have been missed/new ones will be introduced without the\nqueryid handling. But that wasn't what I was referring to above.\n\n\nI'm actually quite unconvinced that it's sensible to update the global\nvalue for nested queries. That'll mean e.g. the log_line_prefix and\npg_stat_activity values are most of the time going to be bogus while\nnested, because the querystring that's associated with those will *not*\nbe the value that the queryid corresponds to. elog.c uses\ndebug_query_string to log the statement, which is only updated for\ntop-level queries (outside of some exceptions like parallel workers for\nparallel queries in a function or stuff like that). And pg_stat_activity\nis also only updated for top level queries.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 14:05:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 10:52 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-08-01 22:42:23 +0200, Julien Rouhaud wrote:\n> > Sure, but it requires extra wrapper functions, and the st_changecount\n> > dance when writing the new value.\n>\n> So? You need a wrapper function anyway, there's no way we're going to\n> add all those separate pg_atomic_write* calls directly.\n\nOk\n\n> I also think this proposed column should probably respect\n> the track_activities GUC.\n\nOh indeed, I'll fix that when I'll be sure of the semantics to implement.\n\n\n",
"msg_date": "Thu, 1 Aug 2019 23:08:37 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 11:05 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> I'm actually quite unconvinced that it's sensible to update the global\n> value for nested queries. That'll mean e.g. the log_line_prefix and\n> pg_stat_activity values are most of the time going to be bogus while\n> nested, because the querystring that's associated with those will *not*\n> be the value that the queryid corresponds to. elog.c uses\n> debug_query_string to log the statement, which is only updated for\n> top-level queries (outside of some exceptions like parallel workers for\n> parallel queries in a function or stuff like that). And pg_stat_activity\n> is also only updated for top level queries.\n\nHaving the nested queryid seems indeed quite broken for\nlog_line_prefix. However having the nested queryid in\npg_stat_activity would be convenient to track what is a long stored\nfunctions currently doing. Maybe we could expose something like\ntop_level_queryid and current_queryid instead?\n\n\n",
"msg_date": "Fri, 2 Aug 2019 10:54:35 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 10:54:35 +0200, Julien Rouhaud wrote:\n> On Thu, Aug 1, 2019 at 11:05 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > I'm actually quite unconvinced that it's sensible to update the global\n> > value for nested queries. That'll mean e.g. the log_line_prefix and\n> > pg_stat_activity values are most of the time going to be bogus while\n> > nested, because the querystring that's associated with those will *not*\n> > be the value that the queryid corresponds to. elog.c uses\n> > debug_query_string to log the statement, which is only updated for\n> > top-level queries (outside of some exceptions like parallel workers for\n> > parallel queries in a function or stuff like that). And pg_stat_activity\n> > is also only updated for top level queries.\n> \n> Having the nested queryid seems indeed quite broken for\n> log_line_prefix. However having the nested queryid in\n> pg_stat_activity would be convenient to track what is a long stored\n> functions currently doing. Maybe we could expose something like\n> top_level_queryid and current_queryid instead?\n\nGiven that the query string is the toplevel one, I think that'd just be\nconfusing. And given the fact that it adds *substantial* additional\ncomplexity, I'd just rip the subcommand bits out.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 16:20:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hi,\n\nOn Sat, Aug 3, 2019 at 1:21 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-08-02 10:54:35 +0200, Julien Rouhaud wrote:\n> > However having the nested queryid in\n> > pg_stat_activity would be convenient to track what is a long stored\n> > functions currently doing. Maybe we could expose something like\n> > top_level_queryid and current_queryid instead?\n>\n> Given that the query string is the toplevel one, I think that'd just be\n> confusing. And given the fact that it adds *substantial* additional\n> complexity, I'd just rip the subcommand bits out.\n\nOk, so here's a version that only exposes the top-level queryid only.\nThere can still be discrepancies with the query field, if a\nmulti-command string is provided. The queryid will be updated each\ntime a new top level statement is executed.\n\nAs the queryid cannot be immediately known, and may never exist at all\nif a query fails to parse, here are the heuristic I used to update the\nstored queryid:\n\n- it's reset to 0 each time pgstat_report_activity(STATE_RUNNING) is\ncalled. This way, we're sure that we don't display last query's\nqueryid in the logs if the next query fails to parse\n- it's also reset to 0 at the beginning of exec_simple_query() loop on\nthe parsetree_list (for multi-command string case)\n- pg_analyze_and_rewrite() and pg_analyze_and_rewrite_params() will\nreport the new queryid after parse analysis.\n- a non-zero queryid will only be updated if the stored one is zero\n\nThis should also work as intended for background worker using SPI,\nprovided that they correctly call pgstat_report_activity. I also\nmodified ExecInitParallelPlan() to publish the queryId in the\nserialized plannedStmt, so ParallelQueryMain() can report it to make\nthe queryid available in the parallel workers too.\n\nNote that this patch makes it clear that a zero queryid means no\nqueryid computed (and NULL will be displayed in such case in\npg_stat_activity). 
pg_stat_statements already makes sure that it\ncannot compute a zero queryid.\n\nIt also assume that any extension computing a queryid will do that in\nthe post_parse_analysis hook, which seems like a sane requirement. We\nmay want to have a dedicated hook for that instead, if more people get\ninterested in having the queryid only, possibly different\nimplementations, if it becomes available outside pgss.",
"msg_date": "Sat, 3 Aug 2019 23:58:13 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "> However having the nested queryid in \n> pg_stat_activity would be convenient to track\n> what is a long stored functions currently doing.\n\n+1\n\nAnd this could permit to get wait event sampling per queryid when\npg_stat_statements.track = all\n\nRegards \nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sun, 4 Aug 2019 00:04:01 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "At Sun, 4 Aug 2019 00:04:01 -0700 (MST), legrand legrand <legrand_legrand@hotmail.com> wrote in <1564902241482-0.post@n3.nabble.com>\n> > However having the nested queryid in \n> > pg_stat_activity would be convenient to track\n> > what is a long stored functions currently doing.\n> \n> +1\n> \n> And this could permit to get wait event sampling per queryid when\n> pg_stat_statements.track = all\n\nI'm strongly on this side emotionally, but also I'm on Tom and\nAndres's side that exposing querid that way is not the right\nthing.\n\nDoing that means we don't need exact correspondence between\ntop-level query and queryId (in nested or multistatement queries)\nin this patch. pg_stat_statements will allow us to do the same\nthing by having additional uint64[MaxBackends] array in\npgssSharedState, instead of expanding PgBackendStatus array in\ncore by the same size.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Aug 2019 16:28:24 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 9:28 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sun, 4 Aug 2019 00:04:01 -0700 (MST), legrand legrand <legrand_legrand@hotmail.com> wrote in <1564902241482-0.post@n3.nabble.com>\n> > > However having the nested queryid in\n> > > pg_stat_activity would be convenient to track\n> > > what is a long stored functions currently doing.\n> >\n> > +1\n> >\n> > And this could permit to get wait event sampling per queryid when\n> > pg_stat_statements.track = all\n>\n> I'm strongly on this side emotionally, but also I'm on Tom and\n> Andres's side that exposing querid that way is not the right\n> thing.\n>\n> Doing that means we don't need exact correspondence between\n> top-level query and queryId (in nested or multistatement queries)\n> in this patch. pg_stat_statements will allow us to do the same\n> thing by having additional uint64[MaxBackends] array in\n> pgssSharedState, instead of expanding PgBackendStatus array in\n> core by the same size.\n\nSure, but the problem with this approach is that all extensions that\ncompute their own queryid would have to do the same. I hope that we\ncan come up with an approach friendlier for those extensions.\n\n\n",
"msg_date": "Mon, 5 Aug 2019 10:35:11 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Kyotaro Horiguchi-4 wrote\n> At Sun, 4 Aug 2019 00:04:01 -0700 (MST), legrand legrand <\n\n> legrand_legrand@\n\n> > wrote in <\n\n> 1564902241482-0.post@.nabble\n\n>>\n>> > However having the nested queryid in \n>> > pg_stat_activity would be convenient to track\n>> > what is a long stored functions currently doing.\n>> \n>> +1\n>> \n>> And this could permit to get wait event sampling per queryid when\n>> pg_stat_statements.track = all\n> \n> I'm strongly on this side emotionally, but also I'm on Tom and\n> Andres's side that exposing querid that way is not the right\n> thing.\n> \n> Doing that means we don't need exact correspondence between\n> top-level query and queryId (in nested or multistatement queries)\n> in this patch. pg_stat_statements will allow us to do the same\n> thing by having additional uint64[MaxBackends] array in\n> pgssSharedState, instead of expanding PgBackendStatus array in\n> core by the same size.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\nHi Kyotaro,\nThank you for this answer.\nWhat you propose here is already available \nInside pg_stat_sql_plans extension (a derivative from \nPg_stat_statements and pg_store_plans)\nAnd I’m used to this queryid behavior with top Level\nQueries...\nMy emotion was high but I will accept it !\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 5 Aug 2019 10:30:29 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHI!\r\npatch is look good for me.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 07 Aug 2019 09:03:21 +0000",
"msg_from": "Evgeny Efimkin <efimkin@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Aug 07, 2019 at 09:03:21AM +0000, Evgeny Efimkin wrote:\n> The new status of this patch is: Ready for Committer\n\nI may be wrong of course, but it looks that this is wanted and the\ncurrent shape of the patch looks sensible:\n- Register the query ID using a backend entry.\n- Only consider the top-level query.\n\nAn invalid query ID is assumed to be 0 in the patch, per the way it is\ndefined in pg_stat_statements. However this also maps with the case\nwhere we have a utility statement.\n\n+ * We only report the top-level query identifiers. The stored queryid is\n+ * reset when a backend call pgstat_report_activity(STATE_RUNNING), or with\ns/call/calls/\n\n+ /*\n+ * We only report the top-level query identifiers. The stored queryid is\n+ * reset when a backend call pgstat_report_activity(STATE_RUNNING), or with\n+ * an explicit call to this function. If the saved query identifier is not\n+ * zero it means that it's not a top-level command, so ignore the one\n+ * provided unless it's an explicit call to reset the identifier.\n+ */\n+ if (queryId != 0 && beentry->st_queryid != 0)\n+ return;\nHmm. I am wondering if we shouldn't have an API dedicated to the\nreset of the query ID. That logic looks rather brittle..\n\nWouldn't it be better (and more consistent) to update the query ID in\nparse_analyze_varparams() and parse_analyze() as well after going\nthrough the post_parse_analyze hook instead of pg_analyze_and_rewrite?\n\n+ /*\n+ * If a new query is started, we reset the query identifier as it'll only\n+ * be known after parse analysis, to avoid reporting last query's\n+ * identifier.\n+ */\n+ if (state == STATE_RUNNING)\n+ beentry->st_queryid = 0\nI don't quite get why you don't reset the counter in other cases as\nwell. If the backend entry is idle in transaction or in an idle\nstate, it seems to me that we should not report the query ID of the\nlast query run in the transaction. 
And that would make the reset in\nexec_simple_query() unnecessary, no?\n--\nMichael",
"msg_date": "Wed, 11 Sep 2019 13:45:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Thanks for looking at it!\n\nOn Wed, Sep 11, 2019 at 6:45 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> An invalid query ID is assumed to be 0 in the patch, per the way it is\n> defined in pg_stat_statements. However this also maps with the case\n> where we have a utility statement.\n\nOh indeed. Which means that if a utility statements later calls\nparse_analyze or friends, this patch would report an unexpected\nqueryid. That's at least possible for something like\n\nCOPY (SELECT * FROM tbl) TO ...\n\nThe thing is that pg_stat_statements assigns a 0 queryid in the\npost_parse_analyze_hook to recognize utility statements and avoid\ntracking instrumentation twice in case of utility statements, and then\ncompute a queryid base on a hash of the query text. Maybe we could\ninstead fully reserve queryid \"2\" for utility statements (so forcing\nqueryid \"1\" for standard queries if jumbling returns 0 *or* 2 instead\nof only 0), and use \"2\" as the identifier for utility statement\ninstead of \"0\"?\n\n> + /*\n> + * We only report the top-level query identifiers. The stored queryid is\n> + * reset when a backend call pgstat_report_activity(STATE_RUNNING), or with\n> + * an explicit call to this function. If the saved query identifier is not\n> + * zero it means that it's not a top-level command, so ignore the one\n> + * provided unless it's an explicit call to reset the identifier.\n> + */\n> + if (queryId != 0 && beentry->st_queryid != 0)\n> + return;\n> Hmm. I am wondering if we shouldn't have an API dedicated to the\n> reset of the query ID. That logic looks rather brittle..\n\nHow about adding a \"bool force\" parameter to allow resetting the queryid to 0?\n\n> Wouldn't it be better (and more consistent) to update the query ID in\n> parse_analyze_varparams() and parse_analyze() as well after going\n> through the post_parse_analyze hook instead of pg_analyze_and_rewrite?\n\nI thought about it without knowing what would be best. 
I'll change to\nreport the queryid right after calling post_parse_analyze_hook then.\n\n> + /*\n> + * If a new query is started, we reset the query identifier as it'll only\n> + * be known after parse analysis, to avoid reporting last query's\n> + * identifier.\n> + */\n> + if (state == STATE_RUNNING)\n> + beentry->st_queryid = 0\n> I don't quite get why you don't reset the counter in other cases as\n> well. If the backend entry is idle in transaction or in an idle\n> state, it seems to me that we should not report the query ID of the\n> last query run in the transaction. And that would make the reset in\n> exec_simple_query() unnecessary, no?\n\nI'm reproducing the same behavior as for the query text, ie. showing\nthe information about the last executed query text if state is idle:\n\n+ <entry><structfield>queryid</structfield></entry>\n+ <entry><type>bigint</type></entry>\n+ <entry>Identifier of this backend's most recent query. If\n+ <structfield>state</structfield> is <literal>active</literal> this field\n+ shows the identifier of the currently executing query. In all other\n+ states, it shows the identifier of last query that was executed.\n\nI think that showing the last executed query's queryid is as useful as\nthe query text. Also, while avoiding a reset in exec_simple_query()\nit'd be required to do such reset in case of error during query\nexecution, so that wouldn't make things quite simpler..\n\n\n",
"msg_date": "Wed, 11 Sep 2019 18:30:22 +0200",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Sep 11, 2019 at 06:30:22PM +0200, Julien Rouhaud wrote:\n> The thing is that pg_stat_statements assigns a 0 queryid in the\n> post_parse_analyze_hook to recognize utility statements and avoid\n> tracking instrumentation twice in case of utility statements, and then\n> compute a queryid base on a hash of the query text. Maybe we could\n> instead fully reserve queryid \"2\" for utility statements (so forcing\n> queryid \"1\" for standard queries if jumbling returns 0 *or* 2 instead\n> of only 0), and use \"2\" as the identifier for utility statement\n> instead of \"0\"?\n\nHmm. Not sure. At this stage it would be nice to gather more input\non the matter, and FWIW, I don't like much the assumption that a query\nID of 0 is perhaps a utility statement, or perhaps nothing depending\non the state of a backend entry, or even perhaps something else\ndepending how on how modules make use and define such query IDs.\n--\nMichael",
"msg_date": "Mon, 11 Nov 2019 17:37:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 05:37:30PM +0900, Michael Paquier wrote:\n> On Wed, Sep 11, 2019 at 06:30:22PM +0200, Julien Rouhaud wrote:\n> > The thing is that pg_stat_statements assigns a 0 queryid in the\n> > post_parse_analyze_hook to recognize utility statements and avoid\n> > tracking instrumentation twice in case of utility statements, and then\n> > compute a queryid base on a hash of the query text. Maybe we could\n> > instead fully reserve queryid \"2\" for utility statements (so forcing\n> > queryid \"1\" for standard queries if jumbling returns 0 *or* 2 instead\n> > of only 0), and use \"2\" as the identifier for utility statement\n> > instead of \"0\"?\n> \n> Hmm. Not sure. At this stage it would be nice to gather more input\n> on the matter, and FWIW, I don't like much the assumption that a query\n> ID of 0 is perhaps a utility statement, or perhaps nothing depending\n> on the state of a backend entry, or even perhaps something else\n> depending how on how modules make use and define such query IDs.\n\nI thought each extension would export a function to compute the query\nid, and you would all that function with the pg_stat_activity.query\nstring.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 12 Nov 2019 22:15:23 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Nov 13, 2019 at 4:15 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Nov 11, 2019 at 05:37:30PM +0900, Michael Paquier wrote:\n> > On Wed, Sep 11, 2019 at 06:30:22PM +0200, Julien Rouhaud wrote:\n> > > The thing is that pg_stat_statements assigns a 0 queryid in the\n> > > post_parse_analyze_hook to recognize utility statements and avoid\n> > > tracking instrumentation twice in case of utility statements, and then\n> > > compute a queryid base on a hash of the query text. Maybe we could\n> > > instead fully reserve queryid \"2\" for utility statements (so forcing\n> > > queryid \"1\" for standard queries if jumbling returns 0 *or* 2 instead\n> > > of only 0), and use \"2\" as the identifier for utility statement\n> > > instead of \"0\"?\n> >\n> > Hmm. Not sure. At this stage it would be nice to gather more input\n> > on the matter, and FWIW, I don't like much the assumption that a query\n> > ID of 0 is perhaps a utility statement, or perhaps nothing depending\n> > on the state of a backend entry, or even perhaps something else\n> > depending how on how modules make use and define such query IDs.\n>\n> I thought each extension would export a function to compute the query\n> id, and you would all that function with the pg_stat_activity.query\n> string.\n\nI'd really like to have the queryid function available through SQL,\nbut I think that this specific case wouldn't work very well for\npg_stat_statements' approach as it's working with oid. The query\nstring in pg_stat_activity is the user provided one rather than a\nfully-qualified version, so in order to get that query's queryid, you\nneed to know the exact search_path in use in that backend, and that's\nnot something available.\n\n\n",
"msg_date": "Wed, 13 Nov 2019 12:53:09 +0100",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
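Julien's search_path point can be illustrated with a toy fingerprint: the same user-supplied text yields different identifiers once names are resolved against different schemas. Everything here (the resolver, the hash choice) is a stand-in for illustration, not how pg_stat_statements actually jumbles queries:

```python
import hashlib

def resolve(text, search_path):
    # Stand-in resolver: qualify the bare relation name "t1" with the
    # first schema on search_path, as parse analysis would.
    return text.replace("t1", search_path[0] + ".t1")

def toy_queryid(text, search_path):
    # Fingerprint the fully-qualified form, not the raw text.
    qualified = resolve(text, search_path)
    digest = hashlib.sha256(qualified.encode()).digest()
    return int.from_bytes(digest[:8], "big")
```

Since pg_stat_activity only carries the raw query string, a SQL-level queryid function would have no way to perform the `resolve` step for another backend.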
{
"msg_contents": "On Wed, Nov 13, 2019 at 12:53:09PM +0100, Julien Rouhaud wrote:\n> I'd really like to have the queryid function available through SQL,\n> but I think that this specific case wouldn't work very well for\n> pg_stat_statements' approach as it's working with oid. The query\n> string in pg_stat_activity is the user provided one rather than a\n> fully-qualified version, so in order to get that query's queryid, you\n> need to know the exact search_path in use in that backend, and that's\n> not something available.\n\nYeah.. So, we have a patch marked as ready for committer here, and it\nseems to me that we have a couple of issues to discuss more about\nfirst particularly this query ID of 0. Again, do others have more\nany input to offer?\n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 15:19:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 03:19:49PM +0900, Michael Paquier wrote:\n> On Wed, Nov 13, 2019 at 12:53:09PM +0100, Julien Rouhaud wrote:\n>> I'd really like to have the queryid function available through SQL,\n>> but I think that this specific case wouldn't work very well for\n>> pg_stat_statements' approach as it's working with oid. The query\n>> string in pg_stat_activity is the user provided one rather than a\n>> fully-qualified version, so in order to get that query's queryid, you\n>> need to know the exact search_path in use in that backend, and that's\n>> not something available.\n> \n> Yeah.. So, we have a patch marked as ready for committer here, and it\n> seems to me that we have a couple of issues to discuss more about\n> first particularly this query ID of 0. Again, do others have more\n> any input to offer?\n\nAnd while on it, the latest patch does not apply, so a rebase is\nneeded here. \n--\nMichael",
"msg_date": "Fri, 29 Nov 2019 15:20:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 7:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Nov 29, 2019 at 03:19:49PM +0900, Michael Paquier wrote:\n> > On Wed, Nov 13, 2019 at 12:53:09PM +0100, Julien Rouhaud wrote:\n> >> I'd really like to have the queryid function available through SQL,\n> >> but I think that this specific case wouldn't work very well for\n> >> pg_stat_statements' approach as it's working with oid. The query\n> >> string in pg_stat_activity is the user provided one rather than a\n> >> fully-qualified version, so in order to get that query's queryid, you\n> >> need to know the exact search_path in use in that backend, and that's\n> >> not something available.\n> >\n> > Yeah.. So, we have a patch marked as ready for committer here, and it\n> > seems to me that we have a couple of issues to discuss more about\n> > first particularly this query ID of 0. Again, do others have more\n> > any input to offer?\n\nI just realized that with current infrastructure it's not possible to\ndisplay a utility queryid. We need to recognize utility to not\nprocess the counters twice (once in processUtility, once in the\nunderlying executor), so we don't provide a queryid for utility\nstatements in parse analysis. Current magic value 0 has the side\neffect of showing an invalid queryid for all utilty statements, and\nusing a magic value different from 0 will just always display that\nmagic value. We could instead add another field in the Query and\nPlannedStmt structs, say \"int queryid_flags\", that extensions could\nuse for their needs?\n\n> And while on it, the latest patch does not apply, so a rebase is\n> needed here.\n\nYep, I noticed that this morning. I already rebased the patch\nlocally, I'll send a new version with new modifications when we reach\nan agreement on the utility issue.\n\n\n",
"msg_date": "Fri, 29 Nov 2019 09:39:09 +0100",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Fri, Nov 29, 2019 at 09:39:09AM +0100, Julien Rouhaud wrote:\n>On Fri, Nov 29, 2019 at 7:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Fri, Nov 29, 2019 at 03:19:49PM +0900, Michael Paquier wrote:\n>> > On Wed, Nov 13, 2019 at 12:53:09PM +0100, Julien Rouhaud wrote:\n>> >> I'd really like to have the queryid function available through SQL,\n>> >> but I think that this specific case wouldn't work very well for\n>> >> pg_stat_statements' approach as it's working with oid. The query\n>> >> string in pg_stat_activity is the user provided one rather than a\n>> >> fully-qualified version, so in order to get that query's queryid, you\n>> >> need to know the exact search_path in use in that backend, and that's\n>> >> not something available.\n>> >\n>> > Yeah.. So, we have a patch marked as ready for committer here, and it\n>> > seems to me that we have a couple of issues to discuss more about\n>> > first particularly this query ID of 0. Again, do others have more\n>> > any input to offer?\n>\n>I just realized that with current infrastructure it's not possible to\n>display a utility queryid. We need to recognize utility to not\n>process the counters twice (once in processUtility, once in the\n>underlying executor), so we don't provide a queryid for utility\n>statements in parse analysis. Current magic value 0 has the side\n>effect of showing an invalid queryid for all utilty statements, and\n>using a magic value different from 0 will just always display that\n>magic value. We could instead add another field in the Query and\n>PlannedStmt structs, say \"int queryid_flags\", that extensions could\n>use for their needs?\n>\n>> And while on it, the latest patch does not apply, so a rebase is\n>> needed here.\n>\n>Yep, I noticed that this morning. I already rebased the patch\n>locally, I'll send a new version with new modifications when we reach\n>an agreement on the utility issue.\n>\n\nWell, this patch was in WoA since November, but now that I look at it\nthat might have been wrong - we're clearly waiting for agreement on how\nto handle queryid for utility commands. I suspect the WoA status might\nhave been driving people away from this thread :-(\n\nI've switched the patch to \"needs review\" and moved it to the next CF.\nWhat I think needs to happen is we get a patch implementing one of the\nproposed solutions, and discuss that.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Feb 2020 12:30:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sat, Feb 1, 2020 at 12:30 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Fri, Nov 29, 2019 at 09:39:09AM +0100, Julien Rouhaud wrote:\n> >On Fri, Nov 29, 2019 at 7:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Fri, Nov 29, 2019 at 03:19:49PM +0900, Michael Paquier wrote:\n> >> > On Wed, Nov 13, 2019 at 12:53:09PM +0100, Julien Rouhaud wrote:\n> >> >> I'd really like to have the queryid function available through SQL,\n> >> >> but I think that this specific case wouldn't work very well for\n> >> >> pg_stat_statements' approach as it's working with oid. The query\n> >> >> string in pg_stat_activity is the user provided one rather than a\n> >> >> fully-qualified version, so in order to get that query's queryid, you\n> >> >> need to know the exact search_path in use in that backend, and that's\n> >> >> not something available.\n> >> >\n> >> > Yeah.. So, we have a patch marked as ready for committer here, and it\n> >> > seems to me that we have a couple of issues to discuss more about\n> >> > first particularly this query ID of 0. Again, do others have more\n> >> > any input to offer?\n> >\n> >I just realized that with current infrastructure it's not possible to\n> >display a utility queryid. We need to recognize utility to not\n> >process the counters twice (once in processUtility, once in the\n> >underlying executor), so we don't provide a queryid for utility\n> >statements in parse analysis. Current magic value 0 has the side\n> >effect of showing an invalid queryid for all utilty statements, and\n> >using a magic value different from 0 will just always display that\n> >magic value. We could instead add another field in the Query and\n> >PlannedStmt structs, say \"int queryid_flags\", that extensions could\n> >use for their needs?\n> >\n> >> And while on it, the latest patch does not apply, so a rebase is\n> >> needed here.\n> >\n> >Yep, I noticed that this morning. I already rebased the patch\n> >locally, I'll send a new version with new modifications when we reach\n> >an agreement on the utility issue.\n> >\n>\n> Well, this patch was in WoA since November, but now that I look at it\n> that might have been wrong - we're clearly waiting for agreement on how\n> to handle queryid for utility commands. I suspect the WoA status might\n> have been driving people away from this thread :-(\n\nOh, indeed.\n\n> I've switched the patch to \"needs review\" and moved it to the next CF.\n\nThanks\n\n> What I think needs to happen is we get a patch implementing one of the\n> proposed solutions, and discuss that.\n\nThere's also the possibility to reserve 1 bit of the hash to know if\nthis is a utility command or not, although I don't recall right now\nall the possible issues with utility commands and some special\nhandling of them. I'll work on it before the next commitfest.\n\n\n",
"msg_date": "Wed, 5 Feb 2020 15:32:36 +0100",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 9:32 AM Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n> There's also the possibility to reserve 1 bit of the hash to know if\n> this is a utility command or not, although I don't recall right now\n> all the possible issues with utility commands and some special\n> handling of them. I'll work on it before the next commitfest.\n\nFWIW, I don't really see why it would be bad to have 0 mean that\n\"there's no query ID for some reason\" without caring whether that's\nbecause the current statement is a utility statement or because\nthere's no statement in progress at all or whatever else. The user\nprobably doesn't need our help to distinguish between \"no statement\"\nand \"utility statement\", right?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Feb 2020 14:59:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Feb 06, 2020 at 02:59:09PM -0500, Robert Haas wrote:\n> On Wed, Feb 5, 2020 at 9:32 AM Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n> > There's also the possibility to reserve 1 bit of the hash to know if\n> > this is a utility command or not, although I don't recall right now\n> > all the possible issues with utility commands and some special\n> > handling of them. I'll work on it before the next commitfest.\n>\n> FWIW, I don't really see why it would be bad to have 0 mean that\n> \"there's no query ID for some reason\" without caring whether that's\n> because the current statement is a utility statement or because\n> there's no statement in progress at all or whatever else. The user\n> probably doesn't need our help to distinguish between \"no statement\"\n> and \"utility statement\", right?\n\nSure, but if we don't fix that it means that we also won't expose any queryid\nfor utility statement, even if pg_stat_statements is configured to track those\n(with a very poor queryid handling, but still).\n\nWhile looking at this again, I realized that pg_stat_statements doesn't compute\na queryid during the post parse analysis hook just to make sure that no query\nidentifier will be set during executorStart and the rest of executor functions.\n\nAFAICT, that can't happen anyway since pg_plan_queries() will discard any\ncomputed queryid for utility statements. This seems to be an oversight due to\noriginal pg_stat_statements implementation, so I fixed this.\n\nThen, as processUtility is called between parse analysis and executor, I think\nthat we can simply work around this by computing utility statements query\nidentifier during parse analysis, removing it in pgss_ProcessUtility and\nkeeping a copy of it for the pgss_store calls in that function, as done in the\nattached v5.\n\nThis fixes everything except EXECUTE statements, which has to get the\nunderlying query's queryid. The problem is that EXECUTE won't get through\nparse analysis, so while it's correctly handled for execution and pgss_store,\nit's not being exposed in pg_stat_activity and log_line_prefix. To fix it, I\nadded an extra call to pgstat_report_queryid in executorStart. As this\nfunction is a no-op if a queryid is already exposed, this shouldn't cause any\nharm and fix any other cases of query execution that don't go through parse\nanalysis.\n\nFinally, DEALLOCATE is entirely ignored by pg_stat_statements, so those\nstatements will always be reported with a NULL/0 queryid, but this is\nconsistent as it's also not present in pg_stat_statements() SRF.",
"msg_date": "Fri, 7 Feb 2020 11:12:50 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
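The save/clear/restore flow described in the v5 approach above (compute the utility statement's queryid at parse analysis, clear it before the nested executor runs, keep a private copy for the single pgss_store call) can be sketched as follows. This is a hypothetical model of the control flow, not the actual pgss_ProcessUtility C code:

```python
# Toy model of the double-counting avoidance described above.
# All names are illustrative, not pg_stat_statements internals.
class Stmt:
    def __init__(self, queryid):
        self.queryid = queryid

counters = {}

def executor(stmt):
    # Stand-in for the executor-level instrumentation: records only
    # when a queryid is set on the statement.
    if stmt.queryid:
        counters[stmt.queryid] = counters.get(stmt.queryid, 0) + 1

def process_utility(stmt, run_nested):
    saved = stmt.queryid
    stmt.queryid = 0      # nested executor sees no queryid...
    run_nested(stmt)      # ...so it records nothing
    counters[saved] = counters.get(saved, 0) + 1  # record exactly once
```

Running a utility statement through `process_utility` then bumps its counter once, even though the nested executor is invoked too, which is the invariant the patch is trying to preserve.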
{
"msg_contents": "On Fri, Feb 7, 2020 at 11:12 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Feb 06, 2020 at 02:59:09PM -0500, Robert Haas wrote:\n> > On Wed, Feb 5, 2020 at 9:32 AM Julien Rouhaud <julien.rouhaud@free.fr> wrote:\n> > > There's also the possibility to reserve 1 bit of the hash to know if\n> > > this is a utility command or not, although I don't recall right now\n> > > all the possible issues with utility commands and some special\n> > > handling of them. I'll work on it before the next commitfest.\n> >\n> > FWIW, I don't really see why it would be bad to have 0 mean that\n> > \"there's no query ID for some reason\" without caring whether that's\n> > because the current statement is a utility statement or because\n> > there's no statement in progress at all or whatever else. The user\n> > probably doesn't need our help to distinguish between \"no statement\"\n> > and \"utility statement\", right?\n>\n> Sure, but if we don't fix that it means that we also won't expose any queryid\n> for utility statement, even if pg_stat_statements is configured to track those\n> (with a very poor queryid handling, but still).\n>\n> While looking at this again, I realized that pg_stat_statements doesn't compute\n> a queryid during the post parse analysis hook just to make sure that no query\n> identifier will be set during executorStart and the rest of executor functions.\n>\n> AFAICT, that can't happen anyway since pg_plan_queries() will discard any\n> computed queryid for utility statements. This seems to be an oversight due to\n> original pg_stat_statements implementation, so I fixed this.\n>\n> Then, as processUtility is called between parse analysis and executor, I think\n> that we can simply work around this by computing utility statements query\n> identifier during parse analysis, removing it in pgss_ProcessUtility and\n> keeping a copy of it for the pgss_store calls in that function, as done in the\n> attached v5.\n>\n> This fixes everything except EXECUTE statements, which has to get the\n> underlying query's queryid. The problem is that EXECUTE won't get through\n> parse analysis, so while it's correctly handled for execution and pgss_store,\n> it's not being exposed in pg_stat_activity and log_line_prefix. To fix it, I\n> added an extra call to pgstat_report_queryid in executorStart. As this\n> function is a no-op if a queryid is already exposed, this shouldn't cause any\n> harm and fix any other cases of query execution that don't go through parse\n> analysis.\n>\n> Finally, DEALLOCATE is entirely ignored by pg_stat_statements, so those\n> statements will always be reported with a NULL/0 queryid, but this is\n> consistent as it's also not present in pg_stat_statements() SRF.\n\ncfbot reports a failure since 2f9661311b (command completion tag\nchange), so here's a rebased v6, no change otherwise.",
"msg_date": "Tue, 3 Mar 2020 16:24:59 +0100",
"msg_from": "Julien Rouhaud <julien.rouhaud@free.fr>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 11:49:53AM -0700, Peter Geoghegan wrote:\n> On Tue, Mar 19, 2019 at 12:38 PM legrand legrand\n> <legrand_legrand@hotmail.com> wrote:\n> > Would it make sense to add it in auto explain ?\n> > I don't know for explain itself, but maybe ...\n>\n> I think that it should appear in EXPLAIN. pg_stat_statements already\n> cannot have a query hash of zero, so it might be okay to display it\n> only when its value is non-zero.\n\nI had forgotten about this. After looking at it, I can see a few issues.\n\nFor now post_parse_analyze_hook isn't called for the underlying statement, so\nwe don't have the queryid. And we can't compute the queryid for the underlying\nquery in the initial post_parse_analyze_hook call as we don't want the executor\nto have a queryid set in that case to avoid cumulating counters for both the\nexplain and the query.\n\nWe could add an extra call in ExplainQuery, but this will be ignored by\npg_stat_statements unless you set pg_stat_statements.track to all. Also,\npgss_post_parse_analyze will try to record an entry with the normalized query\ntext if no one exists yet and if any constant where removed. The problem is\nthat, as I already mentioned in [1], the underlying query doesn't have\nquery_location or query_len valued, so the recorded query text will at least\ncontain the explain part of the input query.\n\n[1] https://www.postgresql.org/message-id/CAOBaU_Y-y%2BVOhTZgDOuDk6-9V72-ZXdWccXo_kx0P4DDBEEh9A%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 8 Mar 2020 15:26:44 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 03, 2020 at 04:24:59PM +0100, Julien Rouhaud wrote:\n>\n> cfbot reports a failure since 2f9661311b (command completion tag\n> change), so here's a rebased v6, no change otherwise.\n\n\nConflict with 8e8a0becb3 (Unify several ways to tracking backend type), thanks\nagain to cfbot, rebased v7 attached.",
"msg_date": "Sat, 14 Mar 2020 18:53:51 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sat, Mar 14, 2020 at 06:53:51PM +0100, Julien Rouhaud wrote:\n> On Tue, Mar 03, 2020 at 04:24:59PM +0100, Julien Rouhaud wrote:\n> >\n> > cfbot reports a failure since 2f9661311b (command completion tag\n> > change), so here's a rebased v6, no change otherwise.\n>\n>\n> Conflict with 8e8a0becb3 (Unify several ways to tracking backend type), thanks\n> again to cfbot, rebased v7 attached.\n\n\nBit repetita.",
"msg_date": "Mon, 16 Mar 2020 15:43:12 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "New conflict, rebased v9 attached.",
"msg_date": "Thu, 2 Apr 2020 15:25:06 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hi Julien,\n\nOn 2020/04/02 22:25, Julien Rouhaud wrote:\n> New conflict, rebased v9 attached.\n\nI tested the patch on the head (c7654f6a3) and\nthe result was fine. See below:\n\n$ make installcheck-world\n=====================\n All 1 tests passed.\n=====================\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Tue, 07 Apr 2020 15:40:34 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Apr 7, 2020 at 8:40 AM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n>\n> Hi Julien,\n>\n> On 2020/04/02 22:25, Julien Rouhaud wrote:\n> > New conflict, rebased v9 attached.\n>\n> I tested the patch on the head (c7654f6a3) and\n> the result was fine. See below:\n>\n> $ make installcheck-world\n> =====================\n> All 1 tests passed.\n> =====================\n\nThanks Yamada-san! Unfortunately this patch still didn't attract any\ncommitter, so I moved it to the next commitfest.\n\n\n",
"msg_date": "Wed, 8 Apr 2020 16:37:45 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi,\n\nv9 patch fails to apply to HEAD, could you check and rebase it?\n\nAnd here are minor typos.\n\n 79 + * utility statements. Note that we don't compute a queryId\nfor prepared\n 80 + * statemets related utility, as those will inherit from the\nunderlying\n 81 + * statements's one (except DEALLOCATE which is entirely\nuntracked).\n\nstatemets -> statements\nstatements's -> statements' or statement's?\n\nRegards,\n\n--\nAtsushi Torikoshi\n\nOn Wed, Apr 8, 2020 at 11:38 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Apr 7, 2020 at 8:40 AM Tatsuro Yamada\n> <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> >\n> > Hi Julien,\n> >\n> > On 2020/04/02 22:25, Julien Rouhaud wrote:\n> > > New conflict, rebased v9 attached.\n> >\n> > I tested the patch on the head (c7654f6a3) and\n> > the result was fine. See below:\n> >\n> > $ make installcheck-world\n> > =====================\n> > All 1 tests passed.\n> > =====================\n>\n> Thanks Yamada-san! Unfortunately this patch still didn't attract any\n> committer, so I moved it to the next commitfest.\n>\n>\n>\n\nHi,v9 patch fails to apply to HEAD, could you check and rebase it?And here are minor typos. 79 + * utility statements. Note that we don't compute a queryId for prepared 80 + * statemets related utility, as those will inherit from the underlying 81 + * statements's one (except DEALLOCATE which is entirely untracked).statemets -> statementsstatements's -> statements' or statement's?Regards,--Atsushi TorikoshiOn Wed, Apr 8, 2020 at 11:38 PM Julien Rouhaud <rjuju123@gmail.com> wrote:On Tue, Apr 7, 2020 at 8:40 AM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n>\n> Hi Julien,\n>\n> On 2020/04/02 22:25, Julien Rouhaud wrote:\n> > New conflict, rebased v9 attached.\n>\n> I tested the patch on the head (c7654f6a3) and\n> the result was fine. See below:\n>\n> $ make installcheck-world\n> =====================\n> All 1 tests passed.\n> =====================\n\nThanks Yamada-san! Unfortunately this patch still didn't attract any\ncommitter, so I moved it to the next commitfest.",
"msg_date": "Tue, 14 Jul 2020 19:11:02 +0900",
"msg_from": "Atsushi Torikoshi <atorik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 07:11:02PM +0900, Atsushi Torikoshi wrote:\n> Hi,\n> \n> v9 patch fails to apply to HEAD, could you check and rebase it?\n\nThanks for the notice, v10 attached!\n\n> And here are minor typos.\n> \n> 79 + * utility statements. Note that we don't compute a queryId\n> for prepared\n> 80 + * statemets related utility, as those will inherit from the\n> underlying\n> 81 + * statements's one (except DEALLOCATE which is entirely\n> untracked).\n> \n> statemets -> statements\n> statements's -> statements' or statement's?\n\nThanks! I went with \"statement's\".",
"msg_date": "Tue, 14 Jul 2020 13:24:53 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2020-07-14 20:24, Julien Rouhaud wrote:\n> On Tue, Jul 14, 2020 at 07:11:02PM +0900, Atsushi Torikoshi wrote:\n>> Hi,\n>> \n>> v9 patch fails to apply to HEAD, could you check and rebase it?\n> \n> Thanks for the notice, v10 attached!\n> \n>> And here are minor typos.\n>> \n>> 79 + * utility statements. Note that we don't compute a \n>> queryId\n>> for prepared\n>> 80 + * statemets related utility, as those will inherit from \n>> the\n>> underlying\n>> 81 + * statements's one (except DEALLOCATE which is entirely\n>> untracked).\n>> \n>> statemets -> statements\n>> statements's -> statements' or statement's?\n> \n> Thanks! I went with \"statement's\".\n\nThanks for updating!\nI tested the patch setting log_statement = 'all', but %Q in \nlog_line_prefix\nwas always 0 even when pg_stat_statements.queryid and\npg_stat_activity.queryid are not 0.\n\nIs this an intentional behavior?\n\n\n```\n $ initdb --no-locale -D data\n\n\n $ edit postgresql.conf\n shared_preload_libraries = 'pg_stat_statements'\n logging_collector = on\n log_line_prefix = '%m [%p] queryid:%Q '\n log_statement = 'all'\n\n $ pg_ctl start -D data\n\n $ psql\n =# CREATE EXTENSION pg_stat_statements;\n\n =# CREATE TABLE t1 (i int);\n =# INSERT INTO t1 VALUES (0),(1);\n =# SELECT queryid, query FROM pg_stat_activity;\n\n -- query ids are all 0 on the log\n $ view log\n 2020-07-28 15:57:58.475 EDT [4480] queryid:0 LOG: statement: CREATE \nTABLE t1 (i int);\n 2020-07-28 15:58:13.730 EDT [4480] queryid:0 LOG: statement: INSERT \nINTO t1 VALUES (0),(1);\n 2020-07-28 15:59:28.389 EDT [4480] queryid:0 LOG: statement: SELECT * \nFROM t1;\n\n -- on pg_stat_activity and pgss, query ids are not 0\n $ psql\n =# SELECT queryid, query FROM pg_stat_activity WHERE query LIKE \n'%t1%';\n queryid | query\n \n----------------------+----------------------------------------------------------------------\n 1109063694563750779 | SELECT * FROM t1;\n -2582225123719476948 | SELECT queryid, query FROM pg_stat_activity \nWHERE query LIKE '%t1%';\n (2 rows)\n\n =# SELECT queryid, query FROM pg_stat_statements WHERE query LIKE \n'%t1%';\n queryid | query\n ----------------------+---------------------------------\n -5028988130796701553 | CREATE TABLE t1 (i int)\n 1109063694563750779 | SELECT * FROM t1\n 2726469050076420724 | INSERT INTO t1 VALUES ($1),($2)\n\n```\n\n\nAnd here is a minor typo.\n optionnally -> optionally\n\n\n> 753 + /* query identifier, optionnally computed using \n> post_parse_analyze_hook */\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 28 Jul 2020 17:07:02 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Jul 28, 2020 at 10:07 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n>\n> Thanks for updating!\n> I tested the patch setting log_statement = 'all', but %Q in\n> log_line_prefix\n> was always 0 even when pg_stat_statements.queryid and\n> pg_stat_activity.queryid are not 0.\n>\n> Is this an intentional behavior?\n>\n>[...]\n\nThanks for the tests! That's indeed an expected behavior (although I\nwasn't aware of it), which isn't documented in this patch (I'll fix\nit). The reason for that is that log_statements is done right after\nparsing the query:\n\n /*\n * Do basic parsing of the query or queries (this should be safe even if\n * we are in aborted transaction state!)\n */\n parsetree_list = pg_parse_query(query_string);\n\n /* Log immediately if dictated by log_statement */\n if (check_log_statement(parsetree_list))\n {\n ereport(LOG,\n (errmsg(\"statement: %s\", query_string),\n errhidestmt(true),\n errdetail_execute(parsetree_list)));\n was_logged = true;\n }\n\nAs parse analysis is not yet done, no queryid can be computed at that\npoint, so we always print 0. That's a limitation that can't be\nremoved without changing the semantics of log_statements, so we'll\nprobably have to live with it.\n\n> And here is a minor typo.\n> optionnally -> optionally\n>\n>\n> > 753 + /* query identifier, optionnally computed using\n> > post_parse_analyze_hook */\n\nThanks, I fixed it locally!\n\n\n",
"msg_date": "Tue, 28 Jul 2020 10:55:04 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
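The ordering Julien quotes from postgres.c explains the observed zeros: log_statement fires right after raw parsing, before parse analysis has computed any query identifier. A minimal model of that sequence (a hypothetical harness, not the backend code) shows why %Q can only ever render 0 on those log lines:

```python
# Toy model of the simple-query path discussed above (names hypothetical):
# raw parse -> log_statement -> parse analysis (queryid computed) -> execute.
def simple_query(query_string, log):
    queryid = 0                              # pg_parse_query: no analysis yet
    log("statement: " + query_string, queryid)  # log_statement sees 0
    queryid = abs(hash(query_string)) or 1   # parse analysis computes the id
    return queryid                           # executor runs with a real id
```

Any consumer hooked before parse analysis sees the zero; pg_stat_activity and later log lines (e.g. duration or error messages) see the computed value.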
{
"msg_contents": "On Tue, Jul 28, 2020 at 10:55:04AM +0200, Julien Rouhaud wrote:\n> On Tue, Jul 28, 2020 at 10:07 AM torikoshia <torikoshia@oss.nttdata.com> wrote:\n> >\n> > Thanks for updating!\n> > I tested the patch setting log_statement = 'all', but %Q in\n> > log_line_prefix\n> > was always 0 even when pg_stat_statements.queryid and\n> > pg_stat_activity.queryid are not 0.\n> >\n> > Is this an intentional behavior?\n> >\n> >[...]\n> \n> Thanks for the tests! That's indeed an expected behavior (although I\n> wasn't aware of it), which isn't documented in this patch (I'll fix\n> it). The reason for that is that log_statements is done right after\n> parsing the query:\n> \n> /*\n> * Do basic parsing of the query or queries (this should be safe even if\n> * we are in aborted transaction state!)\n> */\n> parsetree_list = pg_parse_query(query_string);\n> \n> /* Log immediately if dictated by log_statement */\n> if (check_log_statement(parsetree_list))\n> {\n> ereport(LOG,\n> (errmsg(\"statement: %s\", query_string),\n> errhidestmt(true),\n> errdetail_execute(parsetree_list)));\n> was_logged = true;\n> }\n> \n> As parse analysis is not yet done, no queryid can be computed at that\n> point, so we always print 0. That's a limitation that can't be\n> removed without changing the semantics of log_statements, so we'll\n> probably have to live with it.\n> \n> > And here is a minor typo.\n> > optionnally -> optionally\n> >\n> >\n> > > 753 + /* query identifier, optionnally computed using\n> > > post_parse_analyze_hook */\n> \n> Thanks, I fixed it locally!\n\n\nRecent conflict, rebased v11 attached.",
"msg_date": "Wed, 19 Aug 2020 16:19:30 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Aug 19, 2020 at 04:19:30PM +0200, Julien Rouhaud wrote:\n> Similarly to other fields in pg_stat_activity, only the queryid from the top\n> level statements are exposed, and if the backends status isn't active then the\n> queryid from the last executed statements is displayed.\n> \n> Also add a %Q placeholder to include the queryid in the log_line_prefix, which\n> will also only expose top level statements.\n\nI would like to apply this patch (I know it has been in the commitfest\nsince July 2019), but I have some questions about the user API. Does it\nmake sense to have a column in pg_stat_actvity and an option in\nlog_line_prefix that will be empty unless pg_stat_statements is\ninstalled? Is there no clean way to move the query hash computation out\nof pg_stat_statements and into the main code so the query id is always\nvisible? (Also, did we decide _not_ to make the pg_stat_statements\nqueryid always a positive value?)\n\nAlso, in the doc patch:\n\n\tBy default, query identifiers are not computed, so this field will always\n\tbe null, unless an additional module that compute query identifiers, such\n\tas <xref linkend=\"pgstatstatements\"/>, is configured.\n\nwhy are you saying \"such as\"? Isn't pg_stat_statements the only way to\nsee the queryid? This command allowed the queryid to be displayed in\npg_stat_activity:\n\n\tALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 5 Oct 2020 17:24:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I would like to apply this patch (I know it has been in the commitfest\n> since July 2019), but I have some questions about the user API. Does it\n> make sense to have a column in pg_stat_actvity and an option in\n> log_line_prefix that will be empty unless pg_stat_statements is\n> installed? Is there no clean way to move the query hash computation out\n> of pg_stat_statements and into the main code so the query id is always\n> visible? (Also, did we decide _not_ to make the pg_stat_statements\n> queryid always a positive value?)\n\nFWIW, I think this proposal is a mess. I was willing to hold my nose\nand have a queryId field in the internal Query struct without any solid\nconsensus about what its semantics are and which extensions get to use it.\nExposing it to end users seems like a bridge too far, though. In\nparticular, I'm afraid that that will cause people to expect it to have\nconsistent values across PG versions, or even just across architectures\nwithin one version.\n\nThe larger picture here is that there's lots of room to doubt whether\npg_stat_statements' decisions about what to ignore or include in the ID\nwill be satisfactory to everybody. If that were not so, we'd just move\nthe computation into core.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Oct 2020 17:42:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2020-Oct-05, Tom Lane wrote:\n\n> FWIW, I think this proposal is a mess. I was willing to hold my nose\n> and have a queryId field in the internal Query struct without any solid\n> consensus about what its semantics are and which extensions get to use it.\n> Exposing it to end users seems like a bridge too far, though. In\n> particular, I'm afraid that that will cause people to expect it to have\n> consistent values across PG versions, or even just across architectures\n> within one version.\n\nI wonder if it would help to purposefully change the computation so that\nit is not -- for instance, hash the system_identifier as initial value.\nThen users would be forced to accept that it'll change as soon as it\nmigrates to another server or is upgraded to a new major version.\n\n\n",
"msg_date": "Mon, 5 Oct 2020 19:58:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Oct 5, 2020 at 07:58:42PM -0300, �lvaro Herrera wrote:\n> On 2020-Oct-05, Tom Lane wrote:\n> \n> > FWIW, I think this proposal is a mess. I was willing to hold my nose\n> > and have a queryId field in the internal Query struct without any solid\n> > consensus about what its semantics are and which extensions get to use it.\n> > Exposing it to end users seems like a bridge too far, though. In\n> > particular, I'm afraid that that will cause people to expect it to have\n> > consistent values across PG versions, or even just across architectures\n> > within one version.\n> \n> I wonder if it would help to purposefully change the computation so that\n> it is not -- for instance, hash the system_identifier as initial value.\n> Then users would be forced to accept that it'll change as soon as it\n> migrates to another server or is upgraded to a new major version.\n\nThat seems like a good idea, but it would prevent cross-cluster\nsame-major-version comparisons, which seems like a negative. Perhaps we\nshould add the major version into the hash to handle this. Ideally,\nlet's just put a queryid-hash-version into to the hash, so if we change\nthe computation, we just update the hash version and nothing matches\nanymore.\n\nI do think the queryid has to display independent of pg_stat_statements,\nbecause I can see people using queryid for log file and pg_stat_activity\ncomparisons. I also think the ability to have queryid accessible is an\nimportant feature outside of pg_stat_statements, so I do think we need a\nway to move this idea forward.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 5 Oct 2020 22:18:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Oct 6, 2020 at 10:18 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Oct 5, 2020 at 07:58:42PM -0300, Álvaro Herrera wrote:\n> > On 2020-Oct-05, Tom Lane wrote:\n> >\n> > > FWIW, I think this proposal is a mess. I was willing to hold my nose\n> > > and have a queryId field in the internal Query struct without any solid\n> > > consensus about what its semantics are and which extensions get to use it.\n> > > Exposing it to end users seems like a bridge too far, though. In\n> > > particular, I'm afraid that that will cause people to expect it to have\n> > > consistent values across PG versions, or even just across architectures\n> > > within one version.\n> >\n> > I wonder if it would help to purposefully change the computation so that\n> > it is not -- for instance, hash the system_identifier as initial value.\n> > Then users would be forced to accept that it'll change as soon as it\n> > migrates to another server or is upgraded to a new major version.\n>\n> That seems like a good idea, but it would prevent cross-cluster\n> same-major-version comparisons, which seems like a negative. Perhaps we\n> should add the major version into the hash to handle this. Ideally,\n> let's just put a queryid-hash-version into to the hash, so if we change\n> the computation, we just update the hash version and nothing matches\n> anymore.\n>\n> I do think the queryid has to display independent of pg_stat_statements,\n> because I can see people using queryid for log file and pg_stat_activity\n> comparisons. I also think the ability to have queryid accessible is an\n> important feature outside of pg_stat_statements, so I do think we need a\n> way to move this idea forward.\n\nFor the record, for now any extension can compute a queryid and there\nare at least 2 other published extensions that already do that, one of\nthem having different semantics on how to compute the queryid. 
I'm\nnot sure that we'll ever get a consensus on those semantics due to\nperformance tradeoff, so removing the ability to let people put their\nown code for that doesn't seem like the best way forward.\n\nMaybe we could add a new hook for only queryid computation, and add a\nGUC to let people choose between no queryid computed, core computation\n(current pg_stat_statement) and 3rd party plugin?\n\n\n",
"msg_date": "Tue, 6 Oct 2020 11:11:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Tue, Oct 6, 2020 at 11:11:27AM +0800, Julien Rouhaud wrote:\n> > I do think the queryid has to display independent of pg_stat_statements,\n> > because I can see people using queryid for log file and pg_stat_activity\n> > comparisons. I also think the ability to have queryid accessible is an\n> > important feature outside of pg_stat_statements, so I do think we need a\n> > way to move this idea forward.\n> \n> For the record, for now any extension can compute a queryid and there\n> are at least 2 other published extensions that already do that, one of\n> them having different semantics on how to compute the queryid. I'm\n> not sure that we'll ever get a consensus on those semantics due to\n> performance tradeoff, so removing the ability to let people put their\n> own code for that doesn't seem like the best way forward.\n> \n> Maybe we could add a new hook for only queryid computation, and add a\n> GUC to let people choose between no queryid computed, core computation\n> (current pg_stat_statement) and 3rd party plugin?\n\nThat all seems very complicated. If we go in that direction, I suggest\nwe just give up getting any of this into core.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 5 Oct 2020 23:23:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Oct 05, 2020 at 05:24:06PM -0400, Bruce Momjian wrote:\n> (Also, did we decide _not_ to make the pg_stat_statements queryid\n> always a positive value?)\n\nThis specific point has been discussed a couple of years ago, please\nsee cff440d and its related thread:\nhttps://www.postgresql.org/message-id/CA+TgmobG_Kp4cBKFmsznUAaM1GWW6hhRNiZC0KjRMOOeYnz5Yw@mail.gmail.com\n--\nMichael",
"msg_date": "Tue, 6 Oct 2020 14:02:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Oct 05, 2020 at 11:23:50PM -0400, Bruce Momjian wrote:\n> On Tue, Oct 6, 2020 at 11:11:27AM +0800, Julien Rouhaud wrote:\n>> Maybe we could add a new hook for only queryid computation, and add a\n>> GUC to let people choose between no queryid computed, core computation\n>> (current pg_stat_statement) and 3rd party plugin?\n> \n> That all seems very complicated. If we go in that direction, I suggest\n> we just give up getting any of this into core.\n\nA GUC would have at least the advantage to make the computation\nconsistent for any system willing to consume it, with the option to\nnot pay any potential performance impact, though I have to admit that\njust moving the query ID computation of PGSS into core may not be the\nbest option as a query ID of 0 means the same thing for a utility, for\nan initialization, and for a backend running a query with an unknown\nvalue, but that could be worked out.\n\nFWIW, I think that adding the system ID in the hash is too\nrestrictive, as it could be interesting for users to do stat\ncomparisons across multiple systems running the same major version.\nIt would be better to not give any strong guarantee that the query ID\ncomputed will remain consistent across major versions so as it is\npossible to keep improving it. Also, if nothing has been done that\nchanges the hashing computation, I see little benefit in forcing a\nbreakage by adding something like PG_MAJORVERSION_NUM or such in the\nhash computation.\n--\nMichael",
"msg_date": "Tue, 6 Oct 2020 14:34:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Oct 6, 2020 at 02:34:58PM +0900, Michael Paquier wrote:\n> On Mon, Oct 05, 2020 at 11:23:50PM -0400, Bruce Momjian wrote:\n> > On Tue, Oct 6, 2020 at 11:11:27AM +0800, Julien Rouhaud wrote:\n> >> Maybe we could add a new hook for only queryid computation, and add a\n> >> GUC to let people choose between no queryid computed, core computation\n> >> (current pg_stat_statement) and 3rd party plugin?\n> > \n> > That all seems very complicated. If we go in that direction, I suggest\n> > we just give up getting any of this into core.\n> \n> A GUC would have at least the advantage to make the computation\n> consistent for any system willing to consume it, with the option to\n> not pay any potential performance impact, though I have to admit that\n> just moving the query ID computation of PGSS into core may not be the\n> best option as a query ID of 0 means the same thing for a utility, for\n> an initialization, and for a backend running a query with an unknown\n> value, but that could be worked out.\n> \n> FWIW, I think that adding the system ID in the hash is too\n> restrictive, as it could be interesting for users to do stat\n> comparisons across multiple systems running the same major version.\n> It would be better to not give any strong guarantee that the query ID\n> computed will remain consistent across major versions so as it is\n> possible to keep improving it. Also, if nothing has been done that\n> changes the hashing computation, I see little benefit in forcing a\n> breakage by adding something like PG_MAJORVERSION_NUM or such in the\n> hash computation.\n\nI thought some more about this. First, I think having the queryid hash\ncode in the server, without requiring pg_stat_statements, is a\nrequirement --- I think too many people will want to use this feature\nindependent of pg_stat_statements. 
Second, I understand the desire to\nhave different hash computation methods, depending on what level of\ndetail/matching you want.\n\nI propose moving the pg_stat_statements queryid hash code into the\nserver (with a version number), and also adding a postgresql.conf\nvariable that lets you control how detailed the queryid hash is\ncomputed. This addresses the problem of people wanting different hash\nmethods.\n\nWhen computing a hash, the queryid detail level and version number will\nbe mixed into the hash, so only a hash that used a similar query and\nidentical queryid detail level would match.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 6 Oct 2020 09:22:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Oct 06, 2020 at 09:22:29AM -0400, Bruce Momjian wrote:\n> I propose moving the pg_stat_statements queryid hash code into the\n> server (with a version number), and also adding a postgresql.conf\n> variable that lets you control how detailed the queryid hash is\n> computed. This addresses the problem of people wanting different hash\n> methods.\n\nIn terms of making this part expendable in the future, there could be\na point in having an enum here, but are we sure that we will have a\nneed for that in the future? What I get from this discussion is that\nwe want a unique source of truth that users can consume, and that the\nonly source of truth proposed is the PGSS hashing. We may change the\nway we compute the query ID in the future, for example if it gets\nexpanded to some utility statements, etc. But that would be\ncontrolled by the version number in the hash, not the GUC itself.\n\n> When computing a hash, the queryid detail level and version number will\n> be mixed into the hash, so only a hash that used a similar query and\n> identical queryid detail level would match.\n\nYes, having a version number directly dependent on the hashing sounds\nlike a good compromise to me.\n--\nMichael",
"msg_date": "Wed, 7 Oct 2020 10:42:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Oct 7, 2020 at 10:42:49AM +0900, Michael Paquier wrote:\n> On Tue, Oct 06, 2020 at 09:22:29AM -0400, Bruce Momjian wrote:\n> > I propose moving the pg_stat_statements queryid hash code into the\n> > server (with a version number), and also adding a postgresql.conf\n> > variable that lets you control how detailed the queryid hash is\n> > computed. This addresses the problem of people wanting different hash\n> > methods.\n> \n> In terms of making this part expendable in the future, there could be\n> a point in having an enum here, but are we sure that we will have a\n> need for that in the future? What I get from this discussion is that\n> we want a unique source of truth that users can consume, and that the\n> only source of truth proposed is the PGSS hashing. We may change the\n> way we compute the query ID in the future, for example if it gets\n> expanded to some utility statements, etc. But that would be\n> controlled by the version number in the hash, not the GUC itself.\n\nOh, if that is true, then I agree let's just go with the version number.\n\n> > When computing a hash, the queryid detail level and version number will\n> > be mixed into the hash, so only a hash that used a similar query and\n> > identical queryid detail level would match.\n> \n> Yes, having a version number directly dependent on the hashing sounds\n> like a good compromise to me.\n\nGood, much simpler. I think there is enough demand for a queryid that I\nwould like to get this moving forward.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 6 Oct 2020 21:53:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Oct 7, 2020 at 9:53 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Oct 7, 2020 at 10:42:49AM +0900, Michael Paquier wrote:\n> > On Tue, Oct 06, 2020 at 09:22:29AM -0400, Bruce Momjian wrote:\n> > > I propose moving the pg_stat_statements queryid hash code into the\n> > > server (with a version number), and also adding a postgresql.conf\n> > > variable that lets you control how detailed the queryid hash is\n> > > computed. This addresses the problem of people wanting different hash\n> > > methods.\n> >\n> > In terms of making this part expendable in the future, there could be\n> > a point in having an enum here, but are we sure that we will have a\n> > need for that in the future? What I get from this discussion is that\n> > we want a unique source of truth that users can consume, and that the\n> > only source of truth proposed is the PGSS hashing. We may change the\n> > way we compute the query ID in the future, for example if it gets\n> > expanded to some utility statements, etc. But that would be\n> > controlled by the version number in the hash, not the GUC itself.\n>\n> Oh, if that is true, then I agree let's just go with the version number.\n\nBut there are many people that aren't happy with the current hashing\napproach. 
If we're going to move the computation in core, shouldn't\nwe listen to their complaints and let them pay some probably quite\nhigh overhead to base the hash on name and/or fully qualified name\nrather than OID?\nFor instance people using logical replication to upgrade to a newer\nversion may want to easily compare query performance on the new\nversion, or people with multi-tenant databases may want to ignore the\nschema name to keep a low number of different queryid.\n\nIt would probably still be possible to have a custom queryid hashing\nby disabling the core one and computing a new one in a custom\nextension, but that seems a bit hackish.\n\nJumping back on Tom's point that there are judgment calls on what is\nexamined or not, after a quick look I see at least two possible\nproblems of ignored clauses:\n- WITH TIES clause\n- OVERRIDING clause\n\nI personally think that they shouldn't be ignored, but I don't know if\nthey were only forgotten or ignored on purpose.\n\n\n",
"msg_date": "Mon, 12 Oct 2020 16:20:05 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Mon, Oct 12, 2020 at 04:20:05PM +0800, Julien Rouhaud wrote:\n> But there are many people that aren't happy with the current hashing\n> approach. If we're going to move the computation in core, shouldn't\n> we listen to their complaints and let them pay some probably quite\n> high overhead to base the hash on name and/or fully qualified name\n> rather than OID?\n> For instance people using logical replication to upgrade to a newer\n> version may want to easily compare query performance on the new\n> version, or people with multi-tenant databases may want to ignore the\n> schema name to keep a low number of different queryid.\n\nWell, we have to consider how complex the user interface has to be to\nallow more flexibility. We don't need to allow every option a user will\nwant.\n\nWith a version number, we have the ability to improve the algorithm or\nadd customization, but for the first use, we are probably better off\nkeeping it simple.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 12 Oct 2020 10:14:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Oct 12, 2020 at 10:14 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Mon, Oct 12, 2020 at 04:20:05PM +0800, Julien Rouhaud wrote:\n> > But there are many people that aren't happy with the current hashing\n> > approach. If we're going to move the computation in core, shouldn't\n> > we listen to their complaints and let them pay some probably quite\n> > high overhead to base the hash on name and/or fully qualified name\n> > rather than OID?\n> > For instance people using logical replication to upgrade to a newer\n> > version may want to easily compare query performance on the new\n> > version, or people with multi-tenant databases may want to ignore the\n> > schema name to keep a low number of different queryid.\n>\n> Well, we have to consider how complex the user interface has to be to\n> allow more flexibility. We don't need to allow every option a user will\n> want.\n>\n> With a version number, we have the ability to improve the algorithm or\n> add customization, but for the first use, we are probably better off\n> keeping it simple.\n\nI thought your earlier idea of allowing this to be controlled by a GUC\nwas good. There could be a default method built into core, matching\nwhat pg_stat_statements does, so you could select no hashing or that\nmethod no matter what. Then extensions could provide other methods\nwhich could be selected via the GUC.\n\nI don't really understand how a version number helps. It's not like\nthere is going to be a v2 that is in all ways better than v1. If there\nare different algorithms here, they are going to be customized for\ndifferent needs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 12 Oct 2020 13:30:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't really understand how a version number helps. It's not like\n> there is going to be a v2 that is in all ways better than v1. If there\n> are different algorithms here, they are going to be customized for\n> different needs.\n\nYeah, I agree --- a version number is the wrong way to think about this.\nIt's gonna be more like algorithm foo versus algorithm bar versus\nalgorithm baz, where each one is better for a specific set of use-cases.\nJulien already noted the point about hashing object OIDs versus object\nnames; one can easily imagine disagreeing with pg_stat_statement's\nchoices about ignoring values of constants; other properties of statements\nmight be irrelevant for some use-cases; and so on.\n\nI'm okay with moving pg_stat_statement's existing algorithm into core as\nlong as there's a way for extensions to override it. With proper design,\nthat would allow extensions that do override it to coexist with\npg_stat_statements (thereby redefining the latter's idea of which\nstatements are \"the same\"), which is something that doesn't really work\nnicely today.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Oct 2020 14:26:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I don't really understand how a version number helps. It's not like\n> > there is going to be a v2 that is in all ways better than v1. If there\n> > are different algorithms here, they are going to be customized for\n> > different needs.\n> \n> Yeah, I agree --- a version number is the wrong way to think about this.\n> It's gonna be more like algorithm foo versus algorithm bar versus\n> algorithm baz, where each one is better for a specific set of use-cases.\n> Julien already noted the point about hashing object OIDs versus object\n> names; one can easily imagine disagreeing with pg_stat_statement's\n> choices about ignoring values of constants; other properties of statements\n> might be irrelevant for some use-cases; and so on.\n\nThe version number was to invalidate _all_ query hashes if the\nalgorithm is slightly modified, rather than invalidating just some of\nthem, which could lead to confusion. The idea of selectable hash\nalgorithms is nice if people feel there is sufficient need for that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 12 Oct 2020 15:54:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n>> Yeah, I agree --- a version number is the wrong way to think about this.\n\n> The version number was to invalidate _all_ query hashes if the\n> algorithm is slightly modified, rather than invalidating just some of\n> them, which could lead to confusion.\n\nColor me skeptical as to the use-case for that. From users' standpoints,\nthe hash is mainly going to change when we change the set of parse node\nfields that get hashed. Which is going to happen at every major release\nand no (or at least epsilon) minor releases. So I do not see a point in\ntracking an algorithm version number as such. Seems like make-work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Oct 2020 16:07:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Oct 12, 2020 at 04:07:30PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n> >> Yeah, I agree --- a version number is the wrong way to think about this.\n> \n> > The version number was to invalidate _all_ query hashes if the\n> > algorithm is slightly modified, rather than invalidating just some of\n> > them, which could lead to confusion.\n> \n> Color me skeptical as to the use-case for that. From users' standpoints,\n> the hash is mainly going to change when we change the set of parse node\n> fields that get hashed. Which is going to happen at every major release\n> and no (or at least epsilon) minor releases. So I do not see a point in\n> tracking an algorithm version number as such. Seems like make-work.\n\nOK, I came up with the hash idea only to address one of your concerns\nabout mismatched hashes for algorithm improvements/changes. Seems we\nmight as well just document that cross-version hashes are different.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Mon, 12 Oct 2020 16:53:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Oct 13, 2020 at 4:53 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Oct 12, 2020 at 04:07:30PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n> > >> Yeah, I agree --- a version number is the wrong way to think about this.\n> >\n> > > The version number was to invalidate _all_ query hashes if the\n> > > algorithm is slightly modified, rather than invalidating just some of\n> > > them, which could lead to confusion.\n> >\n> > Color me skeptical as to the use-case for that. From users' standpoints,\n> > the hash is mainly going to change when we change the set of parse node\n> > fields that get hashed. Which is going to happen at every major release\n> > and no (or at least epsilon) minor releases. So I do not see a point in\n> > tracking an algorithm version number as such. Seems like make-work.\n>\n> OK, I came up with the hash idea only to address one of your concerns\n> about mismatched hashes for algorithm improvements/changes. Seems we\n> might as well just document that cross-version hashes are different.\n\nOk, so I tried to implement what seems to be the consensus. First\nattached patch moves the current pgss queryid computation in core,\nwith a new compute_queryid GUC (on/off). One thing I don't really\nlike about this patch is that the JumbleState that pgss needs in order\nto normalize the query string (the constants location and such) has to\nbe done by the core while computing the queryid and provided to pgss\nin post_parse_analyse hook. That isn't ideal as it looks very\nspecific to pgss needs. 
On the other hand it means that you can now\nuse pgss with custom queryid heuristics by disabling compute_queryid\nand having your module doing only that in post_parse_analyse_hook.\nYou'll however need to be careful to configure\nshared_preload_libraries such that your custom module's\npost_parse_analyse_hook is called first, so pgss' one can be called\nwith the needed JumbleState. Note that if no JumbleState is provided\npgss will store non normalized queries, but will otherwise behave as\nintended.\n\nThe 2nd patch is the rebased original queryid exposure patch. No big\nchanges, except that it now handles utility statements queryid\ngenerated during post_parse_analysis, same as regular queries. This\nshould simplify the work needed for custom queryid third party\nmodules.\n\nThe 3rd patch changes explain (verbose) to display the queryid if one\nhas been generated, whether by core or a third-party module. For\ninstance:\n\nrjuju=# set compute_queryid = on;\nSET\nrjuju=# explain (verbose) select relname from pg_class;\n QUERY PLAN\n-----------------------------------------------------------------------\n Seq Scan on pg_catalog.pg_class (cost=0.00..16.90 rows=390 width=64)\n Output: relname\n Query Identifier: -5494854185674379299\n(3 rows)",
"msg_date": "Wed, 14 Oct 2020 17:43:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 5:43 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Oct 13, 2020 at 4:53 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, Oct 12, 2020 at 04:07:30PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n> > > >> Yeah, I agree --- a version number is the wrong way to think about this.\n> > >\n> > > > The version number was to invalidate _all_ query hashes if the\n> > > > algorithm is slightly modified, rather than invalidating just some of\n> > > > them, which could lead to confusion.\n> > >\n> > > Color me skeptical as to the use-case for that. From users' standpoints,\n> > > the hash is mainly going to change when we change the set of parse node\n> > > fields that get hashed. Which is going to happen at every major release\n> > > and no (or at least epsilon) minor releases. So I do not see a point in\n> > > tracking an algorithm version number as such. Seems like make-work.\n> >\n> > OK, I came up with the hash idea only to address one of your concerns\n> > about mismatched hashes for algorithm improvements/changes. Seems we\n> > might as well just document that cross-version hashes are different.\n>\n> Ok, so I tried to implement what seems to be the consensus. First\n> attached patch moves the current pgss queryid computation in core,\n> with a new compute_queryid GUC (on/off). One thing I don't really\n> like about this patch is that the JumbleState that pgss needs in order\n> to normalize the query string (the constants location and such) has to\n> be done by the core while computing the queryid and provided to pgss\n> in post_parse_analyse hook. That isn't ideal as it looks very\n> specific to pgss needs. On the other hand it means that you can now\n> use pgss with custom queryid heuristics by disabling compute_queryid\n> and having your module doing only that in post_parse_analyse_hook.\n> You'll however need to be careful to configure\n> shared_preload_libraries such that your custom module's\n> post_parse_analyse_hook is called first, so pgss' one can be called\n> with the needed JumbleState. Note that if no JumbleState is provided\n> pgss will store non normalized queries, but will otherwise behave as\n> intended.\n>\n> The 2nd patch is the rebased original queryid exposure patch. No big\n> changes, except that it now handles utility statements queryid\n> generated during post_parse_analysis, same as regular queries. This\n> should simplify the work needed for custom queryid third party\n> modules.\n>\n> The 3rd patch changes explain (verbose) to display the queryid if one\n> has been generated, whether by core or a third-party module. For\n> instance:\n>\n> rjuju=# set compute_queryid = on;\n> SET\n> rjuju=# explain (verbose) select relname from pg_class;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Seq Scan on pg_catalog.pg_class (cost=0.00..16.90 rows=390 width=64)\n> Output: relname\n> Query Identifier: -5494854185674379299\n> (3 rows)\n\nThere was a possibly uninitialized var issue in the previous patches\n(thanks cfbot), v13 fixes that.",
"msg_date": "Wed, 14 Oct 2020 20:25:00 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 05:43:33PM +0800, Julien Rouhaud wrote:\n> On Tue, Oct 13, 2020 at 4:53 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Mon, Oct 12, 2020 at 04:07:30PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n> > > >> Yeah, I agree --- a version number is the wrong way to think about this.\n> > >\n> > > > The version number was to invalidate _all_ query hashes if the\n> > > > algorithm is slightly modified, rather than invalidating just some of\n> > > > them, which could lead to confusion.\n> > >\n> > > Color me skeptical as to the use-case for that. From users' standpoints,\n> > > the hash is mainly going to change when we change the set of parse node\n> > > fields that get hashed. Which is going to happen at every major release\n> > > and no (or at least epsilon) minor releases. So I do not see a point in\n> > > tracking an algorithm version number as such. Seems like make-work.\n> >\n> > OK, I came up with the hash idea only to address one of your concerns\n> > about mismatched hashes for algorithm improvements/changes. Seems we\n> > might as well just document that cross-version hashes are different.\n> \n> Ok, so I tried to implement what seems to be the consensus. First\n> attached patch moves the current pgss queryid computation in core,\n> with a new compute_queryid GUC (on/off). One thing I don't really\n\nWhy would someone turn compute_queryid off? Overhead?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 14 Oct 2020 10:09:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 10:09 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Oct 14, 2020 at 05:43:33PM +0800, Julien Rouhaud wrote:\n> > On Tue, Oct 13, 2020 at 4:53 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > >\n> > > On Mon, Oct 12, 2020 at 04:07:30PM -0400, Tom Lane wrote:\n> > > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > > On Mon, Oct 12, 2020 at 02:26:15PM -0400, Tom Lane wrote:\n> > > > >> Yeah, I agree --- a version number is the wrong way to think about this.\n> > > >\n> > > > > The version number was to invalidate _all_ query hashes if the\n> > > > > algorithm is slightly modified, rather than invalidating just some of\n> > > > > them, which could lead to confusion.\n> > > >\n> > > > Color me skeptical as to the use-case for that. From users' standpoints,\n> > > > the hash is mainly going to change when we change the set of parse node\n> > > > fields that get hashed. Which is going to happen at every major release\n> > > > and no (or at least epsilon) minor releases. So I do not see a point in\n> > > > tracking an algorithm version number as such. Seems like make-work.\n> > >\n> > > OK, I came up with the hash idea only to address one of your concerns\n> > > about mismatched hashes for algorithm improvements/changes. Seems we\n> > > might as well just document that cross-version hashes are different.\n> >\n> > Ok, so I tried to implement what seems to be the consensus. First\n> > attached patch moves the current pgss queryid computation in core,\n> > with a new compute_queryid GUC (on/off). One thing I don't really\n>\n> Why would someone turn compute_queryid off? Overhead?\n\nYes, or possibly to use a different algorithm.\n\n\n",
"msg_date": "Wed, 14 Oct 2020 22:21:24 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 10:21:24PM +0800, Julien Rouhaud wrote:\n> On Wed, Oct 14, 2020 at 10:09 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > OK, I came up with the hash idea only to address one of your concerns\n> > > > about mismatched hashes for algorithm improvements/changes. Seems we\n> > > > might as well just document that cross-version hashes are different.\n> > >\n> > > Ok, so I tried to implement what seems to be the consensus. First\n> > > attached patch moves the current pgss queryid computation in core,\n> > > with a new compute_queryid GUC (on/off). One thing I don't really\n> >\n> > Why would someone turn compute_queryid off? Overhead?\n> \n> Yes, or possibly to use a different algorithm.\n\nIs there a measurable overhead when this is turned on, since it is off\nby default and maybe should default to on.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 14 Oct 2020 10:25:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Is there a measureable overhead when this is turned on, since it is off\n> by default and maybe should default to on.\n\nI don't believe that \"default to on\" can even be in the discussion.\nThere is no in-core feature that would use this by default.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Oct 2020 10:31:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 10:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Is there a measureable overhead when this is turned on, since it is off\n> > by default and maybe should default to on.\n>\n> I don't believe that \"default to on\" can even be in the discussion.\n> There is no in-core feature that would use this by default.\n\nIf the 2nd patch is applied there would be pg_stat_activity.queryid\ncolumn, but I doubt that's a strong enough argument.\n\n\n",
"msg_date": "Wed, 14 Oct 2020 22:34:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 10:34:31PM +0800, Julien Rouhaud wrote:\n> On Wed, Oct 14, 2020 at 10:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Is there a measureable overhead when this is turned on, since it is off\n> > > by default and maybe should default to on.\n> >\n> > I don't believe that \"default to on\" can even be in the discussion.\n> > There is no in-core feature that would use this by default.\n> \n> If the 2nd patch is applied there would be pg_stat_activity.queryid\n> column, but I doubt that's a strong enough argument.\n\nThere is that, and log_line_prefix, which I can imagine being useful. \nMy point is that if the queryid is visible, there should be a reason it\ndefaults to show empty.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 14 Oct 2020 10:40:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Oct 14, 2020 at 10:40 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Oct 14, 2020 at 10:34:31PM +0800, Julien Rouhaud wrote:\n> > On Wed, Oct 14, 2020 at 10:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > Is there a measureable overhead when this is turned on, since it is off\n> > > > by default and maybe should default to on.\n> > >\n> > > I don't believe that \"default to on\" can even be in the discussion.\n> > > There is no in-core feature that would use this by default.\n> >\n> > If the 2nd patch is applied there would be pg_stat_activity.queryid\n> > column, but I doubt that's a strong enough argument.\n>\n> There is that, and log_line_prefix, which I can imaging being useful.\n> My point is that if the queryid is visible, there should be a reason it\n> defaults to show empty.\n\nI did some naive benchmarking. Using a custom pgbench script with this query:\n\nSELECT *\nFROM pg_class c\nJOIN pg_attribute a ON a.attrelid = c.oid\nORDER BY 1 DESC\nLIMIT 1;\n\nI can see around 2% overhead (this query is reported with ~ 3ms\nlatency average). Adding a few joins, overhead goes down to 1%.\nAdding on top of the join some WHERE and GROUP BY conditions, overhead\ngoes down to 0.2% (at that point average latency is around 9ms on my\nlaptop). So having this enabled by default is probably only going to\nhit people with OLTP-style workload with a majority of queries running\nin a couple of milliseconds or less, which isn't that uncommon.\n\n\n",
"msg_date": "Thu, 15 Oct 2020 11:41:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Oct 15, 2020 at 11:41:23AM +0800, Julien Rouhaud wrote:\n> On Wed, Oct 14, 2020 at 10:40 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > There is that, and log_line_prefix, which I can imaging being useful.\n> > My point is that if the queryid is visible, there should be a reason it\n> > defaults to show empty.\n> \n> I did some naive benchmarking. Using a custom pgbench script with this query:\n> \n> SELECT *\n> FROM pg_class c\n> JOIN pg_attribute a ON a.attrelid = c.oid\n> ORDER BY 1 DESC\n> LIMIT 1;\n> \n> I can see around 2% overhead (this query is reported with ~ 3ms\n> latency average). Adding a few joins, overhead goes down to 1%.\n\nThat number is too high to enable this by default. I suggest we either\nimprove the performance of this, or clearly document that you have to\nenable the hash computation to see the pg_stat_activity and\nlog_line_prefix fields.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 16 Oct 2020 11:04:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2020-Oct-16, Bruce Momjian wrote:\n\n> On Thu, Oct 15, 2020 at 11:41:23AM +0800, Julien Rouhaud wrote:\n\n> > I did some naive benchmarking. Using a custom pgbench script with this query:\n\n> > I can see around 2% overhead (this query is reported with ~ 3ms\n> > latency average). Adding a few joins, overhead goes down to 1%.\n> \n> That number is too high to enable this by default. I suggest we either\n> improve the performance of this, or clearly document that you have to\n> enable the hash computation to see the pg_stat_activity and\n> log_line_prefix fields.\n\nAgreed. This is similar to how we used to deal with query strings: an\noptional feature, disabled by default (cf. commit b13c9686d084).\n\nIn this case, I suppose using pg_stat_statement would require to have it\nenabled, and it'd just not collect anything if disabled. Similarly, the\nfield would show NULL in pg_stat_activity or an empty string in\nlog_line_prefix/CSV logs.\n\nSo users that want it can easily have it, and users that don't are not\npaying the price.\n\nFor maximum user-friendliness, pg_stat_statement could be loaded and\nshmem-initialized even when query ID computation is turned off, and\nyou'd be able to enable query ID computation with just SIGHUP; so you\ndon't have to restart the server in order to enable statement tracking.\n(I suppose we would forbid users from disabling query ID with SET,\nthough.)\n\n\n",
"msg_date": "Fri, 16 Oct 2020 13:03:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> In this case, I suppose using pg_stat_statement would require to have it\n> enabled, and it'd just not collect anything if disabled.\n\nAlternatively, pg_stat_statement might be able to force it on\n(applying a non-overridable PGC_INTERNAL-level setting) on load?\nNot sure if that'd be desirable or not.\n\nIf the behavior of pg_stat_statement is to do nothing when it\nsees a query without the ID calculated (which I guess it'd have to)\nthen there's a potential security issue if the GUC is USERSET level:\na user could hide her queries from pg_stat_statement by turning the\nGUC off. So this line of thought suggests the GUC needs to be at\nleast SUSET, and maybe higher ... doesn't pg_stat_statement need it\nto have the same value cluster-wide?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Oct 2020 12:23:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Oct 16, 2020 at 01:03:55PM -0300, Álvaro Herrera wrote:\n> On 2020-Oct-16, Bruce Momjian wrote:\n> \n> > On Thu, Oct 15, 2020 at 11:41:23AM +0800, Julien Rouhaud wrote:\n> \n> > > I did some naive benchmarking. Using a custom pgbench script with this query:\n> \n> > > I can see around 2% overhead (this query is reported with ~ 3ms\n> > > latency average). Adding a few joins, overhead goes down to 1%.\n> > \n> > That number is too high to enable this by default. I suggest we either\n> > improve the performance of this, or clearly document that you have to\n> > enable the hash computation to see the pg_stat_activity and\n> > log_line_prefix fields.\n> \n> Agreed. This is similar to how we used to deal with query strings: an\n> optional feature, disabled by default (cf. commit b13c9686d084).\n> \n> In this case, I suppose using pg_stat_statement would require to have it\n> enabled, and it'd just not collect anything if disabled. Similarly, the\n> field would show NULL in pg_stat_activity or an empty string in\n> log_line_prefix/CSV logs.\n\nYes, and at each use point, e.g., pg_stat_activity, log_line_prefix, we\nhave to remind people how to turn hash computation on.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Fri, 16 Oct 2020 12:47:12 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sat, Oct 17, 2020 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > In this case, I suppose using pg_stat_statement would require to have it\n> > enabled, and it'd just not collect anything if disabled.\n\nYes, my idea was to be able to have pg_stat_statements enabled even if\nno queryid is computed without that being a problem, and the patch I\nsent should handle that properly, as pgss_store (and a few other\nplaces) check for a non-zero queryid before doing any work.\n\nAlso, we can't have pg_stat_statements have any specific behavior\nbased on the new GUC, as there could alternatively be another module\nthat handles the queryid generation.\n\n> Alternatively, pg_stat_statement might be able to force it on\n> (applying a non-overridable PGC_INTERNAL-level setting) on load?\n> Not sure if that'd be desirable or not.\n>\n> If the behavior of pg_stat_statement is to do nothing when it\n> sees a query without the ID calculated (which I guess it'd have to)\n\nYes that's what it does.\n\n> then there's a potential security issue if the GUC is USERSET level:\n> a user could hide her queries from pg_stat_statement by turning the\n> GUC off. So this line of thought suggests the GUC needs to be at\n> least SUSET, and maybe higher ... doesn't pg_stat_statement need it\n> to have the same value cluster-wide?\n\nWell, I don't think that there's any guarantee that pg_stat_statements\nwill display all activity that has been run, since there's a limited\namount of (userid, dbid, queryid) that can be stored, but I agree that\nallowing a random user to hide their activity isn't nice. Note that I\ndefined the GUC as SUSET, but maybe it should be SIGHUP?\n\n\n",
"msg_date": "Sat, 17 Oct 2020 11:28:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Fri, Oct 16, 2020 at 11:04 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, Oct 15, 2020 at 11:41:23AM +0800, Julien Rouhaud wrote:\n> > On Wed, Oct 14, 2020 at 10:40 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > There is that, and log_line_prefix, which I can imaging being useful.\n> > > My point is that if the queryid is visible, there should be a reason it\n> > > defaults to show empty.\n> >\n> > I did some naive benchmarking. Using a custom pgbench script with this query:\n> >\n> > SELECT *\n> > FROM pg_class c\n> > JOIN pg_attribute a ON a.attrelid = c.oid\n> > ORDER BY 1 DESC\n> > LIMIT 1;\n> >\n> > I can see around 2% overhead (this query is reported with ~ 3ms\n> > latency average). Adding a few joins, overhead goes down to 1%.\n>\n> That number is too high to enable this by default. I suggest we either\n> improve the performance of this, or clearly document that you have to\n> enable the hash computation to see the pg_stat_activity and\n> log_line_prefix fields.\n\nI realize that I didn't update the documentation part to reflect the\nnew GUC. I'll fix that and add more warnings about the requirements\nto have values displayed in pg_stat_activity and log_line_prefix.\n\n\n",
"msg_date": "Sat, 17 Oct 2020 11:31:21 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On 2020-Oct-17, Julien Rouhaud wrote:\n\n> On Sat, Oct 17, 2020 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > then there's a potential security issue if the GUC is USERSET level:\n> > a user could hide her queries from pg_stat_statement by turning the\n> > GUC off. So this line of thought suggests the GUC needs to be at\n> > least SUSET, and maybe higher ... doesn't pg_stat_statement need it\n> > to have the same value cluster-wide?\n> \n> Well, I don't think that there's any guarantee that pg_stat_statemens\n> will display all activity that has been run, since there's a limited\n> amount of (userid, dbid, queryid) that can be stored, but I agree that\n> allowing random user to hide their activity isn't nice. Note that I\n> defined the GUC as SUSET, but maybe it should be SIGHUP?\n\nI don't think we should consider pg_stat_statement a bulletproof defense\nfor security problems. It is already lossy by design.\n\nI do think it'd be preferable if we allowed it to be disabled at the\nconfig file level only, not with SET (prevent users from hiding stuff);\nbut I think it is useful to allow users to enable it for specific\nqueries or for specific sessions only, while globally disabled. This\nmight mean we need to mark it PGC_SIGHUP and then have the check hook\ndisallow it from being changed under such-and-such conditions.\n\n\n",
"msg_date": "Sat, 17 Oct 2020 12:59:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2020-Oct-17, Julien Rouhaud wrote:\n>> On Sat, Oct 17, 2020 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> then there's a potential security issue if the GUC is USERSET level:\n>>> a user could hide her queries from pg_stat_statement by turning the\n>>> GUC off. So this line of thought suggests the GUC needs to be at\n>>> least SUSET, and maybe higher ... doesn't pg_stat_statement need it\n>>> to have the same value cluster-wide?\n\n> I don't think we should consider pg_stat_statement a bulletproof defense\n> for security problems. It is already lossy by design.\n\nFair point, but if we allow several different values to be set in\ndifferent sessions, what ends up happening in pg_stat_statements?\n\nOn the other hand, maybe that's just a matter for documentation.\n\"If the 'same' query is processed with two different queryID settings,\nthat will generally result in two separate table entries, because\nthe same ID hash is unlikely to be produced in both cases\". There\nis certainly a use-case for wanting to be able to do this, if for\nexample you'd like different query aggregation behavior for different\napplications.\n\n> I do think it'd be preferrable if we allowed it to be disabled at the\n> config file level only, not with SET (prevent users from hiding stuff);\n> but I think it is useful to allow users to enable it for specific\n> queries or for specific sessions only, while globally disabled.\n\nIndeed. I'm kind of talking myself into the idea that USERSET, or\nat most SUSET, is fine, so long as we document what happens when it\nhas different values in different sessions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Oct 2020 12:28:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2020-Oct-17, Tom Lane wrote:\n\n> Fair point, but if we allow several different values to be set in\n> different sessions, what ends up happening in pg_stat_statements?\n> \n> On the other hand, maybe that's just a matter for documentation.\n> \"If the 'same' query is processed with two different queryID settings,\n> that will generally result in two separate table entries, because\n> the same ID hash is unlikely to be produced in both cases\".\n\nWait ... what? I've been thinking that this GUC is just to enable or\ndisable the computation of query ID, not to change the algorithm to do\nso. Do we really need to allow different algorithms in different\nsessions?\n\n\n",
"msg_date": "Sun, 18 Oct 2020 00:01:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Wait ... what? I've been thinking that this GUC is just to enable or\n> disable the computation of query ID, not to change the algorithm to do\n> so. Do we really need to allow different algorithms in different\n> sessions?\n\nWe established that some time ago, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Oct 2020 00:20:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sun, Oct 18, 2020 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Wait ... what? I've been thinking that this GUC is just to enable or\n> > disable the computation of query ID, not to change the algorithm to do\n> > so. Do we really need to allow different algorithms in different\n> > sessions?\n>\n> We established that some time ago, no?\n\nI thought we established the need for allowing different algorithms,\nbut I assumed globally not per session. Anyway, allowing to enable or\ndisable compute_queryid per session would technically allow that,\nassuming that you have another module loaded that computes a queryid\nonly if no-one was already computed. In that case pg_stat_statements\nworks as you would expect, you will get a new entry, with a duplicated\nquery text.\n\nWith a bit more thinking, there's at least one use case where it's\ninteresting to disable pg_stat_statements: queries using temporary\ntables. In that case you're guaranteed to generate an infinity of\ndifferent queryids. That doesn't really help since you're not\naggregating anything anymore, and it also makes pg_stat_statements\nvirtually unusable as once you have a workload that needs frequent\neviction, the overhead is so bad that you basically have to disable\npg_stat_statements. We could alternatively add a GUC to disable\nqueryid computation when one of the tables is a temporary table, but\nthat's yet one among many considerations that are probably best\nanswered with a custom implementation.\n\nI'm also attaching an updated patch with some attempt to improve the\ndocumentation. I mention that the in-core algorithm may not suit\neveryone's needs, but we don't actually document what the heuristics are.\nShould we give more details on them and what are the most direct\nconsequences?",
"msg_date": "Sun, 18 Oct 2020 16:12:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Sun, Oct 18, 2020 at 4:12 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Sun, Oct 18, 2020 at 12:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > > Wait ... what? I've been thinking that this GUC is just to enable or\n> > > disable the computation of query ID, not to change the algorithm to do\n> > > so. Do we really need to allow different algorithms in different\n> > > sessions?\n> >\n> > We established that some time ago, no?\n>\n> I thought we established the need for allowing different algorithms,\n> but I assumed globally not per session. Anyway, allowing to enable or\n> disable compute_queryid per session would technically allow that,\n> assuming that you have another module loaded that computes a queryid\n> only if no-one was already computed. In that case pg_stat_statements\n> works as you would expect, you will get a new entry, with a duplicated\n> query text.\n>\n> With a bit more thinking, there's at least one use case where it's\n> interesting to disable pg_stat_statements: queries using temporary\n> tables. In that case you're guaranteed to generate an infinity of\n> different queryid. That doesn't really help since you're not\n> aggregating anything anymore, and it also makes pg_stat_statements\n> virtually unusable as once you have a workload that needs frequent\n> eviction, the overhead is so bad that you basically have to disable\n> pg_stat_statements. We could alternatively add a GUC to disable\n> queryid computation when one of the tables is a temporary table, but\n> that's yet one among many considerations that are probably best\n> answered with a custom implementation.\n>\n> I'm also attaching an updated patch with some attempt to improve the\n> documentation. I mention that in-core algorithm may not suits\n> everyone's needs, but we don't actually document what heuristics are.\n> Should we give more details on them and what are the most direct\n> consequences?\n\nv15 that fixes recent conflicts.",
"msg_date": "Fri, 8 Jan 2021 01:07:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Fri, Jan 8, 2021 at 1:07 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> v15 that fixes recent conflicts.\n\nRebase only, thanks to the cfbot! V16 attached.",
"msg_date": "Wed, 20 Jan 2021 00:43:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Hi Julien,\n\n> Rebase only, thanks to the cfbot! V16 attached.\n\nI tested the v16 patch on a0efda88a by using \"make installcheck-parallel\", and\nmy result is the following. Attached file is regression.diffs.\n\n========================\n 1 of 202 tests failed.\n========================\n\nThe differences that caused some tests to fail can be viewed in the\nfile \"/home/postgres/PG140/src/test/regress/regression.diffs\". A copy of the test summary that you see\nabove is saved in the file \"/home/postgres/PG140/src/test/regress/regression.out\".\n\n\nsrc/test/regress/regression.diffs\n---------------------------------\ndiff -U3 /home/postgres/PG140/src/test/regress/expected/rules.out /home/postgres/PG140/src/test/regress/results/rules.out\n--- /home/postgres/PG140/src/test/regress/expected/rules.out 2021-01-20 08:41:16.383175559 +0900\n+++ /home/postgres/PG140/src/test/regress/results/rules.out 2021-01-20 08:43:46.589171774 +0900\n@@ -1760,10 +1760,9 @@\n s.state,\n s.backend_xid,\n s.backend_xmin,\n- s.queryid,\n s.query,\n s.backend_type\n- FROM ((pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, wait_event_type, wait_event, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port, backend_xid, backend_xmin, backend_type, ssl, sslversion, sslcipher, sslbits, sslcompression, ssl_client_dn, ssl_client_serial, ssl_issuer_dn, gss_auth, gss_princ, gss_enc, leader_pid, queryid)\n+ FROM ((pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, wait_event_type, wait_event, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port, backend_xid, backend_xmin, backend_type, ssl, sslversion, sslcipher, sslbits, sslcompression, ssl_client_dn, ssl_client_serial, ssl_issuer_dn, gss_auth, gss_princ, gss_enc, leader_pid)\n...\n\nThanks,\nTatsuro Yamada",
"msg_date": "Wed, 20 Jan 2021 08:58:22 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hi Julien,\n\n\n>> Rebase only, thanks to the cfbot! V16 attached.\n> \n> I tested the v16 patch on a0efda88a by using \"make installcheck-parallel\", and\n> my result is the following. Attached file is regression.diffs.\n\n\nSorry, my environment was not suitable for the test when I sent my previous email.\nI fixed my environment and tested it again, and it was a success. See below:\n\n=======================\n All 202 tests passed.\n=======================\n\nRegards,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Wed, 20 Jan 2021 11:05:54 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hello Yamada-san,\n\nOn Wed, Jan 20, 2021 at 10:06 AM Tatsuro Yamada\n<tatsuro.yamada.tf@nttcom.co.jp> wrote:\n>\n> Hi Julien,\n>\n>\n> >> Rebase only, thanks to the cfbot! V16 attached.\n> >\n> > I tested the v16 patch on a0efda88a by using \"make installcheck-parallel\", and\n> > my result is the following. Attached file is regression.diffs.\n>\n>\n> Sorry, my environment was not suitable for the test when I sent my previous email.\n> I fixed my environment and tested it again, and it was a success. See below:\n>\n> =======================\n> All 202 tests passed.\n> =======================\n\nNo worries, thanks a lot for testing!\n\n\n",
"msg_date": "Wed, 20 Jan 2021 10:28:54 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Jan 20, 2021 at 12:43 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Jan 8, 2021 at 1:07 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > v15 that fixes recent conflicts.\n>\n> Rebase only, thanks to the cfbot! V16 attached.\n\nRecent commit exposed that the explain_filter() doesn't filter\nnegative sign. This can now be a problem with query identifiers in\nexplain output as they use the whole bigint range. v17 attached fixes\nthat, also rebased against current HEAD.",
"msg_date": "Tue, 2 Mar 2021 11:43:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "Recent conflict, thanks to cfbot. v18 attached.",
"msg_date": "Sun, 14 Mar 2021 16:06:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 04:06:45PM +0800, Julien Rouhaud wrote:\n> Recent conflict, thanks to cfbot. v18 attached.\n\nWe are reaching the two-year mark on this feature, that everyone seems\nto agree is needed. Is any committer going to work on this to get it\ninto PG 14? Should I take it?\n\nI just read the thread and I didn't see any open issues. Are there any?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 17 Mar 2021 11:17:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> We are reaching the two-year mark on this feature, that everyone seems\n> to agree is needed. Is any committer going to work on this to get it\n> into PG 14? Should I take it?\n\nI still say that it's a serious mistake to sanctify a query ID calculation\nmethod that was designed only for pg_stat_statement's needs as the one\ntrue way to do it. But that's what exposing it in a core view would do.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Mar 2021 11:28:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > We are reaching the two-year mark on this feature, that everyone seems\n> > to agree is needed. Is any committer going to work on this to get it\n> > into PG 14? Should I take it?\n> \n> I still say that it's a serious mistake to sanctify a query ID calculation\n> method that was designed only for pg_stat_statement's needs as the one\n> true way to do it. But that's what exposing it in a core view would do.\n\nOK, I am fine with creating a new method, and maybe having\npg_stat_statements use it. Is that the direction we should be going in?\nI do think we need _some_ method in core if we are going to be exposing\nthis value in pg_stat_activity and log_line_prefix.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 17 Mar 2021 11:40:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n>> I still say that it's a serious mistake to sanctify a query ID calculation\n>> method that was designed only for pg_stat_statement's needs as the one\n>> true way to do it. But that's what exposing it in a core view would do.\n\n> OK, I am fine with creating a new method, and maybe having\n> pg_stat_statements use it. Is that the direction we should be going in?\n\nThe point is that we've understood Query.queryId as something that\ndifferent extensions might calculate differently for their own needs.\nIn particular it's easy to imagine extensions that want an ID that is\nless fuzzy than what pg_stat_statements wants. We never had a plan for\nhow two such extensions could co-exist, but at least it was possible\nto use one if you didn't use another. If this gets moved into core\nthen there will basically be only one way that anyone can do it.\n\nMaybe what we need is a design for allowing more than one query ID.\n\n> I do think we need _some_ method in core if we are going to be exposing\n> this value in pg_stat_activity and log_line_prefix.\n\nI'm basically objecting to the conclusion that we should do either\nof those. There is no way around the fact that it will break every\nuser of Query.queryId other than pg_stat_statements, unless they\nare okay with whatever definition pg_stat_statements is using (which\nis a moving target BTW).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Mar 2021 12:01:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 12:01:38PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n> >> I still say that it's a serious mistake to sanctify a query ID calculation\n> >> method that was designed only for pg_stat_statement's needs as the one\n> >> true way to do it. But that's what exposing it in a core view would do.\n> \n> > OK, I am fine with creating a new method, and maybe having\n> > pg_stat_statements use it. Is that the direction we should be going in?\n> \n> The point is that we've understood Query.queryId as something that\n> different extensions might calculate differently for their own needs.\n> In particular it's easy to imagine extensions that want an ID that is\n> less fuzzy than what pg_stat_statements wants. We never had a plan for\n> how two such extensions could co-exist, but at least it was possible\n> to use one if you didn't use another. If this gets moved into core\n> then there will basically be only one way that anyone can do it.\n\nWell, the patch docs say:\n\n Enables or disables in core query identifier computation.arameter. The\n <xref linkend=\"pgstatstatements\"/> extension requires a query\n--> identifier to be computed. Note that an external module can\n--> alternatively be used if the in core query identifier computation\n specification doesn't suit your need. In this case, in core\n computation must be disabled. The default is <literal>off</literal>.\n\n> Maybe what we need is a design for allowing more than one query ID.\n> \n> > I do think we need _some_ method in core if we are going to be exposing\n> > this value in pg_stat_activity and log_line_prefix.\n> \n> I'm basically objecting to the conclusion that we should do either\n> of those. 
There is no way around the fact that it will break every\n> user of Query.queryId other than pg_stat_statements, unless they\n> are okay with whatever definition pg_stat_statements is using (which\n> is a moving target BTW).\n\nI thought the above doc patch feature avoided this problem because an\nextension can override the build-in query id.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 17 Mar 2021 12:13:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "st 17. 3. 2021 v 17:03 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n> >> I still say that it's a serious mistake to sanctify a query ID\n> calculation\n> >> method that was designed only for pg_stat_statement's needs as the one\n> >> true way to do it. But that's what exposing it in a core view would do.\n>\n> > OK, I am fine with creating a new method, and maybe having\n> > pg_stat_statements use it. Is that the direction we should be going in?\n>\n> The point is that we've understood Query.queryId as something that\n> different extensions might calculate differently for their own needs.\n> In particular it's easy to imagine extensions that want an ID that is\n> less fuzzy than what pg_stat_statements wants. We never had a plan for\n> how two such extensions could co-exist, but at least it was possible\n> to use one if you didn't use another. If this gets moved into core\n> then there will basically be only one way that anyone can do it.\n>\n> Maybe what we need is a design for allowing more than one query ID.\n>\n\nTheoretically there can be a hook for calculation of queryid, that can be\nby used extension. Default can be assigned with a method that is used by\npg_stat_statements.\n\nI don't think it is possible to use more different query id for\npg_stat_statements so this solution can be simple.\n\nregards\n\nPavel\n\n\n\n\n\n>\n> > I do think we need _some_ method in core if we are going to be exposing\n> > this value in pg_stat_activity and log_line_prefix.\n>\n> I'm basically objecting to the conclusion that we should do either\n> of those. There is no way around the fact that it will break every\n> user of Query.queryId other than pg_stat_statements, unless they\n> are okay with whatever definition pg_stat_statements is using (which\n> is a moving target BTW).\n>\n> regards, tom lane\n>\n>\n>\n\nst 17. 3. 
2021 v 17:03 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n>> I still say that it's a serious mistake to sanctify a query ID calculation\n>> method that was designed only for pg_stat_statement's needs as the one\n>> true way to do it. But that's what exposing it in a core view would do.\n\n> OK, I am fine with creating a new method, and maybe having\n> pg_stat_statements use it. Is that the direction we should be going in?\n\nThe point is that we've understood Query.queryId as something that\ndifferent extensions might calculate differently for their own needs.\nIn particular it's easy to imagine extensions that want an ID that is\nless fuzzy than what pg_stat_statements wants. We never had a plan for\nhow two such extensions could co-exist, but at least it was possible\nto use one if you didn't use another. If this gets moved into core\nthen there will basically be only one way that anyone can do it.\n\nMaybe what we need is a design for allowing more than one query ID.Theoretically there can be a hook for calculation of queryid, that can be by used extension. Default can be assigned with a method that is used by pg_stat_statements. I don't think it is possible to use more different query id for pg_stat_statements so this solution can be simple.regardsPavel \n\n> I do think we need _some_ method in core if we are going to be exposing\n> this value in pg_stat_activity and log_line_prefix.\n\nI'm basically objecting to the conclusion that we should do either\nof those. There is no way around the fact that it will break every\nuser of Query.queryId other than pg_stat_statements, unless they\nare okay with whatever definition pg_stat_statements is using (which\nis a moving target BTW).\n\n regards, tom lane",
"msg_date": "Wed, 17 Mar 2021 17:16:50 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 05:16:50PM +0100, Pavel Stehule wrote:\n> \n> \n> st 17. 3. 2021 v�17:03 odes�latel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> \n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n> >> I still say that it's a serious mistake to sanctify a query ID\n> calculation\n> >> method that was designed only for pg_stat_statement's needs as the one\n> >> true way to do it.� But that's what exposing it in a core view would do.\n> \n> > OK, I am fine with creating a new method, and maybe having\n> > pg_stat_statements use it.� Is that the direction we should be going in?\n> \n> The point is that we've understood Query.queryId as something that\n> different extensions might calculate differently for their own needs.\n> In particular it's easy to imagine extensions that want an ID that is\n> less fuzzy than what pg_stat_statements wants.� We never had a plan for\n> how two such extensions could co-exist, but at least it was possible\n> to use one if you didn't use another.� If this gets moved into core\n> then there will basically be only one way that anyone can do it.\n> \n> Maybe what we need is a design for allowing more than one query ID.\n> \n> \n> Theoretically there can be a hook for calculation of queryid, that can be by\n> used extension. Default can be assigned with a method that is used by\n> pg_stat_statements.\n\nYes, that is what the code patch says it does.\n\n> I don't think it is possible to use more different query id for\n> pg_stat_statements so this solution can be simple.\n\nAgreed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 17 Mar 2021 12:24:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 12:24:44PM -0400, Bruce Momjian wrote:\n> On Wed, Mar 17, 2021 at 05:16:50PM +0100, Pavel Stehule wrote:\n> > \n> > \n> > st 17. 3. 2021 v�17:03 odes�latel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> > \n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > On Wed, Mar 17, 2021 at 11:28:38AM -0400, Tom Lane wrote:\n> > >> I still say that it's a serious mistake to sanctify a query ID\n> > calculation\n> > >> method that was designed only for pg_stat_statement's needs as the one\n> > >> true way to do it.� But that's what exposing it in a core view would do.\n> > \n> > > OK, I am fine with creating a new method, and maybe having\n> > > pg_stat_statements use it.� Is that the direction we should be going in?\n> > \n> > The point is that we've understood Query.queryId as something that\n> > different extensions might calculate differently for their own needs.\n> > In particular it's easy to imagine extensions that want an ID that is\n> > less fuzzy than what pg_stat_statements wants.� We never had a plan for\n> > how two such extensions could co-exist, but at least it was possible\n> > to use one if you didn't use another.� If this gets moved into core\n> > then there will basically be only one way that anyone can do it.\n> > \n> > Maybe what we need is a design for allowing more than one query ID.\n> > \n> > \n> > Theoretically there can be a hook for calculation of queryid, that can be by\n> > used extension. Default can be assigned with a method that is used by\n> > pg_stat_statements.\n> \n> Yes, that is what the code patch says it does.\n> \n> > I don't think it is possible to use more different query id for\n> > pg_stat_statements so this solution can be simple.\n> \n> Agreed.\n\nActually, putting the query identifer computation in the core makes it way more\ntunable, even if it's conterintuitive. 
What it means is that you can now chose\nto use usual pgss' algorithm or a different one for log_line_prefix and\npg_stat_activity.queryid, but also that you can now use pgss with a different\nquery id algorithm. That's another thing that user were asking for a long\ntime.\n\nI originally suggested to make it clearer by having an enum GUC rather than a\nboolean, say compute_queryid = [ none | core | external ], and if set to\nexternal then a hook would be explicitely called. Right now, \"none\" and\n\"external\" are binded with compute_queryid = off, and depends on whether an\nextension is computing a queryid during post_parse_analyse_hook.\n\nIt could later be extended to suit other needs if we ever come to some\nagreement (for instance \"legacy\", \"logical_replication_stable\" or whatever\nbetter name we can find for something that doesn't depend on Oid).\n\n\n",
"msg_date": "Thu, 18 Mar 2021 00:49:07 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 12:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I originally suggested to make it clearer by having an enum GUC rather than a\n> boolean, say compute_queryid = [ none | core | external ], and if set to\n> external then a hook would be explicitely called. Right now, \"none\" and\n> \"external\" are binded with compute_queryid = off, and depends on whether an\n> extension is computing a queryid during post_parse_analyse_hook.\n\nI would just make it a Boolean and have a hook. The Boolean controls\nwhether it gets computed at all, and the hook lets an external module\noverride the way it gets computed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Mar 2021 16:04:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 04:04:44PM -0400, Robert Haas wrote:\n> On Wed, Mar 17, 2021 at 12:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > I originally suggested to make it clearer by having an enum GUC rather than a\n> > boolean, say compute_queryid = [ none | core | external ], and if set to\n> > external then a hook would be explicitely called. Right now, \"none\" and\n> > \"external\" are binded with compute_queryid = off, and depends on whether an\n> > extension is computing a queryid during post_parse_analyse_hook.\n> \n> I would just make it a Boolean and have a hook. The Boolean controls\n> whether it gets computed at all, and the hook lets an external module\n> override the way it gets computed.\n\nOK, is that what everyone wants? I think that is what the patch already\ndoes.\n\nI think having multiple queryids used in a single cluster is much too\nconfusing to support. You would have to label and control which queryid\nis displayed by pg_stat_activity and log_line_prefix, and that seems too\nconfusing and not useful.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 17 Mar 2021 18:32:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 06:32:16PM -0400, Bruce Momjian wrote:\n> On Wed, Mar 17, 2021 at 04:04:44PM -0400, Robert Haas wrote:\n> > On Wed, Mar 17, 2021 at 12:48 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > I originally suggested to make it clearer by having an enum GUC rather than a\n> > > boolean, say compute_queryid = [ none | core | external ], and if set to\n> > > external then a hook would be explicitely called. Right now, \"none\" and\n> > > \"external\" are binded with compute_queryid = off, and depends on whether an\n> > > extension is computing a queryid during post_parse_analyse_hook.\n> > \n> > I would just make it a Boolean and have a hook. The Boolean controls\n> > whether it gets computed at all, and the hook lets an external module\n> > override the way it gets computed.\n> \n> OK, is that what everyone wants? I think that is what the patch already\n> does.\n\nNote exactly. Right now a custom queryid can be computed even if\ncompute_queryid is off, if some extension does that in post_parse_analyze_hook.\n\nI'm assuming that what Robert was thinking was more like:\n\nif (compute_queryid)\n{\n if (queryid_hook)\n queryId = queryid_hook(...);\n else\n queryId = JumbeQuery(...);\n}\nelse\n queryId = 0;\n\nAnd that should be done *after* post_parse_analyse_hook so that it's clear that\nthis hook is no longer the place to compute queryid.\n\nIs that what should be done?\n\n> I think having multiple queryids used in a single cluster is much too\n> confusing to support. You would have to label and control which queryid\n> is displayed by pg_stat_activity and log_line_prefix, and that seems too\n> confusing and not useful.\n\nI agree.\n\n\n",
"msg_date": "Thu, 18 Mar 2021 07:29:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 07:29:56AM +0800, Julien Rouhaud wrote:\n> On Wed, Mar 17, 2021 at 06:32:16PM -0400, Bruce Momjian wrote:\n> > OK, is that what everyone wants? I think that is what the patch already\n> > does.\n> \n> Note exactly. Right now a custom queryid can be computed even if\n> compute_queryid is off, if some extension does that in post_parse_analyze_hook.\n> \n> I'm assuming that what Robert was thinking was more like:\n> \n> if (compute_queryid)\n> {\n> if (queryid_hook)\n> queryId = queryid_hook(...);\n> else\n> queryId = JumbeQuery(...);\n> }\n> else\n> queryId = 0;\n> \n> And that should be done *after* post_parse_analyse_hook so that it's clear that\n> this hook is no longer the place to compute queryid.\n> \n> Is that what should be done?\n\nNo, I don't think so. I think having extensions change behavior\ncontrolled by GUCs is a bad interface.\n\nThe docs are going to say that you have to enable compute_queryid to see\nthe query id in pg_stat_activity and log_line_prefix, but if you install\nan extension, the query id will be visible even if you don't have\ncompute_queryid enabled. I think you need to only honor the hook if\ncompute_queryid is enabled, and update the pg_stat_statements docs to\nsay you have to enable compute_queryid for pg_stat_statements to work.\n\nAlso, should it be compute_queryid or compute_query_id?\n\nAlso, the overhead of computing the query id was reported as 2% --- that\nseems quite high for what it does. Do we know why it is so high?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 18 Mar 2021 09:47:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 09:47:29AM -0400, Bruce Momjian wrote:\n> On Thu, Mar 18, 2021 at 07:29:56AM +0800, Julien Rouhaud wrote:\n> > On Wed, Mar 17, 2021 at 06:32:16PM -0400, Bruce Momjian wrote:\n> > > OK, is that what everyone wants? I think that is what the patch already\n> > > does.\n> > \n> > Note exactly. Right now a custom queryid can be computed even if\n> > compute_queryid is off, if some extension does that in post_parse_analyze_hook.\n> > \n> > I'm assuming that what Robert was thinking was more like:\n> > \n> > if (compute_queryid)\n> > {\n> > if (queryid_hook)\n> > queryId = queryid_hook(...);\n> > else\n> > queryId = JumbeQuery(...);\n> > }\n> > else\n> > queryId = 0;\n> > \n> > And that should be done *after* post_parse_analyse_hook so that it's clear that\n> > this hook is no longer the place to compute queryid.\n> > \n> > Is that what should be done?\n> \n> No, I don't think so. I think having extensions change behavior\n> controlled by GUCs is a bad interface.\n> \n> The docs are going to say that you have to enable compute_queryid to see\n> the query id in pg_stat_activity and log_line_prefix, but if you install\n> an extension, the query id will be visible even if you don't have\n> compute_queryid enabled. I think you need to only honor the hook if\n> compute_queryid is enabled, and update the pg_stat_statements docs to\n> say you have to enable compute_queryid for pg_stat_statements to work.\n\nI'm confused, what you described really looks like what I described.\n\nLet me try to clarify:\n\n- if compute_queryid is off, a queryid should never be seen no matter how hard\n an extension tries\n\n- if compute_queryid is on, the calculation will be done by the core\n (using pgss JumbeQuery) unless an extension computed one already. 
The only\n way to know what algorithm is used is to check the list of extension loaded.\n\n- if some extension calculates a queryid during post_parse_analyze_hook, we\n will always reset it.\n\nIs that the approach you want?\n\nNote that the only way to not honor the hook is iff the new GUC is disabled is\nto have a new queryid_hook, as we can't stop calling post_parse_analyze_hook if\nthe new GUC is off, and we don't want to pay the queryid calculation overhead\nif the admin explicitly said it wasn't needed.\n\n> Also, should it be compute_queryid or compute_query_id?\n\nMaybe compute_query_identifier?\n\n> Also, the overhead of computing the query id was reported as 2% --- that\n> seems quite high for what it does. Do we know why it is so high?\n\nThe 2% was a worst case scenario, for a query with a single join over\nridiculously small pg_class and pg_attribute, in read only. The whole workload\nwas in shared buffers so the planning and execution is quite fast. Adding some\ncomplexity in the query really limited the overhead.\n\nNote that this was done on an old laptop with quite slow CPU. Maybe\nsomeone with a better hardware than a 5/6yo laptop could get some more\nrealistic results (I unfortunately don't have anything to try on).\n\n\n",
"msg_date": "Fri, 19 Mar 2021 02:06:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 02:06:56AM +0800, Julien Rouhaud wrote:\n> On Thu, Mar 18, 2021 at 09:47:29AM -0400, Bruce Momjian wrote:\n> > On Thu, Mar 18, 2021 at 07:29:56AM +0800, Julien Rouhaud wrote:\n> > > Note exactly. Right now a custom queryid can be computed even if\n> > > compute_queryid is off, if some extension does that in post_parse_analyze_hook.\n\nThe above text is the part that made me think an extension could display\na query id even if disabled by the GUC.\n\n> > The docs are going to say that you have to enable compute_queryid to see\n> > the query id in pg_stat_activity and log_line_prefix, but if you install\n> > an extension, the query id will be visible even if you don't have\n> > compute_queryid enabled. I think you need to only honor the hook if\n> > compute_queryid is enabled, and update the pg_stat_statements docs to\n> > say you have to enable compute_queryid for pg_stat_statements to work.\n> \n> I'm confused, what you described really looks like what I described.\n> \n> Let me try to clarify:\n> \n> - if compute_queryid is off, a queryid should never be seen no matter how hard\n> an extension tries\n\nOh, OK. I can see an extension setting the query id on its own --- we\ncan't prevent that from happening. It is probably enough to tell\nextensions to honor the GUC, since they would want it enabled so it\ndisplays in pg_stat_activity and log_line_prefix.\n\n> - if compute_queryid is on, the calculation will be done by the core\n> (using pgss JumbeQuery) unless an extension computed one already. 
The only\n> way to know what algorithm is used is to check the list of extension loaded.\n\nOK.\n\n> - if some extension calculates a queryid during post_parse_analyze_hook, we\n> will always reset it.\n\nOK, good.\n\n> Is that the approach you want?\n\nYes, I think so.\n\n> Note that the only way to not honor the hook is iff the new GUC is disabled is\n> to have a new queryid_hook, as we can't stop calling post_parse_analyze_hook if\n> the new GUC is off, and we don't want to pay the queryid calculation overhead\n> if the admin explicitly said it wasn't needed.\n\nRight, let's just get the extensions to honor the GUC --- we don't need\nto block them or anything.\n\n> > Also, should it be compute_queryid or compute_query_id?\n> \n> Maybe compute_query_identifier?\n\nI think compute_query_id works, and is shorter.\n\n> > Also, the overhead of computing the query id was reported as 2% --- that\n> > seems quite high for what it does. Do we know why it is so high?\n> \n> The 2% was a worst case scenario, for a query with a single join over\n> ridiculously small pg_class and pg_attribute, in read only. The whole workload\n> was in shared buffers so the planning and execution is quite fast. Adding some\n> complexity in the query really limited the overhead.\n> \n> Note that this was done on an old laptop with quite slow CPU. Maybe\n> someone with a better hardware than a 5/6yo laptop could get some more\n> realistic results (I unfortunately don't have anything to try on).\n\nOK, good to know. I can run some tests here if people would like me to.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 18 Mar 2021 15:23:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 03:23:49PM -0400, Bruce Momjian wrote:\n> On Fri, Mar 19, 2021 at 02:06:56AM +0800, Julien Rouhaud wrote:\n> \n> The above text is the part that made me think an extension could display\n> a query id even if disabled by the GUC.\n\nWith the last version of the patch I sent it was the case.\n\n> Oh, OK. I can see an extension setting the query id on its own --- we\n> can't prevent that from happening. It is probably enough to tell\n> extensions to honor the GUC, since they would want it enabled so it\n> displays in pg_stat_activity and log_line_prefix.\n\nOk. So no new hook, and we keep using post_parse_analyze_hook as the official\nway to have custom queryid implementation, with this new behavior:\n\n> > - if some extension calculates a queryid during post_parse_analyze_hook, we\n> > will always reset it.\n> \n> OK, good.\n\nNow that I'm back on the code I remember why I did it this way. It's\nunfortunately not really possible to make things work this way.\n\npg_stat_statements' post_parse_analyze_hook relies on a queryid already being\ncomputed, as it's where we know where the constants are recorded. It means:\n\n- we have to call post_parse_analyze_hook *after* doing core queryid\n calculation\n- if users want to use a third party module to calculate a queryid, they'll\n have to make sure that the module's post_parse_analyze_hook is called\n *before* pg_stat_statements' one.\n- even if they do so, they'll still have to pay the price of core queryid\n calculation\n\nSo it would be very hard to configure and will be too expensive. I think that\nwe have to choose to either we make compute_query_id only trigger core\ncalculation (like it was in previous patch version), or introduce a new hook.\n\n> I think compute_query_id works, and is shorter.\n\nWFM.\n\n> OK, good to know. I can run some tests here if people would like me to.\n\n+1. 
A read-only pgbench will be some kind of worst-case scenario that can be\nused, I think.\n\n\n",
"msg_date": "Fri, 19 Mar 2021 11:16:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
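The queryid under discussion is a fingerprint of a query with its constants masked out, which is also why pg_stat_statements needs the jumbling step to run first: that step records where the constants are. As a rough illustration of the idea only (PostgreSQL's real jumbling walks the post-parse-analysis tree in queryjumble.c rather than the SQL text), a toy version could look like this sketch:

```python
import re

FNV_OFFSET = 0xcbf29ce484222325
FNV_PRIME = 0x100000001b3
MASK64 = (1 << 64) - 1


def toy_query_id(sql: str) -> int:
    """Toy illustration only, not PostgreSQL's algorithm: mask string and
    numeric literals in the SQL text, then hash the normalized text with
    FNV-1a to a 64-bit id. This shows why queries that differ only in
    their constants share a single queryid."""
    normalized = re.sub(r"'[^']*'|\b\d+\b", "?", sql)
    h = FNV_OFFSET
    for byte in normalized.encode():
        h = ((h ^ byte) * FNV_PRIME) & MASK64
    return h
```

Two queries differing only in literal values hash to the same id, while a change in, say, the table name yields a different one.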
{
"msg_contents": "On Fri, Mar 19, 2021 at 11:16:50AM +0800, Julien Rouhaud wrote:\n> On Thu, Mar 18, 2021 at 03:23:49PM -0400, Bruce Momjian wrote:\n> > On Fri, Mar 19, 2021 at 02:06:56AM +0800, Julien Rouhaud wrote:\n> > \n> > The above text is the part that made me think an extension could display\n> > a query id even if disabled by the GUC.\n> \n> With the last version of the patch I sent it was the case.\n> \n> > Oh, OK. I can see an extension setting the query id on its own --- we\n> > can't prevent that from happening. It is probably enough to tell\n> > extensions to honor the GUC, since they would want it enabled so it\n> > displays in pg_stat_activity and log_line_prefix.\n> \n> Ok. So no new hook, and we keep using post_parse_analyze_hook as the official\n> way to have custom queryid implementation, with this new behavior:\n> \n> > > - if some extension calculates a queryid during post_parse_analyze_hook, we\n> > > will always reset it.\n> > \n> > OK, good.\n> \n> Now that I'm back on the code I remember why I did it this way. It's\n> unfortunately not really possible to make things work this way.\n> \n> pg_stat_statements' post_parse_analyze_hook relies on a queryid already being\n> computed, as it's where we know where the constants are recorded. It means:\n> \n> - we have to call post_parse_analyze_hook *after* doing core queryid\n> calculation\n> - if users want to use a third party module to calculate a queryid, they'll\n> have to make sure that the module's post_parse_analyze_hook is called\n> *before* pg_stat_statements' one.\n> - even if they do so, they'll still have to pay the price of core queryid\n> calculation\n\nOK, that makes perfect sense. 
I think the best solution is to document\nthat compute_query_id just controls the built-in computation of the\nquery id, and that extensions can also compute it if this is off, and\npg_stat_activity and log_line_prefix will display built-in or extension\ncomputed query ids.\n\nIt might be interesting someday to check if the hook changed a\npre-computed query id and warn the user in the logs, but that could\ncause more log-spam problems than help. I am a little worried that\nsomeone might have compute_query_id enabled and then install an\nextension that overwrites it, but we will just have to document this\nissue. Hopefully extensions will be clear that they are computing their\nown query id.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 19 Mar 2021 09:29:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 2:29 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> OK, that makes perfect sense. I think the best solution is to document\n> that compute_query_id just controls the built-in computation of the\n> query id, and that extensions can also compute it if this is off, and\n> pg_stat_activity and log_line_prefix will display built-in or extension\n> computed query ids.\n>\n> It might be interesting someday to check if the hook changed a\n> pre-computed query id and warn the user in the logs, but that could\n> cause more log-spam problems than help.\n\nThe log-spam could be mitigated by logging it just once per connection,\nthe first time it is overridden.\n\nAlso, we could ask the extensions to expose the \"method name\" in a read-only GUC\n\nso one can do\n\nSHOW compute_query_id_method;\n\nand get the name of the method used:\n\ncompute_query_id_method\n------------------------------------\nbuiltin\n\nAnd it may even dynamically change to indicate the overriding of builtin:\n\ncompute_query_id_method\n---------------------------------------------------\nfancy_compute_query_id (overrides builtin)\n\n\n",
"msg_date": "Fri, 19 Mar 2021 14:54:16 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
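The read-only GUC suggested in the message above could behave roughly as follows; note that compute_query_id_method is the proposed name from this message, not an existing PostgreSQL setting, so this is purely a sketch of the proposal's semantics:

```python
from typing import Optional


def compute_query_id_method(builtin_enabled: bool,
                            extension_provider: Optional[str]) -> str:
    """Sketch of the proposed read-only reporting GUC: report which
    queryid source is active, and flag the case where an extension
    overrides the built-in computation."""
    if extension_provider is None:
        return "builtin" if builtin_enabled else "none"
    if builtin_enabled:
        # Both sources active: the extension's value wins, so say so.
        return f"{extension_provider} (overrides builtin)"
    return extension_provider
```

This matches the two example outputs shown above: "builtin" when only the core computation is active, and "fancy_compute_query_id (overrides builtin)" when an extension replaces it.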
{
"msg_contents": "On Fri, Mar 19, 2021 at 09:29:06AM -0400, Bruce Momjian wrote:\n> On Fri, Mar 19, 2021 at 11:16:50AM +0800, Julien Rouhaud wrote:\n> > Now that I'm back on the code I remember why I did it this way. It's\n> > unfortunately not really possible to make things work this way.\n> > \n> > pg_stat_statements' post_parse_analyze_hook relies on a queryid already being\n> > computed, as it's where we know where the constants are recorded. It means:\n> > \n> > - we have to call post_parse_analyze_hook *after* doing core queryid\n> > calculation\n> > - if users want to use a third party module to calculate a queryid, they'll\n> > have to make sure that the module's post_parse_analyze_hook is called\n> > *before* pg_stat_statements' one.\n> > - even if they do so, they'll still have to pay the price of core queryid\n> > calculation\n> \n> OK, that makes perfect sense. I think the best solution is to document\n> that compute_query_id just controls the built-in computation of the\n> query id, and that extensions can also compute it if this is off, and\n> pg_stat_activity and log_line_prefix will display built-in or extension\n> computed query ids.\n\nSo the last version of the patch should implement that behavior right? It's\njust missing some explicit guidance that third-party extensions should only\ncalculate a queryid if compute_query_id is off\n\n> It might be interesting someday to check if the hook changed a\n> pre-computed query id and warn the user in the logs, but that could\n> cause more log-spam problems than help. I am a little worried that\n> someone might have compute_query_id enabled and then install an\n> extension that overwrites it, but we will just have to document this\n> issue. Hopefully extensions will be clear that they are computing their\n> own query id.\n\nI agree. And hopefully they will split the queryid calculation from the rest\nof the extension so that users can use the combination they want.\n\n\n",
"msg_date": "Fri, 19 Mar 2021 22:27:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 02:54:16PM +0100, Hannu Krosing wrote:\n> On Fri, Mar 19, 2021 at 2:29 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > OK, that makes perfect sense. I think the best solution is to document\n> > that compute_query_id just controls the built-in computation of the\n> > query id, and that extensions can also compute it if this is off, and\n> > pg_stat_activity and log_line_prefix will display built-in or extension\n> > computed query ids.\n> >\n> > It might be interesting someday to check if the hook changed a\n> > pre-computed query id and warn the user in the logs, but that could\n> > cause more log-spam problems than help.\n> \n> The log-spam could be mitigated by logging it just once per connection\n> the first time it is overridden\n\nYes, but it might still generate a significant number of additional lines.\n\nIf extension authors follow the recommendations and only calculate a queryid\nwhen compute_query_id is off, it should be easy to check that you have\neverything set up properly.\n\n> Also, we could ask the extensions to expose the \"method name\" in a read-only GUC\n> \n> so one can do\n> \n> SHOW compute_query_id_method;\n> \n> and get the name of method use\n> \n> compute_query_id_method\n> ------------------------------------\n> builtin\n> \n> And it may even dynamically change to indicate the overriding of builtin\n> \n> compute_query_id_method\n> ---------------------------------------------------\n> fancy_compute_query_id (overrides builtin)\n\nThis could be nice, but I'm not sure that it would work well if someone\ninstalls multiple extensions that calculate a queryid (which would be silly but\nstill), or loads another one at runtime.\n\n\n",
"msg_date": "Fri, 19 Mar 2021 22:35:21 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 10:35:21PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 19, 2021 at 02:54:16PM +0100, Hannu Krosing wrote:\n> > On Fri, Mar 19, 2021 at 2:29 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > The log-spam could be mitigated by logging it just once per connection\n> > the first time it is overridden\n> \n> Yes, but it might still generate a significant amount of additional lines.\n> \n> If extensions authors follow the recommendations and only calculate a queryid\n> when compute_query_id is off, it shoule be easy to check that you have\n> everything setup properly.\n\nSeems extensions that want to generate their own query id should just\nerror out with a message to the log file if compute_query_id is set ---\nthat should fix the entire issue --- but see below.\n\n> > Also, we could ask the extensions to expose the \"method name\" in a read-only GUC\n> > \n> > so one can do\n> > \n> > SHOW compute_query_id_method;\n> > \n> > and get the name of method use\n> > \n> > compute_query_id_method\n> > ------------------------------------\n> > builtin\n> > \n> > And it may even dynamically change to indicate the overriding of builtin\n> > \n> > compute_query_id_method\n> > ---------------------------------------------------\n> > fancy_compute_query_id (overrides builtin)\n> \n> This could be nice, but I'm not sure that it would work well if someones\n> install multiple extensions that calculate a queryid (which would be silly but\n> still), or load another one at runtime.\n\nWell, given we don't really want to support multiple query id types\nbeing generated or displayed, the \"error out\" above should fix it. \n\nLet's do this --- tell extensions to error out if the query id is\nalready set, either by compute_query_id or another extension. 
If an\nextension wants to generate its own query id and keep it internal to\nthe extension, that is fine, but the server-displayed query id should be\ngenerated once and never overwritten by an extension.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 19 Mar 2021 12:53:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
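The "generate once, never overwrite" policy described above can be sketched as a set-once guard. This is a hypothetical illustration only; the real enforcement would live in each extension's hook, erroring out when it finds a queryid already computed by compute_query_id or another extension:

```python
class QueryIdAlreadySetError(Exception):
    """Raised when a second provider tries to overwrite the queryid."""


def set_query_id(query_state: dict, new_id: int, source: str) -> None:
    # Hypothetical helper names: whichever provider (core computation or
    # an extension) sets the server-visible queryid first wins; any later
    # attempt to overwrite it is an error rather than a silent change.
    if query_state.get("query_id") is not None:
        raise QueryIdAlreadySetError(
            f"queryid already set by {query_state['source']}; "
            f"refusing to overwrite with value from {source}")
    query_state["query_id"] = new_id
    query_state["source"] = source
```

With this shape, changing the queryid source necessarily means removing one provider and restarting, which is exactly the administrative property argued for in the thread.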
{
"msg_contents": "On Fri, Mar 19, 2021 at 10:27:51PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 19, 2021 at 09:29:06AM -0400, Bruce Momjian wrote:\n> > OK, that makes perfect sense. I think the best solution is to document\n> > that compute_query_id just controls the built-in computation of the\n> > query id, and that extensions can also compute it if this is off, and\n> > pg_stat_activity and log_line_prefix will display built-in or extension\n> > computed query ids.\n> \n> So the last version of the patch should implement that behavior right? It's\n> just missing some explicit guidance that third-party extensions should only\n> calculate a queryid if compute_query_id is off\n\nYes, I think we are now down to just how the extensions should be told\nto behave, and how we document this --- see the email I just sent.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 19 Mar 2021 12:54:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "It would be really convenient if user-visible serialisations of the query\nid had something that identifies the computation method.\n\nmaybe prefix 'N' for internal, 'S' for pg_stat_statements etc.\n\nThis would immediately show in logs at what point the id calculator was\nchanged\n\nOn Fri, Mar 19, 2021 at 5:54 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Mar 19, 2021 at 10:27:51PM +0800, Julien Rouhaud wrote:\n> > On Fri, Mar 19, 2021 at 09:29:06AM -0400, Bruce Momjian wrote:\n> > > OK, that makes perfect sense. I think the best solution is to document\n> > > that compute_query_id just controls the built-in computation of the\n> > > query id, and that extensions can also compute it if this is off, and\n> > > pg_stat_activity and log_line_prefix will display built-in or extension\n> > > computed query ids.\n> >\n> > So the last version of the patch should implement that behavior right?\n> It's\n> > just missing some explicit guidance that third-party extensions should\n> only\n> > calculate a queryid if compute_query_id is off\n>\n> Yes, I think we are now down to just how the extensions should be told\n> to behave, and how we document this --- see the email I just sent.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n>\n>\n\nIt would be really convenient if user-visible serialisations of the query id had something that identifies the computation method.maybe prefix 'N' for internal, 'S' for pg_stat_statements etc.This would immediately show in logs at what point the id calculator was changedOn Fri, Mar 19, 2021 at 5:54 PM Bruce Momjian <bruce@momjian.us> wrote:On Fri, Mar 19, 2021 at 10:27:51PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 19, 2021 at 09:29:06AM -0400, Bruce Momjian wrote:\n> > OK, that makes perfect sense. 
I think the best solution is to document\n> > that compute_query_id just controls the built-in computation of the\n> > query id, and that extensions can also compute it if this is off, and\n> > pg_stat_activity and log_line_prefix will display built-in or extension\n> > computed query ids.\n> \n> So the last version of the patch should implement that behavior right? It's\n> just missing some explicit guidance that third-party extensions should only\n> calculate a queryid if compute_query_id is off\n\nYes, I think we are now down to just how the extensions should be told\nto behave, and how we document this --- see the email I just sent.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Sat, 20 Mar 2021 01:03:16 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 01:03:16AM +0100, Hannu Krosing wrote:\n> It would be really convenient if user-visible serialisations of the query id\n> had something that identifies the computation method.\n> \n> maybe prefix 'N' for internal, 'S' for pg_stat_statements etc.\n> \n> This would immediately show in logs at what point the id calculator was changed\n\nYeah, but it is an integer, and I don't think we want to change that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 19 Mar 2021 20:10:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 08:10:54PM -0400, Bruce Momjian wrote:\n> On Sat, Mar 20, 2021 at 01:03:16AM +0100, Hannu Krosing wrote:\n> > It would be really convenient if user-visible serialisations of the query id\n> > had something that identifies the computation method.\n> > \n> > maybe prefix 'N' for internal, 'S' for pg_stat_statements etc.\n> > \n> > This would immediately show in logs at what point the id calculator was changed\n> \n> Yeah, but it is an integer, and I don't think we want to change that.\n\nAlso, with Bruce's approach to ask extensions to error out if they would\noverwrite a queryid, the only way to change the calculation method is a restart.\nSo only one source can exist in the system.\n\nHopefully that's a big enough hammer that administrators will know what method\nthey're using.\n\n\n",
"msg_date": "Sat, 20 Mar 2021 13:28:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 12:53:18PM -0400, Bruce Momjian wrote:\n> \n> Well, given we don't really want to support multiple query id types\n> being generated or displayed, the \"error out\" above should fix it. \n> \n> Let's do this --- tell extensions to error out if the query id is\n> already set, either by compute_query_id or another extension. If an\n> extension wants to generate its own query id and store is internal to\n> the extension, that is fine, but the server-displayed query id should be\n> generated once and never overwritten by an extension.\n\nAgreed, this will ensure that you won't dynamically change the queryid source.\n\nWe should also document that changing it requires a restart and calling\npg_stat_statements_reset() afterwards.\n\nv19 adds some changes, plus extra documentation for pg_stat_statements about\nthe requirement for a queryid to be calculated, and a note that all documented\ndetails only apply for in-core source. I'm not sure if this is still the best\nplace to document those details anymore though.",
"msg_date": "Sat, 20 Mar 2021 14:12:34 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 02:12:34PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 19, 2021 at 12:53:18PM -0400, Bruce Momjian wrote:\n> > \n> > Well, given we don't really want to support multiple query id types\n> > being generated or displayed, the \"error out\" above should fix it. \n> > \n> > Let's do this --- tell extensions to error out if the query id is\n> > already set, either by compute_query_id or another extension. If an\n> > extension wants to generate its own query id and store is internal to\n> > the extension, that is fine, but the server-displayed query id should be\n> > generated once and never overwritten by an extension.\n> \n> Agreed, this will ensure that you won't dynamically change the queryid source.\n> \n> We should also document that changing it requires a restart and calling\n> pg_stat_statements_reset() afterwards.\n> \n> v19 adds some changes, plus extra documentation for pg_stat_statements about\n> the requirement for a queryid to be calculated, and a note that all documented\n> details only apply for in-core source. I'm not sure if this is still the best\n> place to document those details anymore though.\n\nOK, after reading the entire thread, I don't think there are any\nremaining open issues with this patch and I think this is ready for\ncommitting. I have adjusted the doc section of the patches, attached. \nI have marked myself as committer in the commitfest app and hope to\napply it in the next few days based on feedback.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Mon, 22 Mar 2021 17:55:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hi,\nFor queryjumble.c :\n\n+ * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n\nThe year should be updated.\nSame with queryjumble.h\n\nCheers\n\nOn Mon, Mar 22, 2021 at 2:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Sat, Mar 20, 2021 at 02:12:34PM +0800, Julien Rouhaud wrote:\n> > On Fri, Mar 19, 2021 at 12:53:18PM -0400, Bruce Momjian wrote:\n> > >\n> > > Well, given we don't really want to support multiple query id types\n> > > being generated or displayed, the \"error out\" above should fix it.\n> > >\n> > > Let's do this --- tell extensions to error out if the query id is\n> > > already set, either by compute_query_id or another extension. If an\n> > > extension wants to generate its own query id and store is internal to\n> > > the extension, that is fine, but the server-displayed query id should\n> be\n> > > generated once and never overwritten by an extension.\n> >\n> > Agreed, this will ensure that you won't dynamically change the queryid\n> source.\n> >\n> > We should also document that changing it requires a restart and calling\n> > pg_stat_statements_reset() afterwards.\n> >\n> > v19 adds some changes, plus extra documentation for pg_stat_statements\n> about\n> > the requirement for a queryid to be calculated, and a note that all\n> documented\n> > details only apply for in-core source. I'm not sure if this is still\n> the best\n> > place to document those details anymore though.\n>\n> OK, after reading the entire thread, I don't think there are any\n> remaining open issues with this patch and I think this is ready for\n> committing. 
I have adjusted the doc section of the patches, attached.\n> I have marked myself as committer in the commitfest app and hope to\n> apply it in the next few days based on feedback.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n\nHi,For queryjumble.c :+ * Portions Copyright (c) 1996-2020, PostgreSQL Global Development GroupThe year should be updated.Same with queryjumble.hCheersOn Mon, Mar 22, 2021 at 2:56 PM Bruce Momjian <bruce@momjian.us> wrote:On Sat, Mar 20, 2021 at 02:12:34PM +0800, Julien Rouhaud wrote:\n> On Fri, Mar 19, 2021 at 12:53:18PM -0400, Bruce Momjian wrote:\n> > \n> > Well, given we don't really want to support multiple query id types\n> > being generated or displayed, the \"error out\" above should fix it. \n> > \n> > Let's do this --- tell extensions to error out if the query id is\n> > already set, either by compute_query_id or another extension. If an\n> > extension wants to generate its own query id and store is internal to\n> > the extension, that is fine, but the server-displayed query id should be\n> > generated once and never overwritten by an extension.\n> \n> Agreed, this will ensure that you won't dynamically change the queryid source.\n> \n> We should also document that changing it requires a restart and calling\n> pg_stat_statements_reset() afterwards.\n> \n> v19 adds some changes, plus extra documentation for pg_stat_statements about\n> the requirement for a queryid to be calculated, and a note that all documented\n> details only apply for in-core source. I'm not sure if this is still the best\n> place to document those details anymore though.\n\nOK, after reading the entire thread, I don't think there are any\nremaining open issues with this patch and I think this is ready for\ncommitting. I have adjusted the doc section of the patches, attached. 
\nI have marked myself as committer in the commitfest app and hope to\napply it in the next few days based on feedback.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Mon, 22 Mar 2021 17:17:15 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Mon, Mar 22, 2021 at 05:17:15PM -0700, Zhihong Yu wrote:\n> Hi,\n> For queryjumble.c :\n> \n> + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> \n> The year should be updated.\n> Same with queryjumble.h\n\nThanks, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 22 Mar 2021 20:43:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Mar 22, 2021 at 05:55:54PM -0400, Bruce Momjian wrote:\n> \n> OK, after reading the entire thread, I don't think there are any\n> remaining open issues with this patch and I think this is ready for\n> committing. I have adjusted the doc section of the patches, attached. \n> I have marked myself as committer in the commitfest app and hope to\n> apply it in the next few days based on feedback.\n\nThanks a lot, Bruce!\n\nI looked at the changes in the attached patches and that's a clear\nimprovement, thanks a lot for that.\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:31:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Mar 22, 2021 at 08:43:40PM -0400, Bruce Momjian wrote:\n> On Mon, Mar 22, 2021 at 05:17:15PM -0700, Zhihong Yu wrote:\n> > Hi,\n> > For queryjumble.c :\n> > \n> > + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> > \n> > The year should be updated.\n> > Same with queryjumble.h\n> \n> Thanks, fixed.\n\nThanks also for taking care of that. While at it I see that current HEAD has a\nlot of files with the same problem:\n\n$ git grep \"\\-2020\"\nconfig/config.guess:# Copyright 1992-2020 Free Software Foundation, Inc.\nconfig/config.guess:Copyright 1992-2020 Free Software Foundation, Inc.\nconfig/config.sub:# Copyright 1992-2020 Free Software Foundation, Inc.\nconfig/config.sub:Copyright 1992-2020 Free Software Foundation, Inc.\ncontrib/pageinspect/gistfuncs.c: * Copyright (c) 2014-2020, PostgreSQL Global Development Group\nsrc/backend/rewrite/rewriteSearchCycle.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/backend/utils/adt/jsonbsubs.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/bin/pg_archivecleanup/po/de.po:# Copyright (C) 2019-2020 PostgreSQL Global Development Group\nsrc/bin/pg_rewind/po/de.po:# Copyright (C) 2015-2020 PostgreSQL Global Development Group\nsrc/bin/pg_rewind/po/de.po:# Peter Eisentraut <peter@eisentraut.org>, 2015-2020.\nsrc/common/hex.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/common/sha1.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/common/sha1_int.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/include/common/hex.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/include/common/sha1.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/include/port/pg_iovec.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/include/rewrite/rewriteSearchCycle.h: * 
Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\nsrc/interfaces/ecpg/preproc/po/de.po:# Copyright (C) 2009-2020 PostgreSQL Global Development Group\nsrc/interfaces/ecpg/preproc/po/de.po:# Peter Eisentraut <peter@eisentraut.org>, 2009-2020.\n\nIs that an oversight in ca3b37487be333a1d241dab1bbdd17a211a88f43, at least for\nnon .po files?\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:36:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 02:36:27PM +0800, Julien Rouhaud wrote:\n> On Mon, Mar 22, 2021 at 08:43:40PM -0400, Bruce Momjian wrote:\n> > On Mon, Mar 22, 2021 at 05:17:15PM -0700, Zhihong Yu wrote:\n> > > Hi,\n> > > For queryjumble.c :\n> > > \n> > > + * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> > > \n> > > The year should be updated.\n> > > Same with queryjumble.h\n> > \n> > Thanks, fixed.\n> \n> Thanks also for taking care of that. While at it I see that current HEAD has a\n> lot of files with the same problem:\n> \n> $ git grep \"\\-2020\"\n> config/config.guess:# Copyright 1992-2020 Free Software Foundation, Inc.\n> config/config.guess:Copyright 1992-2020 Free Software Foundation, Inc.\n> config/config.sub:# Copyright 1992-2020 Free Software Foundation, Inc.\n> config/config.sub:Copyright 1992-2020 Free Software Foundation, Inc.\n> contrib/pageinspect/gistfuncs.c: * Copyright (c) 2014-2020, PostgreSQL Global Development Group\n> src/backend/rewrite/rewriteSearchCycle.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/backend/utils/adt/jsonbsubs.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/bin/pg_archivecleanup/po/de.po:# Copyright (C) 2019-2020 PostgreSQL Global Development Group\n> src/bin/pg_rewind/po/de.po:# Copyright (C) 2015-2020 PostgreSQL Global Development Group\n> src/bin/pg_rewind/po/de.po:# Peter Eisentraut <peter@eisentraut.org>, 2015-2020.\n> src/common/hex.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/common/sha1.c: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/common/sha1_int.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/include/common/hex.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/include/common/sha1.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> 
src/include/port/pg_iovec.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/include/rewrite/rewriteSearchCycle.h: * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group\n> src/interfaces/ecpg/preproc/po/de.po:# Copyright (C) 2009-2020 PostgreSQL Global Development Group\n> src/interfaces/ecpg/preproc/po/de.po:# Peter Eisentraut <peter@eisentraut.org>, 2009-2020.\n> \n> Is that an oversight in ca3b37487be333a1d241dab1bbdd17a211a88f43, at least for\n> non .po files?\n\nNo, I don't think so. We don't change the Free Software Foundation\ncopyrights, and the .po files get loaded from another repository\noccasionally. The hex/sha copyrights came from patches developed in\n2020 but committed in 2021. These will mostly be corrected in 2022.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 10:34:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Mar-22, Bruce Momjian wrote:\n\n> --- a/doc/src/sgml/ref/explain.sgml\n> +++ b/doc/src/sgml/ref/explain.sgml\n> @@ -136,8 +136,10 @@ ROLLBACK;\n> the output column list for each node in the plan tree, schema-qualify\n> table and function names, always label variables in expressions with\n> their range table alias, and always print the name of each trigger for\n> - which statistics are displayed. This parameter defaults to\n> - <literal>FALSE</literal>.\n> + which statistics are displayed. The query identifier will also be\n> + displayed if one has been compute, see <xref\n> + linkend=\"guc-compute-query-id\"/> for more details. This parameter\n> + defaults to <literal>FALSE</literal>.\n\nTypo here, \"has been computed\".\n\nIs the intention to commit each of these patches separately?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Tue, 23 Mar 2021 12:12:03 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 12:12:03PM -0300, Álvaro Herrera wrote:\n> On 2021-Mar-22, Bruce Momjian wrote:\n> \n> > --- a/doc/src/sgml/ref/explain.sgml\n> > +++ b/doc/src/sgml/ref/explain.sgml\n> > @@ -136,8 +136,10 @@ ROLLBACK;\n> > the output column list for each node in the plan tree, schema-qualify\n> > table and function names, always label variables in expressions with\n> > their range table alias, and always print the name of each trigger for\n> > - which statistics are displayed. This parameter defaults to\n> > - <literal>FALSE</literal>.\n> > + which statistics are displayed. The query identifier will also be\n> > + displayed if one has been compute, see <xref\n> > + linkend=\"guc-compute-query-id\"/> for more details. This parameter\n> > + defaults to <literal>FALSE</literal>.\n> \n> Typo here, \"has been computed\".\n\nGood catch, fixed.\n\n> Is the intention to commit each of these patches separately?\n\nNo, I was thinking of just doing a single commit. Should I do three\ncommits? I posted it as three patches since that is how it was posted\nby the author, and reviewing is easier. It also will need a catversion\nbump.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 12:27:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 10:34:38AM -0400, Bruce Momjian wrote:\n> On Tue, Mar 23, 2021 at 02:36:27PM +0800, Julien Rouhaud wrote:\n> > \n> > Is that an oversight in ca3b37487be333a1d241dab1bbdd17a211a88f43, at least for\n> > non .po files?\n> \n> No, I don't think so. We don't change the Free Software Foundation\n> copyrights, and the .po files get loaded from another repository\n> occasionally. The hex/sha copyrights came from patches developed in\n> 2020 but committed in 2021. These will mostly be corrected in 2022.\n\nOk, thanks for the clarification!\n\n\n",
"msg_date": "Wed, 24 Mar 2021 11:02:39 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 12:27:10PM -0400, Bruce Momjian wrote:\n> \n> No, I was thinking of just doing a single commit. Should I do three\n> commits? I posted it as three patches since that is how it was posted\n> by the author, and reviewing is easier. It also will need a catversion\n> bump.\n\nYes, I originally split the commit because it was easier to write this way and\nit seemed better to send different patches too to ease review.\n\nI think that it would make sense to commit the first patch separately, but I'm\nfine with a single commit if you prefer.\n\n\n",
"msg_date": "Wed, 24 Mar 2021 11:07:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Mar-22, Bruce Momjian wrote:\n\n> diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\n> index e259531f60..9550de0798 100644\n> --- a/src/include/catalog/pg_proc.dat\n> +++ b/src/include/catalog/pg_proc.dat\n> @@ -5249,9 +5249,9 @@\n> proname => 'pg_stat_get_activity', prorows => '100', proisstrict => 'f',\n> proretset => 't', provolatile => 's', proparallel => 'r',\n> prorettype => 'record', proargtypes => 'int4',\n> - proallargtypes => '{int4,oid,int4,oid,text,text,text,text,text,timestamptz,timestamptz,timestamptz,timestamptz,inet,text,int4,xid,xid,text,bool,text,text,int4,text,numeric,text,bool,text,bool,int4}',\n> - proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> - proargnames => '{pid,datid,pid,usesysid,application_name,state,query,wait_event_type,wait_event,xact_start,query_start,backend_start,state_change,client_addr,client_hostname,client_port,backend_xid,backend_xmin,backend_type,ssl,sslversion,sslcipher,sslbits,ssl_client_dn,ssl_client_serial,ssl_issuer_dn,gss_auth,gss_princ,gss_enc,leader_pid}',\n> + proallargtypes => '{int4,oid,int4,oid,text,text,text,text,text,timestamptz,timestamptz,timestamptz,timestamptz,inet,text,int4,xid,xid,text,bool,text,text,int4,text,numeric,text,bool,text,bool,int4,int8}',\n> + proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> + proargnames => '{pid,datid,pid,usesysid,application_name,state,query,wait_event_type,wait_event,xact_start,query_start,backend_start,state_change,client_addr,client_hostname,client_port,backend_xid,backend_xmin,backend_type,ssl,sslversion,sslcipher,sslbits,ssl_client_dn,ssl_client_serial,ssl_issuer_dn,gss_auth,gss_princ,gss_enc,leader_pid,queryid}',\n\nBTW why do you put the queryid column at the end of the column list\nhere? It seems awkward. Can we put it perhaps between state and query?\n\n\n> -const char *clean_querytext(const char *query, int *location, int *len);\n> +const char *CleanQuerytext(const char *query, int *location, int *len);\n> JumbleState *JumbleQuery(Query *query, const char *querytext);\n\nI think pushing in more than one commit is a reasonable approach if they\nare well-contained, but if you do that it'd be better to avoid\nintroducing a function with one name and renaming it in your next\ncommit.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Just treat us the way you want to be treated + some extra allowance\n for ignorance.\" (Michael Brusser)\n\n\n",
"msg_date": "Wed, 24 Mar 2021 05:12:35 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 05:12:35AM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-22, Bruce Momjian wrote:\n> \n> > diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\n> > index e259531f60..9550de0798 100644\n> > --- a/src/include/catalog/pg_proc.dat\n> > +++ b/src/include/catalog/pg_proc.dat\n> > @@ -5249,9 +5249,9 @@\n> > proname => 'pg_stat_get_activity', prorows => '100', proisstrict => 'f',\n> > proretset => 't', provolatile => 's', proparallel => 'r',\n> > prorettype => 'record', proargtypes => 'int4',\n> > - proallargtypes => '{int4,oid,int4,oid,text,text,text,text,text,timestamptz,timestamptz,timestamptz,timestamptz,inet,text,int4,xid,xid,text,bool,text,text,int4,text,numeric,text,bool,text,bool,int4}',\n> > - proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> > - proargnames => '{pid,datid,pid,usesysid,application_name,state,query,wait_event_type,wait_event,xact_start,query_start,backend_start,state_change,client_addr,client_hostname,client_port,backend_xid,backend_xmin,backend_type,ssl,sslversion,sslcipher,sslbits,ssl_client_dn,ssl_client_serial,ssl_issuer_dn,gss_auth,gss_princ,gss_enc,leader_pid}',\n> > + proallargtypes => '{int4,oid,int4,oid,text,text,text,text,text,timestamptz,timestamptz,timestamptz,timestamptz,inet,text,int4,xid,xid,text,bool,text,text,int4,text,numeric,text,bool,text,bool,int4,int8}',\n> > + proargmodes => '{i,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o,o}',\n> > + proargnames => '{pid,datid,pid,usesysid,application_name,state,query,wait_event_type,wait_event,xact_start,query_start,backend_start,state_change,client_addr,client_hostname,client_port,backend_xid,backend_xmin,backend_type,ssl,sslversion,sslcipher,sslbits,ssl_client_dn,ssl_client_serial,ssl_issuer_dn,gss_auth,gss_princ,gss_enc,leader_pid,queryid}',\n> \n> BTW why do you put the queryid column at the end of the column list\n> here? It seems awkward. Can we put it perhaps between state and query?\n\nI thought that it would be better to have it at the end as it can always be\nNULL (and will be by default), which I guess was also the reason to have\nleader_pid there. I'm all in favor to have queryid near the query, and\nwhile at it leader_pid near the pid.\n\n> > -const char *clean_querytext(const char *query, int *location, int *len);\n> > +const char *CleanQuerytext(const char *query, int *location, int *len);\n> > JumbleState *JumbleQuery(Query *query, const char *querytext);\n> \n> I think pushing in more than one commit is a reasonable approach if they\n> are well-contained\n\nThey should, as I incrementally built on top of the first one. I also just\ndouble checked the patchset and each new commit compiles and passes the\nregression tests.\n\n> but if you do that it'd be better to avoid\n> introducing a function with one name and renaming it in your next\n> commit.\n\nOops, I apparently messed a fixup when working on it. Bruce, should I take\ncare of that of do you want to? I think you have some local modifications\nalready I'd rather not miss some changes.\n\n\n",
"msg_date": "Wed, 24 Mar 2021 16:51:40 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 04:51:40PM +0800, Julien Rouhaud wrote:\n> > but if you do that it'd be better to avoid\n> > introducing a function with one name and renaming it in your next\n> > commit.\n> \n> Oops, I apparently messed a fixup when working on it. Bruce, should I take\n> care of that of do you want to? I think you have some local modifications\n> already I'd rather not miss some changes.\n\nI have no local modifications. Please modify the patch I posted and\nrepost your version, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 24 Mar 2021 08:13:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 08:13:40AM -0400, Bruce Momjian wrote:\n> On Wed, Mar 24, 2021 at 04:51:40PM +0800, Julien Rouhaud wrote:\n> > > but if you do that it'd be better to avoid\n> > > introducing a function with one name and renaming it in your next\n> > > commit.\n> > \n> > Oops, I apparently messed a fixup when working on it. Bruce, should I take\n> > care of that of do you want to? I think you have some local modifications\n> > already I'd rather not miss some changes.\n> \n> I have no local modifications. Please modify the patch I posted and\n> repost your version, thanks.\n\nOk! I used the last version of the patch you sent and addressed the following\ncomments from earlier messages in attached v20:\n\n- copyright year to 2021\n- s/has has been compute/has been compute/\n- use the name CleanQuerytext in the first commit\n\nI didn't change the position of queryid in pg_stat_get_activity(), as the\n\"real\" order is actually define in system_views.sql when creating\npg_stat_activity view. Adding the new fields at the end of\npg_stat_get_activity() helps to keep the C code simpler and less bug prone, so\nI think it's best to continue this way.\n\nI also used the previous commit message if that helps.",
"msg_date": "Wed, 24 Mar 2021 23:20:49 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Mar-24, Julien Rouhaud wrote:\n\n> From e08c9d5fc86ba722844d97000798de868890aba3 Mon Sep 17 00:00:00 2001\n> From: Bruce Momjian <bruce@momjian.us>\n> Date: Mon, 22 Mar 2021 17:43:23 -0400\n> Subject: [PATCH v20 2/3] Expose queryid in pg_stat_activity and\n\n> src/backend/executor/execMain.c | 9 ++\n> src/backend/executor/execParallel.c | 14 ++-\n> src/backend/executor/nodeGather.c | 3 +-\n> src/backend/executor/nodeGatherMerge.c | 4 +-\n\nHmm...\n\nI find it odd that there's executor code that acquires the current query\nID from pgstat, after having been put there by planner or ExecutorStart\nitself. Seems like a modularity violation. I wonder if it would make\nmore sense to have the value maybe in struct EState (or perhaps there's\na better place -- but I don't think they have a way to reach the\nQueryDesc anyhow), put there by ExecutorStart, so that places such as\nexecParallel, nodeGather etc don't have to fetch it from pgstat but from\nEState.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"We're here to devour each other alive\" (Hobbes)\n\n\n",
"msg_date": "Wed, 24 Mar 2021 13:02:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 01:02:00PM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-24, Julien Rouhaud wrote:\n> \n> > From e08c9d5fc86ba722844d97000798de868890aba3 Mon Sep 17 00:00:00 2001\n> > From: Bruce Momjian <bruce@momjian.us>\n> > Date: Mon, 22 Mar 2021 17:43:23 -0400\n> > Subject: [PATCH v20 2/3] Expose queryid in pg_stat_activity and\n> \n> > src/backend/executor/execMain.c | 9 ++\n> > src/backend/executor/execParallel.c | 14 ++-\n> > src/backend/executor/nodeGather.c | 3 +-\n> > src/backend/executor/nodeGatherMerge.c | 4 +-\n> \n> Hmm...\n> \n> I find it odd that there's executor code that acquires the current query\n> ID from pgstat, after having been put there by planner or ExecutorStart\n> itself. Seems like a modularity violation. I wonder if it would make\n> more sense to have the value maybe in struct EState (or perhaps there's\n> a better place -- but I don't think they have a way to reach the\n> QueryDesc anyhow), put there by ExecutorStart, so that places such as\n> execParallel, nodeGather etc don't have to fetch it from pgstat but from\n> EState.\n\nThe current queryid is already available in the Estate, as the underlying\nPlannedStmt contains it. The problem is that we want to display the top level\nqueryid, not the current query one, and the top level queryid is held in\npgstat.\n\n\n",
"msg_date": "Thu, 25 Mar 2021 10:36:38 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 11:20:49PM +0800, Julien Rouhaud wrote:\n> On Wed, Mar 24, 2021 at 08:13:40AM -0400, Bruce Momjian wrote:\n> > I have no local modifications. Please modify the patch I posted and\n> > repost your version, thanks.\n> \n> Ok! I used the last version of the patch you sent and addressed the following\n> comments from earlier messages in attached v20:\n> \n> - copyright year to 2021\n> - s/has has been compute/has been compute/\n> - use the name CleanQuerytext in the first commit\n\nMy apologies --- yes, I made those two changes after I posted my version\nof the patch. I should have reposted my version with those changes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 25 Mar 2021 17:40:37 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 10:36:38AM +0800, Julien Rouhaud wrote:\n> On Wed, Mar 24, 2021 at 01:02:00PM -0300, Alvaro Herrera wrote:\n> > On 2021-Mar-24, Julien Rouhaud wrote:\n> > \n> > > From e08c9d5fc86ba722844d97000798de868890aba3 Mon Sep 17 00:00:00 2001\n> > > From: Bruce Momjian <bruce@momjian.us>\n> > > Date: Mon, 22 Mar 2021 17:43:23 -0400\n> > > Subject: [PATCH v20 2/3] Expose queryid in pg_stat_activity and\n> > \n> > > src/backend/executor/execMain.c | 9 ++\n> > > src/backend/executor/execParallel.c | 14 ++-\n> > > src/backend/executor/nodeGather.c | 3 +-\n> > > src/backend/executor/nodeGatherMerge.c | 4 +-\n> > \n> > Hmm...\n> > \n> > I find it odd that there's executor code that acquires the current query\n> > ID from pgstat, after having been put there by planner or ExecutorStart\n> > itself. Seems like a modularity violation. I wonder if it would make\n> > more sense to have the value maybe in struct EState (or perhaps there's\n> > a better place -- but I don't think they have a way to reach the\n> > QueryDesc anyhow), put there by ExecutorStart, so that places such as\n> > execParallel, nodeGather etc don't have to fetch it from pgstat but from\n> > EState.\n> \n> The current queryid is already available in the Estate, as the underlying\n> PlannedStmt contains it. The problem is that we want to display the top level\n> queryid, not the current query one, and the top level queryid is held in\n> pgstat.\n\nSo is the current approach ok? If not I'm afraid that detecting and caching\nthe top level queryid in the executor parts would lead to some code\nduplication.\n\n\n",
"msg_date": "Wed, 31 Mar 2021 11:25:32 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 11:25:32AM +0800, Julien Rouhaud wrote:\n> On Thu, Mar 25, 2021 at 10:36:38AM +0800, Julien Rouhaud wrote:\n> > On Wed, Mar 24, 2021 at 01:02:00PM -0300, Alvaro Herrera wrote:\n> > > On 2021-Mar-24, Julien Rouhaud wrote:\n> > > \n> > > > From e08c9d5fc86ba722844d97000798de868890aba3 Mon Sep 17 00:00:00 2001\n> > > > From: Bruce Momjian <bruce@momjian.us>\n> > > > Date: Mon, 22 Mar 2021 17:43:23 -0400\n> > > > Subject: [PATCH v20 2/3] Expose queryid in pg_stat_activity and\n> > > \n> > > > src/backend/executor/execMain.c | 9 ++\n> > > > src/backend/executor/execParallel.c | 14 ++-\n> > > > src/backend/executor/nodeGather.c | 3 +-\n> > > > src/backend/executor/nodeGatherMerge.c | 4 +-\n> > > \n> > > Hmm...\n> > > \n> > > I find it odd that there's executor code that acquires the current query\n> > > ID from pgstat, after having been put there by planner or ExecutorStart\n> > > itself. Seems like a modularity violation. I wonder if it would make\n> > > more sense to have the value maybe in struct EState (or perhaps there's\n> > > a better place -- but I don't think they have a way to reach the\n> > > QueryDesc anyhow), put there by ExecutorStart, so that places such as\n> > > execParallel, nodeGather etc don't have to fetch it from pgstat but from\n> > > EState.\n> > \n> > The current queryid is already available in the Estate, as the underlying\n> > PlannedStmt contains it. The problem is that we want to display the top level\n> > queryid, not the current query one, and the top level queryid is held in\n> > pgstat.\n> \n> So is the current approach ok? If not I'm afraid that detecting and caching\n> the top level queryid in the executor parts would lead to some code\n> duplication.\n\nI assume it is since Alvaro didn't reply. I am planning to apply this\nsoon.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 31 Mar 2021 09:06:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Mar-31, Bruce Momjian wrote:\n\n> On Wed, Mar 31, 2021 at 11:25:32AM +0800, Julien Rouhaud wrote:\n> > On Thu, Mar 25, 2021 at 10:36:38AM +0800, Julien Rouhaud wrote:\n> > > On Wed, Mar 24, 2021 at 01:02:00PM -0300, Alvaro Herrera wrote:\n\n> > > > I find it odd that there's executor code that acquires the current query\n> > > > ID from pgstat, after having been put there by planner or ExecutorStart\n> > > > itself. Seems like a modularity violation. I wonder if it would make\n> > > > more sense to have the value maybe in struct EState (or perhaps there's\n> > > > a better place -- but I don't think they have a way to reach the\n> > > > QueryDesc anyhow), put there by ExecutorStart, so that places such as\n> > > > execParallel, nodeGather etc don't have to fetch it from pgstat but from\n> > > > EState.\n> > > \n> > > The current queryid is already available in the Estate, as the underlying\n> > > PlannedStmt contains it. The problem is that we want to display the top level\n> > > queryid, not the current query one, and the top level queryid is held in\n> > > pgstat.\n> > \n> > So is the current approach ok? If not I'm afraid that detecting and caching\n> > the top level queryid in the executor parts would lead to some code\n> > duplication.\n> \n> I assume it is since Alvaro didn't reply. I am planning to apply this\n> soon.\n\nI'm afraid I don't know enough about how parallel query works to make a\ngood assessment on this being a good approach or not -- and no time at\npresent to figure it all out.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n\n\n",
"msg_date": "Wed, 31 Mar 2021 11:18:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 11:18:45AM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-31, Bruce Momjian wrote:\n> > \n> > I assume it is since Alvaro didn't reply. I am planning to apply this\n> > soon.\n> \n> I'm afraid I don't know enough about how parallel query works to make a\n> good assessment on this being a good approach or not -- and no time at\n> present to figure it all out.\n\nI'm far from being an expert either, but at the time I wrote it and\nlooking at the code around it probably seemed sensible. We could directly call\npgstat_get_my_queryid() in ExecSerializePlan() rather than passing it from the\nvarious callers though, at least there would be a single source for it.\n\n\n",
"msg_date": "Thu, 1 Apr 2021 23:05:24 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 01, 2021 at 11:05:24PM +0800, Julien Rouhaud wrote:\n> On Wed, Mar 31, 2021 at 11:18:45AM -0300, Alvaro Herrera wrote:\n> > On 2021-Mar-31, Bruce Momjian wrote:\n> > > \n> > > I assume it is since Alvaro didn't reply. I am planning to apply this\n> > > soon.\n> > \n> > I'm afraid I don't know enough about how parallel query works to make a\n> > good assessment on this being a good approach or not -- and no time at\n> > present to figure it all out.\n> \n> I'm far from being an expert either, but at the time I wrote it and\n> looking at the code around it probably seemed sensible. We could directly call\n> pgstat_get_my_queryid() in ExecSerializePlan() rather than passing it from the\n> various callers though, at least there would be a single source for it.\n\nHere's a v21 that includes the mentioned change.",
"msg_date": "Thu, 1 Apr 2021 23:30:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 11:30:15PM +0800, Julien Rouhaud wrote:\n> On Thu, Apr 01, 2021 at 11:05:24PM +0800, Julien Rouhaud wrote:\n> > On Wed, Mar 31, 2021 at 11:18:45AM -0300, Alvaro Herrera wrote:\n> > > On 2021-Mar-31, Bruce Momjian wrote:\n> > > > \n> > > > I assume it is since Alvaro didn't reply. I am planning to apply this\n> > > > soon.\n> > > \n> > > I'm afraid I don't know enough about how parallel query works to make a\n> > > good assessment on this being a good approach or not -- and no time at\n> > > present to figure it all out.\n> > \n> > I'm far from being an expert either, but at the time I wrote it and\n> > looking at the code around it probably seemed sensible. We could directly call\n> > pgstat_get_my_queryid() in ExecSerializePlan() rather than passing it from the\n> > various callers though, at least there would be a single source for it.\n> \n> Here's a v21 that includes the mentioned change.\n\nYou are using:\n\n\t/* ----------\n\t * pgstat_get_my_queryid() -\n\t *\n\t *\tReturn current backend's query identifier.\n\t */\n\tuint64\n\tpgstat_get_my_queryid(void)\n\t{\n\t\tif (!MyBEEntry)\n\t\t\treturn 0;\n\t\n\t\treturn MyBEEntry->st_queryid;\n\t}\n\nLooking at log_statement:\n\n\t/* Log immediately if dictated by log_statement */\n\tif (check_log_statement(parsetree_list))\n\t{\n\t ereport(LOG,\n\t (errmsg(\"statement: %s\", query_string),\n\t errhidestmt(true),\n\t errdetail_execute(parsetree_list)));\n\t was_logged = true;\n\t}\n\nit uses the global variable query_string. I wonder if the query hash\nshould be a global variable too --- this would more clearly match how we\nhandle top-level info like query_string. Digging into the stats system\nto get top-level info does seem odd.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 1 Apr 2021 13:56:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 01:56:42PM -0400, Bruce Momjian wrote:\n> You are using:\n> \n> \t/* ----------\n> \t * pgstat_get_my_queryid() -\n> \t *\n> \t *\tReturn current backend's query identifier.\n> \t */\n> \tuint64\n> \tpgstat_get_my_queryid(void)\n> \t{\n> \t\tif (!MyBEEntry)\n> \t\t\treturn 0;\n> \t\n> \t\treturn MyBEEntry->st_queryid;\n> \t}\n> \n> Looking at log_statement:\n> \n> \t/* Log immediately if dictated by log_statement */\n> \tif (check_log_statement(parsetree_list))\n> \t{\n> \t ereport(LOG,\n> \t (errmsg(\"statement: %s\", query_string),\n> \t errhidestmt(true),\n> \t errdetail_execute(parsetree_list)));\n> \t was_logged = true;\n> \t}\n> \n> it uses the global variable query_string. I wonder if the query hash\n> should be a global variable too --- this would more clearly match how we\n> handle top-level info like query_string. Digging into the stats system\n> to get top-level info does seem odd.\n\nAlso, if you go in that direction, make sure the hash it set in the same\nplaces the query string is set, though I am unclear how extensions would\nhandle that.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 1 Apr 2021 13:59:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 01, 2021 at 01:59:15PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 1, 2021 at 01:56:42PM -0400, Bruce Momjian wrote:\n> > You are using:\n> > \n> > \t/* ----------\n> > \t * pgstat_get_my_queryid() -\n> > \t *\n> > \t *\tReturn current backend's query identifier.\n> > \t */\n> > \tuint64\n> > \tpgstat_get_my_queryid(void)\n> > \t{\n> > \t\tif (!MyBEEntry)\n> > \t\t\treturn 0;\n> > \t\n> > \t\treturn MyBEEntry->st_queryid;\n> > \t}\n> > \n> > Looking at log_statement:\n> > \n> > \t/* Log immediately if dictated by log_statement */\n> > \tif (check_log_statement(parsetree_list))\n> > \t{\n> > \t ereport(LOG,\n> > \t (errmsg(\"statement: %s\", query_string),\n> > \t errhidestmt(true),\n> > \t errdetail_execute(parsetree_list)));\n> > \t was_logged = true;\n> > \t}\n> > \n> > it uses the global variable query_string.\n\nUnless I'm missing something query_string isn't a global variable, it's a\nparameter passed to exec_simple_query() from postgresMain().\n\nIt's then passed to the stats collector to be able to be displayed in\npg_stat_activity through pgstat_report_activity() a bit like what I do for the\nqueryid.\n\nThere's a global variable debug_query_string, but it's only for debugging\npurpose.\n\n> > I wonder if the query hash\n> > should be a global variable too --- this would more clearly match how we\n> > handle top-level info like query_string. Digging into the stats system\n> > to get top-level info does seem odd.\n\nThe main difference is that there's a single top level query_string,\neven if it contains multiple statements. But there would be multiple queryid\ncalculated in that case and we don't want to change it during a top level\nmulti-statements execution, so we can't use the same approach.\n\nAlso, the query_string is directly logged from this code path, while the\nqueryid is logged as a log_line_prefix, and almost all the code there also\nretrieve information from some shared structure.\n\nAnd since it also has to be available in pg_stat_activity, having a single\nsource of truth looked like a better approach.\n\n> Also, if you go in that direction, make sure the hash it set in the same\n> places the query string is set, though I am unclear how extensions would\n> handle that.\n\nIt should be transparent for application, it's extracting the first queryid\nseen for each top level statement and export it. The rest of the code still\ncontinue to see the queryid that corresponds to the really executed single\nstatement.\n\n\n",
"msg_date": "Fri, 2 Apr 2021 02:28:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 02:28:02AM +0800, Julien Rouhaud wrote:\n> Unless I'm missing something query_string isn't a global variable, it's a\n> parameter passed to exec_simple_query() from postgresMain().\n> \n> It's then passed to the stats collector to be able to be displayed in\n> pg_stat_activity through pgstat_report_activity() a bit like what I do for the\n> queryid.\n> \n> There's a global variable debug_query_string, but it's only for debugging\n> purpose.\n> \n> > > I wonder if the query hash\n> > > should be a global variable too --- this would more clearly match how we\n> > > handle top-level info like query_string. Digging into the stats system\n> > > to get top-level info does seem odd.\n> \n> The main difference is that there's a single top level query_string,\n> even if it contains multiple statements. But there would be multiple queryid\n> calculated in that case and we don't want to change it during a top level\n> multi-statements execution, so we can't use the same approach.\n> \n> Also, the query_string is directly logged from this code path, while the\n> queryid is logged as a log_line_prefix, and almost all the code there also\n> retrieve information from some shared structure.\n> \n> And since it also has to be available in pg_stat_activity, having a single\n> source of truth looked like a better approach.\n> \n> > Also, if you go in that direction, make sure the hash it set in the same\n> > places the query string is set, though I am unclear how extensions would\n> > handle that.\n> \n> It should be transparent for application, it's extracting the first queryid\n> seen for each top level statement and export it. The rest of the code still\n> continue to see the queryid that corresponds to the really executed single\n> statement.\n\nOK, I am happy with your design decisions, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 1 Apr 2021 15:27:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 01, 2021 at 03:27:11PM -0400, Bruce Momjian wrote:\n> \n> OK, I am happy with your design decisions, thanks.\n\nThanks! While double checking I noticed that I failed to remove a (now)\nuseless include of pgstat.h in nodeGatherMerge.c in last version. I'm\nattaching v22 to fix that, no other change.",
"msg_date": "Fri, 2 Apr 2021 13:33:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Apr 02, 2021 at 01:33:28PM +0800, Julien Rouhaud wrote:\n> On Thu, Apr 01, 2021 at 03:27:11PM -0400, Bruce Momjian wrote:\n> > \n> > OK, I am happy with your design decisions, thanks.\n> \n> Thanks! While double checking I noticed that I failed to remove a (now)\n> useless include of pgstat.h in nodeGatherMerge.c in last version. I'm\n> attaching v22 to fix that, no other change.\n\nThere was a conflict since e1025044c (Split backend status and progress related\nfunctionality out of pgstat.c).\n\nAttached v23 is a rebase against current HEAD, and I also added a few\nUINT64CONST() macro usage for consistency.",
"msg_date": "Sun, 4 Apr 2021 22:18:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Sun, Apr 4, 2021 at 10:18:50PM +0800, Julien Rouhaud wrote:\n> On Fri, Apr 02, 2021 at 01:33:28PM +0800, Julien Rouhaud wrote:\n> > On Thu, Apr 01, 2021 at 03:27:11PM -0400, Bruce Momjian wrote:\n> > > \n> > > OK, I am happy with your design decisions, thanks.\n> > \n> > Thanks! While double checking I noticed that I failed to remove a (now)\n> > useless include of pgstat.h in nodeGatherMerge.c in last version. I'm\n> > attaching v22 to fix that, no other change.\n> \n> There was a conflict since e1025044c (Split backend status and progress related\n> functionality out of pgstat.c).\n> \n> Attached v23 is a rebase against current HEAD, and I also added a few\n> UINT64CONST() macro usage for consistency.\n\nThanks. I struggled with merging the statistics collection changes into\nmy cluster file encryption branches because my patch made changes to\ncode that moved to another C file.\n\nI plan to apply this tomorrow.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 5 Apr 2021 13:16:27 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "I have reviewed the code. Here are a few minor comments.\n\n1.\n+void\n+pgstat_report_queryid(uint64 queryId, bool force)\n+{\n+ volatile PgBackendStatus *beentry = MyBEEntry;\n+\n+ if (!beentry)\n+ return;\n+\n+ /*\n+ * if track_activities is disabled, st_queryid should already have been\n+ * reset\n+ */\n+ if (!pgstat_track_activities)\n+ return;\n\nThe above two conditions can be clubbed together in a single condition.\n\n2.\n+/* ----------\n+ * pgstat_get_my_queryid() -\n+ *\n+ * Return current backend's query identifier.\n+ */\n+uint64\n+pgstat_get_my_queryid(void)\n+{\n+ if (!MyBEEntry)\n+ return 0;\n+\n+ return MyBEEntry->st_queryid;\n+}\n\nIs it safe to directly read the data from MyBEEntry without\ncalling pgstat_begin_read_activity() and pgstat_end_read_activity(). Kindly\nref pgstat_get_backend_current_activity() for more information. Kindly let\nme know if I am wrong.\n\nThanks and Regards,\nNitin Jadhav\n\nOn Mon, Apr 5, 2021 at 10:46 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Sun, Apr 4, 2021 at 10:18:50PM +0800, Julien Rouhaud wrote:\n> > On Fri, Apr 02, 2021 at 01:33:28PM +0800, Julien Rouhaud wrote:\n> > > On Thu, Apr 01, 2021 at 03:27:11PM -0400, Bruce Momjian wrote:\n> > > >\n> > > > OK, I am happy with your design decisions, thanks.\n> > >\n> > > Thanks! While double checking I noticed that I failed to remove a\n> (now)\n> > > useless include of pgstat.h in nodeGatherMerge.c in last version. I'm\n> > > attaching v22 to fix that, no other change.\n> >\n> > There was a conflict since e1025044c (Split backend status and progress\n> related\n> > functionality out of pgstat.c).\n> >\n> > Attached v23 is a rebase against current HEAD, and I also added a few\n> > UINT64CONST() macro usage for consistency.\n>\n> Thanks. 
I struggled with merging the statistics collection changes into\n> my cluster file encryption branches because my patch made changes to\n> code that moved to another C file.\n>\n> I plan to apply this tomorrow.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n>\n>\n",
"msg_date": "Tue, 6 Apr 2021 20:05:19 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 08:05:19PM +0530, Nitin Jadhav wrote:\n> \n> 1.\n> +void\n> +pgstat_report_queryid(uint64 queryId, bool force)\n> +{\n> + volatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> + if (!beentry)\n> + return;\n> +\n> + /*\n> + * if track_activities is disabled, st_queryid should already have been\n> + * reset\n> + */\n> + if (!pgstat_track_activities)\n> + return;\n> \n> The above two conditions can be clubbed together in a single condition.\n\nRight, I just kept it separate as the comment is only relevant for the 2nd\ntest. I'm fine with merging both if needed.\n\n> 2.\n> +/* ----------\n> + * pgstat_get_my_queryid() -\n> + *\n> + * Return current backend's query identifier.\n> + */\n> +uint64\n> +pgstat_get_my_queryid(void)\n> +{\n> + if (!MyBEEntry)\n> + return 0;\n> +\n> + return MyBEEntry->st_queryid;\n> +}\n> \n> Is it safe to directly read the data from MyBEEntry without\n> calling pgstat_begin_read_activity() and pgstat_end_read_activity(). Kindly\n> ref pgstat_get_backend_current_activity() for more information. Kindly let\n> me know if I am wrong.\n\nThis field is only written by a backend for its own entry.\npg_stat_get_activity already has required protection, so the rest of the calls\nto read that field shouldn't have any risk of reading torn values on platform\nwhere this isn't an atomic operation due to concurrent write, as it will be\nfrom the same backend that originally wrote it. It avoids some overhead to\nretrieve the queryid, but if people think it's worth having the loop (or a\ncomment explaining why there's no loop) I'm also fine with it.\n\n\n",
"msg_date": "Tue, 6 Apr 2021 23:11:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Apr-06, Nitin Jadhav wrote:\n\n> I have reviewed the code. Here are a few minor comments.\n> \n> 1.\n> +void\n> +pgstat_report_queryid(uint64 queryId, bool force)\n> +{\n> + volatile PgBackendStatus *beentry = MyBEEntry;\n> +\n> + if (!beentry)\n> + return;\n> +\n> + /*\n> + * if track_activities is disabled, st_queryid should already have been\n> + * reset\n> + */\n> + if (!pgstat_track_activities)\n> + return;\n> \n> The above two conditions can be clubbed together in a single condition.\n\nI wonder if it wouldn't make more sense to put the assignment *after* we\nhave checked the second condition.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Tue, 6 Apr 2021 11:41:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 11:41:52AM -0400, Alvaro Herrera wrote:\n> On 2021-Apr-06, Nitin Jadhav wrote:\n> \n> > I have reviewed the code. Here are a few minor comments.\n> > \n> > 1.\n> > +void\n> > +pgstat_report_queryid(uint64 queryId, bool force)\n> > +{\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (!beentry)\n> > + return;\n> > +\n> > + /*\n> > + * if track_activities is disabled, st_queryid should already have been\n> > + * reset\n> > + */\n> > + if (!pgstat_track_activities)\n> > + return;\n> > \n> > The above two conditions can be clubbed together in a single condition.\n> \n> I wonder if it wouldn't make more sense to put the assignment *after* we\n> have checked the second condition.\n\nAll other pgstat_report_* functions do the assignment before doing any test on\nbeentry and/or pgstat_track_activities, I think we should keep this code\nconsistent.\n\n\n",
"msg_date": "Tue, 6 Apr 2021 23:49:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": ">\n> >\n> > 1.\n> > +void\n> > +pgstat_report_queryid(uint64 queryId, bool force)\n> > +{\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (!beentry)\n> > + return;\n> > +\n> > + /*\n> > + * if track_activities is disabled, st_queryid should already have been\n> > + * reset\n> > + */\n> > + if (!pgstat_track_activities)\n> > + return;\n> >\n> > The above two conditions can be clubbed together in a single condition.\n> Right, I just kept it separate as the comment is only relevant for the 2nd\n> test. I'm fine with merging both if needed.\n\n\nI feel we should merge both of the conditions as it is done in\npgstat_report_xact_timestamp(). Probably we can write a common comment to\nexplain both the conditions.\n\n> 2.\n> > +/* ----------\n> > + * pgstat_get_my_queryid() -\n> > + *\n> > + * Return current backend's query identifier.\n> > + */\n> > +uint64\n> > +pgstat_get_my_queryid(void)\n> > +{\n> > + if (!MyBEEntry)\n> > + return 0;\n> > +\n> > + return MyBEEntry->st_queryid;\n> > +}\n> >\n> > Is it safe to directly read the data from MyBEEntry without\n> > calling pgstat_begin_read_activity() and pgstat_end_read_activity().\n> Kindly\n> > ref pgstat_get_backend_current_activity() for more information. Kindly\n> let\n> > me know if I am wrong.\n> This field is only written by a backend for its own entry.\n> pg_stat_get_activity already has required protection, so the rest of the\n> calls\n> to read that field shouldn't have any risk of reading torn values on\n> platform\n> where this isn't an atomic operation due to concurrent write, as it will be\n> from the same backend that originally wrote it. It avoids some overhead to\n> retrieve the queryid, but if people think it's worth having the loop (or a\n> comment explaining why there's no loop) I'm also fine with it.\n\n\nThanks for the explanation. 
Please add a comment explaining why there is no\nloop.\n\nThanks and Regards,\nNitin Jadhav\n\nOn Tue, Apr 6, 2021 at 8:40 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Apr 06, 2021 at 08:05:19PM +0530, Nitin Jadhav wrote:\n> >\n> > 1.\n> > +void\n> > +pgstat_report_queryid(uint64 queryId, bool force)\n> > +{\n> > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > +\n> > + if (!beentry)\n> > + return;\n> > +\n> > + /*\n> > + * if track_activities is disabled, st_queryid should already have been\n> > + * reset\n> > + */\n> > + if (!pgstat_track_activities)\n> > + return;\n> >\n> > The above two conditions can be clubbed together in a single condition.\n>\n> Right, I just kept it separate as the comment is only relevant for the 2nd\n> test. I'm fine with merging both if needed.\n>\n> > 2.\n> > +/* ----------\n> > + * pgstat_get_my_queryid() -\n> > + *\n> > + * Return current backend's query identifier.\n> > + */\n> > +uint64\n> > +pgstat_get_my_queryid(void)\n> > +{\n> > + if (!MyBEEntry)\n> > + return 0;\n> > +\n> > + return MyBEEntry->st_queryid;\n> > +}\n> >\n> > Is it safe to directly read the data from MyBEEntry without\n> > calling pgstat_begin_read_activity() and pgstat_end_read_activity().\n> Kindly\n> > ref pgstat_get_backend_current_activity() for more information. Kindly\n> let\n> > me know if I am wrong.\n>\n> This field is only written by a backend for its own entry.\n> pg_stat_get_activity already has required protection, so the rest of the\n> calls\n> to read that field shouldn't have any risk of reading torn values on\n> platform\n> where this isn't an atomic operation due to concurrent write, as it will be\n> from the same backend that originally wrote it. 
It avoids some overhead to\n> retrieve the queryid, but if people think it's worth having the loop (or a\n> comment explaining why there's no loop) I'm also fine with it.\n>\n",
"msg_date": "Wed, 7 Apr 2021 18:15:27 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": ">\n> On Tue, Apr 06, 2021 at 11:41:52AM -0400, Alvaro Herrera wrote:\n> > On 2021-Apr-06, Nitin Jadhav wrote:\n> >\n> > > I have reviewed the code. Here are a few minor comments.\n> > >\n> > > 1.\n> > > +void\n> > > +pgstat_report_queryid(uint64 queryId, bool force)\n> > > +{\n> > > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > > +\n> > > + if (!beentry)\n> > > + return;\n> > > +\n> > > + /*\n> > > + * if track_activities is disabled, st_queryid should already have\n> been\n> > > + * reset\n> > > + */\n> > > + if (!pgstat_track_activities)\n> > > + return;\n> > >\n> > > The above two conditions can be clubbed together in a single condition.\n> >\n> > I wonder if it wouldn't make more sense to put the assignment *after* we\n> > have checked the second condition.\n> All other pgstat_report_* functions do the assignment before doing any\n> test on\n> beentry and/or pgstat_track_activities, I think we should keep this code\n> consistent.\n\n\nI agree about this.\n\nThanks and Regards,\nNitin Jadhav\n\n\nOn Tue, Apr 6, 2021 at 9:18 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Tue, Apr 06, 2021 at 11:41:52AM -0400, Alvaro Herrera wrote:\n> > On 2021-Apr-06, Nitin Jadhav wrote:\n> >\n> > > I have reviewed the code. 
Here are a few minor comments.\n> > >\n> > > 1.\n> > > +void\n> > > +pgstat_report_queryid(uint64 queryId, bool force)\n> > > +{\n> > > + volatile PgBackendStatus *beentry = MyBEEntry;\n> > > +\n> > > + if (!beentry)\n> > > + return;\n> > > +\n> > > + /*\n> > > + * if track_activities is disabled, st_queryid should already have\n> been\n> > > + * reset\n> > > + */\n> > > + if (!pgstat_track_activities)\n> > > + return;\n> > >\n> > > The above two conditions can be clubbed together in a single condition.\n> >\n> > I wonder if it wouldn't make more sense to put the assignment *after* we\n> > have checked the second condition.\n>\n> All other pgstat_report_* functions do the assignment before doing any\n> test on\n> beentry and/or pgstat_track_activities, I think we should keep this code\n> consistent.\n>\n",
"msg_date": "Wed, 7 Apr 2021 18:17:11 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 06:15:27PM +0530, Nitin Jadhav wrote:\n> \n> I feel we should merge both of the conditions as it is done in\n> pgstat_report_xact_timestamp(). Probably we can write a common comment to\n> explain both the conditions.\n> \n> [...]\n> \n> Thanks for the explanation. Please add a comment explaining why there is no\n> loop.\n\nPFA v24.",
"msg_date": "Wed, 7 Apr 2021 20:57:26 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 08:57:26PM +0800, Julien Rouhaud wrote:\n> On Wed, Apr 07, 2021 at 06:15:27PM +0530, Nitin Jadhav wrote:\n> > \n> > I feel we should merge both of the conditions as it is done in\n> > pgstat_report_xact_timestamp(). Probably we can write a common comment to\n> > explain both the conditions.\n> > \n> > [...]\n> > \n> > Thanks for the explanation. Please add a comment explaining why there is no\n> > loop.\n> \n> PFA v24.\n\nPatch applied. I am ready to adjust this with any improvements people\nmight have. Thank you for all the good feedback we got on this, and I\nknow many users have waited a long time for this feature.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 14:12:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 02:12:11PM -0400, Bruce Momjian wrote:\n> Patch applied. I am ready to adjust this with any improvements people\n> might have. Thank you for all the good feedback we got on this, and I\n> know many users have waited a long time for this feature.\n\nIf you support log_line_prefix 'Q', then you should also add to write_csvlog().\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Apr 2021 14:10:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Patch applied. I am ready to adjust this with any improvements people\n> might have. Thank you for all the good feedback we got on this, and I\n> know many users have waited a long time for this feature.\n\nFor starters, you could try to make the buildfarm green again.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 16:15:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 04:15:50PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Patch applied. I am ready to adjust this with any improvements people\n> > might have. Thank you for all the good feedback we got on this, and I\n> > know many users have waited a long time for this feature.\n> \n> For starters, you could try to make the buildfarm green again.\n\nWow, that's odd. The cfbot was green, so I never even looked at the\nbuildfarm. I will look at that now, and the csvlog issue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 16:22:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 04:22:55PM -0400, Bruce Momjian wrote:\n> aOn Wed, Apr 7, 2021 at 04:15:50PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Patch applied. I am ready to adjust this with any improvements people\n> > > might have. Thank you for all the good feedback we got on this, and I\n> > > know many users have waited a long time for this feature.\n> > \n> > For starters, you could try to make the buildfarm green again.\n> \n> Wow, that's odd. The cfbot was green, so I never even looked at the\n> buildfarm. I will look at that now, and the CVS log issue.\n\nSorry about that. The issue came from animals with jit_above_cost = 0\noutputting more lines than expected. I fixed that by using the same query as\nbefore in explain.sql, as they don't generate any JIT output.\n\nI also added the queryid to the csvlog output and fixed the documentation that\nmention how to create a table to access the data.",
"msg_date": "Thu, 8 Apr 2021 05:56:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 05:56:25AM +0800, Julien Rouhaud wrote:\n> \n> I also added the queryid to the csvlog output and fixed the documentation that\n> mention how to create a table to access the data.\n\nNote that I chose to output a 0 queryid if none has been computed rather that\noutputting nothing. Let me know if that's not the wanted behavior.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 06:15:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 05:56:25AM +0800, Julien Rouhaud wrote:\n> On Wed, Apr 07, 2021 at 04:22:55PM -0400, Bruce Momjian wrote:\n> > aOn Wed, Apr 7, 2021 at 04:15:50PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > Patch applied. I am ready to adjust this with any improvements people\n> > > > might have. Thank you for all the good feedback we got on this, and I\n> > > > know many users have waited a long time for this feature.\n> > > \n> > > For starters, you could try to make the buildfarm green again.\n> > \n> > Wow, that's odd. The cfbot was green, so I never even looked at the\n> > buildfarm. I will look at that now, and the CVS log issue.\n> \n> Sorry about that. The issue came from animals with jit_above_cost = 0\n> outputting more lines than expected. I fixed that by using the same query as\n> before in explain.sql, as they don't generate any JIT output.\n\nYes, I just came to the same conclusion, that 'SELECT 1' didn't generate\nthe proper output lines to allow explain_filter() to strip out the JIT\nlines. I have applied your patch for this, which should fix the build\nfarm. (I see my first green report now.)\n\n> I also added the queryid to the csvlog output and fixed the documentation that\n> mention how to create a table to access the data.\n\nUh, I think your patch missed a few things. First, you use \"%zd\"\n(size_t) for the printf string, but calls to pgstat_get_my_queryid() in\nsrc/backend/utils/error/elog.c used \"%ld\". Which is correct? I see\npgstat_get_my_queryid() as returning uint64, but I didn't think a uint64\nfits in a BIGINT SQL column.\n\nAlso, you missed the SGML paragraph doc change, but you correctly\nchanged the SQL table definition.\n\nI am attaching my version of the patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 7 Apr 2021 18:38:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Uh, I think your patch missed a few things. First, you use \"%zd\"\n> (size_t) for the printf string, but calls to pgstat_get_my_queryid() in\n> src/backend/utils/error/elog.c used \"%ld\". Which is correct? I see\n> pgstat_get_my_queryid() as returning uint64, but I didn't think a uint64\n> fits in a BIGINT SQL column.\n\nNeither is correct. Project standard these days for printing [u]int64\nis to write \"%lld\" or \"%llu\", with an explicit (long long) cast on\nthe printf argument.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 19:01:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 07:01:25PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Uh, I think your patch missed a few things. First, you use \"%zd\"\n> > (size_t) for the printf string, but calls to pgstat_get_my_queryid() in\n> > src/backend/utils/error/elog.c used \"%ld\". Which is correct? I see\n> > pgstat_get_my_queryid() as returning uint64, but I didn't think a uint64\n> > fits in a BIGINT SQL column.\n> \n> Neither is correct. Project standard these days for printing [u]int64\n> is to write \"%lld\" or \"%llu\", with an explicit (long long) cast on\n> the printf argument.\n\nYep, got it. The attached patch fixes all the calls to use %lld, and\nadds casts. In implementing csvlog, I noticed that internally we pass\nthe hash as uint64, but output as int64, which I think is a requirement\nfor how pg_stat_statements has output it, and the use of bigint. Is\nthat OK?\n\nI am also confused about the inconsistency of calling the GUC\ncompute_query_id (with underscore), but pg_stat_activity.queryid. If we\nmake it pg_stat_activity.query_id, it doesn't match most of the other\n*id columns in the table, leader_pid, usesysid, backend_xid. Is that\nOK? I know I suggested pg_stat_activity.query_id, but maybe I was wrong.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 7 Apr 2021 19:38:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 07:38:35PM -0400, Bruce Momjian wrote:\n> On Wed, Apr 7, 2021 at 07:01:25PM -0400, Tom Lane wrote:\n> > Bruce Momjian <bruce@momjian.us> writes:\n> > > Uh, I think your patch missed a few things. First, you use \"%zd\"\n> > > (size_t) for the printf string, but calls to pgstat_get_my_queryid() in\n> > > src/backend/utils/error/elog.c used \"%ld\". Which is correct? I see\n> > > pgstat_get_my_queryid() as returning uint64, but I didn't think a uint64\n> > > fits in a BIGINT SQL column.\n> > \n> > Neither is correct. Project standard these days for printing [u]int64\n> > is to write \"%lld\" or \"%llu\", with an explicit (long long) cast on\n> > the printf argument.\n> \n> Yep, got it. The attached patch fixes all the calls to use %lld, and\n> adds casts. In implementing cvslog, I noticed that internally we pass\n> the hash as uint64, but output as int64, which I think is a requirement\n> for how pg_stat_statements has output it, and the use of bigint. Is\n> that OK?\n\nIndeed, this is due to how we expose the value in SQL. The original discussion\nis at\nhttps://www.postgresql.org/message-id/CAH2-WzkueMfAmY3onoXLi+g67SJoKY65Cg9Z1QOhSyhCEU8w3g@mail.gmail.com.\nAs far as I know this is OK, as we want to show consistent values everywhere.\n\n> I am also confused about the inconsistency of calling the GUC\n> compute_query_id (with underscore), but pg_stat_activity.queryid. If we\n> make it pg_stat_activity.query_id, it doesn't match most of the other\n> *id columsns in the table, leader_pid, usesysid, backend_xid. 
Is that\n> OK?I know I suggested pg_stat_activity.query_id, but maybe I was wrong.\n\nMmm, most of the columns in pg_stat_activity do have a \"_\", so using query_id\nwould make more sense.\n\n@@ -2967,6 +2967,10 @@ write_csvlog(ErrorData *edata)\n\n \tappendStringInfoChar(&buf, '\\n');\n\n+\t/* query id */\n+\tappendStringInfo(&buf, \"%lld\", (long long) pgstat_get_my_queryid());\n+\tappendStringInfoChar(&buf, ',');\n+\n\n \t/* If in the syslogger process, try to write messages direct to file */\n \tif (MyBackendType == B_LOGGER)\n \t\twrite_syslogger_file(buf.data, buf.len, LOG_DESTINATION_CSVLOG);\n\n\nUnless I'm missing something this will output the query id in the next log\nline? The new code should be added before the newline is output, and the comma\nshould also be output before the queryid.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 08:47:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 08:47:48AM +0800, Julien Rouhaud wrote:\n> On Wed, Apr 07, 2021 at 07:38:35PM -0400, Bruce Momjian wrote:\n> > On Wed, Apr 7, 2021 at 07:01:25PM -0400, Tom Lane wrote:\n> > > Bruce Momjian <bruce@momjian.us> writes:\n> > > > Uh, I think your patch missed a few things. First, you use \"%zd\"\n> > > > (size_t) for the printf string, but calls to pgstat_get_my_queryid() in\n> > > > src/backend/utils/error/elog.c used \"%ld\". Which is correct? I see\n> > > > pgstat_get_my_queryid() as returning uint64, but I didn't think a uint64\n> > > > fits in a BIGINT SQL column.\n> > > \n> > > Neither is correct. Project standard these days for printing [u]int64\n> > > is to write \"%lld\" or \"%llu\", with an explicit (long long) cast on\n> > > the printf argument.\n> > \n> > Yep, got it. The attached patch fixes all the calls to use %lld, and\n> > adds casts. In implementing cvslog, I noticed that internally we pass\n> > the hash as uint64, but output as int64, which I think is a requirement\n> > for how pg_stat_statements has output it, and the use of bigint. Is\n> > that OK?\n> \n> Indeed, this is due to how we expose the value in SQL. The original discussion\n> is at\n> https://www.postgresql.org/message-id/CAH2-WzkueMfAmY3onoXLi+g67SJoKY65Cg9Z1QOhSyhCEU8w3g@mail.gmail.com.\n> As far as I know this is OK, as we want to show consistent values everywhere.\n\nOK, yes, I do remember the discussion. I was wondering if there should\nbe a C comment about this anywhere.\n\n> > I am also confused about the inconsistency of calling the GUC\n> > compute_query_id (with underscore), but pg_stat_activity.queryid. If we\n> > make it pg_stat_activity.query_id, it doesn't match most of the other\n> > *id columsns in the table, leader_pid, usesysid, backend_xid. 
Is that\n> > OK?I know I suggested pg_stat_activity.query_id, but maybe I was wrong.\n> \n> Mmm, most of the columns in pg_stat_activity do have a \"_\", so using query_id\n> would make more sense.\n\nOK, let me work on a patch to change that part.\n\n> @@ -2967,6 +2967,10 @@ write_csvlog(ErrorData *edata)\n> \n> \tappendStringInfoChar(&buf, '\\n');\n> \n> +\t/* query id */\n> +\tappendStringInfo(&buf, \"%lld\", (long long) pgstat_get_my_queryid());\n> +\tappendStringInfoChar(&buf, ',');\n> +\n> \n> \t/* If in the syslogger process, try to write messages direct to file */\n> \tif (MyBackendType == B_LOGGER)\n> \t\twrite_syslogger_file(buf.data, buf.len, LOG_DESTINATION_CSVLOG);\n>\n> Unless I'm missing something this will output the query id in the next log\n> line? The new code should be added before the newline is output, and the comma\n> should also be output before the queryid.\n\nYes, correct, updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 7 Apr 2021 20:54:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 08:54:02PM -0400, Bruce Momjian wrote:\n> > > I am also confused about the inconsistency of calling the GUC\n> > > compute_query_id (with underscore), but pg_stat_activity.queryid. If we\n> > > make it pg_stat_activity.query_id, it doesn't match most of the other\n> > > *id columsns in the table, leader_pid, usesysid, backend_xid. Is that\n> > > OK?I know I suggested pg_stat_activity.query_id, but maybe I was wrong.\n> > \n> > Mmm, most of the columns in pg_stat_activity do have a \"_\", so using query_id\n> > would make more sense.\n> \n> OK, let me work on a patch to change that part.\n\nUh, it is 'queryid' in pg_stat_statements:\n\n\thttps://www.postgresql.org/docs/13/pgstatstatements.html\n\n\tqueryid bigint\n\tInternal hash code, computed from the statement's parse tree\n\nI am not sure if we should have pg_stat_activity use underscore, or the\nGUC use underscore. The problem is that queryid can easily look like\nquer-yid.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 21:00:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 08:54:02PM -0400, Bruce Momjian wrote:\n> > Unless I'm missing something this will output the query id in the next log\n> > line? The new code should be added before the newline is output, and the comma\n> > should also be output before the queryid.\n> \n> Yes, correct, updated patch attached.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 22:31:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 10:31:01PM -0400, Bruce Momjian wrote:\n> On Wed, Apr 7, 2021 at 08:54:02PM -0400, Bruce Momjian wrote:\n> > > Unless I'm missing something this will output the query id in the next log\n> > > line? The new code should be added before the newline is output, and the comma\n> > > should also be output before the queryid.\n> > \n> > Yes, correct, updated patch attached.\n> \n> Patch applied.\n\nThanks! And I agree with using query_id in the new field names while keeping\nqueryid for pg_stat_statements to avoid unnecessary query breakage.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:38:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 10:38:08AM +0800, Julien Rouhaud wrote:\n> On Wed, Apr 07, 2021 at 10:31:01PM -0400, Bruce Momjian wrote:\n> > On Wed, Apr 7, 2021 at 08:54:02PM -0400, Bruce Momjian wrote:\n> > > > Unless I'm missing something this will output the query id in the next log\n> > > > line? The new code should be added before the newline is output, and the comma\n> > > > should also be output before the queryid.\n> > > \n> > > Yes, correct, updated patch attached.\n> > \n> > Patch applied.\n> \n> Thanks! And I agree with using query_id in the new field names while keeping\n> queryid for pg_stat_statements to avoid unnecessary query breakage.\n\nI think we need more feedback from the group. Do people agree with the\nidea above? The question is what to call:\n\n\tGUC compute_queryid\n\tpg_stat_activity.queryid\n\tpg_stat_statements.queryid\n\nusing \"queryid\" or \"query_id\", and do they have to match?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 22:42:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Apr-07, Bruce Momjian wrote:\n\n> On Thu, Apr 8, 2021 at 10:38:08AM +0800, Julien Rouhaud wrote:\n\n> > Thanks! And I agree with using query_id in the new field names while keeping\n> > queryid for pg_stat_statements to avoid unnecessary query breakage.\n> \n> I think we need more feedback from the group. Do people agree with the\n> idea above? The question is what to call:\n> \n> \tGUC compute_queryid\n> \tpg_stat_activity.queryid\n> \tpg_stat_statements.queryid\n> \n> using \"queryid\" or \"query_id\", and do they have to match?\n\nSeems a matter of personal preference. Mine is to have the underscore\neverywhere in backend code (where this is new), and let it without the\nunderscore in pg_stat_statements to avoid breaking existing code. Seems\nto match what Julien is saying.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Wed, 7 Apr 2021 23:27:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 07, 2021 at 02:12:11PM -0400, Bruce Momjian wrote:\n> \n> Patch applied. I am ready to adjust this with any improvements people\n> might have. Thank you for all the good feedback we got on this, and I\n> know many users have waited a long time for this feature.\n\nThanks a lot Bruce and everyone! I hope that the users who waited a long time\nfor this will find everything they need.\n\nJust to validate that this patchset also allows user to use pg_stat_statements,\nany additional third-party module and the new added infrastructure with the\nqueryid algorithm of their choice, I created a POC extension ([1]) which works\nas expected.\n\nBasically:\n\nSHOW shared_preload_libraries;\n shared_preload_libraries\n--------------------------\n pg_stat_statements, pg_queryid\n(1 row)\n\nSET pg_queryid.use_object_names TO on;\nSET pg_queryid.ignore_schema TO on;\n\nCREATE SCHEMA ns1; CREATE TABLE ns1.tbl1(id integer);\nCREATE SCHEMA ns2; CREATE TABLE ns2.tbl1(id integer);\n\nSET search_path TO ns1;\nSELECT COUNT(*) FROM tbl1;\nSET search_path TO ns2;\nSELECT COUNT(*) FROM tbl1;\n\nSELECT queryid, query, calls\nFROM public.pg_stat_statements\nWHERE query LIKE '%tbl%';\n queryid | query | calls\n---------------------+---------------------------+-------\n 4629593225724429059 | SELECT count(*) from tbl1 | 2\n(1 row)\n\nSo whether that's a good idea to do that or not, users now have a choice.\n\n[1]: https://github.com/rjuju/pg_queryid\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:17:58 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "Hi Julien, Bruce,\n\nA warning appears on 32 bit systems:\n\nIn file included from pgstatfuncs.c:15:\npgstatfuncs.c: In function 'pg_stat_get_activity':\n../../../../src/include/postgres.h:593:29: warning: cast to pointer\nfrom integer of different size [-Wint-to-pointer-cast]\n 593 | #define DatumGetPointer(X) ((Pointer) (X))\n | ^\n../../../../src/include/postgres.h:678:42: note: in expansion of macro\n'DatumGetPointer'\n 678 | #define DatumGetUInt64(X) (* ((uint64 *) DatumGetPointer(X)))\n | ^~~~~~~~~~~~~~~\npgstatfuncs.c:920:18: note: in expansion of macro 'DatumGetUInt64'\n 920 | values[29] = DatumGetUInt64(beentry->st_queryid);\n | ^~~~~~~~~~~~~~\n\nHmm, maybe this should be UInt64GetDatum()?\n\n\n",
"msg_date": "Thu, 8 Apr 2021 23:36:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 11:36:48PM +1200, Thomas Munro wrote:\n> Hi Julien, Bruce,\n> \n> A warning appears on 32 bit systems:\n> \n> In file included from pgstatfuncs.c:15:\n> pgstatfuncs.c: In function 'pg_stat_get_activity':\n> ../../../../src/include/postgres.h:593:29: warning: cast to pointer\n> from integer of different size [-Wint-to-pointer-cast]\n> 593 | #define DatumGetPointer(X) ((Pointer) (X))\n> | ^\n> ../../../../src/include/postgres.h:678:42: note: in expansion of macro\n> 'DatumGetPointer'\n> 678 | #define DatumGetUInt64(X) (* ((uint64 *) DatumGetPointer(X)))\n> | ^~~~~~~~~~~~~~~\n> pgstatfuncs.c:920:18: note: in expansion of macro 'DatumGetUInt64'\n> 920 | values[29] = DatumGetUInt64(beentry->st_queryid);\n> | ^~~~~~~~~~~~~~\n\nWow, that's really embarrassing :(\n\n> Hmm, maybe this should be UInt64GetDatum()?\n\nYes definitely. I'm attaching the previous patch for force_parallel_mode to\nnot forget it + a new one for this issue.",
"msg_date": "Thu, 8 Apr 2021 20:12:54 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 9:47 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Apr 07, 2021 at 02:12:11PM -0400, Bruce Momjian wrote:\n> >\n> > Patch applied. I am ready to adjust this with any improvements people\n> > might have. Thank you for all the good feedback we got on this, and I\n> > know many users have waited a long time for this feature.\n>\n> Thanks a lot Bruce and everyone! I hope that the users who waited a long time\n> for this will find everything they need.\n>\n\n@@ -1421,8 +1421,9 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc)\n /* Setting debug_query_string for individual workers */\n debug_query_string = queryDesc->sourceText;\n\n- /* Report workers' query for monitoring purposes */\n+ /* Report workers' query and queryId for monitoring purposes */\n pgstat_report_activity(STATE_RUNNING, debug_query_string);\n+ pgstat_report_queryid(queryDesc->plannedstmt->queryId, false);\n\n\nBelow lines down in ParallelQueryMain, we call ExecutorStart which\nwill report queryid, so do we need it here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 17:46:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 05:46:07PM +0530, Amit Kapila wrote:\n> \n> @@ -1421,8 +1421,9 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc)\n> /* Setting debug_query_string for individual workers */\n> debug_query_string = queryDesc->sourceText;\n> \n> - /* Report workers' query for monitoring purposes */\n> + /* Report workers' query and queryId for monitoring purposes */\n> pgstat_report_activity(STATE_RUNNING, debug_query_string);\n> + pgstat_report_queryid(queryDesc->plannedstmt->queryId, false);\n> \n> \n> Below lines down in ParallelQueryMain, we call ExecutorStart which\n> will report queryid, so do we need it here?\n\nCorrect, it's not actually needed. The overhead should be negligible but let's\nget rid of it. Updated fix patchset attached.",
"msg_date": "Thu, 8 Apr 2021 20:27:20 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 08:27:20PM +0800, Julien Rouhaud wrote:\n> On Thu, Apr 08, 2021 at 05:46:07PM +0530, Amit Kapila wrote:\n> > \n> > @@ -1421,8 +1421,9 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc)\n> > /* Setting debug_query_string for individual workers */\n> > debug_query_string = queryDesc->sourceText;\n> > \n> > - /* Report workers' query for monitoring purposes */\n> > + /* Report workers' query and queryId for monitoring purposes */\n> > pgstat_report_activity(STATE_RUNNING, debug_query_string);\n> > + pgstat_report_queryid(queryDesc->plannedstmt->queryId, false);\n> > \n> > \n> > Below lines down in ParallelQueryMain, we call ExecutorStart which\n> > will report queryid, so do we need it here?\n> \n> Correct, it's not actually needed. The overhead should be negligible but let's\n> get rid of it. Updated fix patchset attached.\n\nSorry I messed up the last commit, v4 is ok.",
"msg_date": "Thu, 8 Apr 2021 21:31:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 09:31:27PM +0800, Julien Rouhaud wrote:\n> On Thu, Apr 08, 2021 at 08:27:20PM +0800, Julien Rouhaud wrote:\n> > On Thu, Apr 08, 2021 at 05:46:07PM +0530, Amit Kapila wrote:\n> > > \n> > > @@ -1421,8 +1421,9 @@ ParallelQueryMain(dsm_segment *seg, shm_toc *toc)\n> > > /* Setting debug_query_string for individual workers */\n> > > debug_query_string = queryDesc->sourceText;\n> > > \n> > > - /* Report workers' query for monitoring purposes */\n> > > + /* Report workers' query and queryId for monitoring purposes */\n> > > pgstat_report_activity(STATE_RUNNING, debug_query_string);\n> > > + pgstat_report_queryid(queryDesc->plannedstmt->queryId, false);\n> > > \n> > > \n> > > Below lines down in ParallelQueryMain, we call ExecutorStart which\n> > > will report queryid, so do we need it here?\n> > \n> > Correct, it's not actually needed. The overhead should be negligible but let's\n> > get rid of it. Updated fix patchset attached.\n> \n> Sorry I messed up the last commit, v4 is ok.\n\nPatch applied, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:17:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 7, 2021 at 11:27:04PM -0400, �lvaro Herrera wrote:\n> On 2021-Apr-07, Bruce Momjian wrote:\n> \n> > On Thu, Apr 8, 2021 at 10:38:08AM +0800, Julien Rouhaud wrote:\n> \n> > > Thanks! And I agree with using query_id in the new field names while keeping\n> > > queryid for pg_stat_statements to avoid unnecessary query breakage.\n> > \n> > I think we need more feedback from the group. Do people agree with the\n> > idea above? The question is what to call:\n> > \n> > \tGUC compute_queryid\n> > \tpg_stat_activity.queryid\n> > \tpg_stat_statements.queryid\n> > \n> > using \"queryid\" or \"query_id\", and do they have to match?\n> \n> Seems a matter of personal preference. Mine is to have the underscore\n> everywhere in backend code (where this is new), and let it without the\n> underscore in pg_stat_statements to avoid breaking existing code. Seems\n> to match what Julien is saying.\n\nOK, let's get some details. First, pg_stat_statements.queryid already\nexists (no underscore), and I don't think anyone wants to change that. \n\npg_stat_activity.queryid is new, but I can imagine cases where you would\njoin pg_stat_activity to pg_stat_statements to get an estimate of how\nlong the query will take --- having one using an underscore and another\none not seems odd. Also, looking at the existing pg_stat_activity\ncolumns, those don't use underscores before the \"id\" unless there is a\nmodifier before the \"id\", e.g. 
\"pid\", \"xid\":\n\n\tSELECT\tattname\n\tFROM\tpg_namespace JOIN pg_class ON (pg_namespace.oid = relnamespace)\n\t \tJOIN pg_attribute ON (pg_class.oid = pg_attribute.attrelid)\n\tWHERE\tnspname = 'pg_catalog' AND\n\t\trelname = 'pg_stat_activity' AND\n\t\tattname ~ 'id$';\n\t attname\n\t-------------\n\t backend_xid\n\t datid\n\t leader_pid\n\t pid\n\t queryid\n\t usesysid\n\nWe don't have a modifier before queryid.\n\nIf people like query_id, and I do too, I am thinking we just keep\nquery_id as the GUC (compute_query_id), and just accept that the GUC and\nSQL levels will not match. This is exactly what we have now. I brought\nit up to be sure this is what we want,\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:34:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 11:34:25AM -0400, Bruce Momjian wrote:\n> \n> OK, let's get some details. First, pg_stat_statements.queryid already\n> exists (no underscore), and I don't think anyone wants to change that. \n> \n> pg_stat_activity.queryid is new, but I can imagine cases where you would\n> join pg_stat_activity to pg_stat_statements to get an estimate of how\n> long the query will take --- having one using an underscore and another\n> one not seems odd.\n\nIndeed, and also being able to join with a USING clause rather than an ON could\nalso save some keystrokes. But unfortunately, we already have (userid, dbid)\non pg_stat_statements side vs (usesysid, datid) on pg_stat_activity side, so\nthis unfortunately won't fix all the oddities.\n\n\n",
"msg_date": "Fri, 9 Apr 2021 00:38:29 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 12:38:29AM +0800, Julien Rouhaud wrote:\n> On Thu, Apr 08, 2021 at 11:34:25AM -0400, Bruce Momjian wrote:\n> > \n> > OK, let's get some details. First, pg_stat_statements.queryid already\n> > exists (no underscore), and I don't think anyone wants to change that. \n> > \n> > pg_stat_activity.queryid is new, but I can imagine cases where you would\n> > join pg_stat_activity to pg_stat_statements to get an estimate of how\n> > long the query will take --- having one using an underscore and another\n> > one not seems odd.\n> \n> Indeed, and also being able to join with a USING clause rather than an ON could\n> also save some keystrokes. But unfortunately, we already have (userid, dbid)\n> on pg_stat_statements side vs (usesysid, datid) on pg_stat_activity side, so\n> this unfortunately won't fix all the oddities.\n\nWow, good point. Shame they don't match.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:48:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Apr-08, Bruce Momjian wrote:\n\n> pg_stat_activity.queryid is new, but I can imagine cases where you would\n> join pg_stat_activity to pg_stat_statements to get an estimate of how\n> long the query will take --- having one using an underscore and another\n> one not seems odd.\n\nOK. So far, you have one vote for queryid (your own) and two votes for\nquery_id (mine and Julien's). And even yourself were hesitating about\nit earlier in the thread.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 8 Apr 2021 12:51:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 12:51:06PM -0400, �lvaro Herrera wrote:\n> On 2021-Apr-08, Bruce Momjian wrote:\n> \n> > pg_stat_activity.queryid is new, but I can imagine cases where you would\n> > join pg_stat_activity to pg_stat_statements to get an estimate of how\n> > long the query will take --- having one using an underscore and another\n> > one not seems odd.\n> \n> OK. So far, you have one vote for queryid (your own) and two votes for\n> query_id (mine and Julien's). And even yourself were hesitating about\n> it earlier in the thread.\n\nOK, if people are fine with pg_stat_activity.query_id not matching\npg_stat_statements.queryid, I am fine with that. I just don't want\nsomeone to say it was a big mistake later. ;-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 8 Apr 2021 13:01:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 01:01:42PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 8, 2021 at 12:51:06PM -0400, �lvaro Herrera wrote:\n> > On 2021-Apr-08, Bruce Momjian wrote:\n> > \n> > > pg_stat_activity.queryid is new, but I can imagine cases where you would\n> > > join pg_stat_activity to pg_stat_statements to get an estimate of how\n> > > long the query will take --- having one using an underscore and another\n> > > one not seems odd.\n> > \n> > OK. So far, you have one vote for queryid (your own) and two votes for\n> > query_id (mine and Julien's). And even yourself were hesitating about\n> > it earlier in the thread.\n> \n> OK, if people are fine with pg_stat_activity.query_id not matching\n> pg_stat_statements.queryid, I am fine with that. I just don't want\n> someone to say it was a big mistake later. ;-)\n\nOK, the attached patch renames pg_stat_activity.queryid to 'query_id'. I\nhave not changed any of the APIs which existed before this feature was\nadded, and are called \"queryid\" or \"queryId\" --- it is kind of a mess. \nI assume I should leave those unchanged. It will also need a catversion\nbump.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Mon, 12 Apr 2021 22:12:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:12:46PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 8, 2021 at 01:01:42PM -0400, Bruce Momjian wrote:\n> > On Thu, Apr 8, 2021 at 12:51:06PM -0400, �lvaro Herrera wrote:\n> > > On 2021-Apr-08, Bruce Momjian wrote:\n> > > \n> > > > pg_stat_activity.queryid is new, but I can imagine cases where you would\n> > > > join pg_stat_activity to pg_stat_statements to get an estimate of how\n> > > > long the query will take --- having one using an underscore and another\n> > > > one not seems odd.\n> > > \n> > > OK. So far, you have one vote for queryid (your own) and two votes for\n> > > query_id (mine and Julien's). And even yourself were hesitating about\n> > > it earlier in the thread.\n> > \n> > OK, if people are fine with pg_stat_activity.query_id not matching\n> > pg_stat_statements.queryid, I am fine with that. I just don't want\n> > someone to say it was a big mistake later. ;-)\n> \n> OK, the attached patch renames pg_stat_activity.queryid to 'query_id'. I\n> have not changed any of the APIs which existed before this feature was\n> added, and are called \"queryid\" or \"queryId\" --- it is kind of a mess. \n> I assume I should leave those unchanged. It will also need a catversion\n> bump.\n\n-\tuint64\t\tst_queryid;\n+\tuint64\t\tst_query_id;\n\nI thought we would internally keep queryid/queryId, at least for the variable\nnames as this is the name of the saved field in PlannedStmt.\n\n-extern void pgstat_report_queryid(uint64 queryId, bool force);\n+extern void pgstat_report_query_id(uint64 queryId, bool force);\n\nBut if we don't then it should be \"uint64 query_id\".\n\n\n",
"msg_date": "Tue, 13 Apr 2021 16:06:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 2021-Apr-12, Bruce Momjian wrote:\n\n> OK, the attached patch renames pg_stat_activity.queryid to 'query_id'. I\n> have not changed any of the APIs which existed before this feature was\n> added, and are called \"queryid\" or \"queryId\" --- it is kind of a mess. \n> I assume I should leave those unchanged. It will also need a catversion\n> bump.\n\nI think it is fine actually. These names appear in structs Query and\nPlannedStmt, and every single member of those already uses camelCase\nnaming. Changing those to use \"query_id\" would look out of place.\nYou did change the one in PgBackendStatus to st_query_id, which also\nmatches the naming style in that struct, so that looks fine also.\n\nSo I'm -1 on Julien's first proposed change, and +1 on his second\nproposed change (the name of the first argument of\npgstat_report_query_id should be query_id).\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Tue, 13 Apr 2021 13:30:16 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Apr 13, 2021 at 01:30:16PM -0400, �lvaro Herrera wrote:\n> On 2021-Apr-12, Bruce Momjian wrote:\n> \n> > OK, the attached patch renames pg_stat_activity.queryid to 'query_id'. I\n> > have not changed any of the APIs which existed before this feature was\n> > added, and are called \"queryid\" or \"queryId\" --- it is kind of a mess. \n> > I assume I should leave those unchanged. It will also need a catversion\n> > bump.\n> \n> I think it is fine actually. These names appear in structs Query and\n> PlannedStmt, and every single member of those already uses camelCase\n> naming. Changing those to use \"query_id\" would look out of place.\n> You did change the one in PgBackendStatus to st_query_id, which also\n> matches the naming style in that struct, so that looks fine also.\n> \n> So I'm -1 on Julien's first proposed change, and +1 on his second\n> proposed change (the name of the first argument of\n> pgstat_report_query_id should be query_id).\n\nThanks for your analysis. Updated patch attached with the change\nsuggested above.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.",
"msg_date": "Wed, 14 Apr 2021 14:33:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 02:33:26PM -0400, Bruce Momjian wrote:\n> On Tue, Apr 13, 2021 at 01:30:16PM -0400, �lvaro Herrera wrote:\n> > \n> > So I'm -1 on Julien's first proposed change, and +1 on his second\n> > proposed change (the name of the first argument of\n> > pgstat_report_query_id should be query_id).\n> \n> Thanks for your analysis. Updated patch attached with the change\n> suggested above.\n\nThanks Bruce. It looks good to me.\n\n\n",
"msg_date": "Thu, 15 Apr 2021 16:11:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Apr 14, 2021 at 02:33:26PM -0400, Bruce Momjian wrote:\n> On Tue, Apr 13, 2021 at 01:30:16PM -0400, �lvaro Herrera wrote:\n> > On 2021-Apr-12, Bruce Momjian wrote:\n> > \n> > > OK, the attached patch renames pg_stat_activity.queryid to 'query_id'. I\n> > > have not changed any of the APIs which existed before this feature was\n> > > added, and are called \"queryid\" or \"queryId\" --- it is kind of a mess. \n> > > I assume I should leave those unchanged. It will also need a catversion\n> > > bump.\n> > \n> > I think it is fine actually. These names appear in structs Query and\n> > PlannedStmt, and every single member of those already uses camelCase\n> > naming. Changing those to use \"query_id\" would look out of place.\n> > You did change the one in PgBackendStatus to st_query_id, which also\n> > matches the naming style in that struct, so that looks fine also.\n> > \n> > So I'm -1 on Julien's first proposed change, and +1 on his second\n> > proposed change (the name of the first argument of\n> > pgstat_report_query_id should be query_id).\n> \n> Thanks for your analysis. Updated patch attached with the change\n> suggested above.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 20 Apr 2021 12:22:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "\n\nOn 2021/04/21 1:22, Bruce Momjian wrote:\n> On Wed, Apr 14, 2021 at 02:33:26PM -0400, Bruce Momjian wrote:\n>> On Tue, Apr 13, 2021 at 01:30:16PM -0400, Álvaro Herrera wrote:\n>>> On 2021-Apr-12, Bruce Momjian wrote:\n>>>\n>>>> OK, the attached patch renames pg_stat_activity.queryid to 'query_id'. I\n>>>> have not changed any of the APIs which existed before this feature was\n>>>> added, and are called \"queryid\" or \"queryId\" --- it is kind of a mess.\n>>>> I assume I should leave those unchanged. It will also need a catversion\n>>>> bump.\n>>>\n>>> I think it is fine actually. These names appear in structs Query and\n>>> PlannedStmt, and every single member of those already uses camelCase\n>>> naming. Changing those to use \"query_id\" would look out of place.\n>>> You did change the one in PgBackendStatus to st_query_id, which also\n>>> matches the naming style in that struct, so that looks fine also.\n>>>\n>>> So I'm -1 on Julien's first proposed change, and +1 on his second\n>>> proposed change (the name of the first argument of\n>>> pgstat_report_query_id should be query_id).\n>>\n>> Thanks for your analysis. Updated patch attached with the change\n>> suggested above.\n> \n> Patch applied.\n\nI found another small issue in pg_stat_statements docs. The following\ndescription in the docs should be updated so that toplevel is included?\n\n> This view contains one row for each distinct database ID, user ID and query ID\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 22 Apr 2021 00:28:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n> \n> I found another small issue in pg_stat_statements docs. The following\n> description in the docs should be updated so that toplevel is included?\n> \n> > This view contains one row for each distinct database ID, user ID and query ID\n\nIndeed! I'm adding Magnus in Cc.\n\nPFA a patch to fix at, and also mention that toplevel will only\ncontain True values if pg_stat_statements.track is set to top.",
"msg_date": "Thu, 22 Apr 2021 17:23:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "\n\nOn 2021/04/22 18:23, Julien Rouhaud wrote:\n> On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n>>\n>> I found another small issue in pg_stat_statements docs. The following\n>> description in the docs should be updated so that toplevel is included?\n>>\n>>> This view contains one row for each distinct database ID, user ID and query ID\n> \n> Indeed! I'm adding Magnus in Cc.\n> \n> PFA a patch to fix at, and also mention that toplevel will only\n> contain True values if pg_stat_statements.track is set to top.\n\nThanks for the patch! LGTM.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 23 Apr 2021 16:10:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/22 18:23, Julien Rouhaud wrote:\n> > On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n> >>\n> >> I found another small issue in pg_stat_statements docs. The following\n> >> description in the docs should be updated so that toplevel is included?\n> >>\n> >>> This view contains one row for each distinct database ID, user ID and query ID\n> >\n> > Indeed! I'm adding Magnus in Cc.\n> >\n> > PFA a patch to fix at, and also mention that toplevel will only\n> > contain True values if pg_stat_statements.track is set to top.\n>\n> Thanks for the patch! LGTM.\n\nAgreed, in general. But going by the example a few lines down, I\nchanged the second part to:\n True if the query was executed as a top level statement\n+ (if <varname>pg_stat_statements.track</varname> is set to\n+ <literal>all</literal>, otherwise always false)\n\n(changes the wording, but also the name of the parameter is\npg_stat_statements.track, not pg_stat_statements.toplevel (that's the\ncolumn, not the parameter). Same error in the commit msg except there\nyou called it pg_stat_statements.top - but that one needed some more\nfix as well)\n\nWith those changes, applied. Thanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 23 Apr 2021 11:46:34 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "\n\nOn 2021/04/23 18:46, Magnus Hagander wrote:\n> On Fri, Apr 23, 2021 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/04/22 18:23, Julien Rouhaud wrote:\n>>> On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n>>>>\n>>>> I found another small issue in pg_stat_statements docs. The following\n>>>> description in the docs should be updated so that toplevel is included?\n>>>>\n>>>>> This view contains one row for each distinct database ID, user ID and query ID\n>>>\n>>> Indeed! I'm adding Magnus in Cc.\n>>>\n>>> PFA a patch to fix at, and also mention that toplevel will only\n>>> contain True values if pg_stat_statements.track is set to top.\n>>\n>> Thanks for the patch! LGTM.\n> \n> Agreed, in general. But going by the example a few lines down, I\n> changed the second part to:\n> True if the query was executed as a top level statement\n> + (if <varname>pg_stat_statements.track</varname> is set to\n> + <literal>all</literal>, otherwise always false)\n\nIsn't this confusing? Users may mistakenly read this as that the toplevel\ncolumn always indicates false if pg_stat_statements.track is not \"all\".\n\n\n> (changes the wording, but also the name of the parameter is\n> pg_stat_statements.track, not pg_stat_statements.toplevel (that's the\n> column, not the parameter). Same error in the commit msg except there\n> you called it pg_stat_statements.top - but that one needed some more\n> fix as well)\n> \n> With those changes, applied. Thanks!\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 23 Apr 2021 19:04:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 12:04 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/23 18:46, Magnus Hagander wrote:\n> > On Fri, Apr 23, 2021 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2021/04/22 18:23, Julien Rouhaud wrote:\n> >>> On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n> >>>>\n> >>>> I found another small issue in pg_stat_statements docs. The following\n> >>>> description in the docs should be updated so that toplevel is included?\n> >>>>\n> >>>>> This view contains one row for each distinct database ID, user ID and query ID\n> >>>\n> >>> Indeed! I'm adding Magnus in Cc.\n> >>>\n> >>> PFA a patch to fix at, and also mention that toplevel will only\n> >>> contain True values if pg_stat_statements.track is set to top.\n> >>\n> >> Thanks for the patch! LGTM.\n> >\n> > Agreed, in general. But going by the example a few lines down, I\n> > changed the second part to:\n> > True if the query was executed as a top level statement\n> > + (if <varname>pg_stat_statements.track</varname> is set to\n> > + <literal>all</literal>, otherwise always false)\n>\n> Isn't this confusing? Users may mistakenly read this as that the toplevel\n> column always indicates false if pg_stat_statements.track is not \"all\".\n\nHmm. I think you're right. It should say \"always true\", shouldn't it?\nSo not just confusing, but completely wrong? :)\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 23 Apr 2021 12:11:31 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "\n\nOn 2021/04/23 19:11, Magnus Hagander wrote:\n> On Fri, Apr 23, 2021 at 12:04 PM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>\n>>\n>> On 2021/04/23 18:46, Magnus Hagander wrote:\n>>> On Fri, Apr 23, 2021 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>\n>>>>\n>>>>\n>>>> On 2021/04/22 18:23, Julien Rouhaud wrote:\n>>>>> On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n>>>>>>\n>>>>>> I found another small issue in pg_stat_statements docs. The following\n>>>>>> description in the docs should be updated so that toplevel is included?\n>>>>>>\n>>>>>>> This view contains one row for each distinct database ID, user ID and query ID\n>>>>>\n>>>>> Indeed! I'm adding Magnus in Cc.\n>>>>>\n>>>>> PFA a patch to fix at, and also mention that toplevel will only\n>>>>> contain True values if pg_stat_statements.track is set to top.\n>>>>\n>>>> Thanks for the patch! LGTM.\n>>>\n>>> Agreed, in general. But going by the example a few lines down, I\n>>> changed the second part to:\n>>> True if the query was executed as a top level statement\n>>> + (if <varname>pg_stat_statements.track</varname> is set to\n>>> + <literal>all</literal>, otherwise always false)\n>>\n>> Isn't this confusing? Users may mistakenly read this as that the toplevel\n>> column always indicates false if pg_stat_statements.track is not \"all\".\n> \n> Hmm. I think you're right. It should say \"always true\", shouldn't it?\n\nYou're thinking something like the following?\n\n True if the query was executed as a top level statement\n (if <varname>pg_stat_statements.track</varname> is set to\n <literal>top</literal>, always true)\n\n> So not just confusing, but completely wrong? 
:)\n\nYeah :)\n\nI'm fine with the original wording by Julien.\nOf course, the parameter name should be corrected as you did, though.\n\nOr what about the following?\n\n True if the query was executed as a top level statement\n (this can be <literal>false</literal> only if\n <varname>pg_stat_statements.track</varname> is set to\n <literal>all</literal> and nested statements are also tracked)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 23 Apr 2021 19:40:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 12:40 PM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/04/23 19:11, Magnus Hagander wrote:\n> > On Fri, Apr 23, 2021 at 12:04 PM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> >>\n> >>\n> >>\n> >> On 2021/04/23 18:46, Magnus Hagander wrote:\n> >>> On Fri, Apr 23, 2021 at 9:10 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> On 2021/04/22 18:23, Julien Rouhaud wrote:\n> >>>>> On Thu, Apr 22, 2021 at 12:28:11AM +0900, Fujii Masao wrote:\n> >>>>>>\n> >>>>>> I found another small issue in pg_stat_statements docs. The following\n> >>>>>> description in the docs should be updated so that toplevel is included?\n> >>>>>>\n> >>>>>>> This view contains one row for each distinct database ID, user ID and query ID\n> >>>>>\n> >>>>> Indeed! I'm adding Magnus in Cc.\n> >>>>>\n> >>>>> PFA a patch to fix at, and also mention that toplevel will only\n> >>>>> contain True values if pg_stat_statements.track is set to top.\n> >>>>\n> >>>> Thanks for the patch! LGTM.\n> >>>\n> >>> Agreed, in general. But going by the example a few lines down, I\n> >>> changed the second part to:\n> >>> True if the query was executed as a top level statement\n> >>> + (if <varname>pg_stat_statements.track</varname> is set to\n> >>> + <literal>all</literal>, otherwise always false)\n> >>\n> >> Isn't this confusing? Users may mistakenly read this as that the toplevel\n> >> column always indicates false if pg_stat_statements.track is not \"all\".\n> >\n> > Hmm. I think you're right. It should say \"always true\", shouldn't it?\n>\n> You're thinking something like the following?\n>\n> True if the query was executed as a top level statement\n> (if <varname>pg_stat_statements.track</varname> is set to\n> <literal>top</literal>, always true)\n>\n> > So not just confusing, but completely wrong? :)\n>\n> Yeah :)\n\nUgh. 
I completely lost track of this email.\n\nI've applied the change suggested above with another slight reordering\nof the words:\n\n+ (always true if <varname>pg_stat_statements.track</varname> is set to\n+ <literal>top</literal>)\n\n\n> I'm fine with the original wording by Julien.\n> Of course, the parameter name should be corrected as you did, though.\n>\n> Or what about the following?\n>\n> True if the query was executed as a top level statement\n> (this can be <literal>false</literal> only if\n> <varname>pg_stat_statements.track</varname> is set to\n> <literal>all</literal> and nested statements are also tracked)\n\nI found my suggestion, once the final reordering of words was done,\neasier to parse.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 17 May 2021 11:02:29 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On 22.04.21 11:23, Julien Rouhaud wrote:\n> The statistics gathered by the module are made available via a\n> view named <structname>pg_stat_statements</structname>. This view\n> - contains one row for each distinct database ID, user ID and query\n> - ID (up to the maximum number of distinct statements that the module\n> + contains one row for each distinct database ID, user ID, query ID and\n> + toplevel (up to the maximum number of distinct statements that the module\n> can track). The columns of the view are shown in\n> <xref linkend=\"pgstatstatements-columns\"/>.\n\nI'm having trouble parsing this new sentence. It now says essentially\n\n\"This view contains one row for each distinct database ID, each distinct \nuser ID, each distinct query ID, and each distinct toplevel.\"\n\nThat last part doesn't make sense.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 22:02:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 10:02:59PM +0200, Peter Eisentraut wrote:\n> On 22.04.21 11:23, Julien Rouhaud wrote:\n> > The statistics gathered by the module are made available via a\n> > view named <structname>pg_stat_statements</structname>. This view\n> > - contains one row for each distinct database ID, user ID and query\n> > - ID (up to the maximum number of distinct statements that the module\n> > + contains one row for each distinct database ID, user ID, query ID and\n> > + toplevel (up to the maximum number of distinct statements that the module\n> > can track). The columns of the view are shown in\n> > <xref linkend=\"pgstatstatements-columns\"/>.\n> \n> I'm having trouble parsing this new sentence. It now says essentially\n> \n> \"This view contains one row for each distinct database ID, each distinct\n> user ID, each distinct query ID, and each distinct toplevel.\"\n\nIsn't it each distinct permutation of all those fields?\n\n> That last part doesn't make sense.\n\nI'm not sure what you mean by that. Maybe it's not really self explanatory\nwithout referring to what toplevel is, which is a bool flag stating whether the\nstatement was exected as a top level statement or not.\n\nSo every distinct permutation of (dbid, userid, queryid) can indeed be stored\ntwice, if pg_stat_statements.track is set to all. However in practice most\nstatements are not executed both as top level and nested statements.\n\n\n",
"msg_date": "Tue, 13 Jul 2021 14:10:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 8:10 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 10:02:59PM +0200, Peter Eisentraut wrote:\n> > On 22.04.21 11:23, Julien Rouhaud wrote:\n> > > The statistics gathered by the module are made available via a\n> > > view named <structname>pg_stat_statements</structname>. This view\n> > > - contains one row for each distinct database ID, user ID and query\n> > > - ID (up to the maximum number of distinct statements that the module\n> > > + contains one row for each distinct database ID, user ID, query ID and\n> > > + toplevel (up to the maximum number of distinct statements that the module\n> > > can track). The columns of the view are shown in\n> > > <xref linkend=\"pgstatstatements-columns\"/>.\n> >\n> > I'm having trouble parsing this new sentence. It now says essentially\n> >\n> > \"This view contains one row for each distinct database ID, each distinct\n> > user ID, each distinct query ID, and each distinct toplevel.\"\n>\n> Isn't it each distinct permutation of all those fields?\n\nMaybe a problem for the readability of it is that the three other\nfields are listed by their description and not by their fieldname, and\ntoplevel is fieldname?\n\nMaybe \"each distinct database id, each distinct user id, each distinct\nquery id, and whether it is a top level statement or not\"?\n\nOr maybe \"each distinct combination of database id, user id, query id\nand whether it's a top level statement or not\"?\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 13 Jul 2021 10:58:12 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 10:58:12AM +0200, Magnus Hagander wrote:\n> \n> Maybe a problem for the readability of it is that the three other\n> fields are listed by their description and not by their fieldname, and\n> toplevel is fieldname?\n\nI think so too.\n\n> Maybe \"each distinct database id, each distinct user id, each distinct\n> query id, and whether it is a top level statement or not\"?\n> \n> Or maybe \"each distinct combination of database id, user id, query id\n> and whether it's a top level statement or not\"?\n\nI like the 2nd one better. What about \"and its top level status\"? It would\nkeep the sentence short and the full description is right after if needed.\n\n\n",
"msg_date": "Tue, 13 Jul 2021 17:38:52 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On 13.07.21 10:58, Magnus Hagander wrote:\n> Maybe \"each distinct database id, each distinct user id, each distinct\n> query id, and whether it is a top level statement or not\"?\n> \n> Or maybe \"each distinct combination of database id, user id, query id\n> and whether it's a top level statement or not\"?\n\nOkay, now I understand what is meant here. The second one sounds good \nto me.\n\n\n",
"msg_date": "Wed, 14 Jul 2021 06:36:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity view?"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 6:36 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 13.07.21 10:58, Magnus Hagander wrote:\n> > Maybe \"each distinct database id, each distinct user id, each distinct\n> > query id, and whether it is a top level statement or not\"?\n> >\n> > Or maybe \"each distinct combination of database id, user id, query id\n> > and whether it's a top level statement or not\"?\n>\n> Okay, now I understand what is meant here. The second one sounds good\n> to me.\n\nThanks, will push a fix like that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 14 Jul 2021 11:09:54 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature improvement: can we add queryId for\n pg_catalog.pg_stat_activity\n view?"
}
]
[
{
"msg_contents": "Hi,\n\nPL/Java implements JDBC Savepoints using BeginInternalSubTransaction/\nReleaseCurrentSubTransaction/RollbackAndReleaseCurrentSubTransaction.\n\nThat seems to be the Accepted Way of Doing Things within backend PLs\nthat want control over error recovery, am I right?\n\nPL/Java also strictly enforces that such a subxact set within a Java\nfunction must be released or rolled back by the time that function\nreturns.\n\nThe reasoning there is less obvious to me; my intuition would have been\nthat a subtransaction could remain in play for the life of its containing\ntransaction, which could have been started outside of this Java function;\nby holding a reference to the JDBC Savepoint object, a later Java function\ncalled in the same transaction could release it or roll it back.\n\nBut I am beginning to suspect that the limitation may be essential, given\nthe comments in xact.c around StartSubTransaction and how its effects would\nget clobbered on exit from a Portal, so a subxact started by an actual\nSAVEPOINT is started in two steps, the later one after the Portal has\nexited. By contrast, within a function (being executed inside a Portal?),\nI have to use BeginInternalSubTransaction, which combines the multiple steps\ninto one, but whose effects wouldn't survive the exit of the Portal.\n\nHave I reverse-engineered this reasoning correctly? If so, I'll add some\ncomments about it in the PL/Java code where somebody may be thankful for\nthem later.\n\nOr, if it turns out the limitation isn't so inescapable, and could be\nrelaxed to allow a subxact lifetime longer than the single function that\nstarts it, I could look into doing that.\n\nThanks!\n-Chap\n\n",
"msg_date": "Fri, 15 Mar 2019 21:06:22 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Lifespan of a BeginInternalSubTransaction subxact ?"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> PL/Java implements JDBC Savepoints using BeginInternalSubTransaction/\n> ReleaseCurrentSubTransaction/RollbackAndReleaseCurrentSubTransaction.\n> That seems to be the Accepted Way of Doing Things within backend PLs\n> that want control over error recovery, am I right?\n\nSounds about right, though I haven't checked the details exactly.\n\n> PL/Java also strictly enforces that such a subxact set within a Java\n> function must be released or rolled back by the time that function\n> returns.\n\nYup.\n\n> The reasoning there is less obvious to me; my intuition would have been\n> that a subtransaction could remain in play for the life of its containing\n> transaction, which could have been started outside of this Java function;\n\nWhen control returns from a function, we resume executing the statement\nthat called it. This can *not* be in a different (sub)transaction than\nthe statement started in; that wouldn't make any sense logically, and\nit certainly won't work from an implementation standpoint either.\n\nThe rules are laxer for procedures, I believe; at the very least those are\nallowed to commit the calling transaction and start a new one. I'm less\nsure about how they can interact with subtransactions. To support this,\na CALL statement has to not have any internal state that persists past\nthe procedure call. But a function cannot expect that the calling\nstatement lacks internal state.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 15 Mar 2019 21:20:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lifespan of a BeginInternalSubTransaction subxact ?"
}
]
[
{
"msg_contents": "Hi all,\n(related folks in CC)\n\nSergei Kornilov has reported here an issue with pg_checksums:\nhttps://www.postgresql.org/message-id/5217311552474471@myt2-66bcb87429e6.qloud-c.yandex.net\n\nIf the block size the tool is compiled with does not match the data\nfolder block size, then users would get incorrect checksums failures,\nwhich is confusing. As pg_checksum_block() uses directly the block\nsize, this cannot really be made dynamic yet, so we had better issue\nan error on that. Michael Banck has sent a patch for that:\nhttps://www.postgresql.org/message-id/1552476561.4947.67.camel@credativ.de\n\nThe error message proposed is like that:\n+ if (ControlFile->blcksz != BLCKSZ)\n+ {\n+ fprintf(stderr, _(\"%s: data directory block size %d is different to compiled-in block size %d.\\n\"),\n+ progname, ControlFile->blcksz, BLCKSZ);\n+ exit(1);\n+ }\nStill I think that we could do better.\n\nHere is a proposal of message which looks more natural to me, and more\nconsistent with what xlog.c complains about:\ndatabase files are incompatible with pg_checksums.\nThe database cluster was initialized with BLCKSZ %d, but pg_checksums\nwas compiled with BLCKSZ %d.\n\nHas somebody a better wording for that? Attached is a proposal of\npatch.\n--\nMichael",
"msg_date": "Sat, 16 Mar 2019 10:21:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n> If the block size the tool is compiled with does not match the data\n> folder block size, then users would get incorrect checksums failures,\n\nOr worse, incorrect checksump writing under \"enabling\"?\n\nInitial proposal:\n\n \"%s: data directory block size %d is different to compiled-in block size %d.\\n\"\n\n> Has somebody a better wording for that? Attached is a proposal of\n> patch.\n\n \"%s: database files are incompatible with pg_checksums.\\n\"\n \"%s: The database cluster was initialized with BLCKSZ %u, but pg_checksums was compiled with BLCKSZ %u.\"\n\nSecond line is missing a \"\\n\". \"pg_checksums\" does not need to appear, it \nis already the progname, and if it differs there is no point in giving a \nwrong name. I think it could be shorter. What about:\n\n \"%s: cannot compute checksums, command compiled with BLCKSZ %u but cluster initialized with BLCKSZ %u.\\n\"\n\nI think it would be better to adapt the checksum computation, but this is \nindeed non trivial because of the way the BLCKSZ constant is hardwired \ninto type declarations.\n\n-- \nFabien.",
"msg_date": "Sat, 16 Mar 2019 09:18:34 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sat, Mar 16, 2019 at 2:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi all,\n> (related folks in CC)\n>\n> Sergei Kornilov has reported here an issue with pg_checksums:\n>\n> https://www.postgresql.org/message-id/5217311552474471@myt2-66bcb87429e6.qloud-c.yandex.net\n>\n> If the block size the tool is compiled with does not match the data\n> folder block size, then users would get incorrect checksums failures,\n> which is confusing. As pg_checksum_block() uses directly the block\n> size, this cannot really be made dynamic yet, so we had better issue\n> an error on that. Michael Banck has sent a patch for that:\n> https://www.postgresql.org/message-id/1552476561.4947.67.camel@credativ.de\n>\n> The error message proposed is like that:\n> + if (ControlFile->blcksz != BLCKSZ)\n> + {\n> + fprintf(stderr, _(\"%s: data directory block size %d is different\n> to compiled-in block size %d.\\n\"),\n> + progname, ControlFile->blcksz, BLCKSZ);\n> + exit(1);\n> + }\n> Still I think that we could do better.\n>\n> Here is a proposal of message which looks more natural to me, and more\n> consistent with what xlog.c complains about:\n> database files are incompatible with pg_checksums.\n> The database cluster was initialized with BLCKSZ %d, but pg_checksums\n> was compiled with BLCKSZ %d.\n>\n\nBLCKSZ is very much an internal term. The exposed name through pg_settings\nis block_size, so I think the original was better. Combining that one with\nyours into \"initialized with block size %d\" etc, makes it a lot nicer.\n\nThe \"incompatible with pg_checksums\" part may be a bit redundant with the\ncommandname at the start as well, as I now realized Fabien pointed out\ndownthread. 
But I would suggest just cutting it and saying \"%s: database\nfiles are incompatible\" or maybe \"%s: data directory is incompatible\" even?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 16 Mar 2019 11:18:17 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sat, Mar 16, 2019 at 09:18:34AM +0100, Fabien COELHO wrote:\n>> If the block size the tool is compiled with does not match the data\n>> folder block size, then users would get incorrect checksums failures,\n> \n> Or worse, incorrect checksump writing under \"enabling\"?\n\nLet's hope that we make that possible for v12. We'll see.\n\n> Second line is missing a \"\\n\". \"pg_checksums\" does not need to appear, it is\n> already the progname, and if it differs there is no point in giving a wrong\n> name. I think it could be shorter. What about:\n\nSomething like \"%s: database folder is incompatible\" for the first\nline sounds kind of better per the feedback gathered. And then on the\nsecond line:\n\"The database cluster was initialized with block size %u, but\npg_checksums was compiled with block size %u.\"\n\n> I think it would be better to adapt the checksum computation, but this is\n> indeed non trivial because of the way the BLCKSZ constant is hardwired into\n> type declarations.\n\nThat's actually the possibility I was pointing out upthread. I am not\nsure that the use cases are worth the effort though.\n--\nMichael",
"msg_date": "Sun, 17 Mar 2019 14:46:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "\n> Something like \"%s: database folder is incompatible\" for the first\n> line sounds kind of better per the feedback gathered. And then on the\n> second line:\n> \"The database cluster was initialized with block size %u, but\n> pg_checksums was compiled with block size %u.\"\n\nOk. \"%s\" progname instead of \"pg_checksums\", or just \"the command\"?\n\nI'm not sure that prefixing the two lines with the comment line is very \nelegant, I'd suggest to put spaces, and would still try to shorten the \nsecond sentence, maybe:\n\n%s: incompatible database cluster of block size %u, while the command\n is compiled for block size %u.\n\n>> I think it would be better to adapt the checksum computation, but this is\n>> indeed non trivial because of the way the BLCKSZ constant is hardwired into\n>> type declarations.\n>\n> That's actually the possibility I was pointing out upthread.\n\nYes, I was expressing my agreement.\n\n> I am not sure that the use cases are worth the effort though.\n\nIndeed, not for \"pg_checksums\" only.\n\nA few years I considered to have an dynamic initdb-set block size, but \nBLCKSZ is used a lot as a constant for struct declaration and to compute \nother constants, so that was a lot of changes. I think it would be worth \nthe effort because the current page size is suboptimal especially on SSD \nwhere 4 KiB would provide over 10% better performance for OLTP load. \nHowever, having to recompile to change it is a pain and not very package \nfriendly.\n\n-- \nFabien.\n\n",
"msg_date": "Sun, 17 Mar 2019 09:17:02 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 09:17:02AM +0100, Fabien COELHO wrote:\n> I'm not sure that prefixing the two lines with the comment line is very\n> elegant, I'd suggest to put spaces, and would still try to shorten the\n> second sentence, maybe:\n\nI suggest to keep two lines, and only prefix the first one.\n--\nMichael",
"msg_date": "Sun, 17 Mar 2019 17:25:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sat, Mar 16, 2019 at 11:18:17AM +0100, Magnus Hagander wrote:\n> BLCKSZ is very much an internal term. The exposed name through pg_settings\n> is block_size, so I think the original was better. Combining that one with\n> yours into \"initialized with block size %d\" etc, makes it a lot nicer.\n\nYes, what Fabien and you say here makes sense.\n\n> The \"incompatible with pg_checksums\" part may be a bit redundant with the\n> commandname at the start as well, as I now realized Fabien pointed out\n> downthread. But I would suggest just cutting it and saying \"%s: database\n> files are incompatible\" or maybe \"%s: data directory is incompatible\" even?\n\n\"Cluster\" is more consistent with the surroundings. So what about the\nattached then?\n--\nMichael",
"msg_date": "Sun, 17 Mar 2019 18:10:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "\n>> I'm not sure that prefixing the two lines with the comment line is very\n>> elegant, I'd suggest to put spaces, and would still try to shorten the\n>> second sentence, maybe:\n>\n> I suggest to keep two lines, and only prefix the first one.\n\nAs you feel. For me the shorter the better, though, if the information is \nclear and all there.\n\n-- \nFabien.\n\n",
"msg_date": "Sun, 17 Mar 2019 10:11:32 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 10:10 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Sat, Mar 16, 2019 at 11:18:17AM +0100, Magnus Hagander wrote:\n> > BLCKSZ is very much an internal term. The exposed name through\n> pg_settings\n> > is block_size, so I think the original was better. Combining that one\n> with\n> > yours into \"initialized with block size %d\" etc, makes it a lot nicer.\n>\n> Yes, what Fabien and you say here makes sense.\n>\n> > The \"incompatible with pg_checksums\" part may be a bit redundant with the\n> > commandname at the start as well, as I now realized Fabien pointed out\n> > downthread. But I would suggest just cutting it and saying \"%s: database\n> > files are incompatible\" or maybe \"%s: data directory is incompatible\"\n> even?\n>\n> \"Cluster\" is more consistent with the surroundings. So what about the\n> attached then?\n>\n\nLGTM.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sun, Mar 17, 2019 at 10:10 AM Michael Paquier <michael@paquier.xyz> wrote:On Sat, Mar 16, 2019 at 11:18:17AM +0100, Magnus Hagander wrote:\n> BLCKSZ is very much an internal term. The exposed name through pg_settings\n> is block_size, so I think the original was better. Combining that one with\n> yours into \"initialized with block size %d\" etc, makes it a lot nicer.\n\nYes, what Fabien and you say here makes sense.\n\n> The \"incompatible with pg_checksums\" part may be a bit redundant with the\n> commandname at the start as well, as I now realized Fabien pointed out\n> downthread. But I would suggest just cutting it and saying \"%s: database\n> files are incompatible\" or maybe \"%s: data directory is incompatible\" even?\n\n\"Cluster\" is more consistent with the surroundings. So what about the\nattached then?LGTM.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sun, 17 Mar 2019 12:01:31 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 6:47 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Mar 16, 2019 at 09:18:34AM +0100, Fabien COELHO wrote:\n> > I think it would be better to adapt the checksum computation, but this is\n> > indeed non trivial because of the way the BLCKSZ constant is hardwired\n> into\n> > type declarations.\n>\n> That's actually the possibility I was pointing out upthread. I am not\n> sure that the use cases are worth the effort though.\n>\n\nIt may be worthwhile, but I think we shouldn't target that for v12 --\nconsider it a potential improvement for upcoming version. Let's focus on\nthe things we have now to make sure we get those polished and applied first.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sun, Mar 17, 2019 at 6:47 AM Michael Paquier <michael@paquier.xyz> wrote:On Sat, Mar 16, 2019 at 09:18:34AM +0100, Fabien COELHO wrote:> I think it would be better to adapt the checksum computation, but this is\n> indeed non trivial because of the way the BLCKSZ constant is hardwired into\n> type declarations.\n\nThat's actually the possibility I was pointing out upthread. I am not\nsure that the use cases are worth the effort though.It may be worthwhile, but I think we shouldn't target that for v12 -- consider it a potential improvement for upcoming version. Let's focus on the things we have now to make sure we get those polished and applied first.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sun, 17 Mar 2019 12:03:04 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 12:01:31PM +0100, Magnus Hagander wrote:\n> LGTM.\n\nOkay, committed.\n--\nMichael",
"msg_date": "Mon, 18 Mar 2019 09:14:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Make pg_checksums complain if compiled BLCKSZ and data folder's\n block size differ"
}
] |
[
{
"msg_contents": "I noticed an odd buildfarm failure today:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2019-03-16%2012%3A12%3A20\n\nof which the key bit seems to be\n\n2019-03-16 15:20:43.835 UTC [10879304] 003_promote.pl LOG: received replication command: BASE_BACKUP LABEL 'pg_basebackup base backup' NOWAIT \n2019-03-16 15:20:45.857 UTC [10879304] 003_promote.pl ERROR: could not request checkpoint because checkpointer not running\n2019-03-16 15:20:47.227 UTC [61604144] LOG: received immediate shutdown request\n\nDigging in the buildfarm archives finds seven other occurrences of the\nsame error in the past three months (I didn't look back further).\n\nThe cause of this error is that RequestCheckpoint will give up and fail\nafter just 2 seconds, which evidently is not long enough on slow or\nheavily loaded machines. Since there isn't any good reason why the\ncheckpointer wouldn't be running, I'm inclined to swing a large hammer\nand kick this timeout up to 60 seconds. Thoughts?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 12:07:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Unduly short fuse in RequestCheckpoint"
},
{
"msg_contents": "I wrote:\n> The cause of this error is that RequestCheckpoint will give up and fail\n> after just 2 seconds, which evidently is not long enough on slow or\n> heavily loaded machines. Since there isn't any good reason why the\n> checkpointer wouldn't be running, I'm inclined to swing a large hammer\n> and kick this timeout up to 60 seconds. Thoughts?\n\nSo I had imagined this as about a 2-line patch, s/2/60/g and be done.\nLooking closer, though, there's other pre-existing problems in this code:\n\n1. As it's currently coded, the requesting process can wait for up to 2\nseconds for the checkpointer to start *even if the caller did not say\nCHECKPOINT_WAIT*. That seems a little bogus, and an unwanted 60-second\nwait would be a lot bogus.\n\n2. If the timeout does elapse, and we didn't have the CHECKPOINT_WAIT\nflag, we just log the failure and return. When the checkpointer\nultimately does start, it will have no idea that it should set to work\nright away on a checkpoint. (I wonder if this accounts for any other\nof the irreproducible buildfarm failures we get on slow machines. From\nthe calling code's viewpoint, it'd seem like it was taking a darn long\ntime to perform a successfully-requested checkpoint. Given that most\ncheckpoint requests are non-WAIT, this seems not very nice.)\n\nAfter some thought I came up with the attached proposed patch. The\nbasic idea here is that we record a checkpoint request by ensuring\nthat the shared-memory ckpt_flags word is nonzero. (It's not clear\nto me that a valid request would always have at least one of the\nexisting flag bits set, so I just added an extra always-set bit to\nguarantee this.) Then, whether the signal gets sent or not, there is\na persistent record of the request in shmem, which the checkpointer\nwill eventually notice. 
In the expected case where the problem is\nthat the checkpointer hasn't started just yet, it will see the flag\nduring its first main loop and begin a checkpoint right away.\nI took out the local checkpoint_requested flag altogether.\n\nA possible objection to this fix is that up to now, it's been possible\nto trigger a checkpoint just by sending SIGINT to the checkpointer\nprocess, without touching shmem at all. None of the core code depends\non that, and since the checkpointer's PID is difficult to find out\nfrom \"outside\", it's hard to believe that anybody's got custom tooling\nthat depends on it, but perhaps they do. I thought about keeping the\ncheckpoint_requested flag to allow that to continue to work, but if\nwe do so then we have a race condition: the checkpointer could see the\nshmem flag set and start a checkpoint, then receive the signal a moment\nlater and believe that that represents a second, independent request\nrequiring a second checkpoint. So I think we should just blow off that\nhypothetical possibility and do it like this.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 17 Mar 2019 15:41:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unduly short fuse in RequestCheckpoint"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 3:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I think we should just blow off that\n> hypothetical possibility and do it like this.\n\nMakes sense to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 18 Mar 2019 10:54:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Unduly short fuse in RequestCheckpoint"
}
] |
[
{
"msg_contents": "Hello all,\n\nWhile looking over the new jsonpath stuff I noticed the keyword table\nwasn't declared const. Shouldn't the table and the actual keyword\nstrings both be declared const? Perhaps something like the attached\n(untested) patch.\n\n-Mark",
"msg_date": "Sat, 16 Mar 2019 14:54:18 -0400",
"msg_from": "Mark G <markg735@gmail.com>",
"msg_from_op": true,
"msg_subject": "Keyword table constness in jsonpath scanner."
},
{
"msg_contents": "On Sat, Mar 16, 2019 at 10:47 PM Mark G <markg735@gmail.com> wrote:\n> While looking over the new jsonpath stuff I noticed the keyword table\n> wasn't declared const. Shouldn't the table and the actual keyword\n> strings both be declared const? Perhaps something like the attached\n> (untested) patch.\n\nLooks good to me. Pushed, thanks.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Sun, 17 Mar 2019 12:52:56 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Keyword table constness in jsonpath scanner."
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm investigating the issue I reported here:\nhttps://www.postgresql.org/message-id/flat/153478795159.1302.9617586466368699403%40wrigleys.postgresql.org\n\nAs Tom Lane mentioned there, the docs (8.13) indicate xmloption = CONTENT\nshould accept all valid XML. At this time, XML with a DOCTYPE declaration\nis not accepted with this setting even though it is considered valid XML.\nI'd like to work on a patch to address this issue and make it work as\nadvertised.\n\nI traced the source of the error to line ~1500 in\n/src/backend/utils/adt/xml.c\n\nres_code = xmlParseBalancedChunkMemory(doc, NULL, NULL, 0, utf8string +\ncount, NULL);\n\nIt looks like it is xmlParseBalancedChunkMemory from libxml that doesn't\nwork when there's a DOCTYPE in the XML data. My assumption is the DOCTYPE\nelement makes the XML not well-balanced. From:\n\nhttp://xmlsoft.org/html/libxml-parser.html#xmlParseBalancedChunkMemory\n\nThis function returns:\n\n> 0 if the chunk is well balanced, -1 in case of args problem and the parser\n> error code otherwise\n\n\nI see xmlParseBalancedChunkMemoryRecover that might provide the\nfunctionality needed. That function returns:\n\n0 if the chunk is well balanced, -1 in case of args problem and the parser\n> error code otherwise In case recover is set to 1, the nodelist will not be\n> empty even if the parsed chunk is not well balanced, assuming the parsing\n> succeeded to some extent.\n\n\nI haven't tested yet to see if this parses the data w/ DOCTYPE successfully\nyet. If it does, I don't think it would be difficult to update the check\non res_code to not fail. I'm making another assumption that there is a\ndistinct code from libxml to differentiate from other errors, but I\ncouldn't find those codes quickly. The current check is this:\n\nif (res_code != 0 || xmlerrcxt->err_occurred)\n\nDoes this sound reasonable? Have I missed some major aspect? 
If this is\non the right track I can work on creating a patch to move this forward.\n\nThanks,\n\n*Ryan Lambert*\nRustProof Labs\nwww.rustprooflabs.com",
"msg_date": "Sat, 16 Mar 2019 14:10:56 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/16/19 16:10, Ryan Lambert wrote:\n> As Tom Lane mentioned there, the docs (8.13) indicate xmloption = CONTENT\n> should accept all valid XML. At this time, XML with a DOCTYPE declaration\n> is not accepted with this setting even though it is considered valid XML.\n\nHello Ryan,\n\nA patch for your issue is currently registered in the 2019-03 commitfest[1].\n\nIf it attracts somebody to review it before the end of the month, it might\nmake it into PG v12.\n\nIt is the xml-content-2006-2.patch found on the email thread [2]. (The other\npatch found there is associated documentation fixes, and also needs to be\nreviewed.)\n\nFurther conversation should probably be on that email thread so that it\nstays associated with the commitfest entry.\n\nThanks for your interest in the issue!\n\nRegards,\nChapman Flack\n\n[1] https://commitfest.postgresql.org/22/1872/\n[2] https://www.postgresql.org/message-id/flat/5C81F8C0.6090901@anastigmatix.net\n\n",
"msg_date": "Sat, 16 Mar 2019 16:31:18 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Ryan Lambert <ryan@rustprooflabs.com> writes:\n> I'm investigating the issue I reported here:\n> https://www.postgresql.org/message-id/flat/153478795159.1302.9617586466368699403%40wrigleys.postgresql.org\n> I'd like to work on a patch to address this issue and make it work as\n> advertised.\n\nGood idea, because it doesn't seem like anybody else cares ...\n\n> I see xmlParseBalancedChunkMemoryRecover that might provide the\n> functionality needed.\n\nTBH, our experience with libxml has not been so positive that I'd think\nadding dependencies on new parts of its API would be a good plan.\n\nExperimenting with different inputs, it seems like removing the\n\"<!DOCTYPE ...>\" tag is enough to make it work. So what I'm wondering\nabout is writing something like parse_xml_decl() to skip over that.\n\nBear in mind though that I know next to zip about XML. There may be\nsome good reason why we don't want to strip off the !DOCTYPE part\nfrom what libxml sees.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 16:42:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> A patch for your issue is currently registered in the 2019-03 commitfest[1].\n\nOh! I apologize for saying nobody was working on this issue.\n\nTaking a very quick look at your patch, though, I dunno --- it seems like\nit adds a boatload of new assumptions about libxml's data structures and\nerror-reporting behavior. I'm sure it works for you, but will it work\nacross a wide range of libxml versions?\n\nWhat do you think of the idea I just posted about parsing off the DOCTYPE\nthing for ourselves, and not letting libxml see it?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 16:55:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/16/19 16:42, Tom Lane wrote:\n> Ryan Lambert <ryan@rustprooflabs.com> writes:\n>> I'm investigating the issue I reported here:\n>> https://www.postgresql.org/message-id/flat/153478795159.1302.9617586466368699403%40wrigleys.postgresql.org\n>> I'd like to work on a patch to address this issue and make it work as\n>> advertised.\n> \n> Good idea, because it doesn't seem like anybody else cares ...\n\nahem\n\n",
"msg_date": "Sat, 16 Mar 2019 16:55:44 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/16/19 16:55, Tom Lane wrote:\n> What do you think of the idea I just posted about parsing off the DOCTYPE\n> thing for ourselves, and not letting libxml see it?\n\nThe principled way of doing that would be to pre-parse to find a DOCTYPE,\nand if there is one, leave it there and parse the input as we do for\n'document'. Per XML, if there is a DOCTYPE, the document must satisfy\nthe 'document' syntax requirements, and per SQL/XML:2006-and-later,\n'content' is a proper superset of 'document', so if we were asked for\n'content' and can successfully parse it as 'document', we're good,\nand if we see a DOCTYPE and yet it incurs a parse error as 'document',\nwell, that's what needed to happen.\n\nThe DOCTYPE can appear arbitrarily far in, but the only things that\ncan precede it are the XML decl, whitespace, XML comments, and XML\nprocessing instructions. None of those things nest, so the preceding\nstuff makes a regular language, and a regular expression that matches\nany amount of that stuff ending in <!DOCTYPE would be enough to detect\nthat the parse should be shunted off to get 'document' treatment.\n\nThe patch I submitted essentially relies on libxml to do that same\nparsing up to that same point and detect the error, so it would add\nno upfront cost in the majority of cases that aren't this corner one.\n\nBut keeping a little compiled regex around and testing the input with that\nwould hardly break the bank, either.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sat, 16 Mar 2019 17:11:29 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/16/19 16:55, Tom Lane wrote:\n>> What do you think of the idea I just posted about parsing off the DOCTYPE\n>> thing for ourselves, and not letting libxml see it?\n\n> The principled way of doing that would be to pre-parse to find a DOCTYPE,\n> and if there is one, leave it there and parse the input as we do for\n> 'document'. Per XML, if there is a DOCTYPE, the document must satisfy\n> the 'document' syntax requirements, and per SQL/XML:2006-and-later,\n> 'content' is a proper superset of 'document', so if we were asked for\n> 'content' and can successfully parse it as 'document', we're good,\n> and if we see a DOCTYPE and yet it incurs a parse error as 'document',\n> well, that's what needed to happen.\n\nHm, so, maybe just\n\n(1) always try to parse as document. If successful, we're done.\n\n(2) otherwise, if allowed by xmloption, try to parse using our\ncurrent logic for the CONTENT case.\n\nThis avoids adding any new assumptions about how libxml acts,\nwhich is what I was hoping to achieve.\n\nOne interesting question is which error to report if both (1) and (2)\nfail.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 16 Mar 2019 17:21:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/16/19 17:21, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> On 03/16/19 16:55, Tom Lane wrote:\n>>> What do you think of the idea I just posted about parsing off the DOCTYPE\n>>> thing for ourselves, and not letting libxml see it?\n> \n>> The principled way of doing that would be to pre-parse to find a DOCTYPE,\n>> and if there is one, leave it there and parse the input as we do for\n>> 'document'. Per XML, if there is a DOCTYPE, the document must satisfy\n>> the 'document' syntax requirements, and per SQL/XML:2006-and-later,\n>> 'content' is a proper superset of 'document', so if we were asked for\n>> 'content' and can successfully parse it as 'document', we're good,\n>> and if we see a DOCTYPE and yet it incurs a parse error as 'document',\n>> well, that's what needed to happen.\n> \n> Hm, so, maybe just\n> \n> (1) always try to parse as document. If successful, we're done.\n> \n> (2) otherwise, if allowed by xmloption, try to parse using our\n> current logic for the CONTENT case.\n\nWhat I don't like about that is that (a) the input could be\narbitrarily long and complex to parse (not that you can't imagine\na database populated with lots of short little XML snippets, but\nat the same time, a query could quite plausibly deal in yooge ones),\nand (b), step (1) could fail at the last byte of the input, followed\nby total reparsing as (2).\n\nI think the safer structure is clearly that of the current patch,\nmodulo whether the \"has a DOCTYPE\" test is done by libxml itself\n(with the assumptions you don't like) or by a pre-scan.\n\nSo the current structure is:\n\nrestart:\n asked for document?\n parse as document, or fail\n else asked for content:\n parse as content\n failed?\n because DOCTYPE? 
restart as if document\n else fail\n\nand a pre-scan structure could be very similar:\n\nrestart:\n asked for document?\n parse as document, or fail\n else asked for content:\n pre-scan finds DOCTYPE?\n restart as if document\n else parse as content, or fail\n\nThe pre-scan is a simple linear search and will ordinarily say yes or no\nwithin a couple dozen characters--you could *have* an input with 20k of\nleading whitespace and comments, but it's hardly the norm. Just trying to\nparse as 'document' first could easily parse a large fraction of the input\nbefore discovering it's followed by something that can't follow a document\nelement.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sat, 16 Mar 2019 18:33:19 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Thank you both! I had glanced at that item in the commitfest but didn't\nnotice it would fix this issue.\nI'll try to test/review this before the end of the month, much better than\nstarting from scratch myself. A quick glance at the patch looks logical\nand looks like it should work for my use case.\n\nThanks,\n\nRyan Lambert\n\n\nOn Sat, Mar 16, 2019 at 4:33 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 03/16/19 17:21, Tom Lane wrote:\n> > Chapman Flack <chap@anastigmatix.net> writes:\n> >> On 03/16/19 16:55, Tom Lane wrote:\n> >>> What do you think of the idea I just posted about parsing off the\n> DOCTYPE\n> >>> thing for ourselves, and not letting libxml see it?\n> >\n> >> The principled way of doing that would be to pre-parse to find a\n> DOCTYPE,\n> >> and if there is one, leave it there and parse the input as we do for\n> >> 'document'. Per XML, if there is a DOCTYPE, the document must satisfy\n> >> the 'document' syntax requirements, and per SQL/XML:2006-and-later,\n> >> 'content' is a proper superset of 'document', so if we were asked for\n> >> 'content' and can successfully parse it as 'document', we're good,\n> >> and if we see a DOCTYPE and yet it incurs a parse error as 'document',\n> >> well, that's what needed to happen.\n> >\n> > Hm, so, maybe just\n> >\n> > (1) always try to parse as document. 
If successful, we're done.\n> >\n> > (2) otherwise, if allowed by xmloption, try to parse using our\n> > current logic for the CONTENT case.\n>\n> What I don't like about that is that (a) the input could be\n> arbitrarily long and complex to parse (not that you can't imagine\n> a database populated with lots of short little XML snippets, but\n> at the same time, a query could quite plausibly deal in yooge ones),\n> and (b), step (1) could fail at the last byte of the input, followed\n> by total reparsing as (2).\n>\n> I think the safer structure is clearly that of the current patch,\n> modulo whether the \"has a DOCTYPE\" test is done by libxml itself\n> (with the assumptions you don't like) or by a pre-scan.\n>\n> So the current structure is:\n>\n> restart:\n> asked for document?\n> parse as document, or fail\n> else asked for content:\n> parse as content\n> failed?\n> because DOCTYPE? restart as if document\n> else fail\n>\n> and a pre-scan structure could be very similar:\n>\n> restart:\n> asked for document?\n> parse as document, or fail\n> else asked for content:\n> pre-scan finds DOCTYPE?\n> restart as if document\n> else parse as content, or fail\n>\n> The pre-scan is a simple linear search and will ordinarily say yes or no\n> within a couple dozen characters--you could *have* an input with 20k of\n> leading whitespace and comments, but it's hardly the norm. Just trying to\n> parse as 'document' first could easily parse a large fraction of the input\n> before discovering it's followed by something that can't follow a document\n> element.\n>\n> Regards,\n> -Chap\n>\n",
"msg_date": "Sat, 16 Mar 2019 16:43:43 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
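[Editor's note: the two pseudocode structures quoted in the message above differ only in how the CONTENT branch decides to restart. As a rough illustration of the pre-scan variant (Python; the parse callables and the pre-scan predicate are hypothetical stand-ins, not PostgreSQL's actual API):]

```python
def parse_xml(value, option, parse_document, parse_content, finds_doctype):
    """Pre-scan variant of the quoted control structure: a DOCUMENT
    request parses directly; a CONTENT request is pre-scanned for a
    DOCTYPE and, if one is found, restarted as if it were a document.
    The parse callables raise on failure ('or fail')."""
    if option == 'DOCUMENT':
        return parse_document(value)
    # asked for content
    if finds_doctype(value):
        return parse_document(value)   # restart as if document
    return parse_content(value)
```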
{
"msg_contents": "On 03/16/19 18:33, Chapman Flack wrote:\n> The pre-scan is a simple linear search and will ordinarily say yes or no\n> within a couple dozen characters--you could *have* an input with 20k of\n> leading whitespace and comments, but it's hardly the norm. Just trying to\n\nIf the available regexp functions want to start by munging the entire input\ninto a pg_wchar array, then it may be better to implement the pre-scan as\nopen code, the same way parse_xml_decl() is already implemented.\n\nGiven that parse_xml_decl() already covers the first optional thing that\ncan precede the doctype, the remaining scan routine would only need to\nrecognize comments, PIs, and whitespace. That would be pretty straightforward.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sat, 16 Mar 2019 23:50:15 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/16/19 17:21, Tom Lane wrote:\n>> Hm, so, maybe just\n>> \n>> (1) always try to parse as document. If successful, we're done.\n>> \n>> (2) otherwise, if allowed by xmloption, try to parse using our\n>> current logic for the CONTENT case.\n\n> What I don't like about that is that (a) the input could be\n> arbitrarily long and complex to parse (not that you can't imagine\n> a database populated with lots of short little XML snippets, but\n> at the same time, a query could quite plausibly deal in yooge ones),\n> and (b), step (1) could fail at the last byte of the input, followed\n> by total reparsing as (2).\n\nThat doesn't seem particularly likely to me: based on what's been\nsaid here, I'd expect parsing with the wrong expectation to usually\nfail near the start of the input. In any case, the other patch\nalso requires repeat parsing, no? It's just doing that in a different\nset of cases.\n\nThe reason I'm pressing you for a simpler patch is that dump/reload\nfailures are pretty bad, so ideally we'd find a fix that we're\ncomfortable with back-patching into the released branches.\nPersonally I would never dare to back-patch the proposed patch:\nit's too complex, so it's not real clear that it doesn't have unwanted\nside-effects, and it's not at all certain that there aren't libxml\nversion dependencies in it. (Maybe another committer with more\nfamiliarity with libxml would evaluate the risks differently, but\nI doubt it.) But I think that something close to what I sketched\nabove would pass muster as safe-to-backpatch.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Mar 2019 11:45:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/17/19 11:45, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> On 03/16/19 17:21, Tom Lane wrote:\n>>> (1) always try to parse as document. If successful, we're done.\n>>> (2) otherwise, if allowed by xmloption, try to parse using our\n> \n>> What I don't like about that is that (a) the input could be\n>> arbitrarily long and complex to parse (not that you can't imagine\n>> a database populated with lots of short little XML snippets, but\n>> at the same time, a query could quite plausibly deal in yooge ones),\n>> and (b), step (1) could fail at the last byte of the input, followed\n>> by total reparsing as (2).\n> \n> That doesn't seem particularly likely to me: based on what's been\n> said here, I'd expect parsing with the wrong expectation to usually\n> fail near the start of the input. In any case, the other patch\n> also requires repeat parsing, no? It's just doing that in a different\n> set of cases.\n\nI'll do up a version with the open-coded prescan I proposed last night.\n\nWhether parsing with the wrong expectation is likely to fail near the\nstart of the input depends on both the input and the expectation. If\nyour expectation is DOCUMENT and the input is CONTENT, it's possible\nfor the determining difference to be something that follows the first\nelement, and a first element can be (and often is) nearly all of the input.\n\nWhat I was doing in the patch is the reverse: parsing with the expectation\nof CONTENT to see if a DTD gets tripped over. It isn't allowed for an\nelement to precede a DTD, so that approach can be expected to fail fast\nif the other branch needs to be taken.\n\nBut a quick pre-scan for the same thing would have the same property,\nwithout the libxml dependencies that bother you here. Watch this space.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sun, 17 Mar 2019 13:11:28 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> What I was doing in the patch is the reverse: parsing with the expectation\n> of CONTENT to see if a DTD gets tripped over. It isn't allowed for an\n> element to precede a DTD, so that approach can be expected to fail fast\n> if the other branch needs to be taken.\n\nAh, right. I don't have any problem with trying the CONTENT approach\nbefore the DOCUMENT approach rather than vice-versa. What I was concerned\nabout was adding a lot of assumptions about exactly how libxml would\nreport the failure. IMO a maximally-safe patch would just rearrange\nthings we're already doing without adding new things.\n\n> But a quick pre-scan for the same thing would have the same property,\n> without the libxml dependencies that bother you here. Watch this space.\n\nDo we need a pre-scan at all?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Mar 2019 13:16:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/17/19 13:16, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> What I was doing in the patch is the reverse: parsing with the expectation\n>> of CONTENT to see if a DTD gets tripped over. It isn't allowed for an\n>> element to precede a DTD, so that approach can be expected to fail fast\n>> if the other branch needs to be taken.\n> \n> Ah, right. I don't have any problem with trying the CONTENT approach\n> before the DOCUMENT approach rather than vice-versa. What I was concerned\n> about was adding a lot of assumptions about exactly how libxml would\n> report the failure. IMO a maximally-safe patch would just rearrange\n> things we're already doing without adding new things.\n> \n>> But a quick pre-scan for the same thing would have the same property,\n>> without the libxml dependencies that bother you here. Watch this space.\n> \n> Do we need a pre-scan at all?\n\nWithout it, we double the time to a failure result in every case that\nshould actually fail, as well as in this one corner case that we want to\nsee succeed, and the question you posed earlier about which error message\nto return becomes thornier.\n\nIf the query asked for CONTENT, any error result should be one you could get\nwhen parsing as CONTENT. If we switch and try parsing as DOCUMENT _because\nthe input is claiming to have the form of a DOCUMENT_, then it's defensible\nto return errors explaining why it's not a DOCUMENT ... but not in the\ngeneral case of just throwing DOCUMENT at it any time CONTENT parse fails.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sun, 17 Mar 2019 14:13:03 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/17/19 13:16, Tom Lane wrote:\n>> Do we need a pre-scan at all?\n\n> Without it, we double the time to a failure result in every case that\n> should actually fail, as well as in this one corner case that we want to\n> see succeed, and the question you posed earlier about which error message\n> to return becomes thornier.\n\nI have absolutely zero concern about whether it takes twice as long to\ndetect bad input; nobody should be sending bad input if they're concerned\nabout performance. (The costs of the ensuing transaction abort would\nlikely dwarf xml_in's runtime in any case.) Besides, with what we're\ntalking about doing here,\n\n(1) the extra runtime is consumed only in cases that would fail up to now,\nso nobody can complain about a performance regression;\n(2) doing a pre-scan *would* be a performance regression for cases that\nwork today; not a large one we hope, but still...\n\nThe error message issue is indeed a concern, but I don't see why it's\ncomplicated if you agree that\n\n> If the query asked for CONTENT, any error result should be one you could get\n> when parsing as CONTENT.\n\nThat just requires us to save the first error message and be sure to issue\nthat one not the DOCUMENT one, no? That's what we'd want to do from a\nbackwards-compatibility standpoint anyhow, since that's the error message\nwording you'd get with today's code.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Mar 2019 15:06:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
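[Editor's note: the "save the first error message and issue that one" approach Tom suggests above can be sketched roughly as follows (illustrative Python; in the real C code the saved state would be libxml's structured error, and the function name is invented for the example):]

```python
def parse_content_with_doctype_fallback(value, parse_content, parse_document):
    """Parse as CONTENT; if that fails, retry as DOCUMENT, and if the
    retry also fails, report the original CONTENT error so the message
    stays backwards-compatible."""
    try:
        return parse_content(value)
    except ValueError as content_error:
        saved = content_error          # first error, saved for reporting
        try:
            return parse_document(value)
        except ValueError:
            raise saved
```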
{
"msg_contents": "On 03/17/19 15:06, Tom Lane wrote:\n> The error message issue is indeed a concern, but I don't see why it's\n> complicated if you agree that\n> \n>> If the query asked for CONTENT, any error result should be one you could get\n>> when parsing as CONTENT.\n> \n> That just requires us to save the first error message and be sure to issue\n> that one not the DOCUMENT one, no?\n\nI confess I haven't looked hard yet at how to do that. The way errors come\nout of libxml is more involved than \"here's a message\", so there's a choice\nof (a) try to copy off that struct in a way that's sure to survive\nre-executing the parser, and then use the copy, or (b) generate a message\nright away from the structured information and save that, and I guess b\nmight not be too bad; a might not be too bad, or it might, and slide right\nback into the kind of libxml-behavior-assumptions you're wanting to avoid.\n\nMeanwhile, here is a patch on the lines I proposed earlier, with a\npre-check. Any performance hit that it could entail (which I'd really\nexpect to be de minimis, though I haven't benchmarked) ought to be\ncompensated by the strlen I changed to strnlen in parse_xml_decl (as\nthere's really no need to run off and count the whole rest of the input\njust to know if 1, 2, 3, or 4 bytes are available to decode a UTF-8 char).\n\n... and, yes, I know that could be an independent patch, and then the\nperformance effect here should be measured from there. But it was near\nwhat I was doing anyway, so I included it here.\n\nAttaching both still-outstanding patches (this one and docfix) so the\nCF app doesn't lose one.\n\nRegards,\n-Chap",
"msg_date": "Sun, 17 Mar 2019 18:31:00 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "There might be too many different email threads on this with patches,\nbut in case it went under the radar, xml-content-2006-3.patch appeared\nin my previous message on this thread[1].\n\nIt is based on a simple pre-check of the prefix of the input, determining\nwhich form of parse to apply. That may or may not be simpler than parse-\nonce-save-error-parse-again-report-first-error, but IMV it's a more direct\nsolution and clearer (the logic is clearly about \"how do I determine the way\nthis input should be parsed?\" which is the problem on the table, rather\nthan \"how should I save and regurgitate this libxml error?\" which turns the\nproblem on the table to a different one).\n\nI decided, for a first point of reference, to wear the green eyeshade and\nwrite a pre-check that exactly implements the applicable rules. That gives\na starting point for simplifications that are probably safe.\n\nFor example, a bunch of lines at the end have to do with verifying the\ncontent inside of a processing-instruction, after finding where it ends.\nWe could reasonably decide that, for the purpose of skipping it, knowing\nwhere it ends is enough, as libxml will parse it next and report any errors\nanyway.\n\nThat would slightly violate my intention of sending input to (the parser\nthat wasn't asked for) /only/ when it's completely clear (from the prefix\nwe've seen) that that's where it should go. The relaxed version could do\nthat in completely-clear cases and cases with an invalid PI ahead of what\nlooks like a DTD. But you'd pretty much expect both parsers to produce\nthe same message for a bad PI anyway.\n\nThat made me just want to try it now, and--surprise!--the messages from\nlibxml are not the same. So maybe I would lean to keeping the green-eyeshade\nrules in the test, if you can stomach them, but I would understand taking\nthem out.\n\nRegards,\n-Chap\n\n[1] https://www.postgresql.org/message-id/5C8ECAA4.3090301@anastigmatix.net\n\n",
"msg_date": "Mon, 18 Mar 2019 13:27:10 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> I decided, for a first point of reference, to wear the green eyeshade and\n> write a pre-check that exactly implements the applicable rules. That gives\n> a starting point for simplifications that are probably safe.\n> For example, a bunch of lines at the end have to do with verifying the\n> content inside of a processing-instruction, after finding where it ends.\n> We could reasonably decide that, for the purpose of skipping it, knowing\n> where it ends is enough, as libxml will parse it next and report any errors\n> anyway.\n\nYeah, I did not like that code too much, particularly not all the magic\nUnicode-code-point numbers. I removed that, made some other changes to\nbring the patch more in line with PG coding style, and pushed it.\n\n> That made me just want to try it now, and--surprise!--the messages from\n> libxml are not the same. So maybe I would lean to keeping the green-eyeshade\n> rules in the test, if you can stomach them, but I would understand taking\n> them out.\n\nI doubt anyone will care too much about whether error messages for bad\nXML input are exactly like what they were before; and even if someone\ndoes, I doubt that these extra tests would be enough to ensure that\nthe messages don't change. You're not really validating that the input\nis something that libxml would accept, unless its processing of XML PIs\nis far stupider than I would expect it to be.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 23 Mar 2019 16:59:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/23/19 16:59, Tom Lane wrote:\n> Unicode-code-point numbers. I removed that, made some other changes to\n> bring the patch more in line with PG coding style, and pushed it.\n\nThanks! It looks good. I'm content with the extra PI checking being gone.\n\nThe magic Unicode-code-point numbers come straight from the XML standard;\nI couldn't make that stuff up. :)\n\n> > You're not really validating that the input\n> is something that libxml would accept, unless its processing of XML PIs\n> is far stupider than I would expect it to be.\n\nOut of curiosity, what further processing would you expect libxml to do?\n\nXML parsers are supposed to be transparent PI-preservers, except in the\nrare case of seeing a PI that actually means something to the embedding\napplication, which isn't going to be the case for a database simply\nimplementing an XML data type.\n\nThe standard literally requires that the target must be a NAME, and\ncan't match [Xx][Mm][Ll], and if there's whitespace and anything after\nthat, there can't be an embedded ?> ... and that's it.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sat, 23 Mar 2019 17:53:24 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
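[Editor's note: the three PI constraints Chap lists above fit in a few lines of code. A rough sketch (Python; the `XML_NAME` pattern is a simplified ASCII approximation of XML's full Name production, which also admits many non-ASCII characters):]

```python
import re

# Simplified ASCII subset of the XML Name production.
XML_NAME = re.compile(r'[A-Za-z_:][A-Za-z0-9._:-]*\Z')

def pi_is_wellformed(target: str, content: str) -> bool:
    """Apply the constraints on a processing instruction: the target
    must be a Name, must not match [Xx][Mm][Ll], and the content must
    not contain an embedded '?>'."""
    if not XML_NAME.match(target):
        return False
    if target.lower() == 'xml':
        return False
    return '?>' not in content
```

Note that only a target of exactly `xml` (in any case mixture) is reserved; longer targets that merely start with it, such as `xml-stylesheet`, are fine.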
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/23/19 16:59, Tom Lane wrote:\n>> You're not really validating that the input\n>> is something that libxml would accept, unless its processing of XML PIs\n>> is far stupider than I would expect it to be.\n\n> Out of curiosity, what further processing would you expect libxml to do?\n\nHm, I'd have thought it'd try to parse the arguments to some extent,\nbut maybe not. Does everybody reimplement attribute parsing for\nthemselves when using PIs?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sat, 23 Mar 2019 18:22:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/23/19 18:22, Tom Lane wrote:\n>> Out of curiosity, what further processing would you expect libxml to do?\n> \n> Hm, I'd have thought it'd try to parse the arguments to some extent,\n> but maybe not. Does everybody reimplement attribute parsing for\n> themselves when using PIs?\n\nYeah, the content of a PI (whatever's after the target name) is left\nall to be defined by whatever XML-using application might care about\nthat PI.\n\nIt could have an attribute=value syntax inspired by XML elements, or\nsome other form entirely, but there'd just better not be any ?> in it.\n\nRegards,\n-Chap\n\n",
"msg_date": "Sat, 23 Mar 2019 19:07:21 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "I am unable to get latest patches I found [1] to apply cleanly to current\nbranches. It's possible I missed the latest patches so if I'm using the\nwrong ones please let me know. I tried against master, 11.2 stable and the\n11.2 tag with similar results. It's quite possible it's user error on my\nend, I am new to this process but didn't have issues with the previous\npatches when I tested those a couple weeks ago.\n\n[1] https://www.postgresql.org/message-id/5C8ECAA4.3090301@anastigmatix.net\n\nRyan Lambert\n\n\nOn Sat, Mar 23, 2019 at 5:07 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 03/23/19 18:22, Tom Lane wrote:\n> >> Out of curiosity, what further processing would you expect libxml to do?\n> >\n> > Hm, I'd have thought it'd try to parse the arguments to some extent,\n> > but maybe not. Does everybody reimplement attribute parsing for\n> > themselves when using PIs?\n>\n> Yeah, the content of a PI (whatever's after the target name) is left\n> all to be defined by whatever XML-using application might care about\n> that PI.\n>\n> It could have an attribute=value syntax inspired by XML elements, or\n> some other form entirely, but there'd just better not be any ?> in it.\n>\n> Regards,\n> -Chap\n>\n",
"msg_date": "Sun, 24 Mar 2019 19:04:24 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/24/19 21:04, Ryan Lambert wrote:\n> I am unable to get latest patches I found [1] to apply cleanly to current\n> branches. It's possible I missed the latest patches so if I'm using the\n> wrong ones please let me know. I tried against master, 11.2 stable and the\n> 11.2 tag with similar results.\n\nTom pushed the content-with-DOCTYPE patch, it's now included in master,\nREL_11_STABLE, REL_10_STABLE, REL9_6_STABLE, REL9_5_STABLE, and\nREL9_4_STABLE.\n\nThe only patch that's left to be reviewed and applied is the documentation\nfix, latest in [1].\n\nIf you were interested in giving a review opinion on some XML documentation.\n\nRegards,\n-Chap\n\n\n[1] https://www.postgresql.org/message-id/5C96DBB5.2080103@anastigmatix.net\n\n",
"msg_date": "Sun, 24 Mar 2019 21:49:31 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 03/24/19 21:04, Ryan Lambert wrote:\n>> I am unable to get latest patches I found [1] to apply cleanly to current\n>> branches. It's possible I missed the latest patches so if I'm using the\n>> wrong ones please let me know. I tried against master, 11.2 stable and the\n>> 11.2 tag with similar results.\n\n> Tom pushed the content-with-DOCTYPE patch, it's now included in master,\n> REL_11_STABLE, REL_10_STABLE, REL9_6_STABLE, REL9_5_STABLE, and\n> REL9_4_STABLE.\n\nRight. If you want to test (and please do!) you could grab the relevant\nbranch tip from our git repo, or download a nightly snapshot tarball from\n\nhttps://www.postgresql.org/ftp/snapshot/\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 24 Mar 2019 23:18:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Perfect, thank you! I do remember seeing that message now, but hadn't\nunderstood what it really meant.\nI will test later today. Thanks\n\n*Ryan*\n\nOn Sun, Mar 24, 2019 at 9:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Chapman Flack <chap@anastigmatix.net> writes:\n> > On 03/24/19 21:04, Ryan Lambert wrote:\n> >> I am unable to get latest patches I found [1] to apply cleanly to\n> current\n> >> branches. It's possible I missed the latest patches so if I'm using the\n> >> wrong ones please let me know. I tried against master, 11.2 stable and\n> the\n> >> 11.2 tag with similar results.\n>\n> > Tom pushed the content-with-DOCTYPE patch, it's now included in master,\n> > REL_11_STABLE, REL_10_STABLE, REL9_6_STABLE, REL9_5_STABLE, and\n> > REL9_4_STABLE.\n>\n> Right. If you want to test (and please do!) you could grab the relevant\n> branch tip from our git repo, or download a nightly snapshot tarball from\n>\n> https://www.postgresql.org/ftp/snapshot/\n>\n> regards, tom lane\n>\n",
"msg_date": "Mon, 25 Mar 2019 08:40:50 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI tested the master branch (commit 8edd0e7), REL_11_STABLE (commit 24df866) and REL9_6_STABLE (commit 5097368) and verified functionality. This patch fixes the bug I had reported [1] previously.\r\n\r\nWith this in the stable branches is it safe to assume this will be included with the next minor releases? Thanks for your work on this!!\r\n\r\nRyan\r\n\r\n[1] https://www.postgresql.org/message-id/flat/153478795159.1302.9617586466368699403%40wrigleys.postgresql.org\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Mon, 25 Mar 2019 22:03:09 +0000",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/25/19 18:03, Ryan Lambert wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n\nHi,\n\nThanks for the review! Yes, that part of this commitfest entry has been\ncommitted already and will appear in the next minor releases of those\nbranches.\n\nThat leaves only one patch in this commitfest entry that is still in\nneed of review, namely the update to the documentation.\n\nIf you happened to feel moved to look over a documentation patch, that\nwould be what this CF entry most needs in the waning days of the commitfest.\n\nThere seem to be community members reluctant to review it because of not\nfeeling sufficiently expert in XML to scrutinize every technical detail,\nbut there are other valuable angles for documentation review. (And the\nreason there *is* a documentation patch is the plentiful room for\nimprovement in the documentation that's already there, so as far as\nreviewing goes, the old yarn about the two guys, the running shoes, and\nthe bear comes to mind.)\n\nI can supply pointers to specs, etc., for anyone who does see some technical\ndetails in the patch and has questions about them.\n\nRegards,\n-Chap\n\n",
"msg_date": "Mon, 25 Mar 2019 18:52:06 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Thanks for the review! Yes, that part of this commitfest entry has been\n> committed already and will appear in the next minor releases of those\n> branches.\n\nIndeed, thanks for verifying that this fixes your problem.\n\n> That leaves only one patch in this commitfest entry that is still in\n> need of review, namely the update to the documentation.\n\nYeah. Since it *is* in need of review, I changed the CF entry's\nstate back to Needs Review.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Mar 2019 18:56:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Ok, I'll give it a go.\n\n\n> If you happened to feel moved to look over a documentation patch, that\n> would be what this CF entry most needs in the waning days of the\n> commitfest.\n\n\nIs the xml-functions-type-docfix-4.patch [1] the one needing review? I'll\ntest applying it and review the changes in better detail. Is there a\nsection in the docs that shows how to verify if the updated pages render\nproperly? I would assume the pages are built when installing from source.\n\nRyan\n\n[1]\nhttps://www.postgresql.org/message-id/attachment/100016/xml-functions-type-docfix-4.patch\n\nOn Mon, Mar 25, 2019 at 4:52 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 03/25/19 18:03, Ryan Lambert wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: not tested\n> > Documentation: not tested\n>\n> Hi,\n>\n> Thanks for the review! Yes, that part of this commitfest entry has been\n> committed already and will appear in the next minor releases of those\n> branches.\n>\n> That leaves only one patch in this commitfest entry that is still in\n> need of review, namely the update to the documentation.\n>\n> If you happened to feel moved to look over a documentation patch, that\n> would be what this CF entry most needs in the waning days of the\n> commitfest.\n>\n> There seem to be community members reluctant to review it because of not\n> feeling sufficiently expert in XML to scrutinize every technical detail,\n> but there are other valuable angles for documentation review. 
(And the\n> reason there *is* a documentation patch is the plentiful room for\n> improvement in the documentation that's already there, so as far as\n> reviewing goes, the old yarn about the two guys, the running shoes, and\n> the bear comes to mind.)\n>\n> I can supply pointers to specs, etc., for anyone who does see some\n> technical\n> details in the patch and has questions about them.\n>\n> Regards,\n> -Chap\n>",
"msg_date": "Tue, 26 Mar 2019 16:17:49 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Ryan Lambert <ryan@rustprooflabs.com> writes:\n> Is the xml-functions-type-docfix-4.patch [1] the one needing review? I'll\n> test applying it and review the changes in better detail. Is there a\n> section in the docs that shows how to verify if the updated pages render\n> properly? I would assume the pages are build when installing from source.\n\nPlain old \"make all\" doesn't build the docs. See\nhttps://www.postgresql.org/docs/devel/docguide.html\nfor tooling prerequisites and build instructions.\n\n(Usually people just build the HTML docs and look at them\nwith a browser; the other doc formats are less interesting.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Mar 2019 18:31:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nOverall I think this patch [1] improves the docs and helps explain edge case functionality that many of us, myself included, don't fully understand. I can't verify technical accuracy for many of the details (nuances between XPath 1.0, et al.), but overall my experience with the XML functionality lines up with what has been documented here. Adding the clear declaration of \"XPath 1.0\" instead of the generic \"XPath\" helps make the functional differences clear and helps frame them for new users.\r\n\r\nI have two recommendations for features.sgml. You state: \r\n\r\n> relies on the libxml library\r\n\r\nShould this be clarified as the libxml2 library? That's what I installed to build postgres from source (Ubuntu 16/18). If it is the libxml library and the \"2\" is irrelevant, or if it works with either, it might be nice to have a clarifying note to indicate that.\r\n\r\nThere are a few places where the parentheses around a block of text seem unnecessary. I don't think parens serve a purpose when a full sentence is contained within.\r\n\r\n> (The libxml library does seem to always return nodesets to PostgreSQL with their members in the same relative order they had in the input document; it does not commit to this behavior, and an XPath 1.0 expression cannot control it.)\r\n\r\n\r\nIt seems you are standardizing from \"node set\" to \"nodeset\", is that the preferred nomenclature or preference?\r\n\r\nHopefully this helps! Thanks,\r\n\r\nRyan Lambert\r\n\r\n[1] https://www.postgresql.org/message-id/attachment/100016/xml-functions-type-docfix-4.patch\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Wed, 27 Mar 2019 01:39:37 +0000",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Ryan Lambert <ryan@rustprooflabs.com> writes:\n> I have two recommendations for features.sgml. You state: \n\n>> relies on the libxml library\n\n> Should this be clarified as the libxml2 library? That's what I installed to build postgres from source (Ubuntu 16/18). If it is the libxml library and the \"2\" is irrelevant, or if it works with either, it might be nice to have a clarifying note to indicate that.\n\nDo we need to mention that at all? If you're not building from source,\nit doesn't seem very interesting ... but maybe I'm missing some reason\nwhy end users would care.\n\n> It seems you are standardizing from \"node set\" to \"nodeset\", is that the preferred nomenclature or preference?\n\nThat seemed a bit jargon-y to me too. If that's standard terminology\nin the XML world, maybe it's fine; but I'd have stuck with \"node set\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Mar 2019 23:52:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/26/19 21:39, Ryan Lambert wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: tested, passed\n\nThanks for the review!\n\n> I have two recommendations for features.sgml. You state: \n> \n>> relies on the libxml library\n> \n> Should this be clarified as the libxml2 library? That's what I installed\n> to build postgres from source (Ubuntu 16/18). If it is the libxml library\n> and the \"2\" is irrelevant\n\nThat's a good catch. I'm not actually sure whether there is any \"libxml\"\nlibrary that isn't libxml2. Maybe there was once and nobody admits to\nhanging out with it. Most Google hits on \"libxml\" seem to be modules\nthat have libxml in their names and libxml2 as their actual dependency.\n\n Perl XML:LibXML: \"This module is an interface to libxml2\"\n Haskell libxml: \"Binding to libxml2\"\n libxml-ruby: \"The Libxml-Ruby project provides Ruby language bindings\n for the GNOME Libxml2 ...\"\n\n --with-libxml is the PostgreSQL configure option to make it use libxml2.\n\n The very web page http://xmlsoft.org/index.html says \"The XML C parser\n and toolkit of Gnome: libxml\" and is all about libxml2.\n\nSo I think I was unsure what convention to follow, and threw up my hands\nand went with libxml. I could just as easily throw them up and go with\nlibxml2. Which do you think would be better?\n\nOn 03/26/19 23:52, Tom Lane wrote:\n> Do we need to mention that at all? If you're not building from source,\n> it doesn't seem very interesting ... but maybe I'm missing some reason\n> why end users would care.\n\nThe three places I've mentioned it were the ones where I thought users\nmight care:\n\n - why are we stuck at XPath 1.0? It's what we get from the library we use.\n\n - in what order do we get things out from a (hmm) node-set? 
Per XPath 1.0,\n it's indeterminate (it's a set!), unlike XPath 2.0/XQuery where there's\n a sequence type and you have order control. Observable behavior from\n libxml2 (and you could certainly want to know this) is you get things out\n in document order, whether that's what you wanted or not, even though\n this is undocumented, and even counter-documented[1], libxml2 behavior.\n So it's an example of something you would fundamentally like to know,\n where the only available answer depends precariously on the library\n we happen to be using.\n\n - which limits in our implementation are inherent to the library, and\n which are just current limits in our embedding of it? (Maybe this is\n right at the border of what a user would care to know, but I know it's\n a question that crosses my mind when I bonk into a limit I wasn't\n expecting.)\n\n> There are a few places where the parenthesis around a block of text\n> seem unnecessary.\n\n)blush( that's a long-standing wart in my writing ... seems I often think\nin parentheses, then look and say \"those aren't needed\" and take them out,\nonly sometimes I don't.\n\nI skimmed just now and found a few instances of parenthesized whole\nsentence: the one you quoted, and some (if argument is null, the result\nis null), and (No rows will be produced if ....). Shall I deparenthesize\nthem all? Did you have other instances in mind?\n\n> It seems you are standardizing from \"node set\" to \"nodeset\", is that\n> the preferred nomenclature or preference?\n\nAnother good catch. I remember consciously making a last pass to get them\nall consistent, and I wanted them consistent with the spec, and I see now\nI messed up.\n\nXPath 1.0 [2] has zero instances of \"nodeset\", two of \"node set\" and about\nsix dozen of \"node-set\". The only appearances of \"node set\" without the\nhyphen are in a heading and its ToC entry. The stuff under that heading\nconsistently uses node-set. 
It seems that's the XPath 1.0 term for sure.\n\nWhen I made my consistency pass, I must have been looking too recently\nin libxml2 C source, rather than the spec.\n\nOn 03/26/19 23:52, Tom Lane wrote:\n> That seemed a bit jargon-y to me too. If that's standard terminology\n> in the XML world, maybe it's fine; but I'd have stuck with \"node set\".\n\nIt really was my intention (though I flubbed it) to use XPath 1.0's term\nfor XPath 1.0's concept; in my doc philosophy, that gives readers\nthe most breadcrumbs to follow for the rest of the details if they want\nthem. \"Node set\" might be some sort of squishy expository concept I'm\nusing, but node-set is a thing, in a spec.\n\nIf you agree, I should go through and fix my nodesets to be node-sets.\n\nI do think the terminology matters here, especially because of the\ndifferences between what you can do with a node-set (XPath 1.0 thing)\nand with a sequence (XPath 2.0+,XQuery,SQL/XML thing).\n\nLet me know what you'd like best on these points and I'll revise the patch.\n\nRegards,\n-Chap\n\n\n[1] http://xmlsoft.org/html/libxml-xpath.html#xmlNodeSet : \"array of nodes\n in no particular order\"\n\n[2] https://www.w3.org/TR/1999/REC-xpath-19991116/\n\n\n",
"msg_date": "Wed, 27 Mar 2019 01:05:27 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/26/19 21:39, Ryan Lambert wrote:\n\n> I can't verify technical accuracy for many of the details (nuances between\n> XPath 1.0, et. al), but overall my experience with the XML functionality\n> lines up with what has been documented here.\n\nBy the way, in case it's buried too far back in the email thread now,\nmuch of the early drafting for this happened on the wiki page\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL/XML_Standards\n\nwhich includes a lot of reference links, including a nice paper by\nAndrew Eisenberg and Jim Melton that introduced the major changes\nfrom the SQL:2003 to :2006 editions of SQL/XML.\n\nCheers,\n-Chap\n\n\n",
"msg_date": "Wed, 27 Mar 2019 01:53:20 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Thanks for putting up with a new reviewer!\n\n --with-libxml is the PostgreSQL configure option to make it use libxml2.\n>\n\n\n> The very web page http://xmlsoft.org/index.html says \"The XML C parser\n> and toolkit of Gnome: libxml\" and is all about libxml2.\n>\n\n\n> So I think I was unsure what convention to follow, and threw up my hands\n> and went with libxml. I could just as easily throw them up and go with\n> libxml2. Which do you think would be better?\n\n\nI think leaving it as libxml makes sense with all that. Good point that\n--with-libxml is used to build so I think staying with that works and is\nconsistent. I agree that having this point included does clarify the how\nand why of the limitations of this implementation.\n\nI also over-parenthesize so I'm used to looking for that in my own\nwriting. The full sentences were the ones that seemed excessive to me, I\nthink the others are ok and I won't nit-pick either way on those (unless\nyou want me to!).\n\nIf you agree, I should go through and fix my nodesets to be node-sets.\n\n\nYes, I like node-sets better, especially knowing it conforms to the spec's\nlanguage.\n\nThanks,\n\n*Ryan Lambert*\n\n\nOn Tue, Mar 26, 2019 at 11:05 PM Chapman Flack <chap@anastigmatix.net>\nwrote:\n\n> On 03/26/19 21:39, Ryan Lambert wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: not tested\n> > Documentation: tested, passed\n>\n> Thanks for the review!\n>\n> > I have two recommendations for features.sgml. You state:\n> >\n> >> relies on the libxml library\n> >\n> > Should this be clarified as the libxml2 library? That's what I installed\n> > to build postgres from source (Ubuntu 16/18). If it is the libxml\n> library\n> > and the \"2\" is irrelevant\n>\n> That's a good catch. I'm not actually sure whether there is any \"libxml\"\n> library that isn't libxml2. 
Maybe there was once and nobody admits to\n> hanging out with it. Most Google hits on \"libxml\" seem to be modules\n> that have libxml in their names and libxml2 as their actual dependency.\n>\n> Perl XML:LibXML: \"This module is an interface to libxml2\"\n> Haskell libxml: \"Binding to libxml2\"\n> libxml-ruby: \"The Libxml-Ruby project provides Ruby language bindings\n> for the GNOME Libxml2 ...\"\n>\n> --with-libxml is the PostgreSQL configure option to make it use libxml2.\n>\n> The very web page http://xmlsoft.org/index.html says \"The XML C parser\n> and toolkit of Gnome: libxml\" and is all about libxml2.\n>\n> So I think I was unsure what convention to follow, and threw up my hands\n> and went with libxml. I could just as easily throw them up and go with\n> libxml2. Which do you think would be better?\n>\n> On 03/26/19 23:52, Tom Lane wrote:\n> > Do we need to mention that at all? If you're not building from source,\n> > it doesn't seem very interesting ... but maybe I'm missing some reason\n> > why end users would care.\n>\n> The three places I've mentioned it were the ones where I thought users\n> might care:\n>\n> - why are we stuck at XPath 1.0? It's what we get from the library we use.\n>\n> - in what order do we get things out from a (hmm) node-set? Per XPath 1.0,\n> it's indeterminate (it's a set!), unlike XPath 2.0/XQuery where there's\n> a sequence type and you have order control. Observable behavior from\n> libxml2 (and you could certainly want to know this) is you get things\n> out\n> in document order, whether that's what you wanted or not, even though\n> this is undocumented, and even counter-documented[1], libxml2 behavior.\n> So it's an example of something you would fundamentally like to know,\n> where the only available answer depends precariously on the library\n> we happen to be using.\n>\n> - which limits in our implementation are inherent to the library, and\n> which are just current limits in our embedding of it? 
(Maybe this is\n> right at the border of what a user would care to know, but I know it's\n> a question that crosses my mind when I bonk into a limit I wasn't\n> expecting.)\n>\n> > There are a few places where the parenthesis around a block of text\n> > seem unnecessary.\n>\n> )blush( that's a long-standing wart in my writing ... seems I often think\n> in parentheses, then look and say \"those aren't needed\" and take them out,\n> only sometimes I don't.\n>\n> I skimmed just now and found a few instances of parenthesized whole\n> sentence: the one you quoted, and some (if argument is null, the result\n> is null), and (No rows will be produced if ....). Shall I deparenthesize\n> them all? Did you have other instances in mind?\n>\n> > It seems you are standardizing from \"node set\" to \"nodeset\", is that\n> > the preferred nomenclature or preference?\n>\n> Another good catch. I remember consciously making a last pass to get them\n> all consistent, and I wanted them consistent with the spec, and I see now\n> I messed up.\n>\n> XPath 1.0 [2] has zero instances of \"nodeset\", two of \"node set\" and about\n> six dozen of \"node-set\". The only appearances of \"node set\" without the\n> hyphen are in a heading and its ToC entry. The stuff under that heading\n> consistently uses node-set. It seems that's the XPath 1.0 term for sure.\n>\n> When I made my consistency pass, I must have been looking too recently\n> in libxml2 C source, rather than the spec.\n>\n> On 03/26/19 23:52, Tom Lane wrote:\n> > That seemed a bit jargon-y to me too. If that's standard terminology\n> > in the XML world, maybe it's fine; but I'd have stuck with \"node set\".\n>\n> It really was my intention (though I flubbed it) to use XPath 1.0's term\n> for XPath 1.0's concept; in my doc philosophy, that gives readers\n> the most breadcrumbs to follow for the rest of the details if they want\n> them. 
\"Node set\" might be some sort of squishy expository concept I'm\n> using, but node-set is a thing, in a spec.\n>\n> If you agree, I should go through and fix my nodesets to be node-sets.\n>\n> I do think the terminology matters here, especially because of the\n> differences between what you can do with a node-set (XPath 1.0 thing)\n> and with a sequence (XPath 2.0+,XQuery,SQL/XML thing).\n>\n> Let me know what you'd like best on these points and I'll revise the patch.\n>\n> Regards,\n> -Chap\n>\n>\n> [1] http://xmlsoft.org/html/libxml-xpath.html#xmlNodeSet : \"array of nodes\n> in no particular order\"\n>\n> [2] https://www.w3.org/TR/1999/REC-xpath-19991116/\n>",
"msg_date": "Wed, 27 Mar 2019 06:22:43 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 2019-Mar-27, Chapman Flack wrote:\n\n> On 03/26/19 21:39, Ryan Lambert wrote:\n\n> > Should this be clarified as the libxml2 library? That's what I installed\n> > to build postgres from source (Ubuntu 16/18). If it is the libxml library\n> > and the \"2\" is irrelevant\n> \n> That's a good catch. I'm not actually sure whether there is any \"libxml\"\n> library that isn't libxml2. Maybe there was once and nobody admits to\n> hanging out with it. Most Google hits on \"libxml\" seem to be modules\n> that have libxml in their names and libxml2 as their actual dependency.\n> \n> Perl XML:LibXML: \"This module is an interface to libxml2\"\n> Haskell libxml: \"Binding to libxml2\"\n> libxml-ruby: \"The Libxml-Ruby project provides Ruby language bindings\n> for the GNOME Libxml2 ...\"\n> \n> --with-libxml is the PostgreSQL configure option to make it use libxml2.\n> \n> The very web page http://xmlsoft.org/index.html says \"The XML C parser\n> and toolkit of Gnome: libxml\" and is all about libxml2.\n> \n> So I think I was unsure what convention to follow, and threw up my hands\n> and went with libxml. I could just as easily throw them up and go with\n> libxml2. Which do you think would be better?\n\nDaniel Veillard actually had libxml version 1 in that repository (mostly\nof GNOME provenance, it seems, put together during some W3C meeting in\n1998). The version number changed to 2 sometime during year 2000.\nVersion 1 was mostly abandoned at that point, and for some reason\neveryone keeps using \"libxml2\" as the name as though it was a different\nthing from \"libxml\". I suppose the latter name is just too generic, or\nbecause they wanted to differentiate from the old (probably\nincompatible API) code.\nhttps://gitlab.gnome.org/GNOME/libxml2/tree/LIB_XML_1_BRANCH\n\nEveryone calls it \"libxml2\" nowadays. Let's just use that and avoid any\npossible confusion. 
If some libxml3 emerges one day, it's quite likely\nwe'll need to revise much more than our docs in order to use it.\n\n> On 03/26/19 23:52, Tom Lane wrote:\n> > Do we need to mention that at all? If you're not building from source,\n> > it doesn't seem very interesting ... but maybe I'm missing some reason\n> > why end users would care.\n> \n> The three places I've mentioned it were the ones where I thought users\n> might care:\n\nThese seem relevant details.\n\n> If you agree, I should go through and fix my nodesets to be node-sets.\n\n+1\n\n> [1] http://xmlsoft.org/html/libxml-xpath.html#xmlNodeSet : \"array of nodes\n> in no particular order\"\n\nWhat this means is \"we don't guarantee any specific order\". It's like a\nquery without ORDER BY: you may currently always get document order, but\nif you upgrade the library one day, it's quite possible to get the nodes\nin another order and you'll not get a refund. So you (the user) should\nnot rely on the order, or at least be mindful that it may change in the\nfuture.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 10:31:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 3/27/19 9:31 AM, Alvaro Herrera wrote:\n> Everyone calls it \"libxml2\" nowadays. Let's just use that and avoid any\n> possible confusion. If some libxml3 emerges one day, it's quite likely\n> we'll need to revise much more than our docs in order to use it.\n\nThat's persuasive to me. I'll change the references to say libxml2\nand let a committer serve as tiebreaker.\n\n>> [1] http://xmlsoft.org/html/libxml-xpath.html#xmlNodeSet : \"array of nodes\n>> in no particular order\"\n> \n> What this means is \"we don't guarantee any specific order\". It's like a\n> query without ORDER BY: you may currently always get document order, but\n> if you upgrade the library one day, it's quite possible to get the nodes\n> in another order and you'll not get a refund. So you (the user) should\n> not rely on the order, or at least be mindful that it may change in the\n> future.\n\nExactly. I called the behavior \"counter-documented\" to distinguish this\nfrom the usual \"undocumented\" case, where you notice that a library is\nbehaving in a way you like, but its docs are utterly silent on the\nmatter, so you know you're going out on a limb to count on what you've\nnoticed.\n\nIn this case, you can notice the handy behavior but the doc *comes\nright out and disclaims it* so if you count on it, you're going out\non a limb that has no bark left and looks punky.\n\nAnd yet it seems worthwhile to mention how the library does in fact\nseem to behave, because you might well be in the situation of porting\ncode over from SQL/XML:2006+ or XQuery or XPath 2+, or those are the\nlanguages you've learned, so you may have order assumptions you've made,\nand be surprised that XPath 1 doesn't let you make them, and at least\nwe can say \"in a pinch, if you don't mind standing on this punky limb\nhere, you may be able to use the code you've got without having to\nrefactor every XMLTABLE() or xpath() into something wrapped in an\nouter SQL query with ORDER BY. You just don't get your money back if\na later library upgrade changes the order.\"\n\nThe wiki page remembers[1] that I had tried some pretty gnarly XPath 1\nqueries to see if I could make libxml2 return things in a different\norder, but no, got document order every time.\n\nRegards,\n-Chap\n\n[1]\nhttps://www.postgresql.org/message-id/5C465A65.4030305%40anastigmatix.net\n\n\n",
"msg_date": "Wed, 27 Mar 2019 12:35:22 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Hi,\n\nxml-functions-type-docfix-5.patch attached, with node-sets instead of\nnodesets, libxml2 instead of libxml, and parenthesized full sentences\nnow au naturel.\n\nI ended up turning the formerly-parenthesized note about libxml2's\nnode-set ordering into a DocBook <note>: there is really something\nparenthetical about it, with the official statement of node-set\nelement ordering being that there is none, and the description of\nwhat the library happens to do being of possible interest, but set\napart, with the necessary caveats about relying on it.\n\nSpotted and fixed a couple more typos in the process.\n\nRegards,\n-Chap",
"msg_date": "Wed, 27 Mar 2019 19:07:43 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/27/19 19:07, Chapman Flack wrote:\n> xml-functions-type-docfix-5.patch attached, with node-sets instead of\n> nodesets, libxml2 instead of libxml, and parenthesized full sentences\n> now au naturel.\n> \n> I ended up turning the formerly-parenthesized note about libxml2's\n> node-set ordering into a DocBook <note>: there is really something\n> parenthetical about it, with the official statement of node-set\n> element ordering being that there is none, and the description of\n> what the library happens to do being of possible interest, but set\n> apart, with the necessary caveats about relying on it.\n\nI have just suffered a giant sinking feeling upon re-reading this\nsentence in our XMLTABLE doc:\n\n A column marked FOR ORDINALITY will be populated with row numbers\n matching the order in which the output rows appeared in the original\n input XML document.\n\nI've been skimming right over it all this time, and that right there is\na glaring built-in reliance on the observable-but-disclaimed iteration\norder of a libxml2 node-set.\n\nI'm a bit unsure what any clarifying language should even say.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 27 Mar 2019 19:27:23 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 03/27/19 19:27, Chapman Flack wrote:\n> A column marked FOR ORDINALITY will be populated with row numbers\n> matching the order in which the output rows appeared in the original\n> input XML document.\n> \n> I've been skimming right over it all this time, and that right there is\n> a glaring built-in reliance on the observable-but-disclaimed iteration\n> order of a libxml2 node-set.\n\nSo, xml-functions-type-docfix-6.patch.\n\nI changed that language to say \"populated with row numbers, starting\nwith 1, in the order of nodes retrieved from the row_expression's\nresult node-set.\"\n\nThat's not such a terrible thing to have to say; in fact, it's the\n*correct* description for the standard, XQuery-based, XMLTABLE (where\nthe language gives you control of the result sequence's order).\n\nI followed that with a short note saying since XPath 1.0 doesn't\nspecify that order, relying on it is implementation-dependent, and\nlinked to the existing Appendix D discussion.\n\nI would have like to link directly to the <listitem>, but of course\n<xref> doesn't know what to call that, so I linked to the <sect3>\ninstead.\n\nRegards,\n-Chap",
"msg_date": "Thu, 28 Mar 2019 19:45:24 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "I applied and reviewed xml-functions-type-docfix-6.patch. Looks good to me.\n\nI like the standardization (e.g. libxml2, node-set) and I didn't catch any\nspots that used the other versions. I agree that the <note> is appropriate\nfor that block.\nIt also looks like you incorporated Alvaro's feedback about sorting, or the\nlack thereof.\n\nLet me know if there's anything else I can do to help get this accepted.\nThanks,\n\nRyan\n\n\n\nOn Thu, Mar 28, 2019 at 5:45 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 03/27/19 19:27, Chapman Flack wrote:\n> > A column marked FOR ORDINALITY will be populated with row numbers\n> > matching the order in which the output rows appeared in the original\n> > input XML document.\n> >\n> > I've been skimming right over it all this time, and that right there is\n> > a glaring built-in reliance on the observable-but-disclaimed iteration\n> > order of a libxml2 node-set.\n>\n> So, xml-functions-type-docfix-6.patch.\n>\n> I changed that language to say \"populated with row numbers, starting\n> with 1, in the order of nodes retrieved from the row_expression's\n> result node-set.\"\n>\n> That's not such a terrible thing to have to say; in fact, it's the\n> *correct* description for the standard, XQuery-based, XMLTABLE (where\n> the language gives you control of the result sequence's order).\n>\n> I followed that with a short note saying since XPath 1.0 doesn't\n> specify that order, relying on it is implementation-dependent, and\n> linked to the existing Appendix D discussion.\n>\n> I would have like to link directly to the <listitem>, but of course\n> <xref> doesn't know what to call that, so I linked to the <sect3>\n> instead.\n>\n> Regards,\n> -Chap\n>\n\nI applied and reviewed xml-functions-type-docfix-6.patch. Looks good to me.I like the standardization (e.g. libxml2, node-set) and I didn't catch any spots that used the other versions. I agree that the <note> is appropriate for that block.It also looks like you incorporated Alvaro's feedback about sorting, or the lack thereof. Let me know if there's anything else I can do to help get this accepted. Thanks,RyanOn Thu, Mar 28, 2019 at 5:45 PM Chapman Flack <chap@anastigmatix.net> wrote:On 03/27/19 19:27, Chapman Flack wrote:\n> A column marked FOR ORDINALITY will be populated with row numbers\n> matching the order in which the output rows appeared in the original\n> input XML document.\n> \n> I've been skimming right over it all this time, and that right there is\n> a glaring built-in reliance on the observable-but-disclaimed iteration\n> order of a libxml2 node-set.\n\nSo, xml-functions-type-docfix-6.patch.\n\nI changed that language to say \"populated with row numbers, starting\nwith 1, in the order of nodes retrieved from the row_expression's\nresult node-set.\"\n\nThat's not such a terrible thing to have to say; in fact, it's the\n*correct* description for the standard, XQuery-based, XMLTABLE (where\nthe language gives you control of the result sequence's order).\n\nI followed that with a short note saying since XPath 1.0 doesn't\nspecify that order, relying on it is implementation-dependent, and\nlinked to the existing Appendix D discussion.\n\nI would have like to link directly to the <listitem>, but of course\n<xref> doesn't know what to call that, so I linked to the <sect3>\ninstead.\n\nRegards,\n-Chap",
"msg_date": "Sat, 30 Mar 2019 10:06:18 -0600",
"msg_from": "Ryan Lambert <ryan@rustprooflabs.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> So, xml-functions-type-docfix-6.patch.\n\nPushed with some light(?) copy-editing.\n\nI believe this closes out everything discussed in\n\nhttps://commitfest.postgresql.org/22/1872/\n\nbut I haven't gone through all three threads in detail.\nPlease confirm whether that CF entry can be closed or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2019 16:22:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 4/1/19 4:22 PM, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> So, xml-functions-type-docfix-6.patch.\n> \n> Pushed with some light(?) copy-editing.\n> \n> I believe this closes out everything discussed in\n> \n> https://commitfest.postgresql.org/22/1872/\n> \n> but I haven't gone through all three threads in detail.\n> Please confirm whether that CF entry can be closed or not.\n\nI think that does wrap up everything in the CF entry. Thanks!\nAnd thanks for the copy-edits; they do read better than what\nI came up with.\n\nWhen I get a moment, I'll update the PostgreSQL vs. SQL/XML wiki page\nto reflect the things that were fixed.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 1 Apr 2019 17:24:34 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 2019-Apr-01, Chapman Flack wrote:\n\n> When I get a moment, I'll update the PostgreSQL vs. SQL/XML wiki page\n> to reflect the things that were fixed.\n\nI think there were some outright bugs in the docs, at least for\nXMLTABLE, that maybe we should backpatch. If you have the energy to\ncherry-pick a minimal doc update to 10/11, I offer to back-patch it.\n\nThanks everyone for taking care of this!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Apr 2019 18:34:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 04/01/19 17:34, Alvaro Herrera wrote:\n> I think there were some outright bugs in the docs, at least for\n> XMLTABLE, that maybe we should backpatch. If you have the energy to\n> cherry-pick a minimal doc update to 10/11, I offer to back-patch it.\n\nI'll see what I can do. There's breathing room for that after the end of\nthe CF, right?\n\nIt seems to me that the conformance-appendix part is worth using,\nalong with all of the clarifications in datatype.sgml and func.sgml\nexcept the ones clarifying fixed behavior, where the behavior fix\nwasn't backpatched. That'll be where the cherry-picking effort lies.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 1 Apr 2019 18:09:38 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 04/01/19 17:34, Alvaro Herrera wrote:\n> I think there were some outright bugs in the docs, at least for\n> XMLTABLE, that maybe we should backpatch. If you have the energy to\n> cherry-pick a minimal doc update to 10/11, I offer to back-patch it.\n\nI don't know if this fits your intention for \"minimal\". What I've done\nis taken the doc commit made by Tom for 12 (12d46a), then revised it\nso it describes the unfixed behavior for the bugs whose fixes weren't\nbackpatched to 11 or 10.\n\nI don't know if it's too late to get in the upcoming minor releases,\nbut maybe it can, if it looks ok, or the next ones, if that's too rushed.\n\n11.patch applies cleanly to 11, 10.patch to 10.\n\nI've confirmed the 11 docs build successfully, but without sgml tools,\nI haven't confirmed that for 10.\n\nRegards,\n-Chap",
"msg_date": "Sat, 3 Aug 2019 11:12:15 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "On 2019-Aug-03, Chapman Flack wrote:\n\n> I don't know if it's too late to get in the upcoming minor releases,\n> but maybe it can, if it looks ok, or the next ones, if that's too rushed.\n\nHmm, I'm travelling back home from a conference the weekend, so yeah I\nthink it would be rushed for me to handle for the upcoming set. But I\ncan look at it before the *next* set.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 3 Aug 2019 12:15:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Hi Alvaro,\n\nOn 08/03/19 12:15, Alvaro Herrera wrote:\n\n>> I don't know if it's too late to get in the upcoming minor releases,\n>> but maybe it can, if it looks ok, or the next ones, if that's too rushed.\n> \n> Hmm, I'm travelling back home from a conference the weekend, so yeah I\n> think it would be rushed for me to handle for the upcoming set. But I\n> can look at it before the *next* set.\n\nAre these on your radar to maybe backpatch in this round of activity?\n\nThe latest patches I did for 11 and 10 are in\nhttps://www.postgresql.org/message-id/5D45A44F.8010803%40anastigmatix.net\n\nCheers,\n-Chap\n\n\n",
"msg_date": "Thu, 5 Sep 2019 18:06:18 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
},
{
"msg_contents": "Hi Chapman,\n\nOn 2019-Sep-05, Chapman Flack wrote:\n\n> Are these on your radar to maybe backpatch in this round of activity?\n> \n> The latest patches I did for 11 and 10 are in\n> https://www.postgresql.org/message-id/5D45A44F.8010803%40anastigmatix.net\n\nThanks! I just pushed these to those branches.\n\nI think we're finally done with these. Many thanks for your\npersistence.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Sep 2019 17:34:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix XML handling with DOCTYPE"
}
]
[
{
"msg_contents": "Noob here. I'm getting started on building a Postgres extension.\n\nI'd like to add some keywords/clauses to the SELECT statement. For my\nparticular application, the syntax with new keywords would be way better\nthan trying to do it through functions alone. I would add some new keywords\nfollowed by expressions similar to those allowed in WHERE and GROUP BY\nclauses. The new SELECT would return multiple result sets.\n\nI did find an example where someone did modify the parser:\nhttp://www.neilconway.org/talks/hacking/hack_slides.pdf\n\nQuestion: is it possible to do this in an extension? Or do I have to fork\nthe Postgres codebase itself?\n\nObviously, I'd prefer the former. Forks are bad.\n\nNoob here. I'm getting started on building a Postgres extension.I'd like to add some keywords/clauses to the SELECT statement. For my particular application, the syntax with new keywords would be way better than trying to do it through functions alone. I would add some new keywords followed by expressions similar to those allowed in WHERE and GROUP BY clauses. The new SELECT would return multiple result sets.I did find an example where someone did modify the parser: http://www.neilconway.org/talks/hacking/hack_slides.pdfQuestion: is it possible to do this in an extension? Or do I have to fork the Postgres codebase itself?Obviously, I'd prefer the former. Forks are bad.",
"msg_date": "Sat, 16 Mar 2019 18:24:11 -0500",
"msg_from": "Chris Cleveland <ccleve+github@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Possible to modify query language in an extension?"
},
{
"msg_contents": "Chris Cleveland <ccleve+github@dieselpoint.com> writes:\n> I'd like to add some keywords/clauses to the SELECT statement.\n\nYeah, you'll have to modify gram.y (and a pile of other places)\nif you want to do that. That's certainly something we do all\nthe time, but bison doesn't provide any way to add grammar\nproductions on-the-fly, so it does imply core-code mods.\n\n> ... The new SELECT would return multiple result sets.\n\nAnd that sounds like you'd also be redefining the wire protocol,\nhence having to touch client-side code as well as the server.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Mar 2019 00:21:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible to modify query language in an extension?"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 12:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Chris Cleveland <ccleve+github@dieselpoint.com> writes:\n> > I'd like to add some keywords/clauses to the SELECT statement.\n>\n> Yeah, you'll have to modify gram.y (and a pile of other places)\n> if you want to do that. That's certainly something we do all\n> the time, but bison doesn't provide any way to add grammar\n> productions on-the-fly, so it does imply core-code mods.\n>\n> > ... The new SELECT would return multiple result sets.\n>\n> And that sounds like you'd also be redefining the wire protocol,\n> hence having to touch client-side code as well as the server.\n\nLong story short, this sounds like a VERY hard project. Chris, you\nwill probably want to think about some other approach to achieving\nyour objective, because this sounds like a project that even an expert\ncoder would spend a lot of time trying to get done.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 18 Mar 2019 10:09:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible to modify query language in an extension?"
}
]
[
{
"msg_contents": "So some PostGIS people were griping (on irc) about how the lack of\nCREATE OR REPLACE AGGREGATE made their life difficult for updates. It\nstruck me that aggregates have acquired a relatively large number of new\nattributes in recent years, almost all of which are applicable at\nexecution time rather than in parse analysis, so having a CREATE OR\nREPLACE option seems like a no-brainer.\n\nI took a bash at actually writing it and didn't see any obvious problems\n(I'll post the patch in a bit). Is there some reason (other than\nshortage of round tuits) why this might not be a good idea, or why it\nhasn't been done before?\n\n-- \nAndrew (irc:RhodiumToad)\n\n",
"msg_date": "Sun, 17 Mar 2019 07:35:16 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "CREATE OR REPLACE AGGREGATE?"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 07:35:16AM +0000, Andrew Gierth wrote:\n> I took a bash at actually writing it and didn't see any obvious problems\n> (I'll post the patch in a bit). Is there some reason (other than\n> shortage of round tuits) why this might not be a good idea, or why it\n> hasn't been done before?\n\nIndeed. There is not much on the matter in pgsql-hackers as far as I\ncan see, except that but the thread is short:\nhttps://www.postgresql.org/message-id/CAGYyBgj3u_4mfTNPMnpOM2NPtWQVPU4WRsYz=RLCF59g-kGVmQ@mail.gmail.com\n--\nMichael",
"msg_date": "Sun, 17 Mar 2019 18:14:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CREATE OR REPLACE AGGREGATE?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Mar 17, 2019 at 07:35:16AM +0000, Andrew Gierth wrote:\n>> I took a bash at actually writing it and didn't see any obvious problems\n>> (I'll post the patch in a bit). Is there some reason (other than\n>> shortage of round tuits) why this might not be a good idea, or why it\n>> hasn't been done before?\n\n> Indeed.\n\nYeah, it seems like mostly a lack-of-round-tuits problem.\n\nUpdating the aggregate's dependencies correctly might be a bit tricky, but\nit can't be any worse than the corresponding problem for functions...\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 17 Mar 2019 10:22:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE OR REPLACE AGGREGATE?"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Yeah, it seems like mostly a lack-of-round-tuits problem.\n\n Tom> Updating the aggregate's dependencies correctly might be a bit\n Tom> tricky, but it can't be any worse than the corresponding problem\n Tom> for functions...\n\nI was worried about that myself but looking at it, unless I overlooked\nsomething, it's not hard to deal with. The main thing is that all the\ndependencies attach to the pg_proc entry, not the pg_aggregate row\n(which has no oid anyway), and ProcedureCreate when replacing that will\ndelete all of the old dependency entries. So all that AggregateCreate\nends up having to do is to create the same set of dependency entries\nthat it would have created anyway.\n\nHere's my initial draft patch (includes docs but not tests yet) - I have\nmore testing to do on it, particularly to check the dependencies are\nright, but so far it seems to work.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Sun, 17 Mar 2019 20:38:53 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": true,
"msg_subject": "Re: CREATE OR REPLACE AGGREGATE?"
}
]
[
{
"msg_contents": "Hi;\n\nAttached is my second attempt at the pg_rewind change which allows one to\ninclude only a minimal set. To my understanding, all past feedback has\nbeen addressed.\n\nThe current patch does not change default behavior at present. It does add\na --data-only flag which allows pg_rewind to only rewind minimal files to\nwork. I believe this would generally be a good practice though maybe there\nis some disagreement on that.\n\nI have not run pg_indent because of the large number of other changes but\nif that is desired at some point I can do that.\n\nI also added test cases and some docs. I don't know if the docs are\nsufficient. Feedback is appreciated. This is of course submitted for v13.\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Sun, 17 Mar 2019 21:00:57 +0800",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": true,
"msg_subject": "Data-only pg_rewind, take 2"
},
{
"msg_contents": "On Sun, Mar 17, 2019 at 09:00:57PM +0800, Chris Travers wrote:\n> I also added test cases and some docs. I don't know if the docs are\n> sufficient. Feedback is appreciated.\n\nTo be honest, I don't think that this approach is a good idea per the\nsame reasons as mentioned the last time, as this can cause pg_rewind\nto break if any newly-added folder in the data directory has\nnon-replaceable data which is needed at the beginning of recovery and\ncannot be automatically rebuilt. So that's one extra maintenance\nburden to worry about.\n\nHere is the reference of the last thread about the same topic:\nhttps://www.postgresql.org/message-id/CAN-RpxD8Y7hMOjzd93hOqV6n8kPEo5cmW9gYm+8JirTPiFnmmQ@mail.gmail.com\n--\nMichael",
"msg_date": "Mon, 18 Mar 2019 13:09:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Data-only pg_rewind, take 2"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Sun, Mar 17, 2019 at 09:00:57PM +0800, Chris Travers wrote:\n> > I also added test cases and some docs. I don't know if the docs are\n> > sufficient. Feedback is appreciated.\n> \n> To be honest, I don't think that this approach is a good idea per the\n> same reasons as mentioned the last time, as this can cause pg_rewind\n> to break if any newly-added folder in the data directory has\n> non-replaceable data which is needed at the beginning of recovery and\n> cannot be automatically rebuilt. So that's one extra maintenance\n> burden to worry about.\n\nThe right approach to deal with that is to have a canonical list of\nthose, isn't it? So that we have one place to update that takes care to\nmake sure that all of the tools realize what's actually needed.\n\nIn general, I agree completely with Chris on the reasoning behind this\npatch and that we really should try to avoid copying random files and\ndirectories that have shown up in the data directory during a pg_rewind.\nHaving regular expressions and other such things just strike me as a\nreally bad idea for a low-level tool like pg_rewind- if users have\ndropped other stuff in the data directory that they want copied around\nbetween systems then it should be on them to make that happen, not\nexpect pg_rewind to copy them..\n\nThanks!\n\nStephen",
"msg_date": "Mon, 18 Mar 2019 02:32:20 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Data-only pg_rewind, take 2"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 4:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Mar 17, 2019 at 09:00:57PM +0800, Chris Travers wrote:\n> > I also added test cases and some docs. I don't know if the docs are\n> > sufficient. Feedback is appreciated.\n>\n> To be honest, I don't think that this approach is a good idea per the\n> same reasons as mentioned the last time, as this can cause pg_rewind\n> to break if any newly-added folder in the data directory has\n> non-replaceable data which is needed at the beginning of recovery and\n> cannot be automatically rebuilt. So that's one extra maintenance\n> burden to worry about.\n>\n\nActually I think this is safe. Let me go through the cases not handled in\nthe current behavior at all:\n\n1. For rpms we distribute, we clobber db logs, which means we overwrite\napplication logs on the failed system with copes of logs from a replica.\nThis means that after you rewind, you lose the ability to figure out what\nwent wrong. This is an exquisitely bad idea unless you like data loss, and\nsince this location is configurable you can't just say \"well we put our\nlogs here so we are excluding them.\" Making this configured per rewind run\nstrikes me as error-prone and something that will may lead to hidden\ninterference with postmortems in the future, and postmortems are vitally\nimportant in terms of running database clusters with any sort of\nreliability guarantees.\n\n2. With the PostgreSQL.conf.auto now having recovery.conf info, you have\nsome very significant failure cases with regard to replication and\naccidentally clobbering these.\n\nOn to the corner cases with --data-only enabled and the implications as I\nsee them since this preserves files on the old master but does not copy\nthem from the replica:\n\n1. If the changes are not wal logged (let's say CSVs created using a file\nforeign data wrapper), then deleting the files on rewind is where you can\nlose data, and --data-only avoids this, so here you *avoid* data loss where\nyou put state files on the systems and do not rewind them because they were\nnot wal-logged. However the files still exist on the old master and are\nnot deleted, so the data can easily be restored at that point. Now, we can\nsay, probably, that putting data files in $PGDATA that are not wal-logged\nis a bad idea. But even if you put them somewhere else, pg_rewind isn't\ngoing to magically move them over to the replica for you.\n\n2. If the changes *are* wal-logged, then you have a problem with\n--data-only which is not present without it, namely that files can get out\nof sync with their wal-logged updates. So in this case, --data-dir is\n*not* safe.\n\nSo here I think we have to issue a choice. For now I don't feel\ncomfortable changing the default behavior, but the default behavior could\ncause data loss in certain cases (including the ones I think you are\nconcerned about). Maybe it would be better if I document the above points?\n\n\n>\n> Here is the reference of the last thread about the same topic:\n>\n> https://www.postgresql.org/message-id/CAN-RpxD8Y7hMOjzd93hOqV6n8kPEo5cmW9gYm+8JirTPiFnmmQ@mail.gmail.com\n> --\n> Michael\n> -----BEGIN PGP SIGNATURE-----\n>\n> iQIzBAABCgAdFiEEG72nH6vTowiyblFKnvQgOdbyQH0FAlyPGgYACgkQnvQgOdby\n> QH0ekRAAiXcZRcDZwwwdbdlIpkniE/SuG5gaS7etUcAW88m8Vts5r4QoAEwUwGhg\n> EZzuOb77OKvti7lmOZkBgC0VB1PmFku+mIdqJtzvdcSDdlOkABcLaw4JRrm//2/7\n> jAi5Jw4um1EAz38dZXcWYwORavyo/4tR2S1PCyBA35F704w2NILAEDiq233P/ALf\n> M3cOjgwiFIPf0v9PJIfYsl56sIwqW4rofPH63V6teaz5W8Qf2zHSsG5CeNqnEix0\n> QZwwlzuhtAUYINab3oN3qMtF2q9vzJWCoSprzxx1qYrzPHEX8EMot0+L7YPdaAp0\n> xyiUKSzy1rXtpoW0rsJ7w5bdrh1gS7HzprCEtqRZGe6NlVDcNjXfJIG9sT6hMWYS\n> GTNbVH5VpKziw3byT8JpyqR38+iFqeXoLd1PEVadYjP62qOWbK8P2wokQwM+7EcK\n> Hpr8jrvgV5x8IEnhR4bPyTqjORCJMBGTXCNgT99cPYpuVSasr/0IsBC/RtmQfRB9\n> xhK0/qp5koQbX+mbLK11XsaFS9JAL2DNmSQg8TqICtV3bb0UTThs331XgjEjlOpm\n> 1RjM6Tzwqq2is04mkkT+DtRAOclQuL8wWJWU5rr4fMKHCeFxtvUfwTyKlo2u+mI0\n> x7YZhd4AFCM14ga2Ko/qiGqeOWR5Y0RvYANmnmjG5bxQGi+Dtek=\n> =LNZB\n> -----END PGP SIGNATURE-----\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Mon, Mar 18, 2019 at 4:09 AM Michael Paquier <michael@paquier.xyz> wrote:On Sun, Mar 17, 2019 at 09:00:57PM +0800, Chris Travers wrote:\n> I also added test cases and some docs. I don't know if the docs are\n> sufficient. Feedback is appreciated.\n\nTo be honest, I don't think that this approach is a good idea per the\nsame reasons as mentioned the last time, as this can cause pg_rewind\nto break if any newly-added folder in the data directory has\nnon-replaceable data which is needed at the beginning of recovery and\ncannot be automatically rebuilt. So that's one extra maintenance\nburden to worry about.Actually I think this is safe. Let me go through the cases not handled in the current behavior at all:1. For rpms we distribute, we clobber db logs, which means we overwrite application logs on the failed system with copes of logs from a replica. This means that after you rewind, you lose the ability to figure out what went wrong. This is an exquisitely bad idea unless you like data loss, and since this location is configurable you can't just say \"well we put our logs here so we are excluding them.\" Making this configured per rewind run strikes me as error-prone and something that will may lead to hidden interference with postmortems in the future, and postmortems are vitally important in terms of running database clusters with any sort of reliability guarantees.2. With the PostgreSQL.conf.auto now having recovery.conf info, you have some very significant failure cases with regard to replication and accidentally clobbering these.On to the corner cases with --data-only enabled and the implications as I see them since this preserves files on the old master but does not copy them from the replica:1. If the changes are not wal logged (let's say CSVs created using a file foreign data wrapper), then deleting the files on rewind is where you can lose data, and --data-only avoids this, so here you *avoid* data loss where you put state files on the systems and do not rewind them because they were not wal-logged. However the files still exist on the old master and are not deleted, so the data can easily be restored at that point. Now, we can say, probably, that putting data files in $PGDATA that are not wal-logged is a bad idea. But even if you put them somewhere else, pg_rewind isn't going to magically move them over to the replica for you.2. If the changes *are* wal-logged, then you have a problem with --data-only which is not present without it, namely that files can get out of sync with their wal-logged updates. So in this case, --data-dir is *not* safe.So here I think we have to issue a choice. For now I don't feel comfortable changing the default behavior, but the default behavior could cause data loss in certain cases (including the ones I think you are concerned about). Maybe it would be better if I document the above points? \n\nHere is the reference of the last thread about the same topic:\nhttps://www.postgresql.org/message-id/CAN-RpxD8Y7hMOjzd93hOqV6n8kPEo5cmW9gYm+8JirTPiFnmmQ@mail.gmail.com\n--\nMichael\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAABCgAdFiEEG72nH6vTowiyblFKnvQgOdbyQH0FAlyPGgYACgkQnvQgOdby\nQH0ekRAAiXcZRcDZwwwdbdlIpkniE/SuG5gaS7etUcAW88m8Vts5r4QoAEwUwGhg\nEZzuOb77OKvti7lmOZkBgC0VB1PmFku+mIdqJtzvdcSDdlOkABcLaw4JRrm//2/7\njAi5Jw4um1EAz38dZXcWYwORavyo/4tR2S1PCyBA35F704w2NILAEDiq233P/ALf\nM3cOjgwiFIPf0v9PJIfYsl56sIwqW4rofPH63V6teaz5W8Qf2zHSsG5CeNqnEix0\nQZwwlzuhtAUYINab3oN3qMtF2q9vzJWCoSprzxx1qYrzPHEX8EMot0+L7YPdaAp0\nxyiUKSzy1rXtpoW0rsJ7w5bdrh1gS7HzprCEtqRZGe6NlVDcNjXfJIG9sT6hMWYS\nGTNbVH5VpKziw3byT8JpyqR38+iFqeXoLd1PEVadYjP62qOWbK8P2wokQwM+7EcK\nHpr8jrvgV5x8IEnhR4bPyTqjORCJMBGTXCNgT99cPYpuVSasr/0IsBC/RtmQfRB9\nxhK0/qp5koQbX+mbLK11XsaFS9JAL2DNmSQg8TqICtV3bb0UTThs331XgjEjlOpm\n1RjM6Tzwqq2is04mkkT+DtRAOclQuL8wWJWU5rr4fMKHCeFxtvUfwTyKlo2u+mI0\nx7YZhd4AFCM14ga2Ko/qiGqeOWR5Y0RvYANmnmjG5bxQGi+Dtek=\n=LNZB\n-----END PGP SIGNATURE-----\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Mon, 18 Mar 2019 07:45:44 +0000",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": true,
"msg_subject": "Re: Data-only pg_rewind, take 2"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 8:46 PM Chris Travers <chris.travers@adjust.com> wrote:\n> On Mon, Mar 18, 2019 at 4:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Sun, Mar 17, 2019 at 09:00:57PM +0800, Chris Travers wrote:\n>> > I also added test cases and some docs. I don't know if the docs are\n>> > sufficient. Feedback is appreciated.\n>>\n>> To be honest, I don't think that this approach is a good idea per the\n>> same reasons as mentioned the last time, as this can cause pg_rewind\n>> to break if any newly-added folder in the data directory has\n>> non-replaceable data which is needed at the beginning of recovery and\n>> cannot be automatically rebuilt. So that's one extra maintenance\n>> burden to worry about.\n>\n> Actually I think this is safe. Let me go through the cases not handled in the current behavior at all:\n\nHi Chris,\n\nCould you please post a rebase? This has fairly thoroughly bitrotted.\nThe Commitfest is here, so now would be an excellent time for people\nto be able to apply and test the patch.\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 19:04:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data-only pg_rewind, take 2"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 7:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Mar 18, 2019 at 8:46 PM Chris Travers <chris.travers@adjust.com> wrote:\n> > On Mon, Mar 18, 2019 at 4:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> On Sun, Mar 17, 2019 at 09:00:57PM +0800, Chris Travers wrote:\n> >> > I also added test cases and some docs. I don't know if the docs are\n> >> > sufficient. Feedback is appreciated.\n> >>\n> >> To be honest, I don't think that this approach is a good idea per the\n> >> same reasons as mentioned the last time, as this can cause pg_rewind\n> >> to break if any newly-added folder in the data directory has\n> >> non-replaceable data which is needed at the beginning of recovery and\n> >> cannot be automatically rebuilt. So that's one extra maintenance\n> >> burden to worry about.\n> >\n> > Actually I think this is safe. Let me go through the cases not handled in the current behavior at all:\n>\n> Hi Chris,\n>\n> Could you please post a rebase? This has fairly thoroughly bitrotted.\n> The Commitfest is here, so now would be an excellent time for people\n> to be able to apply and test the patch.\n\nHi Chris,\n\nI set this to \"Returned with feedback\" due to lack of response. If\nyou'd prefer to move it to the next CF instead because you're planning\nto work on it in time for the September CF, that might still be\npossible, otherwise of course please create a new entry when you're\nready. Thanks!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 11:07:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data-only pg_rewind, take 2"
}
] |
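Editor's note: the trade-off argued in the thread above can be pictured with a toy decision table. This is a sketch only, not pg_rewind's real logic; the path prefixes, category names, and the exact semantics of the proposed --data-only switch are illustrative assumptions.

```python
# Toy model of the file-handling policy discussed in this thread.
# NOT pg_rewind's actual decision table; simplified for illustration.

RELATION_PREFIXES = ("base/", "global/", "pg_wal/")

def decide(path: str, data_only: bool) -> str:
    """Return what a rewind run does with a file found on the target."""
    if path.startswith(RELATION_PREFIXES):
        # Relation data and WAL must always be synchronized with the
        # source; otherwise recovery cannot reach a consistent state.
        return "sync-with-source"
    if data_only:
        # The proposed mode: leave non-relation files (server logs,
        # postgresql.auto.conf, ad-hoc state files) untouched on the
        # old master, so postmortems stay possible.
        return "preserve-target"
    # Default behavior: clobber the target copy with the source's.
    return "overwrite-from-source"
```

The two failure modes in the thread fall out of this model: with data_only=False, log files are overwritten (postmortems lost); with data_only=True, any preserved file whose contents are also wal-logged can drift out of sync with its WAL.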
[
{
"msg_contents": "Hi,\n\nGiven a Portal, or an _SPI_plan, is there a practical way to tell whether\nit came from a query with FOR UPDATE or FOR SHARE?\n\nRegards,\n-Chap\n\n",
"msg_date": "Sun, 17 Mar 2019 20:46:40 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Determine if FOR UPDATE or FOR SHARE was used?"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Given a Portal, or an _SPI_plan, is there a practical way to tell whether\n> it came from a query with FOR UPDATE or FOR SHARE?\n\nIn principle, you could do something like drilling down into the plan\ntree to see if there's a LockRows node, but this wouldn't necessarily\nbe a great idea from a modularity or maintainability standpoint.\n\nI think it would help to take two steps back and ask why you want\nto know this, and what exactly is it that you want to know, anyhow.\nWhat does it matter if there's FOR SHARE in the query? Does it\nmatter if the FOR SHARE targets only some tables (and do you\ncare which ones?) How would your answer change if the FOR SHARE\nwere buried down in a CTE subquery? Why are you only interested\nin these cases, and not INSERT/UPDATE/DELETE?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Mar 2019 00:45:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Determine if FOR UPDATE or FOR SHARE was used?"
},
{
"msg_contents": "On 03/18/19 00:45, Tom Lane wrote:\n> I think it would help to take two steps back and ask why you want\n> to know this, and what exactly is it that you want to know, anyhow.\n> What does it matter if there's FOR SHARE in the query? Does it\n\nI was looking at an old design decision in PL/Java, which implements\njava.sql.ResultSet by grabbing a pile of tuples at a time from\nSPI_cursor_fetch, and then letting the ResultSet API iterate through\nthose, until the next pile needs to be fetched.\n\nIt seemed like the kind of optimization probably very important in a\nclient/server connection over RFC 2549, but I'm not sure how important\nit is for code running right in the backend.\n\nMaybe it does save a few cycles, but I don't want to be watching when\nsomebody tries to do UPDATE or DELETE WHERE CURRENT OF.\n\nIt occurred to me that positioned update/delete could be made to work\neither by simply having the Java ResultSet row fetch operations correspond\ndirectly to SPI fetches, or by continuing to SPI-fetch multiple rows at\na time, but repositioning with SPI_cursor_move as the Java ResultSet\npointer moves through them. (Is one of those techniques common in other\nPLs?)\n\nBut it also occurred to me that there might be a practical way to\nexamine the query to see it's one that could be used for positioned\nupdate or delete at all, and avoid any special treatment if it isn't.\n\nRegards,\n-Chap\n\n",
"msg_date": "Wed, 20 Mar 2019 23:31:30 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Re: Determine if FOR UPDATE or FOR SHARE was used?"
}
] |
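Editor's note: Tom's suggestion of "drilling down into the plan tree to see if there's a LockRows node" can be sketched in miniature. The real check would walk PostgreSQL's PlannedStmt in C; the PlanNode class below is a mock structure for illustration only.

```python
# A miniature stand-in for "drill down into the plan tree and look for
# a LockRows node". PlanNode is a mock, not PostgreSQL's Plan struct.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PlanNode:
    kind: str                              # e.g. "SeqScan", "Limit", "LockRows"
    children: List["PlanNode"] = field(default_factory=list)

def contains_lock_rows(node: PlanNode) -> bool:
    """True if any node in the (sub)tree is a LockRows node."""
    if node.kind == "LockRows":
        return True
    return any(contains_lock_rows(child) for child in node.children)

# A SELECT ... FOR UPDATE plan typically places LockRows above the scan:
for_update_plan = PlanNode("LockRows", [PlanNode("SeqScan")])
plain_plan = PlanNode("Limit", [PlanNode("IndexScan")])
```

As Tom points out, even a positive answer from such a walk says nothing about which tables are locked or whether the FOR SHARE is buried in a CTE subquery; this only models the coarse yes/no check.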
[
{
"msg_contents": "Hi all,\n\nI've discovered a bug where vacuum full fails with an error because it\ncouldn't find toast chunks deleted by itself. That happens because\ncluster_rel() sets OldestXmin, but toast accesses get their snapshot later\nand independently. That causes heap_page_prune_opt() to clean chunks\nwhich rebuild_relation() expects to exist. This bug very rarely\nhappens on busy systems which actively update toast values. But I\nfound a way to reliably reproduce it using a debugger.\n\n*Setup*\n\nCREATE FUNCTION random_string(seed integer, length integer) RETURNS text\n AS $$\n SELECT substr(\n string_agg(\n substr(\n encode(\n decode(\n md5(seed::text || '-' || i::text),\n 'hex'),\n 'base64'),\n 1, 21),\n ''),\n 1, length)\n FROM generate_series(1, (length + 20) / 21) i; $$\nLANGUAGE SQL;\n\nCREATE TABLE test (val text);\nINSERT INTO test VALUES (random_string(1,100000));\n\n*Reproduction steps*\n\ns1-s3 are three parallel PostgreSQL sessions\ns3lldb is lldb connected to s3\n\nAt first s1 acquires a snapshot and holds it.\n\ns1# begin transaction isolation level repeatable read;\ns1# select 1;\n\nThen s2 makes multiple updates of our toasted value.\n\ns2# update test set val = random_string(2,100000);\ns2# update test set val = random_string(3,100000);\ns2# update test set val = random_string(4,100000);\ns2# update test set val = random_string(5,100000);\ns2# update test set val = random_string(6,100000);\ns2# update test set val = random_string(7,100000);\n\nThen s3 starts vacuum full, stopping at vacuum_set_xid_limits().\n\ns3lldb# b vacuum_set_xid_limits\ns3# vacuum full test;\n\nWe step past vacuum_set_xid_limits(), making sure the old tuple versions made by\ns2 would be recently dead for vacuum full.\n\ns3lldb# finish\n\nThen s1 releases its snapshot. 
Then heap_page_prune_opt(), called from\ntoast accesses, would clean up the toast chunks which vacuum full expects\nto be recently dead.\n\ns1# commit;\n\nFinally, we continue our vacuum full and get an error!\n\ns3lldb# continue\ns3#\nERROR: unexpected chunk number 50 (expected 2) for toast value 16429\nin pg_toast_16387\n\nThe attached patch contains a dirty fix for this bug, which just prevents\nheap_page_prune_opt() from cleaning tuples when it's called from\nrebuild_relation(). Actually, it's not something I'm proposing to\ncommit or even review; it might just be a starting point for thoughts.\n\nAny ideas?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 18 Mar 2019 19:53:22 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Concurrency bug with vacuum full (cluster) and toast"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 12:53 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I've discovered bug, when vacuum full fails with error, because it\n> couldn't find toast chunks deleted by itself. That happens because\n> cluster_rel() sets OldestXmin, but toast accesses gets snapshot later\n> and independently. That causes heap_page_prune_opt() to clean chunks,\n> which rebuild_relation() expects to exist. This bug very rarely\n> happens on busy systems which actively update toast values. But I\n> found way to reliably reproduce it using debugger.\n\nBoy, I really feel like we've talked about this before. These are\nsomewhat-related discussions, but none of them are exactly the same\nthing:\n\nhttp://postgr.es/m/1335.1304187758@sss.pgh.pa.us\nhttp://postgr.es/m/20362.1359747327@sss.pgh.pa.us\nhttp://postgr.es/m/87in8nec96.fsf@news-spur.riddles.org.uk\n\nI don't know whether we've actually talked about this precise problem\nbefore and I just can't find the thread, or whether I'm confusing what\nyou've found here with some closely-related issue.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Tue, 19 Mar 2019 11:48:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Concurrency bug with vacuum full (cluster) and toast"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 6:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 18, 2019 at 12:53 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > I've discovered bug, when vacuum full fails with error, because it\n> > couldn't find toast chunks deleted by itself. That happens because\n> > cluster_rel() sets OldestXmin, but toast accesses gets snapshot later\n> > and independently. That causes heap_page_prune_opt() to clean chunks,\n> > which rebuild_relation() expects to exist. This bug very rarely\n> > happens on busy systems which actively update toast values. But I\n> > found way to reliably reproduce it using debugger.\n>\n> Boy, I really feel like we've talked about this before. These are\n> somewhat-related discussions, but none of them are exactly the same\n> thing:\n>\n> http://postgr.es/m/1335.1304187758@sss.pgh.pa.us\n> http://postgr.es/m/20362.1359747327@sss.pgh.pa.us\n> http://postgr.es/m/87in8nec96.fsf@news-spur.riddles.org.uk\n>\n> I don't know whether we've actually talked about this precise problem\n> before and I just can't find the thread, or whether I'm confusing what\n> you've found here with some closely-related issue.\n\nThank you for pointing, but none of the threads you pointed describe\nthis exact problem. Now I see this bug have a set of cute siblings :)\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Tue, 19 Mar 2019 20:37:08 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Concurrency bug with vacuum full (cluster) and toast"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 1:37 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Thank you for pointing, but none of the threads you pointed describe\n> this exact problem. Now I see this bug have a set of cute siblings :)\n\nYeah. I really thought this precise issue -- the interlocking between\nthe VACUUM of the main table and the VACUUM of the TOAST table -- had\nbeen discussed somewhere before. But I couldn't find that discussion.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Fri, 22 Mar 2019 14:27:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Concurrency bug with vacuum full (cluster) and toast"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 22, 2019 at 02:27:07PM -0400, Robert Haas wrote:\n> On Tue, Mar 19, 2019 at 1:37 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Thank you for pointing, but none of the threads you pointed describe\n> > this exact problem. Now I see this bug have a set of cute siblings :)\n> \n> Yeah. I really thought this precise issue -- the interlocking between\n> the VACUUM of the main table and the VACUUM of the TOAST table -- had\n> been discussed somewhere before. But I couldn't find that discussion.\n\nThat also describes the longstanding issue with pg_statistic / pg_toast_2619,\nno ?\n\nI think that's maybe what Robert is remembering, and searching for\npg_toast_2619 gives a good number of results (including my own problem report).\n\nIs this an \"Opened Item\" ?\n\n\n",
"msg_date": "Wed, 3 Apr 2019 10:21:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Concurrency bug with vacuum full (cluster) and toast"
},
{
"msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Fri, Mar 22, 2019 at 02:27:07PM -0400, Robert Haas wrote:\n> > On Tue, Mar 19, 2019 at 1:37 PM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > Thank you for pointing, but none of the threads you pointed describe\n> > > this exact problem. Now I see this bug have a set of cute siblings :)\n> > \n> > Yeah. I really thought this precise issue -- the interlocking between\n> > the VACUUM of the main table and the VACUUM of the TOAST table -- had\n> > been discussed somewhere before. But I couldn't find that discussion.\n> \n> That also describes the longstanding issue with pg_statistic / pg_toast_2619,\n> no ?\n> \n> I think that's maybe what Robert is remembering, and searching for\n> pg_toast_2619 gives a good number of results (including my own problem report).\n> \n> Is this an \"Opened Item\" ?\n\nIf you're referring to the v12 open items list, then, no, I wouldn't\nthink it would be as it's not a new issue (unless I've misunderstood).\nOnly regressions from prior versions are appropriate for the v12 open\nitems list, long-standing bugs/issues should be addressed and fixed, of\ncourse, but those would be fixed and then back-patched.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 3 Apr 2019 11:26:20 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Concurrency bug with vacuum full (cluster) and toast"
},
{
"msg_contents": "On Wed, Apr 03, 2019 at 11:26:20AM -0400, Stephen Frost wrote:\n> If you're referring to the v12 open items list, then, no, I wouldn't\n> think it would be as it's not a new issue (unless I've misunderstood).\n> Only regressions from prior versions are appropriate for the v12 open\n> items list, long-standing bugs/issues should be addressed and fixed, of\n> course, but those would be fixed and then back-patched.\n\nPlease no open items which do not apply directly and only to v12.\nThere is a section on the page for older bugs however, which could\nprove to be useful for this case (items listed in this section do not\nhave any impact on the release normally):\nhttps://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#Older_Bugs\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 13:15:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Concurrency bug with vacuum full (cluster) and toast"
}
] |
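Editor's note: the random_string() SQL helper from the reproduction in this thread can be mirrored in Python as a sanity check of what it produces: md5 of "seed-i", base64-encoded, 21 characters per chunk, truncated to the requested length. This translation is the editor's, not part of the patch.

```python
# Python mirror of the random_string(seed, length) SQL function used in
# the vacuum-full/toast reproduction above. Deterministic per seed.

import base64
import hashlib

def random_string(seed: int, length: int) -> str:
    chunks = []
    # generate_series(1, (length + 20) / 21) with SQL integer division
    for i in range(1, (length + 20) // 21 + 1):
        digest = hashlib.md5(("%d-%d" % (seed, i)).encode()).digest()
        # encode(decode(md5(...), 'hex'), 'base64') is base64 of the raw digest;
        # substr(..., 1, 21) keeps the first 21 characters of each chunk.
        chunks.append(base64.b64encode(digest).decode()[:21])
    return "".join(chunks)[:length]
```

Each md5 digest is 16 bytes, so its base64 form is 24 characters; trimming to 21 drops the padding, and the final truncation yields exactly the requested length.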
[
{
"msg_contents": "Hi!\n\nThis patch introduces a dummy_index access method module that does not do any \nindexing at all, but allows testing reloptions from inside of an access method \nextension.\n\nThis patch is part of my bigger work on reloptions refactoring.\n\nIt came from the\nhttps://www.postgresql.org/message-id/20190220060832.GI15532%40paquier.xyz \nthread where I suggested adding an \"enum\" reloption type, and we came to the \nconclusion that we need to test how this new option works for an access method \ncreated from an extension (it does not work in the same way as in-core access \nmethods). But we can't add this option to the bloom index, so we need an index \nextension that can be freely used for tests.\n\nSo I created src/test/modules/dummy_index; it does no real indexing, but it \nhas all types of reloptions that can be set (reloption_int, reloption_real, \nreloption_bool, reloption_string and reloption_string2). It also has a set of \nboolean GUC variables that enable test output concerning certain reloptions\n(do_test_reloption_int, do_test_reloption_real, do_test_reloption_bool, \ndo_test_reloption_string and do_test_reloption_string2); also set \ndo_test_reloptions to true to get any output at all.\nThe dummy index will print this output when the index is created and when a record is \ninserted (this is needed to check that your ALTER TABLE worked).\nThen you just use normal regression tests: turn on test output, set some \nreloption, and check in the test output that it properly reaches the access method \ninternals.\n\nWhile writing this module I kept in mind the idea that this module can also be \nused for other am-related tests, so I separated the code into two parts: \ndummy_index.c has only code related to the implementation of an empty access \nmethod, and all code related to reloptions tests was stored in \ndireloptions.c. 
So in the future somebody can add di[what_ever_he_wants].c with \nhis own test code, add the necessary calls to dummy_index.c, create some GUC \nvariables, and have his own feature tested.\n\nSo I kindly ask you to review and commit this module, so I will be able to \ncontinue my work on reloptions refactoring...\n\nThanks!",
"msg_date": "Mon, 18 Mar 2019 22:41:13 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "[PATCH] src/test/modules/dummy_index -- way to test reloptions from\n inside of access method"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 10:41:13PM +0300, Nikolay Shaplov wrote:\n> So I created src/test/modules/dummy_index, it does no real indexing, but it \n> has all types of reloptions that can be set (reloption_int, reloption_real, \n> reloption_bool, reloption_string and reloption_string2). It also has set of \n> boolean GUC variables that enables test output concerning certain reloption:\n> (do_test_reloption_int, do_test_reloption_real, do_test_reloption_bool and \n> do_test_reloption_string and do_test_reloption_string2) also set \n> do_test_reloptions to true to get any output at all.\n> Dummy index will print this output when index is created, and when record is \n> inserted (this needed to check if your ALTER TABLE did well)\n> Then you just use normal regression tests: turns on test output, sets some \n> reloption and check test output, that it properly reaches the access method \n> internals.\n\nThanks for doing the effort to split that stuff. This looks like an\ninteresting base template for anybody willing to look after some\nbasics with index AMs, like what's done for FDWs with blackhole_fdw.\nPerhaps the name should be dummy_am_index or dummy_index_am?\ndummy_index does not sound bad either.\n\n> While writing this module I kept in mind the idea that this module can be also \n> used for other am-related tests, so I separated the code into two parts: \n> dummy_index.c has only code related to implementation of an empty access \n> method, and all code related to reloptions tests were stored into \n> direloptions.c. So in future somebody can add di[what_ever_he_wants].c whith \n> his own tests code, add necessary calls to dummy_index.c, create some GUC \n> variables, and has his own feature tested.\n\nHere are some comments. I think that this could be simplified\nfurther more.\n\nThe README file could have a more consistent format with the rest.\nSee for example dummy_seclabel/README. 
You could add a small\nexample with its usage.\n\nIs there any point in having string_option2? String reloptions are\nalready tested with string_option. Also => s/Seconf/Second/.\n\ns/valudate/validate/.\n\n+-- Test behavior of second string option (there can be issues with second one)\nWhat are those issues?\n\n+ } else\n+ {\nCode format does not follow the Postgres guidelines. You could fix\nall that with an indent run.\n\nThe ranges of the different values are not tested, wouldn't it be\nbetter to test that as well?\n\nThe way the test is configured with the strong dependencies between\nthe reloption types and the GUCs is very bug-prone, I think. All of\nthat is present only to print a couple of WARNING messages with\nspecific values. So, why not remove the GUCs and the\nprinting logic which shows a subset of values? Please note that these\nare visible directly via pg_class.reloptions. So we could shave quite\nsome code.\n\nPlease note that the compilation of the module fails.\nnodes/relation.h should maybe be access/relation.h? You may want to review\nall that.\n--\nMichael",
"msg_date": "Tue, 19 Mar 2019 16:09:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "In a message of Tuesday, 19 March 2019 16:09:13 MSK, Michael \nPaquier wrote:\n\n> > So I created src/test/modules/dummy_index, it does no real indexing, but\n> > it\n> > has all types of reloptions that can be set (reloption_int,\n> > reloption_real,\n> > reloption_bool, reloption_string and reloption_string2). It also has set\n> > of\n> > boolean GUC variables that enables test output concerning certain\n> > reloption: (do_test_reloption_int, do_test_reloption_real,\n> > do_test_reloption_bool and do_test_reloption_string and \n> > do_test_reloption_string2) also set\n> > do_test_reloptions to true to get any output at all.\n> > Dummy index will print this output when index is created, and when record\n> > is inserted (this needed to check if your ALTER TABLE did well)\n> > Then you just use normal regression tests: turns on test output, sets some\n> > reloption and check test output, that it properly reaches the access\n> > method\n> > internals.\n> \n> Thanks for doing the effort to split that stuff. This looks like an\n> interesting base template for anybody willing to look after some\n> basics with index AMs, like what's done for FDWs with blackhole_fdw.\nI am not sure it is a good template. Most methods are empty, and it does not show \nany example of how it should work.\nIf I were to create a template I would try to create an index that just does a seq scan \nof the indexed values. It would have all the code an index must have, but the code of the \nindex algorithm itself would be minimal. But that is another task.\n\n> Perhaps the name should be dummy_am_index or dummy_index_am?\n> dummy_index does not sound bad either.\nActually I do not see any reason to do it; all indexes in postgres are \nimplemented as access methods, so it sounds like double naming to me. But I \nactually do not care about this name, so if you think adding _am is better, I \ndid it.\nBut I did not change the .c file names and did not change the di- prefix to dia- in the \ncode. 
Is it ok for you?\n\n> > While writing this module I kept in mind the idea that this module can be\n> > also used for other am-related tests, so I separated the code into two\n> > parts: dummy_index.c has only code related to implementation of an empty\n> > access method, and all code related to reloptions tests were stored into\n> > direloptions.c. So in future somebody can add di[what_ever_he_wants].c\n> > whith his own tests code, add necessary calls to dummy_index.c, create\n> > some GUC variables, and has his own feature tested.\n> \n> Here are some comments. I think that this could be simplified\n> further more.\n> \n> The README file could have a more consistent format with the rest.\n> See for example dummy_seclabel/README. You could add a small\n> example with its usage.\nGood notion. Fixed it.\n\n> Is there any point in having string_option2? String reloptions are\n> already tested with string_option.\nThere are two reasons for that:\n1. We should test both the behavior with a validation function and without one. For \nthis we need two options, because we can change this at runtime.\n2. The implementation of string options is a bit tricky. It allocates some \nmore memory after the Option structure, and string values are placed there. It \nworks well with one string option, but I was not sure that it works properly \nfor two of them. I can imagine a bug that will show itself only with a second \noption. So we should test two anyway. \n\n> Also => s/Seconf/Second/.\n> s/valudate/validate/.\nThanks. I tried my best with aspell, but still missed something.\n\n> +-- Test behavior of second string option (there can be issues with second\n> one) What are those issues?\nThese issues are listed in the README. And I've also written about them above. To prevent \nconfusion I've removed this issue notion. :-) Anyone who wants to know more can \nread the README file ;-)\n \n> + } else\n> + {\n> Code format does not follow the Postgres guidelines. 
You could fix\n> all that with an indent run.\nOops, it's my favorite code style; I fall back to it when I don't pay \nattention. I've reindented the code, a good idea. Should have come to it myself....\n\n> The ranges of the different values are not tested, wouldn't it be\n> better to test that as well?\n\nMy idea was to test only things that can't be tested in regression tests. \nRanges are tested in regression tests (I also wrote those tests) and it is \nbetter to leave it there.\n\nBut the question is good, I will mention it in the README file, to make it \nclear....\n\n> The way the test is configured with the strong dependencies between\n> the reloption types and the GUCs is very bug-prone, I think. All of\n> that is present only to print a couple of WARNING messages with\n> specific values. So, why not remove the GUCs and the\n> printing logic which shows a subset of values? \nI am afraid that we will get a mess that works well, but where it would be \ndifficult for a human to find any logic in the output. And sooner or later we \nwill need it, when something goes wrong and somebody tries to find out \nwhy.\nSo it is better to test one option at a time, and that's why I mute the test output \nfor the other options.\n\n> Please note that these\n> are visible directly via pg_class.reloptions. So we could shave quite\n> some code.\nValues from pg_class are well tested in the regression tests. My point here is to \ncheck that they reach the index internals as expected. And there is a long way \nbetween pg_class.reloptions and index internals.\n\n> Please note that the compilation of the module fails.\n> nodes/relation.h should maybe be access/relation.h? You may want to review\n> all that.\nHm... I do not quite understand how it got there and why it worked for me \nbefore. But I changed it to nodes/pathnodes.h. It was there because it is needed \nfor the PlannerInfo symbol. \n\nPS. Sorry for the long delays. I do not always have much time for postgres...",
"msg_date": "Wed, 03 Apr 2019 21:54:13 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "Hi Nikolay,\n\n> On 3 Apr 2019, at 20:54, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> \n> In a message of Tuesday, 19 March 2019 16:09:13 MSK, Michael \n> Paquier wrote:\n> \n>> Thanks for doing the effort to split that stuff. This looks like an\n>> interesting base template for anybody willing to look after some\n>> basics with index AMs, like what's done for FDWs with blackhole_fdw.\n> I am not sure it is good template. Most methods are empty, and does not show \n> any example of how it should work.\n\nI think it would probably not be a good template — not as a solid starting point.\n\nThere is value in having something that has all the relevant method signatures, just to save someone the bother of crawling docs, or scraping other contrib/ examples for copy/paste snippets. But I think it should really be a different thing. It would be a distraction to litter such a template with custom reloptions clutter.\n\nI guess that assumes it is possible to create a realistic AM without configurable options. I’m guessing it should be. But perhaps such situations are rarer than I imagine…?\n\nBetter than an empty template, though, would be a concrete, but minimal, implementation of an INDEX/AM. I find it difficult to see how you get something clear and concise, while trying to simultaneously serve both INDEX/AM template and reloptions testing needs.\n\n>> Please note that these\n>> are visible directly via pg_class.reloptions. So we could shave quite\n>> some code.\n> Values from pg_class are well tested in regression test. My point here is to \n> check that they reach index internal as expected. And there is a long way \n> between pg_class.reloptions and index internals.\n\nI had the same thought. 
But on quick inspection — and perhaps I have missed something — I don’t see that /custom/ reloptions are really tested at all by the regression tests.\n\nSo I do think verifying an extension’s custom reloptions exposure would be valuable.\n\nI guess you might argue that it’s the regression test suite that should properly test that exposure mechanism. I kind of agree. :-) But I think that argument falls for similar reasons you cite for your initiative — i.e., it’s basically pretty hard to set up the situation where any kind of custom reloption would ever be reported.\n\nHope that is useful feedback.\n\ndenty.\n\n",
"msg_date": "Thu, 27 Jun 2019 00:17:06 +0200",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Thu, Jun 27, 2019 at 10:17 AM Dent John <denty@qqdd.eu> wrote:\n> > On 3 Apr 2019, at 20:54, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> > В письме от вторник, 19 марта 2019 г. 16:09:13 MSK пользователь Michael\n> > Paquier написал:\n> >\n> >> Thanks for doing the effort to split that stuff. This looks like an\n> >> interesting base template for anybody willing to look after some\n> >> basics with index AMs, like what's done for FDWs with blackhole_fdw.\n> > I am not sure it is good template. Most methods are empty, and does not show\n> > any example of how it should work.\n>\n> [review]\n\nHi Nikolay,\n\nWhile moving this to the September CF, I noticed this failure:\n\ntest reloptions ... FAILED 32 ms\n\n--- /home/travis/build/postgresql-cfbot/postgresql/src/test/modules/dummy_index_am/expected/reloptions.out\n2019-08-01 08:06:16.580197980 +0000\n+++ /home/travis/build/postgresql-cfbot/postgresql/src/test/modules/dummy_index_am/results/reloptions.out\n2019-08-01 08:11:57.817493999 +0000\n@@ -13,12 +13,14 @@\n CREATE INDEX test_idx ON tst USING dummy_index_am (i) WITH (int_option = 5);\n WARNING: int_option = 5\n ALTER INDEX test_idx SET (int_option = 3);\n+ERROR: unrecognized lock mode: 2139062143\n INSERT INTO tst VALUES(1);\n-WARNING: int_option = 3\n+WARNING: int_option = 5\n ALTER INDEX test_idx SET (bool_option = false);\n ALTER INDEX test_idx RESET (int_option);\n+ERROR: unrecognized lock mode: 2139062143\n INSERT INTO tst VALUES(1);\n-WARNING: int_option = 10\n+WARNING: int_option = 5\n DROP INDEX test_idx;\n SET dummy_index.do_test_reloption_int to false;\n -- Test behavior of real option (default and non default values)\n@@ -48,9 +50,10 @@\n INSERT INTO tst VALUES(1);\n WARNING: bool_option = 1\n ALTER INDEX test_idx SET (int_option = 5, bool_option = false);\n+ERROR: unrecognized lock mode: 2139062143\n ALTER INDEX test_idx RESET (bool_option);\n INSERT INTO tst VALUES(1);\n-WARNING: bool_option = 1\n+WARNING: No reloptions is set, 
default values will be chosen in module runtime\n DROP INDEX test_idx;\n SET dummy_index.do_test_reloption_bool to false;\n -- Test behavior of string option (default and non default values + validate\n@@ -68,12 +71,12 @@\n WARNING: Validating string option 'Valid_value'\n WARNING: string_option = 'Valid_value'\n ALTER INDEX test_idx SET (string_option = \"Valid_value_2\", int_option = 5);\n-WARNING: Validating string option 'Valid_value_2'\n+ERROR: unrecognized lock mode: 2139062143\n INSERT INTO tst VALUES(1);\n-WARNING: string_option = 'Valid_value_2'\n+WARNING: string_option = 'Valid_value'\n ALTER INDEX test_idx RESET (string_option);\n INSERT INTO tst VALUES(1);\n-WARNING: string_option = 'DefaultValue'\n+WARNING: No reloptions is set, default values will be chosen in module runtime\n DROP INDEX test_idx;\n SET dummy_index.do_test_reloption_string to false;\n -- Test behavior of second string option\n@@ -87,11 +90,12 @@\n \"Some_value\");\n WARNING: string_option2 = 'Some_value'\n ALTER INDEX test_idx SET (string_option2 = \"Valid_value_2\", int_option = 5);\n+ERROR: unrecognized lock mode: 2139062143\n INSERT INTO tst VALUES(1);\n-WARNING: string_option2 = 'Valid_value_2'\n+WARNING: string_option2 = 'Some_value'\n ALTER INDEX test_idx RESET (string_option2);\n INSERT INTO tst VALUES(1);\n-WARNING: string_option2 = 'SecondDefaultValue'\n+WARNING: No reloptions is set, default values will be chosen in module runtime\n DROP INDEX test_idx;\n SET dummy_index.do_test_reloption_string2 to false;\n SET dummy_index.do_test_reloptions to false;\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Aug 2019 11:12:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Fri, 2 Aug 2019 11:12:35 +1200\nThomas Munro <thomas.munro@gmail.com> wrote:\n \n> While moving this to the September CF, I noticed this failure:\n> \n> test reloptions ... FAILED 32 ms\n\nDo you have any idea how to reproduce this? I tried this patch on\ncurrent master, and did not get the result you are talking about.\nIs it still there for you, BTW?\n\n\n\n",
"msg_date": "Wed, 18 Sep 2019 22:57:52 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 7:58 AM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> В Fri, 2 Aug 2019 11:12:35 +1200\n> Thomas Munro <thomas.munro@gmail.com> пишет:\n>\n> > While moving this to the September CF, I noticed this failure:\n> >\n> > test reloptions ... FAILED 32 ms\n>\n> Do you have any idea, how to reproduce this? I tried this patch on\n> current master, and did not get result you are talking about.\n> Is it still there for you BTW?\n\nHi Nikolay,\n\nYeah, it's still happening on Travis:\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/586714100\n\nAlthough the \"reloptions\" tests passes when it's run as part of the\nregular test schedule (ie \"make check\"), the patch also runs it from\nsrc/test/modules/dummy_index_am/Makefile (\"REGRESS = reloptions\"), and\nwhen it runs in that context it fails. Cfbot is simply running \"make\ncheck-world\".\n\nLet's see if I can see this on my Mac... yep, with \"make -C\nsrc/test/modules/dummy_index_am directory check\" I see a similar\nfailure with \"ERROR: unrecognized lock mode: 2139062143\". 
I changed\nthat to a PANIC and got a core file containing this stack:\n\n frame #4: 0x00000001051e6572 postgres`elog_finish(elevel=22,\nfmt=\"unrecognized lock mode: %d\") at elog.c:1365:2\n frame #5: 0x0000000104ff033a\npostgres`LockAcquireExtended(locktag=0x00007ffeeb14bc28,\nlockmode=2139062143, sessionLock=false, dontWait=false,\nreportMemoryError=true, locallockp=0x00007ffeeb14bc20) at lock.c:756:3\n frame #6: 0x0000000104fedaed postgres`LockRelationOid(relid=16397,\nlockmode=2139062143) at lmgr.c:116:8\n frame #7: 0x0000000104c056f2\npostgres`RangeVarGetRelidExtended(relation=0x00007fbd0f000b58,\nlockmode=2139062143, flags=0,\ncallback=(postgres`RangeVarCallbackForAlterRelation at\ntablecmds.c:14834), callback_arg=0x00007fbd0f000d60) at\nnamespace.c:379:4\n frame #8: 0x0000000104d4b14d\npostgres`AlterTableLookupRelation(stmt=0x00007fbd0f000d60,\nlockmode=2139062143) at tablecmds.c:3445:9\n frame #9: 0x000000010501ff8b\npostgres`ProcessUtilitySlow(pstate=0x00007fbd10800d18,\npstmt=0x00007fbd0f0010b0, queryString=\"ALTER INDEX test_idx SET\n(int_option = 3);\", context=PROCESS_UTILITY_TOPLEVEL,\nparams=0x0000000000000000, queryEnv=0x0000000000000000,\ndest=0x00007fbd0f0011a0, completionTag=\"\") at utility.c:1111:14\n frame #10: 0x000000010501f480\npostgres`standard_ProcessUtility(pstmt=0x00007fbd0f0010b0,\nqueryString=\"ALTER INDEX test_idx SET (int_option = 3);\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0000000000000000,\nqueryEnv=0x0000000000000000, dest=0x00007fbd0f0011a0,\ncompletionTag=\"\") at utility.c:927:4\n\nAlterTableGetLockLevel() returns that crazy lockmode value, becase it\ncalls AlterTableGetRelOptionsLockLevel(), I suspect with a garbage\ndefList, but I didn't dig further.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Sep 2019 10:51:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 10:51:09AM +1200, Thomas Munro wrote:\n> Let's see if I can see this on my Mac... yep, with \"make -C\n> src/test/modules/dummy_index_am directory check\" I see a similar\n> failure with \"ERROR: unrecognized lock mode: 2139062143\". I changed\n> that to a PANIC and got a core file containing this stack:\n\nA simple make check on the module reproduces the failure, so that's\nhard to miss.\n\nFrom what I can see it is not a problem caused directly by this\nmodule, specifically knowing that this test module is actually copying\nwhat bloom is doing when declaring its reloptions. Take this example:\nCREATE EXTENSION bloom;\nCREATE TABLE tbloom AS\n SELECT\n (random() * 1000000)::int as i1,\n (random() * 1000000)::int as i2,\n (random() * 1000000)::int as i3,\n (random() * 1000000)::int as i4,\n (random() * 1000000)::int as i5,\n (random() * 1000000)::int as i6\n FROM\n generate_series(1,100);\nCREATE INDEX bloomidx ON tbloom USING bloom (i1,i2,i3)\n WITH (length=80, col1=2, col2=2, col3=4);\nALTER INDEX bloomidx SET (length=100);\n\nAnd then you get that:\nERROR: XX000: unrecognized lock mode: 2139062143\nLOCATION: LockAcquireExtended, lock.c:756\n\nSo the options are registered in the relOpts array managed by\nreloptions.c but the data is not properly initialized. Hence when\nlooking at the lock needed we have an option match, but the lock\nnumber is incorrect, causing the failure. 
It looks like there is no\ndirect way to enforce the lockmode used for a reloption added via\nadd_int_reloption which does the allocation to add the option to\nadd_reloption(), but enforcing the value to be initialized fixes the\nissue:\n--- a/src/backend/access/common/reloptions.c\n+++ b/src/backend/access/common/reloptions.c\n@@ -658,6 +658,7 @@ allocate_reloption(bits32 kinds, int type, const\nchar *name, const char *desc)\n newoption->kinds = kinds;\n newoption->namelen = strlen(name);\n newoption->type = type;\n+ newoption->lockmode = AccessExclusiveLock;\n MemoryContextSwitchTo(oldcxt);\n\nI would think that initializing that to a sane default is something\nthat we should do anyway, or is there some trick allowing the\nmanipulation of relOpts I am missing? Changing the relopts APIs in\nback-branches is a no-go of course, but we could improve that for\n13~.\n\nWhile reading through the code, I found some extra issues... Here are\nsome comments about them.\n\n+++ b/src/test/modules/dummy_index_am/Makefile\n@@ -0,0 +1,21 @@\n+# contrib/bloom/Makefile\nIncorrect copy-paste here.\n\n+extern IndexBulkDeleteResult *dibulkdelete(IndexVacuumInfo *info,\n+ IndexBulkDeleteResult *stats, IndexBulkDeleteCallback callback,\n+ void *callback_state);\nAll the routines defining the index AM can just be static, so there is\nno point in complicating dummy_index.h with most of its contents.\n\nSome routines are missing a (void) in their declaration when the\nroutines have no arguments. This can cause warnings.\n\nNot sure I see the point of the various GUCs with the use of WARNING\nmessages to check the sanity of the parameters. I find that awkward.\n--\nMichael",
"msg_date": "Thu, 19 Sep 2019 17:32:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "In a message of Thursday, 19 September 2019 17:32:03 MSK, Michael \nPaquier wrote:\n\n> > src/test/modules/dummy_index_am directory check\" I see a similar\n> > failure with \"ERROR: unrecognized lock mode: 2139062143\". I changed\n> \n> > that to a PANIC and got a core file containing this stack:\n> A simple make check on the module reproduces the failure, so that's\n> hard to miss.\nFor some reason it does not reproduce on my dev environment, but it is not really \nimportant, since the core of the problem is found.\n> \n> From what I can see it is not a problem caused directly by this\n> module, specifically knowing that this test module is actually copying\n> what bloom is doing when declaring its reloptions. Take this example:\n> CREATE EXTENSION bloom;\n> CREATE TABLE tbloom AS\n> SELECT\n> (random() * 1000000)::int as i1,\n> (random() * 1000000)::int as i2,\n> (random() * 1000000)::int as i3,\n> (random() * 1000000)::int as i4,\n> (random() * 1000000)::int as i5,\n> (random() * 1000000)::int as i6\n> FROM\n> generate_series(1,100);\n> CREATE INDEX bloomidx ON tbloom USING bloom (i1,i2,i3)\n> WITH (length=80, col1=2, col2=2, col3=4);\n> ALTER INDEX bloomidx SET (length=100);\n> \n> And then you get that:\n> ERROR: XX000: unrecognized lock mode: 2139062143\n> LOCATION: LockAcquireExtended, lock.c:756\n> \n> So the options are registered in the relOpts array managed by\n> reloptions.c but the data is not properly initialized. Hence when\n> looking at the lock needed we have an option match, but the lock\n> number is incorrect, causing the failure. 
It looks like there is no\n> direct way to enforce the lockmode used for a reloption added via\n> add_int_reloption which does the allocation to add the option to\n> add_reloption(), but enforcing the value to be initialized fixes the\n> issue:\n> --- a/src/backend/access/common/reloptions.c\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -658,6 +658,7 @@ allocate_reloption(bits32 kinds, int type, const\n> char *name, const char *desc)\n> newoption->kinds = kinds;\n> newoption->namelen = strlen(name);\n> newoption->type = type;\n> + newoption->lockmode = AccessExclusiveLock;\n> MemoryContextSwitchTo(oldcxt);\n\nWhat a good catch! dummy_index already proved to be useful ;-)\n\n\n> I would think that initializing that to a sane default is something\n> that we should do anyway, or is there some trick allowing the\n> manipulation of relOpts I am missing? \n\nYes, I think AccessExclusiveLock is quite a good default. Especially in \nthe case when these options are not really used in the real world ;-)\n\n> Changing the relopts APIs in\n> back-branches is a no-go of course, but we could improve that for\n> 13~.\n\nAs you know, I have plans for rewriting the options engine, and there would be the same \noptions code both for core Access Methods and for options for AMs from \nextensions. So there would be an API for setting lockmode...\nBut the way it is going right now, I am not sure it will be reviewed in time to reach \n13...\n\n\nPS. Michael, who will submit this lock mode patch? I hope you will do it? It \nshould go separately from dummy_index for sure...\n\n\n",
"msg_date": "Thu, 19 Sep 2019 14:13:23 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 02:13:23PM +0300, Nikolay Shaplov wrote:\n> What a good catch! dummy_index already proved to be useful ;-)\n\nYes, the testing around custom reloptions is rather poor now, so this\nmodule has value. I still don't like its shape much though, so I\nbegan hacking on it for integration, and I wanted to get that part\ndone in this CF :)\n\nThere may be other issues, but let's sort that out later if anything\nshows up.\n\n> Yes, I think AccessExclusiveLock is quite a good default. Especially in \n> the case when these options are not really used in the real world ;-)\n\nI guess so, but with table AMs introduced in 12, I would suspect that\nwe are going to have many more use cases popping up, and that these\nuse cases would be very happy to have the possibility to lower the\nlock level needed to set a custom reloption. I would like to get that\nfixed and back-patched separately. As it is not especially clear for\neverybody here in a thread dedicated to a test module that we are\ndiscussing a backend-side bug, I am going to spawn a new thread\nwith a proper patch. Perhaps I missed something as well, so it would\nbe good to get more input on that.\n\n> As you know, I have plans for rewriting the options engine, and there would be the same \n> options code both for core Access Methods and for options for AMs from \n> extensions. So there would be an API for setting lockmode...\n> But the way it is going right now, I am not sure it will be reviewed in time to reach \n> 13...\n\nWell, another thing would be to extend the existing routines so that\nthey take an extra argument to be able to enforce the lockmode, which\nis something that can be done without a large rewrite of the whole\nfacility, and the change is less invasive so it would have better\nchances to get into core. 
I don't mind changing those APIs on HEAD by\nthe way as long as the breakage involves a clean compilation failure\nand I don't think they are the most popular extension APIs ever.\nPerhaps others don't have the same line of thoughts, but let's see.\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 09:16:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 09:16:58AM +0900, Michael Paquier wrote:\n> On Thu, Sep 19, 2019 at 02:13:23PM +0300, Nikolay Shaplov wrote:\n>> What a good catch! dummy_index already proved to be useful ;-)\n> \n> Yes, the testing around custom reloptions is rather poor now, so this\n> module has value. I still don't like its shape much though, so I\n> began hacking on it for integration, and I wanted to get that part\n> done in this CF :)\n\nSo... I have looked at the patch upthread in detail, and as I\nsuspected the module is over-designed. First, on HEAD the coverage of\nreloptions.c is 86.6%, with your patch we get to 94.1%, and with the\nattached I reach 95.1% thanks to the addition of a string parameter\nwith a NULL default value and a NULL description, for roughly half the\ncode size.\n\nThe GUCs are also basically not necessary, as you can just replace the\nvarious WARNING calls (please don't call elog on anything which can be\nreached by the user!) with lookups at reloptions in pg_class. Once this\nis removed, the whole code gets simpler, and there is no point in\nhaving either a separate header or a different set of files, and the\nsize of the whole module gets really reduced.\n\nI still need to do an extra pass on the code (particularly the AM\npart), but I think that we could commit that. Please note that I\nincluded the fix for the lockmode I sent today so that the patch can be\ntested:\nhttps://www.postgresql.org/message-id/20190920013831.GD1844@paquier.xyz\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 20 Sep 2019 20:58:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Fri, Sep 20, 2019 at 08:58:27PM +0900, Michael Paquier wrote:\n> I still need to do an extra pass on the code (particularly the AM\n> part), but I think that we could commit that. Please note that I\n> included the fix for the lockmode I sent today so that the patch can be\n> tested:\n> https://www.postgresql.org/message-id/20190920013831.GD1844@paquier.xyz\n\nI looked at that over the last couple of days, and it is done as attached.\nWell, the actual module is in 0003. I have added more comments to\ndocument the basic AM calls so it can more easily be used as a template\nfor some other work, and tweaked a couple of things. 0001 and 0002\nare just the patches from the other thread to address the issues with\nthe lock mode of custom reloptions.\n--\nMichael",
"msg_date": "Tue, 24 Sep 2019 11:39:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On 2019-Sep-24, Michael Paquier wrote:\n\n> I looked at that over the last couple of days, and it is done as attached.\n> Well, the actual module is in 0003. I have added more comments to\n> document the basic AM calls so it can more easily be used as a template\n> for some other work, and tweaked a couple of things. 0001 and 0002\n> are just the patches from the other thread to address the issues with\n> the lock mode of custom reloptions.\n\n0003 looks useful, thanks for completing it. I think it would be a good\nidea to test invalid values for each type of reloption too (passing\nfloating point to integers, strings to floating point, and so on).\nIf you can get this pushed, I'll push the enum reloptions on top.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 09:25:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Fri, 20 Sep 2019 20:58:27 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n\nSorry, I am really slow to answer... I hope it is not too late.\n\n> The GUCs are also basically not necessary, as you can just replace the\n> various WARNING calls (please don't call elog on anything which can be\n> reached by the user!) with lookups at reloptions in pg_class. Once this\n> is removed, the whole code gets simpler, and there is no point in\n> having either a separate header or a different set of files, and the\n> size of the whole module gets really reduced.\n\nReading options from pg_class is not a good idea. We already do this in\nthe reloption regression test. Here the thing is almost the same...\n\nMy point of testing was to read these values from the bytea right from\ninside the module. This is not exactly the same value as in pg_class.\nIt _should_ be the same. But nobody promised it _is_ the same. That is\nwhy I was reading it right from the reloptions' in-memory bytea, the same way\nreal access methods do it.\n\nAnd then we came to the GUC variables. Because if we have five reloptions\nand we print them all each time we change something, there would be\nquite a huge output.\nIt is ok when everything goes well. Comparing with 'expected' is cheap.\nBut if something goes wrong, then it would be very difficult to find\nthe proper place in this output to deal with it.\nSo I created GUCs so we can get only one output in a row, not a whole\nbunch.\n\nThese are my points.\n\n\n\n\n\n",
"msg_date": "Tue, 24 Sep 2019 16:49:11 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "In a message of Tuesday, 24 September 2019 9:25:54 MSK, Alvaro \nHerrera wrote:\n\n> 0003 looks useful, thanks for completing it. I think it would be a good\n> idea to test invalid values for each type of reloption too (passing\n> floating point to integers, strings to floating point, and so on).\n\nWe already do it in reloption regression tests.\n\nMy idea was to test here only the things that can't be tested in regression \ntests, or in real indexes like bloom.\n\n\n\n\n",
"msg_date": "Tue, 24 Sep 2019 18:20:31 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On 2019-Sep-24, Nikolay Shaplov wrote:\n\n> In a message of Tuesday, 24 September 2019 9:25:54 MSK, Alvaro \n> Herrera wrote:\n> \n> > 0003 looks useful, thanks for completing it. I think it would be a good\n> > idea to test invalid values for each type of reloption too (passing\n> > floating point to integers, strings to floating point, and so on).\n> \n> We already do it in reloption regression tests.\n> \n> My idea was to test here only the things that can't be tested in regression \n> tests, or in real indexes like bloom.\n\nI suppose that makes sense. But of course when I push enum reloptions\nI will have to add such a test, since bloom does not have one.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 12:38:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 04:49:11PM +0300, Nikolay Shaplov wrote:\n> And then we came to the GUC variables. Because if we have five reloptions\n> and we print them all each time we change something, there would be\n> quite a huge output.\n\nWell, that depends on how you design your tests. The first versions\nof the patch overdid it, and those GUCs have IMO little place in a\nmodule aimed as well at being a minimized referential template focused on\ntesting some portions of the backend code.\n\n> It is ok when everything goes well. Comparing with 'expected' is cheap.\n> But if something goes wrong, then it would be very difficult to find\n> the proper place in this output to deal with it.\n> So I created GUCs so we can get only one output in a row, not a whole\n> bunch.\n\nI am still not convinced that this is worth the complication. Your\npoint is that you want to make *on-demand* and *user-visible* the set\nof options stored in rd_options after filling in the relation options\nusing the static table used in the AM.\n\nOne way to do that could be to have a simple wrapper function which\ncould be called at SQL level to do those checks, or you could issue a\nNOTICE with all the data filled in amoptions() or even ambuild(),\nthough the former makes the most sense as we fill in the options\nthere.\n\nOne thing that I think would add value in the module would be to show how\na custom string option can be properly parsed when making some\ndecisions in the AM. 
Now we store an offset in the static table, and\none needs to do a small dance with it to fetch the actual option\nvalue.\n\nThis can be guessed easily as for example gist has a string option\nwith \"buffering\", but we could document that better in the dummy\ntemplate, say like that:\n@@ -206,6 +210,15 @@ dioptions(Datum reloptions, bool validate)\n fillRelOptions((void *) rdopts, sizeof(DummyIndexOptions), options, numoptions,\n validate, di_relopt_tab, lengthof(di_relopt_tab));\n\n+ option_string_val = (char *) rdopts + rdopts->option_string_val_offset;\n+ option_string_null = (char *) rdopts + rdopts->option_string_null_offset;\n+ ereport(NOTICE,\n+ (errmsg(\"table option_int %d, option_real %f, option_bool %s, \"\n+ \"option_string_val %s, option_option_null %s\",\n+ rdopts->option_int, rdopts->option_real,\n+ rdopts->option_bool ? \"true\" : \"false\",\n+ option_string_val ? option_string_val : \"NULL\",\n+ option_string_null ? option_string_null : \"NULL\")));\n\nThe patch I have in my hands now is already doing a lot, so I am\ndiscarding that part for now. And we can easily improve it\nincrementally.\n\n(One extra thing which is also annoying with the current interface is\nthat we don't actually pass down the option name within the validator\nfunction for string options like GUCs, so you cannot know on which\noption you work on if a module generates logs, I'll send an extra\npatch for that on a separate thread.)\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 11:15:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
},
{
"msg_contents": "On Tue, Sep 24, 2019 at 12:38:30PM -0300, Alvaro Herrera wrote:\n> On 2019-Sep-24, Nikolay Shaplov wrote:\n>> In a message of Tuesday, 24 September 2019 9:25:54 MSK, Alvaro \n>> Herrera wrote:\n>>> 0003 looks useful, thanks for completing it. I think it would be a good\n>>> idea to test invalid values for each type of reloption too (passing\n>>> floating point to integers, strings to floating point, and so on).\n>> \n>> We already do it in reloption regression tests.\n>> \n>> My idea was to test here only the things that can't be tested in regression \n>> tests, or in real indexes like bloom.\n> \n> I suppose that makes sense. But of course when I push enum reloptions\n> I will have to add such a test, since bloom does not have one.\n\nGood point. We now rely on the GUC parsing for reloptions, so having\ncross-checks about what patterns are allowed or not is a good idea for\nall reloption types. I have added all that, and committed the\nmodule. The amount of noise generated by the string validator routine\nwas a bit annoying, so I have silenced the messages where they don't really\nmatter (basically everything except the initial creation).\n--\nMichael",
"msg_date": "Wed, 25 Sep 2019 12:13:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] src/test/modules/dummy_index -- way to test reloptions\n from inside of access method"
}
]
[
{
"msg_contents": "While poking around trying to find an explanation for the pg_upgrade\nfailure described here:\nhttps://www.postgresql.org/message-id/flat/CACmJi2JUhGo2ZxqDkh-EPHNjEN1ZA1S64uHLJFWHBhUuV4492w%40mail.gmail.com\nI noticed a few things that seem a bit fishy about pg_upgrade.\nI can't (yet) connect any of these to Tomasz' problem, but:\n\n1. check_bin_dir() does validate_exec() for pg_dumpall and pg_dump,\nbut not for pg_restore, though pg_upgrade surely calls that too.\nFor that matter, it's not validating initdb and vacuumdb, though\nit's grown dependencies on those as well. Seems like there's little\npoint in checking these if we're not going to check all of them.\n\n2. check_cluster_versions() insists that the target version be the\nsame major version as pg_upgrade itself, but is that really good enough?\nAs things stand, it looks like pg_upgrade 11.3 would happily use pg_dump\n11.1, or vice versa. With this rule, we cannot safely make any fixes\nin minor releases that rely on synchronized changes in the behavior of\npg_upgrade and pg_dump/pg_dumpall/pg_restore. I've not gone looking\nto see if we've already made such changes in the past, but even if we\nnever have, that's a rather tight-looking straitjacket. I think we\nshould insist that the new_cluster.bin_version be an exact match\nto pg_upgrade's own PG_VERSION_NUM.\n\n3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\noption at all, rather than just insisting on finding the new-version\nexecutables in the same directory it is in. This seems like, at best,\na hangover from before it got into core. Even if you don't want to\nremove the option, we could surely provide a useful default setting\nbased on find_my_exec. (I'm amused to notice that pg_upgrade\ncurrently takes the trouble to find out its own path, and then does\nprecisely nothing with the information.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 18 Mar 2019 19:35:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pg_upgrade version checking questions"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 07:35:17PM -0400, Tom Lane wrote:\n> While poking around trying to find an explanation for the pg_upgrade\n> failure described here:\n> https://www.postgresql.org/message-id/flat/CACmJi2JUhGo2ZxqDkh-EPHNjEN1ZA1S64uHLJFWHBhUuV4492w%40mail.gmail.com\n> I noticed a few things that seem a bit fishy about pg_upgrade.\n> I can't (yet) connect any of these to Tomasz' problem, but:\n> \n> 1. check_bin_dir() does validate_exec() for pg_dumpall and pg_dump,\n> but not for pg_restore, though pg_upgrade surely calls that too.\n> For that matter, it's not validating initdb and vacuumdb, though\n> it's grown dependencies on those as well. Seems like there's little\n> point in checking these if we're not going to check all of them.\n\nYes, adding those checks would be nice. I guess I never suspected there\nwould be mixed-version binaries in that directory.\n\n> 2. check_cluster_versions() insists that the target version be the\n> same major version as pg_upgrade itself, but is that really good enough?\n> As things stand, it looks like pg_upgrade 11.3 would happily use pg_dump\n> 11.1, or vice versa. With this rule, we cannot safely make any fixes\n> in minor releases that rely on synchronized changes in the behavior of\n> pg_upgrade and pg_dump/pg_dumpall/pg_restore. I've not gone looking\n> to see if we've already made such changes in the past, but even if we\n> never have, that's a rather tight-looking straitjacket. I think we\n> should insist that the new_cluster.bin_version be an exact match\n> to pg_upgrade's own PG_VERSION_NUM.\n\nAgain, I never considered minor-version changes, so yeah, forcing minor\nversion matching makes sense.\n\n> 3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\n> option at all, rather than just insisting on finding the new-version\n> executables in the same directory it is in. This seems like, at best,\n> a hangover from before it got into core. 
Even if you don't want to\n> remove the option, we could surely provide a useful default setting\n> based on find_my_exec. (I'm amused to notice that pg_upgrade\n> currently takes the trouble to find out its own path, and then does\n> precisely nothing with the information.)\n\nGood point. You are right that when it was outside of the source tree,\nand even in /contrib, that would not have worked easily. Makes sense to\nat least default to the same directory as pg_upgrade.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Tue, 19 Mar 2019 02:43:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 02:43:49AM -0400, Bruce Momjian wrote:\n> > 3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\n> > option at all, rather than just insisting on finding the new-version\n> > executables in the same directory it is in. This seems like, at best,\n> > a hangover from before it got into core. Even if you don't want to\n> > remove the option, we could surely provide a useful default setting\n> > based on find_my_exec. (I'm amused to notice that pg_upgrade\n> > currently takes the trouble to find out its own path, and then does\n> > precisely nothing with the information.)\n> \n> Good point. You are right that when it was outside of the source tree,\n> and even in /contrib, that would not have worked easily. Makes sense to\n> at least default to the same directory as pg_upgrade.\n\nI guess an open question is whether we should remove the --new-bindir\noption completely.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n",
"msg_date": "Tue, 19 Mar 2019 02:55:30 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On Tuesday, March 19, 2019 7:55 AM, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Mar 19, 2019 at 02:43:49AM -0400, Bruce Momjian wrote:\n>\n> > > 3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\n> > > option at all, rather than just insisting on finding the new-version\n> > > executables in the same directory it is in. This seems like, at best,\n> > > a hangover from before it got into core. Even if you don't want to\n> > > remove the option, we could surely provide a useful default setting\n> > > based on find_my_exec. (I'm amused to notice that pg_upgrade\n> > > currently takes the trouble to find out its own path, and then does\n> > > precisely nothing with the information.)\n> > >\n> >\n> > Good point. You are right that when it was outside of the source tree,\n> > and even in /contrib, that would not have worked easily. Makes sense to\n> > at least default to the same directory as pg_upgrade.\n>\n> I guess an open question is whether we should remove the --new-bindir\n> option completely.\n\nIf the default is made to find the new-version binaries in the same directory,\nkeeping --new-bindir could still be useful for easier testing of pg_upgrade.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 19 Mar 2019 13:00:50 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 2019-03-19 00:35, Tom Lane wrote:\n> 2. check_cluster_versions() insists that the target version be the\n> same major version as pg_upgrade itself, but is that really good enough?\n> As things stand, it looks like pg_upgrade 11.3 would happily use pg_dump\n> 11.1, or vice versa.\n\nI'd hesitate to tie this down too much. It's possible that either the\nclient or the server package cannot currently be upgraded because of\nsome other dependencies. In fact, a careful packager might as a result\nof a change like this tie the client and server packages together with\nan exact version match. This has the potential to make the global\ndependency hell worse.\n\n> 3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\n> option at all, rather than just insisting on finding the new-version\n> executables in the same directory it is in. This seems like, at best,\n> a hangover from before it got into core. Even if you don't want to\n> remove the option, we could surely provide a useful default setting\n> based on find_my_exec.\n\nPreviously discussed here:\nhttps://www.postgresql.org/message-id/flat/1304710184.28821.9.camel%40vanquo.pezone.net\n (Summary: right)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Mar 2019 16:16:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-03-19 00:35, Tom Lane wrote:\n>> 2. check_cluster_versions() insists that the target version be the\n>> same major version as pg_upgrade itself, but is that really good enough?\n>> As things stand, it looks like pg_upgrade 11.3 would happily use pg_dump\n>> 11.1, or vice versa.\n\n> I'd hesitate to tie this down too much. It's possible that either the\n> client or the server package cannot currently be upgraded because of\n> some other dependencies. In fact, a careful packager might as a result\n> of a change like this tie the client and server packages together with\n> an exact version match. This has the potential to make the global\n> dependency hell worse.\n\nI'm not really getting your point here. Packagers ordinarily tie\nthose versions together anyway, I'd expect --- there's no upside\nto not doing so, and plenty of risk if one doesn't, because of\nexactly the sort of coordinated-changes hazard I'm talking about here.\n\n>> 3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\n>> option at all, rather than just insisting on finding the new-version\n>> executables in the same directory it is in. This seems like, at best,\n>> a hangover from before it got into core. Even if you don't want to\n>> remove the option, we could surely provide a useful default setting\n>> based on find_my_exec.\n\n> Previously discussed here:\n> https://www.postgresql.org/message-id/flat/1304710184.28821.9.camel%40vanquo.pezone.net\n> (Summary: right)\n\nMmm. The point that a default is of no particular use to scripts is\nstill valid. Shall we then remove the useless call to find_my_exec?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Mar 2019 11:51:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 2019-03-19 16:51, Tom Lane wrote:\n> I'm not really getting your point here. Packagers ordinarily tie\n> those versions together anyway, I'd expect --- there's no upside\n> to not doing so, and plenty of risk if one doesn't, because of\n> exactly the sort of coordinated-changes hazard I'm talking about here.\n\nThe RPM packages do that, but the Debian packages do not.\n\n>>> 3. Actually, I'm kind of wondering why pg_upgrade has a --new-bindir\n>>> option at all, rather than just insisting on finding the new-version\n>>> executables in the same directory it is in. This seems like, at best,\n>>> a hangover from before it got into core. Even if you don't want to\n>>> remove the option, we could surely provide a useful default setting\n>>> based on find_my_exec.\n> \n>> Previously discussed here:\n>> https://www.postgresql.org/message-id/flat/1304710184.28821.9.camel%40vanquo.pezone.net\n>> (Summary: right)\n> \n> Mmm. The point that a default is of no particular use to scripts is\n> still valid. Shall we then remove the useless call to find_my_exec?\n\nI'm still in favor of defaulting --new-bindir appropriately. It seems\nsilly not to. We know where the directory is, we don't have to ask anyone.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Mar 2019 10:20:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "Re: Peter Eisentraut 2019-03-22 <57769959-8960-a9ca-fc9c-4dbb12629b8a@2ndquadrant.com>\n> I'm still in favor of defaulting --new-bindir appropriately. It seems\n> silly not to. We know where the directory is, we don't have to ask anyone.\n\nFwiw I've been wondering why I have to pass that option every time\nI've been using pg_upgrade. +1 on making it optional/redundant.\n\nChristoph\n\n",
"msg_date": "Fri, 22 Mar 2019 10:45:40 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On Tuesday, March 19, 2019 12:35 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I noticed a few things that seem a bit fishy about pg_upgrade.\n\nAttached are three patches which takes a stab at the issues raised here (and\nthe discussion in this thread):\n\n0001 - Enforces the version check to the full version including the minor\n0002 - Tests for all the binaries that pg_upgrade executes\n0003 - Make -B default to CWD and remove the exec_path check\n\nDefaulting to CWD for the new bindir has the side effect that the default\nsockdir is in the bin/ directory which may be less optimal.\n\ncheers ./daniel",
"msg_date": "Mon, 25 Mar 2019 23:12:12 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "Re: Daniel Gustafsson 2019-03-26 <pC-NMmh4vQLQP76YTwY4AuoD4OdNw9egikekyQpXFpgqmTlGjIZXOTd2W5RDZPpRski5N3ADRrLYgLk6QUuvmuT5fWC9acPAYyDU1AVxJcU=@yesql.se>\n> 0003 - Make -B default to CWD and remove the exec_path check\n> \n> Defaulting to CWD for the new bindir has the side effect that the default\n> sockdir is in the bin/ directory which may be less optimal.\n\nHmm, I would have thought that the default for the new bindir is the\ndirectory where pg_upgrade is located, not the CWD, which is likely to\nbe ~postgres or the like?\n\nOn Debian, the incantation is\n\n/usr/lib/postgresql/12/bin/pg_upgrade \\\n -b /usr/lib/postgresql/11/bin \\\n -B /usr/lib/postgresql/12/bin <-- should be redundant\n\nChristoph\n\n\n",
"msg_date": "Wed, 27 Mar 2019 13:43:52 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On Wednesday, March 27, 2019 1:43 PM, Christoph Berg <myon@debian.org> wrote:\n\n> Re: Daniel Gustafsson 2019-03-26 pC-NMmh4vQLQP76YTwY4AuoD4OdNw9egikekyQpXFpgqmTlGjIZXOTd2W5RDZPpRski5N3ADRrLYgLk6QUuvmuT5fWC9acPAYyDU1AVxJcU=@yesql.se\n>\n> > 0003 - Make -B default to CWD and remove the exec_path check\n> > Defaulting to CWD for the new bindir has the side effect that the default\n> > sockdir is in the bin/ directory which may be less optimal.\n>\n> Hmm, I would have thought that the default for the new bindir is the\n> directory where pg_upgrade is located, not the CWD, which is likely to\n> be ~postgres or the like?\n\nYes, thinking on it that's obviously better. The attached v2 repurposes the\nfind_my_exec() check to make the current directory of pg_upgrade the default\nfor new_cluster.bindir (the other two patches are left as they were).\n\ncheers ./daniel",
"msg_date": "Thu, 04 Apr 2019 13:40:40 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 2019-04-04 15:40, Daniel Gustafsson wrote:\n> On Wednesday, March 27, 2019 1:43 PM, Christoph Berg <myon@debian.org> wrote:\n> \n>> Re: Daniel Gustafsson 2019-03-26 pC-NMmh4vQLQP76YTwY4AuoD4OdNw9egikekyQpXFpgqmTlGjIZXOTd2W5RDZPpRski5N3ADRrLYgLk6QUuvmuT5fWC9acPAYyDU1AVxJcU=@yesql.se\n>>\n>>> 0003 - Make -B default to CWD and remove the exec_path check\n>>> Defaulting to CWD for the new bindir has the side effect that the default\n>>> sockdir is in the bin/ directory which may be less optimal.\n>>\n>> Hmm, I would have thought that the default for the new bindir is the\n>> directory where pg_upgrade is located, not the CWD, which is likely to\n>> be ~postgres or the like?\n> \n> Yes, thinking on it that's obviously better. The attached v2 repurposes the\n> find_my_exec() check to make the current directory of pg_upgrade the default\n> for new_cluster.bindir (the other two patches are left as they were).\n\n0001-Only-allow-upgrades-by-the-same-exact-version-new-v2.patch\n\nI don't understand what this does. Please explain.\n\n\n0002-Check-all-used-executables-v2.patch\n\nI think we'd also need a check for pg_controldata.\n\nPerhaps this comment could be improved:\n\n/* these are only needed in the new cluster */\n\nto\n\n/* these are only needed for the target version */\n\n(pg_dump runs on the old cluster but has to be of the new version.)\n\n\n0003-Default-new-bindir-to-exec_path-v2.patch\n\nI don't like how the find_my_exec() code has been moved around. That\nmakes the modularity of the code worse. Let's keep it where it was and\nthen structure it like this:\n\nif -B was given:\n new_cluster.bindir = what was given for -B\nelse:\n # existing block\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 10:46:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 22 Jul 2019, at 10:46, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-04-04 15:40, Daniel Gustafsson wrote:\n>> On Wednesday, March 27, 2019 1:43 PM, Christoph Berg <myon@debian.org> wrote:\n>> \n>>> Re: Daniel Gustafsson 2019-03-26 pC-NMmh4vQLQP76YTwY4AuoD4OdNw9egikekyQpXFpgqmTlGjIZXOTd2W5RDZPpRski5N3ADRrLYgLk6QUuvmuT5fWC9acPAYyDU1AVxJcU=@yesql.se\n>>> \n>>>> 0003 - Make -B default to CWD and remove the exec_path check\n>>>> Defaulting to CWD for the new bindir has the side effect that the default\n>>>> sockdir is in the bin/ directory which may be less optimal.\n>>> \n>>> Hmm, I would have thought that the default for the new bindir is the\n>>> directory where pg_upgrade is located, not the CWD, which is likely to\n>>> be ~postgres or the like?\n>> \n>> Yes, thinking on it that's obviously better. The attached v2 repurposes the\n>> find_my_exec() check to make the current directory of pg_upgrade the default\n>> for new_cluster.bindir (the other two patches are left as they were).\n\nThanks for reviewing!\n\n> 0001-Only-allow-upgrades-by-the-same-exact-version-new-v2.patch\n> \n> I don't understand what this does. Please explain.\n\nThis patch makes the version check stricter to ensure that pg_upgrade and the\nnew cluster is of the same major and minor version. The code grabs the full\nversion from the various formats we have (x.y.z, x.z, xdevel) where we used to\nskip the minor rev. This is done to address one of Toms original complaints in\nthis thread.\n\n> 0002-Check-all-used-executables-v2.patch\n> \n> I think we'd also need a check for pg_controldata.\n\nFixed. 
I also rearranged the new cluster checks to be in alphabetical order\nsince the list makes more sense then (starting with initdb etc).\n\n> Perhaps this comment could be improved:\n> \n> /* these are only needed in the new cluster */\n> \n> to\n> \n> /* these are only needed for the target version */\n> \n> (pg_dump runs on the old cluster but has to be of the new version.)\n\nI like this suggestion, fixed with a little bit of wordsmithing.\n\n> 0003-Default-new-bindir-to-exec_path-v2.patch\n> \n> I don't like how the find_my_exec() code has been moved around. That\n> makes the modularity of the code worse. Let's keep it where it was and\n> then structure it like this:\n> \n> if -B was given:\n> new_cluster.bindir = what was given for -B\n> else:\n> # existing block\n\nThe reason for moving is that we print default values in usage(), and that\nrequires the value to be computed before calling usage(). We already do this\nfor resolving environment values in parseCommandLine(). If we do it in setup,\nthen we’d have to split out resolving the new_cluster.bindir into it’s own\nfunction exposed to option.c, or do you have any other suggestions there?\n\nI’ve attached all three patches as v3 to be compatible with the CFBot, only\n0002 changed so far.\n\ncheers ./daniel",
"msg_date": "Tue, 23 Jul 2019 17:30:35 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 2019-07-23 17:30, Daniel Gustafsson wrote:\n> The reason for moving is that we print default values in usage(), and that\n> requires the value to be computed before calling usage(). We already do this\n> for resolving environment values in parseCommandLine(). If we do it in setup,\n> then we’d have to split out resolving the new_cluster.bindir into it’s own\n> function exposed to option.c, or do you have any other suggestions there?\n\nI think doing nontrivial work in order to print default values in\nusage() is bad practice, because in unfortunate cases it would even\nprevent you from calling --help. Also, in this case, it would probably\nvery often exceed the typical line length of --help output and create\nsome general ugliness. Writing something like \"(default: same as this\npg_upgrade)\" would probably achieve just about the same.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jul 2019 22:32:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 24 Jul 2019, at 22:32, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-07-23 17:30, Daniel Gustafsson wrote:\n>> The reason for moving is that we print default values in usage(), and that\n>> requires the value to be computed before calling usage(). We already do this\n>> for resolving environment values in parseCommandLine(). If we do it in setup,\n>> then we’d have to split out resolving the new_cluster.bindir into it’s own\n>> function exposed to option.c, or do you have any other suggestions there?\n> \n> I think doing nontrivial work in order to print default values in\n> usage() is bad practice, because in unfortunate cases it would even\n> prevent you from calling --help. Also, in this case, it would probably\n> very often exceed the typical line length of --help output and create\n> some general ugliness. Writing something like \"(default: same as this\n> pg_upgrade)\" would probably achieve just about the same.\n\nFair enough, those are both excellent points. I’ve shuffled the code around to\nmove back the check for exec_path to setup (albeit earlier than before due to\nwhere we perform directory checking). This does mean that the directory\nchecking in the options parsing must learn to cope with missing directories,\nwhich is a bit unfortunate since it’s already doing a few too many things IMHO.\nTo ensure dogfooding, I also removed the use of -B in ‘make check’ for\npg_upgrade, which should bump the coverage.\n\nAlso spotted a typo in a pg_upgrade file header in a file touched by this, so\nincluded it in this thread too as a 0004.\n\nThanks again for reviewing, much appreciated!\n\ncheers ./daniel",
"msg_date": "Thu, 25 Jul 2019 16:33:44 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 2019-07-25 16:33, Daniel Gustafsson wrote:\n> Fair enough, those are both excellent points. I’ve shuffled the code around to\n> move back the check for exec_path to setup (albeit earlier than before due to\n> where we perform directory checking). This does mean that the directory\n> checking in the options parsing must learn to cope with missing directories,\n> which is a bit unfortunate since it’s already doing a few too many things IMHO.\n> To ensure dogfooding, I also removed the use of -B in ‘make check’ for\n> pg_upgrade, which should bump the coverage.\n> \n> Also spotted a typo in a pg_upgrade file header in a file touched by this, so\n> included it in this thread too as a 0004.\n\nI have committed 0002, 0003, and 0004.\n\nThe implementation in 0001 (Only allow upgrades by the same exact\nversion new bindir) has a problem. It compares (new_cluster.bin_version\n!= PG_VERSION_NUM), but new_cluster.bin_version is actually just the\nversion of pg_ctl, so this is just comparing the version of pg_upgrade\nwith the version of pg_ctl, which is not wrong, but doesn't really\nachieve the full goal of having all binaries match.\n\nI think a better structure would be to add a version check for each\nvalidate_exec() so that each program is checked against pg_upgrade.\nThis should mirror what find_other_exec() in src/common/exec.c does. In\na better world we would use find_other_exec() directly, but then we\ncan't support -B. Maybe expand find_other_exec() to support this, or\nmake a private copy for pg_upgrade to support this. (Also, we have two\ncopies of validate_exec() around. Maybe this could all be unified.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 27 Jul 2019 08:42:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 27 Jul 2019, at 08:42, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> I have committed 0002, 0003, and 0004.\n\nThanks!\n\n> The implementation in 0001 (Only allow upgrades by the same exact\n> version new bindir) has a problem. It compares (new_cluster.bin_version\n> != PG_VERSION_NUM), but new_cluster.bin_version is actually just the\n> version of pg_ctl, so this is just comparing the version of pg_upgrade\n> with the version of pg_ctl, which is not wrong, but doesn't really\n> achieve the full goal of having all binaries match.\n\nRight, it seemed the cleanest option at the time more or less based on the\nissues outlined below.\n\n> I think a better structure would be to add a version check for each\n> validate_exec() so that each program is checked against pg_upgrade.\n> This should mirror what find_other_exec() in src/common/exec.c does. In\n> a better world we would use find_other_exec() directly, but then we\n> can't support -B. Maybe expand find_other_exec() to support this, or\n> make a private copy for pg_upgrade to support this. (Also, we have two\n> copies of validate_exec() around. Maybe this could all be unified.)\n\nI’ll take a stab at tidying all of this up to require less duplication, we’ll\nsee where that ends up.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 30 Jul 2019 17:13:08 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 3:13 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 27 Jul 2019, at 08:42, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> > I have committed 0002, 0003, and 0004.\n>\n> Thanks!\n>\n> > The implementation in 0001 (Only allow upgrades by the same exact\n> > version new bindir) has a problem. It compares (new_cluster.bin_version\n> > != PG_VERSION_NUM), but new_cluster.bin_version is actually just the\n> > version of pg_ctl, so this is just comparing the version of pg_upgrade\n> > with the version of pg_ctl, which is not wrong, but doesn't really\n> > achieve the full goal of having all binaries match.\n>\n> Right, it seemed the cleanest option at the time more or less based on the\n> issues outlined below.\n>\n> > I think a better structure would be to add a version check for each\n> > validate_exec() so that each program is checked against pg_upgrade.\n> > This should mirror what find_other_exec() in src/common/exec.c does. In\n> > a better world we would use find_other_exec() directly, but then we\n> > can't support -B. Maybe expand find_other_exec() to support this, or\n> > make a private copy for pg_upgrade to support this. (Also, we have two\n> > copies of validate_exec() around. Maybe this could all be unified.)\n>\n> I’ll take a stab at tidying all of this up to require less duplication, we’ll\n> see where that ends up.\n\nHi Daniel,\n\nI've moved this to the next CF, because it sounds like you're working\non a new version of 0001. If I misunderstood and you're happy with\njust 0002-0004 being committed for now, please feel free to mark the\nSeptember entry 'Committed'.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 20:52:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 2019-Jul-30, Daniel Gustafsson wrote:\n\n> I’ll take a stab at tidying all of this up to require less duplication, we’ll\n> see where that ends up.\n\nHello Daniel, are you submitting a new version soon?\n\nThanks,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 13:59:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 2 Sep 2019, at 19:59, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2019-Jul-30, Daniel Gustafsson wrote:\n> \n>> I’ll take a stab at tidying all of this up to require less duplication, we’ll\n>> see where that ends up.\n> \n> Hello Daniel, are you submitting a new version soon?\n\nI am working on an updated version which unfortunately got a bit delayed, but\nwill be submitted shortly (targeting this week).\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 3 Sep 2019 01:22:05 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 27 Jul 2019, at 08:42, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-07-25 16:33, Daniel Gustafsson wrote:\n>> Fair enough, those are both excellent points. I’ve shuffled the code around to\n>> move back the check for exec_path to setup (albeit earlier than before due to\n>> where we perform directory checking). This does mean that the directory\n>> checking in the options parsing must learn to cope with missing directories,\n>> which is a bit unfortunate since it’s already doing a few too many things IMHO.\n>> To ensure dogfooding, I also removed the use of -B in ‘make check’ for\n>> pg_upgrade, which should bump the coverage.\n>> \n>> Also spotted a typo in a pg_upgrade file header in a file touched by this, so\n>> included it in this thread too as a 0004.\n> \n> I have committed 0002, 0003, and 0004.\n> \n> The implementation in 0001 (Only allow upgrades by the same exact\n> version new bindir) has a problem. It compares (new_cluster.bin_version\n> != PG_VERSION_NUM), but new_cluster.bin_version is actually just the\n> version of pg_ctl, so this is just comparing the version of pg_upgrade\n> with the version of pg_ctl, which is not wrong, but doesn't really\n> achieve the full goal of having all binaries match.\n> \n> I think a better structure would be to add a version check for each\n> validate_exec() so that each program is checked against pg_upgrade.\n> This should mirror what find_other_exec() in src/common/exec.c does. In\n> a better world we would use find_other_exec() directly, but then we\n> can't support -B. Maybe expand find_other_exec() to support this, or\n> make a private copy for pg_upgrade to support this. (Also, we have two\n> copies of validate_exec() around. 
Maybe this could all be unified.)\n\nTurns out I overshot my original estimate of a new 0001 by a hair (by ~530 days\nor so) but attached is an updated version.\n\nThis exports validate_exec to reduce duplication, and implements a custom\nfind_other_exec-like function in pg_upgrade to check each binary for the\nversion number. Keeping a local copy of validate_exec is easy to do if it's\ndeemed not worth it to export it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 23 Feb 2021 17:14:28 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 23.02.21 17:14, Daniel Gustafsson wrote:\n> This exports validate_exec to reduce duplication, and implements a custom\n> find_other_exec-like function in pg_upgrade to check each binary for the\n> version number. Keeping a local copy of validate_exec is easy to do if it's\n> deemed not worth it to export it.\n\nThis looks mostly okay to me.\n\nThe commit message says something about \"to ensure the health of the \ntarget cluster\", which doesn't make sense to me. Maybe find a better \nwording.\n\nThe name find_exec() seems not very accurate. It doesn't find anything. \n Maybe \"check\"?\n\nI'm not sure why the new find_exec() adds EXE. AFAIK, this is only \nrequired for stat(), and validate_exec() already does it.\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 14:20:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 2 Mar 2021, at 14:20, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 23.02.21 17:14, Daniel Gustafsson wrote:\n>> This exports validate_exec to reduce duplication, and implements a custom\n>> find_other_exec-like function in pg_upgrade to check each binary for the\n>> version number. Keeping a local copy of validate_exec is easy to do if it's\n>> deemed not worth it to export it.\n> \n> This looks mostly okay to me.\n\nThanks for reviewing!\n\n> The commit message says something about \"to ensure the health of the target cluster\", which doesn't make sense to me. Maybe find a better wording.\n\nReworded in the attached updated version.\n\n> The name find_exec() seems not very accurate. It doesn't find anything. Maybe \"check\"?\n\nI'm not wild about check_exec(), but every other name I could think of was\ndrastically worse so I went with check_exec.\n\n> I'm not sure why the new find_exec() adds EXE. AFAIK, this is only required for stat(), and validate_exec() already does it.\n\nGood point, fixed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 2 Mar 2021 22:51:12 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "On 02.03.21 22:51, Daniel Gustafsson wrote:\n>> The commit message says something about \"to ensure the health of the target cluster\", which doesn't make sense to me. Maybe find a better wording.\n> \n> Reworded in the attached updated version.\n> \n>> The name find_exec() seems not very accurate. It doesn't find anything. Maybe \"check\"?\n> \n> I'm not wild about check_exec(), but every other name I could think of was\n> drastically worse so I went with check_exec.\n> \n>> I'm not sure why the new find_exec() adds EXE. AFAIK, this is only required for stat(), and validate_exec() already does it.\n> \n> Good point, fixed.\n\nI committed this. I added a pg_strip_crlf() so that there are no \nnewlines in the error message. I also slightly reworded the error \nmessage to make the found and expected value distinguishable.\n\n\n",
"msg_date": "Wed, 3 Mar 2021 09:57:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
},
{
"msg_contents": "> On 3 Mar 2021, at 09:57, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> I committed this. I added a pg_strip_crlf() so that there are no newlines in the error message.\n\nRight, that's much better, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:04:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade version checking questions"
}
] |
[
{
"msg_contents": "Hi folks,\n\nAfter months and years of really trying to make EXTENSIONs meet the\nrequirements of my machinations, I have come to the conclusion that either\na) I am missing something or b) they are architecturally flawed. Or\npossibly both.\n\nAdmittedly, I might be trying to push extensions beyond what the great\nelephant in the sky ever intended. The general bent here is to try to\nachieve a level of modular reusable components similar to those in\n\"traditional\" programming environments like pip, gem, npm, cpan, etc.\nPersonally, I am trying to migrate as much of my dev stack as possible away\nfrom the filesystem and into the database. Files, especially code,\nconfiguration, templates, permissions, manifests and other development\nfiles, would be much happier in a database where they have constraints and\nan information model and can be queried!\n\nRegardless, it would be really great to be able to install an extension,\nand have it cascade down to multiple other extensions, which in turn\ncascade down to more, and have everything just work. Clearly, this was\nconsidered in the extension architecture, but I'm running into some\nproblems making it a reality. So here they are.\n\n\n#1: Dependencies\n\nLet's say we have two extensions, A and B, both of which depend on a third\nextension C, let's just say C is hstore. A and B are written by different\ndevelopers, and both contain in their .control file the line\n\n requires = 'hstore'\n\nWhen A is installed, if A creates a schema, it puts hstore in that schema.\nIf not, hstore is already installed, it uses it in that location. How does\nthe extension know where to reference hstore?\n\nThen, when B is installed, it checks to see if extension hstore is\ninstalled, sees that it is, and moves on. What if it expects it in a\ndifferent place than A does? 
The hstore extension can only be installed\nonce, in a single schema, but if multiple extensions depend on it and look\nfor it in different places, they are incompatible.\n\nI have heard talk of a way to write extensions so that they dynamically\nreference the schema of their dependencies, but I sure don't know how that\nwould work if it's possible. The @extschema@ variable references the\n*current* extension's schema, but there is no dynamic variable to\nreference the schema of a dependency.\n\nAlso it is possible in theory to dynamically set search_path to contain\nevery schema of every dependency in play and then just not specify a schema\nwhen you use something in a dependency. But this ANDs together all the\nscopes of all the dependencies of an extension, introducing potential for\ncollisions, and is generally kind of clunky.\n\n\n#2: Data in Extensions\n\nExtensions that are just a collection of functions and types seem to be the\nnorm. Extensions can contain what the docs call \"configuration\" data, but\nrows are really second class citizens: They aren't tracked with\npg_catalog.pg_depend, they aren't deleted when the extension is dropped,\netc.\n\nSometimes it would make sense for an extension to contain *only* data, or\ninsert some rows in a table that the extension doesn't \"own\", but has as a\ndependency. For example, a \"webserver\" extension might contain a\n\"resource\" table that serves up the content of resources in the table at a\nspecified path. But then, another extension, say an app, might want to just\nlist the webserver extension as a dependency, and insert a few resource\nrows into it. 
This is really from what I can tell beyond the scope of what\nextensions are capable of.\n\n\n#3 pg_dump and Extensions\n\nTables created by extensions are skipped by pg_dump unless they are flagged\nat create time with:\n\n pg_catalog.pg_extension_config_dump('my_table', 'where id < 20')\n\nHowever, there's no way that I can tell to mix and match rows and tables\nacross multiple extensions, so pg_dump can't keep track of multiple\nextensions that contain rows in the same table.\n\n\nI'd like an extension framework that can contain data as first class\ncitizens, and can gracefully handle a dependency chain and share\ndependencies. I have some ideas for a better approach, but they are pretty\nradical. I thought I would send this out and see what folks think.\n\nThanks,\nEric\n--\nhttp://aquameta.org/",
"msg_date": "Mon, 18 Mar 2019 21:38:19 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "extensions are hitting the ceiling"
},
{
"msg_contents": "On 03/18/19 22:38, Eric Hanson wrote:\n> rows are really second class citizens: They aren't tracked with\n> pg_catalog.pg_depend, they aren't deleted when the extension is dropped,\n> etc.\n\nThis. You have other interests as well, but this is the one I was thinking\nabout a few years ago in [1] (starting at \"Ok, how numerous would be the\nproblems with this:\").\n\nNobody ever chimed in to say how numerous they did or didn't think the\nproblems would be. I was actually thinking recently about sitting down\nand trying to write that patch, as no one had exactly stood up to say\n\"oh heavens no, don't write that.\" But my round tuits are all deployed\nelsewhere at the moment.\n\nI'd still like to discuss the ideas.\n\n-Chap\n\n[1] https://www.postgresql.org/message-id/5685A2E7.6080209%40anastigmatix.net\n\n",
"msg_date": "Tue, 19 Mar 2019 00:56:03 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: extensions are hitting the ceiling"
},
{
"msg_contents": "On 03/19/19 00:56, Chapman Flack wrote:\n> Nobody ever chimed in to say how numerous they did or didn't think the\n> problems would be. I was actually thinking recently about sitting down\n> and trying to write that patch, as no one had exactly stood up to say\n> \"oh heavens no, don't write that.\"\n\nOf course, one notable thing that has happened since I wrote that design\nwas that Oids have stopped being magical, or supported in user tables.\nSo a bit of \"mutatis mutandis\" is needed when reading it in 2019.\n\nRegards,\n-Chap\n\n",
"msg_date": "Tue, 19 Mar 2019 00:59:03 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: extensions are hitting the ceiling"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 11:56 PM Chapman Flack <chap@anastigmatix.net>\nwrote:\n\n> On 03/18/19 22:38, Eric Hanson wrote:\n> > rows are really second class citizens: They aren't tracked with\n> > pg_catalog.pg_depend, they aren't deleted when the extension is dropped,\n> > etc.\n>\n> This. You have other interests as well, but this is the one I was thinking\n> about a few years ago in [1] (starting at \"Ok, how numerous would be the\n> problems with this:\").\n>\n\nCool!\n\nFirst thoughts, it seems like a sensible way to go given the premise that\nextensions are immutable. But -- I'd be a little concerned about the\nperformance ramifications. Usually there are not jillions of database\nobjects in an extension, but if they started containing data, there sure\ncould be jillions of rows. Every row would have to be checked for\nexistence as part of an extension on every insert or update, no?\n\n> Nobody ever chimed in to say how numerous they did or didn't think the\n> problems would be. I was actually thinking recently about sitting down\n> and trying to write that patch, as no one had exactly stood up to say\n> \"oh heavens no, don't write that.\" But my round tuits are all deployed\n> elsewhere at the moment.\n>\n\nLikewise, if nobody tells me \"oh sheeze extensions can already do all this\"\nI'm going to assume they can't. :-)\n\n> I'd still like to discuss the ideas.\n\n\nMe too!\n\nOk, I should probably come out and say it: I think the user story of\n\"There is some kind of packaging system that can contain both schema and\ndata, and these packages can be installed and removed along with their\ndependencies atomically\" is fairly obvious and desirable. But getting\nthere while accepting the premises that are currently baked into extensions\nmight be a tall order.\n\nExtensions have a middleware-ish aspect to them -- they are immutable and\nthat immutability is checked and enforced at runtime. 
That might scale\njust fine to a few dozen database objects that only check pg_depend on DDL\noperations, but if we introduce record tracking and start sticking sticks\ninto the wheels of the DML, things could go south really quickly it seems.\n\nI really like a more git-like pattern, where you are free to modify the\nworking copy of a repository (or in this case an extension), and instead of\nbeing blocked from changing things, the system tells the user what has\nchanged and how, and gives sensible options for what to do about it. That\nway it doesn't incur a performance hit, and the user can do a kind of \"git\nstatus\" on their extension to show any changes.\n\nHow about an extension system whose first principle is that an extension is\nmade up of rows, period. What about the DDL you ask? Well...\n\nImagine a system catalog whose sole purpose is to contain database object\ndefinitions like \"CREATE VIEW ...\", similar to those produced by\npg_catalog.pg_get_viewdef(), pg_catalog.pg_get_functiondef(), etc. Let's call\nthis catalog `def`. There is exactly one VIEW for every type of database\nobject in PostgreSQL. def.table, def.role, def.sequence, def.operator,\ndef.type, etc. Each def.* VIEW contains only two columns, `id` and\n`definition`. The `id` column contains a unique identifier for the object,\nand the `definition` column contains the SQL statement that will recreate\nthe object.\n\nSo, inside this system catalog is the SQL definition statement of every\ndatabase object. In theory, the contents of all the `definition` columns\ntogether would be similar to the contents of pg_dump --schema-only.\n\nNow, imagine all these def.* views had insert triggers, so that on insert,\nit actually executes the contents of the `definition` column. In theory,\nwe could pg_restore the data in the def.* views, and it would recreate all\nthe database objects. 
It could shift all that logic out of pg_dump and into\nthe database.\n\nSo using the def.* catalog, we could package both \"regular\" table data and\nsystem objects via the contents of the def.* catalog views. Packages are a\ncollection of rows, period. Build up from there.\n\nI'm working on a prototype called bundle [1], it still has a ways to go but\nit's showing some promise. It is going to require bringing into PostgreSQL\nthe missing pg_get_*def functions, as folks have talked about before [2].\n\nThanks,\nEric\n\n[1]\nhttps://github.com/aquametalabs/aquameta/tree/master/src/pg-extension/bundle\n[2]\nhttps://www.postgresql.org/message-id/20130429234634.GA10380@tornado.leadboat.com",
"msg_date": "Tue, 19 Mar 2019 12:36:59 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "Re: extensions are hitting the ceiling"
},
{
"msg_contents": "On Mon, Mar 18, 2019 at 09:38:19PM -0500, Eric Hanson wrote:\n> #1: Dependencies\n> \n> Let's say we have two extensions, A and B, both of which depend on a third\n> extension C, let's just say C is hstore. A and B are written by different\n> developers, and both contain in their .control file the line\n> \n> requires = 'hstore'\n> \n> When A is installed, if A creates a schema, it puts hstore in that schema.\n> If not, hstore is already installed, it uses it in that location. How does\n> the extension know where to reference hstore?\n> \n> Then, when B is installed, it checks to see if extension hstore is\n> installed, sees that it is, and moves on. What if it expects it in a\n> different place than A does? The hstore extension can only be installed\n> once, in a single schema, but if multiple extensions depend on it and look\n> for it in different places, they are incompatible.\n> \n> I have heard talk of a way to write extensions so that they dynamically\n> reference the schema of their dependencies, but sure don't know how that\n> would work if it's possible. The @extschema@ variable references the\n> *current* extension's schema, but not there is no dynamic variable to\n> reference the schema of a dependency.\n\nIf desperate, you can do it like this:\n\n DO $$ BEGIN EXECUTE format('SELECT %I.earth()',\n (SELECT nspname FROM pg_namespace n\n JOIN pg_extension ON n.oid = extnamespace\n WHERE extname = 'earthdistance' )); END $$;\n\nNeedless to say, that's too ugly. Though probably unimportant in practice, it\nalso has a race condition vs. ALTER EXTENSION SET SCHEMA.\n\n> Also it is possible in theory to dynamically set search_path to contain\n> every schema of every dependency in play and then just not specify a schema\n> when you use something in a dependency. 
But this ANDs together all the\n> scopes of all the dependencies of an extension, introducing potential for\n> collisions, and is generally kind of clunky.\n\nThat's how it works today, and it has the problems you describe. I discussed\nsome solution candidates here:\nhttps://www.postgresql.org/message-id/20180710014308.GA805781@rfd.leadboat.com\n\nThe @DEPNAME_schema@ thing was trivial to implement, but I shelved it. I'm\nattaching the proof of concept, for your information.\n\n> #2: Data in Extensions\n> \n> Extensions that are just a collection of functions and types seem to be the\n> norm. Extensions can contain what the docs call \"configuration\" data, but\n> rows are really second class citizens: They aren't tracked with\n> pg_catalog.pg_depend, they aren't deleted when the extension is dropped,\n> etc.\n> \n> Sometimes it would make sense for an extension to contain *only* data, or\n> insert some rows in a table that the extension doesn't \"own\", but has as a\n> dependency. For example, a \"webserver\" extension might contain a\n> \"resource\" table that serves up the content of resources in the table at a\n> specified path. But then, another extension, say an app, might want to just\n> list the webserver extension as a dependency, and insert a few resource\n> rows into it. This is really from what I can tell beyond the scope of what\n> extensions are capable of.\n\nI never thought of this use case. Interesting.",
"msg_date": "Mon, 15 Apr 2019 22:47:21 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: extensions are hitting the ceiling"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 12:47 AM Noah Misch <noah@leadboat.com> wrote:\n\n> On Mon, Mar 18, 2019 at 09:38:19PM -0500, Eric Hanson wrote:\n> > I have heard talk of a way to write extensions so that they dynamically\n> > reference the schema of their dependencies, but sure don't know how that\n> > would work if it's possible. The @extschema@ variable references the\n> > *current* extension's schema, but not there is no dynamic variable to\n> > reference the schema of a dependency.\n>\n> If desperate, you can do it like this:\n>\n> DO $$ BEGIN EXECUTE format('SELECT %I.earth()',\n> (SELECT nspname FROM pg_namespace n\n> JOIN pg_extension ON n.oid = extnamespace\n> WHERE extname = 'earthdistance' )); END $$;\n>\n> Needless to say, that's too ugly. Though probably unimportant in\n> practice, it\n> also has a race condition vs. ALTER EXTENSION SET SCHEMA.\n>\n> > Also it is possible in theory to dynamically set search_path to contain\n> > every schema of every dependency in play and then just not specify a\n> schema\n> > when you use something in a dependency. But this ANDs together all the\n> > scopes of all the dependencies of an extension, introducing potential for\n> > collisions, and is generally kind of clunky.\n>\n> That's how it works today, and it has the problems you describe. I\n> discussed\n> some solution candidates here:\n>\n> https://www.postgresql.org/message-id/20180710014308.GA805781@rfd.leadboat.com\n>\n> The @DEPNAME_schema@ thing was trivial to implement, but I shelved it.\n> I'm\n> attaching the proof of concept, for your information.\n>\n\nInteresting.\n\nWhy shelved? I like it. You said you lean toward 2b in the link above,\nbut there is no 2b :-) but 1b was this option, which maybe you meant?\n\nThe other approach would be to have each extension be in it's own schema,\nwhose name is fixed for life. Then there are no collisions and no\nambiguity about their location. 
I don't use NPM but was just reading\nabout how they converted their package namespace from a single global\nnamespace with I think it was 30k packages in it,\nto @organization/packagename. I don't know how folks would feel about a\ncentral namespace registry, I don't love the idea if we can find a way\naround it, but would settle for it if there's no better solution. Either\nthat or use a UUID as the schema name. Truly hideous. But it seems like\nyour approach above with just dynamically looking up the extension's schema\nas a variable would solve everything.\n\nThere is the problem of sequencing, where extension A installs dependency\nextension B in its own schema. Then extension C also wants to use\ndependency B, but extension A is uninstalled and extension B is now still\nhanging around in A's old schema. Not ideal but at least everything would\nstill function.\n\nI'll keep thinking about it...\n\n\n> > #2: Data in Extensions\n> >\n> > Extensions that are just a collection of functions and types seem to be\n> the\n> > norm. Extensions can contain what the docs call \"configuration\" data,\n> but\n> > rows are really second class citizens: They aren't tracked with\n> > pg_catalog.pg_depend, they aren't deleted when the extension is dropped,\n> > etc.\n> >\n> > Sometimes it would make sense for an extension to contain *only* data, or\n> > insert some rows in a table that the extension doesn't \"own\", but has as\n> a\n> > dependency. For example, a \"webserver\" extension might contain a\n> > \"resource\" table that serves up the content of resources in the table at\n> a\n> > specified path. But then, another extension, say an app, might want to\n> just\n> > list the webserver extension as a dependency, and insert a few resource\n> > rows into it. This is really from what I can tell beyond the scope of\n> what\n> > extensions are capable of.\n>\n> I never thought of this use case. Interesting.\n>\n\nIt's a *really* powerful pattern. 
I am sure of this because I've been\nexploring it while developing a row packaging system modeled after git [1],\nand using it in conjunction with EXTENSIONs with extreme joy. But one does\nrows, and the other does DDL, and this is not ideal.\n\nCheers,\nEric\n\n[1]\nhttps://github.com/aquametalabs/aquameta/tree/master/src/pg-extension/bundle",
"msg_date": "Tue, 16 Apr 2019 04:24:20 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "Re: extensions are hitting the ceiling"
},
{
"msg_contents": "On Tue, Apr 16, 2019 at 4:24 AM Eric Hanson <eric@aquameta.com> wrote:\n\n>\n>\n> On Tue, Apr 16, 2019 at 12:47 AM Noah Misch <noah@leadboat.com> wrote:\n>\n>> On Mon, Mar 18, 2019 at 09:38:19PM -0500, Eric Hanson wrote:\n>> > I have heard talk of a way to write extensions so that they dynamically\n>> > reference the schema of their dependencies, but sure don't know how that\n>> > would work if it's possible. The @extschema@ variable references the\n>> > *current* extension's schema, but not there is no dynamic variable to\n>> > reference the schema of a dependency.\n>>\n>> If desperate, you can do it like this:\n>>\n>> DO $$ BEGIN EXECUTE format('SELECT %I.earth()',\n>> (SELECT nspname FROM pg_namespace n\n>> JOIN pg_extension ON n.oid = extnamespace\n>> WHERE extname = 'earthdistance' )); END $$;\n>>\n>> Needless to say, that's too ugly. Though probably unimportant in\n>> practice, it\n>> also has a race condition vs. ALTER EXTENSION SET SCHEMA.\n>>\n>> > Also it is possible in theory to dynamically set search_path to contain\n>> > every schema of every dependency in play and then just not specify a\n>> schema\n>> > when you use something in a dependency. But this ANDs together all the\n>> > scopes of all the dependencies of an extension, introducing potential\n>> for\n>> > collisions, and is generally kind of clunky.\n>>\n>> That's how it works today, and it has the problems you describe. I\n>> discussed\n>> some solution candidates here:\n>>\n>> https://www.postgresql.org/message-id/20180710014308.GA805781@rfd.leadboat.com\n>>\n>> The @DEPNAME_schema@ thing was trivial to implement, but I shelved it.\n>> I'm\n>> attaching the proof of concept, for your information.\n>>\n>\n> Interesting.\n>\n> Why shelved? I like it. 
You said you lean toward 2b in the link above,\n> but there is no 2b :-) but 1b was this option, which maybe you meant?\n>\n> The other approach would be to have each extension be in it's own schema,\n> whose name is fixed for life. Then there are no collisions and no\n> ambiguity about their location. I don't use NPM but was just reading\n> about how they converted their package namespace from a single global\n> namespace with I think it was 30k packages in it,\n> to @organization/packagename. I don't know how folks would feel about a\n> central namespace registry, I don't love the idea if we can find a way\n> around it, but would settle for it if there's no better solution. Either\n> that or use a UUID as the schema name. Truly hideous. But it seems like\n> your approach above with just dynamically looking up the extension's schema\n> as a variable would solve everything.\n>\n> There is the problem of sequencing, where extension A installs dependency\n> extension B in it's own schema. Then extension C also wants to use\n> dependency B, but extension A is uninstalled and extension B is now still\n> hanging around in A's old schema. Not ideal but at least everything would\n> still function.\n>\n> I'll keep thinking about it...\n>\n\nWe would probably be wise to learn from what has gone (so I hear) terribly\nwrong with the Node / NPM packaging system (and I'm sure many before it),\nnamely versioning. What happens when two extensions require different\nversions of the same extension? At a glance it almost seems unsolvable,\ngiven the constraint that an extension can only be installed once, and only\nat a single version. 
I don't understand why that constraint exists though.\n\nEric",
"msg_date": "Tue, 16 Apr 2019 04:47:12 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "Re: extensions are hitting the ceiling"
},
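The catalog lookup in Noah's snippet above can be wrapped in a reusable helper; a minimal sketch (the `dependency_schema` name is ours, not from the thread; it assumes the dependency is installed, and it shares the snippet's race against ALTER EXTENSION SET SCHEMA):

```sql
-- Hypothetical helper: return the schema a dependency extension currently
-- lives in, by joining pg_extension to pg_namespace.
CREATE FUNCTION dependency_schema(dep name) RETURNS name
LANGUAGE sql STABLE AS $$
  SELECT n.nspname
  FROM pg_catalog.pg_extension e
  JOIN pg_catalog.pg_namespace n ON n.oid = e.extnamespace
  WHERE e.extname = dep
$$;

-- Usage: call earthdistance's earth() wherever that extension was installed.
DO $$
BEGIN
  EXECUTE format('SELECT %I.earth()', dependency_schema('earthdistance'));
END $$;
```

This is only a sketch of the workaround, not a replacement for the proposed @DEPNAME_schema@ substitution, which would resolve the schema at extension-install time instead of at each call.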
{
"msg_contents": "On Tue, Apr 16, 2019 at 4:47 AM Eric Hanson <eric@aquameta.com> wrote:\n\n> We would probably be wise to learn from what has gone (so I hear) terribly\n> wrong with the Node / NPM packaging system (and I'm sure many before it),\n> namely versioning. What happens when two extensions require different\n> versions of the same extension? At a glance it almost seems unsolvable,\n> given the constraint that an extension can only be installed once, and only\n> at a single version. I don't understand why that constraint exists though.\n>\n\nHow about this:\n\n1. Extension can be installed once *per-version*.\n2. Each version of an extension that is installed is assigned by the system\na unique, hidden schema (similar to temp table schemas) whose name doesn't\nmatter because the extension user will never need to know it.\n3. There exists a dynamic variable, like you proposed above, but it\nincludes version number as well. @DEPNAME_VERSION_schema@ perhaps. This\nvariable would resolve to the system-assigned schema name of the extension\nspecified, at the version specified.\n4. Since sprinkling ones code with version numbers is awful, there exists a\nway (which I haven't thought of) to set a kind of search_path-type setting\nwhich sets in the current scope the version number of the extension that\nshould be dereferenced, so developers can still use @DEPNAME_schema@.\n\nThis would allow multiple versions of extensions to coexist, and it would\nsolve the problem with two extensions wanting the same dependency in\ndifferent places.\n\nIt's radical, but extensions are radically broken. 
A critique of the above\nwould be that extensions still have a single global namespace, so\npersonally I don't think it even goes far enough.\n\nCheers,\nEric",
"msg_date": "Tue, 16 Apr 2019 05:28:53 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "Re: extensions are hitting the ceiling"
},
{
"msg_contents": "Hi all!\n\nI am sending our comments to mentioned issues. I was trying to send it\nmonth ago (https://www.postgresql.org/message-id/CA%2B8wVNUOt2Bh4x7YQEVoq5BfP%3DjM-F6cDYKxJiTODG_VCGhUVQ%40mail.gmail.com),\nbut it somehow doesn't append in the \"thread\" (sorry, I am new in\nmailing list practice...).\n\nMy colleague already posted some report to bug mailing list\n(https://www.postgresql.org/message-id/15616-260dc9cb3bec7e7e@postgresql.org)\nbut with no response.\n\nOn Tue, 19 Mar 2019 at 02:38, Eric Hanson <eric@aquameta.com> wrote:\n>\n> Hi folks,\n>\n> After months and years of really trying to make EXTENSIONs meet the requirements of my machinations, I have come to the conclusion that either a) I am missing something or b) they are architecturally flawed. Or possibly both.\n>\n> Admittedly, I might be trying to push extensions beyond what the great elephant in the sky ever intended. The general bent here is to try to achieve a level of modular reusable components similar to those in \"traditional\" programming environments like pip, gem, npm, cpan, etc. Personally, I am trying to migrate as much of my dev stack as possible away from the filesystem and into the database. Files, especially code, configuration, templates, permissions, manifests and other development files, would be much happier in a database where they have constraints and an information model and can be queried!\n>\n> Regardless, it would be really great to be able to install an extension, and have it cascade down to multiple other extensions, which in turn cascade down to more, and have everything just work. Clearly, this was considered in the extension architecture, but I'm running into some problems making it a reality. So here they are.\n>\n>\n> #1: Dependencies\n>\n> Let's say we have two extensions, A and B, both of which depend on a third extension C, let's just say C is hstore. 
A and B are written by different developers, and both contain in their .control file the line\n>\n> requires = 'hstore'\n>\n> When A is installed, if A creates a schema, it puts hstore in that schema. If not, hstore is already installed, it uses it in that location. How does the extension know where to reference hstore?\n>\n> Then, when B is installed, it checks to see if extension hstore is installed, sees that it is, and moves on. What if it expects it in a different place than A does? The hstore extension can only be installed once, in a single schema, but if multiple extensions depend on it and look for it in different places, they are incompatible.\n>\n> I have heard talk of a way to write extensions so that they dynamically reference the schema of their dependencies, but sure don't know how that would work if it's possible. The @extschema@ variable references the *current* extension's schema, but not there is no dynamic variable to reference the schema of a dependency.\n>\n> Also it is possible in theory to dynamically set search_path to contain every schema of every dependency in play and then just not specify a schema when you use something in a dependency. But this ANDs together all the scopes of all the dependencies of an extension, introducing potential for collisions, and is generally kind of clunky.\n>\n\nIt is not possible to specify the version of extension we are\ndependent on in .control file.\n\n> #2: Data in Extensions\n>\n> Extensions that are just a collection of functions and types seem to be the norm. Extensions can contain what the docs call \"configuration\" data, but rows are really second class citizens: They aren't tracked with pg_catalog.pg_depend, they aren't deleted when the extension is dropped, etc.\n>\n> Sometimes it would make sense for an extension to contain *only* data, or insert some rows in a table that the extension doesn't \"own\", but has as a dependency. 
For example, a \"webserver\" extension might contain a \"resource\" table that serves up the content of resources in the table at a specified path. But then, another extension, say an app, might want to just list the webserver extension as a dependency, and insert a few resource rows into it. This is really from what I can tell beyond the scope of what extensions are capable of.\n>\n\nI am not sure about the name \"Configuration\" Tables. From my point of\nview extensions can hold two sorts of data:\n1) \"static\" data: delivered with extension, inserted by update\nscripts; the same \"static\" data are present across multiple\ninstallation of extension in the same version. This data are not\nsupposed to be dumped.\n2) \"dynamic\" data: inserted by users, have to be included in dumps,\nare marked with pg_extension_config_dump and are called\n\"configuration\" tables/data ... but why \"configuration\"?\n\n>\n> #3 pg_dump and Extensions\n>\n> Tables created by extensions are skipped by pg_dump unless they are flagged at create time with:\n>\n> pg_catalog.pg_extension_config_dump('my_table', 'where id < 20')\n>\n> However, there's no way that I can tell to mix and match rows and tables across multiple extensions, so pg_dump can't keep track of multiple extensions that contain rows in the same table.\n>\n\nWe have described some behavior of pg_dump, which we believe are in\nfact bugs: see [1] \"1) pg_dump with --schema parameter\" and \"2)\nHanging OID in extconfig\".\nMaybe it would be good to introduce new switch pg_dump --extension\nextA dumping all \"dynamic\" data from extension tables regardless on\nschema\n\n>\n> I'd like an extension framework that can contain data as first class citizens, and can gracefully handle a dependency chain and share dependencies. I have some ideas for a better approach, but they are pretty radical. 
I thought I would send this out and see what folks think.\n>\n> Thanks,\n> Eric\n> --\n> http://aquameta.org/\n\n#4: Extension owned\n\nIt is not possible to alter extension owner\n\nThanks for consideration, Jiří & Ivo.\n\n[1] https://www.postgresql.org/message-id/15616-260dc9cb3bec7e7e@postgresql.org\n\n\n",
"msg_date": "Wed, 17 Apr 2019 08:57:19 +0200",
"msg_from": "Jiří Fejfar <jurafejfar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: extensions are hitting the ceiling"
},
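The "static" vs "dynamic" data distinction described above maps onto the existing configuration-table mechanism; a minimal sketch of how an extension script marks only user data for dumping (the table name and filter condition are invented for illustration, and `pg_extension_config_dump` may only be called from an extension's own script):

```sql
-- In the extension's install script:
CREATE TABLE my_ext_config (id int PRIMARY KEY, value text);

-- "Static" row shipped with the extension; pg_dump skips extension-owned
-- table contents by default.
INSERT INTO my_ext_config VALUES (1, 'shipped default');

-- Ask pg_dump to include only the rows matching the filter condition,
-- i.e. the user-inserted ("dynamic") data.
SELECT pg_catalog.pg_extension_config_dump('my_ext_config',
                                           'WHERE id >= 1000');
```

Rows outside the filter stay extension-owned, which is why rows still feel like second-class citizens: there is no per-row dependency tracking, only this per-table dump filter.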
{
"msg_contents": "On Tue, Apr 16, 2019 at 04:24:20AM -0500, Eric Hanson wrote:\n> On Tue, Apr 16, 2019 at 12:47 AM Noah Misch <noah@leadboat.com> wrote:\n> > https://www.postgresql.org/message-id/20180710014308.GA805781@rfd.leadboat.com\n> >\n> > The @DEPNAME_schema@ thing was trivial to implement, but I shelved it.\n> > I'm attaching the proof of concept, for your information.\n\n> Why shelved? I like it. You said you lean toward 2b in the link above,\n> but there is no 2b :-) but 1b was this option, which maybe you meant?\n\n(2) is a mutation of (1), so (2b) exists by mutating (1b) according to the\ndescription of (2). In other words, (2b) would be this:\n\n Drop relocatable=true from extensions that have cause to do so (by adding a\n new version number and versioned control file): cube, earthdistance,\n pageinspect, pg_freespacemap, xml2. Do likewise for others as needed in the\n future. To relocate an affected extension, drop and recreate it. Warn\n about relocatable=true in non-core extensions. Expand @DEPNAME_schema@ in\n extension SQL files. Use @cube_schema@ to refer to the right objects.\n\nI shelved it because thread\nhttp://postgr.es/m/flat/20180830070609.GA1485875@rfd.leadboat.com did not\naccept it as a solution for contrib/ extensions. If it's not good enough for\ncontrib/, it's not good enough for this problem space.\n\n> The other approach would be to have each extension be in it's own schema,\n> whose name is fixed for life. Then there are no collisions and no\n> ambiguity about their location. I don't use NPM but was just reading\n> about how they converted their package namespace from a single global\n> namespace with I think it was 30k packages in it,\n> to @organization/packagename. I don't know how folks would feel about a\n> central namespace registry, I don't love the idea if we can find a way\n> around it, but would settle for it if there's no better solution. Either\n> that or use a UUID as the schema name. Truly hideous. 
But it seems like\n> your approach above with just dynamically looking up the extension's schema\n> as a variable would solve everything.\n\nThat's like how C/C++/Java identifiers work, turning each @DEPNAME_schema@\ninto a constant. If we were starting from scratch, that's attractive.\nUnfortunately, folks have applications that expect to use e.g. public.earth().\nWe'd need a big benefit to justify obligating those users to migrate. If we\nhad @DEPNAME_schema@, communities of users could decide to adopt a local\nconvention of a fixed schema per extension. Other communities of users,\nparticularly those with substantial stable code, could retain their current\nschema usage patterns.\n\nThanks,\nnm\n\n\n",
"msg_date": "Sun, 21 Apr 2019 20:25:22 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: extensions are hitting the ceiling"
}
] |
[
{
"msg_contents": "Hello, Postgres hackers,\n\nPlease see the attached patches.\n\nThe first patch adds an option to automatically generate recovery conf\ncontents in related files, following pg_basebackup. In the patch,\nGenerateRecoveryConf(), WriteRecoveryConf() and escape_quotes() are almost\nsame as them on pg_basebackup. The main difference is due to replication\nslot support and code (variables) limit. It seems that we could slightly\nrefactor later to put some common code into another file after aligning\npg_rewind with pg_basebackup. This was tested manually and was done by\nJimmy (cc-ed), Ashiwin (cc-ed) and me.\n\nAnother patch does automatic clean shutdown by running a single mode\npostgres instance if the target was not clean shut down since that is\nrequired by pg_rewind. This was manually tested and was done by Jimmy\n(cc-ed) and me. I'm not sure if we want a test case for that though.\n\nThanks.",
"msg_date": "Tue, 19 Mar 2019 14:09:03 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Two pg_rewind patches (auto generate recovery conf and ensure clean\n shutdown)"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 02:09:03PM +0800, Paul Guo wrote:\n> The first patch adds an option to automatically generate recovery conf\n> contents in related files, following pg_basebackup. In the patch,\n> GenerateRecoveryConf(), WriteRecoveryConf() and escape_quotes() are almost\n> same as them on pg_basebackup. The main difference is due to replication\n> slot support and code (variables) limit. It seems that we could slightly\n> refactor later to put some common code into another file after aligning\n> pg_rewind with pg_basebackup. This was tested manually and was done by\n> Jimmy (cc-ed), Ashiwin (cc-ed) and me.\n\n\nInteresting. The two routines have really the same logic, I would\nrecommend to have a first patch which does the refactoring and have\npg_rewind use it, and then a second patch which writes recovery.conf\nand uses the first patch to get the contents. Please note that the\ncommon routine needs to be version-aware as pg_basebackup requires\ncompatibility with past versions, but you could just pass the version\nnumber from the connection, and have pg_rewind pass the compiled-in\nversion value.\n\n> Another patch does automatic clean shutdown by running a single mode\n> postgres instance if the target was not clean shut down since that is\n> required by pg_rewind. This was manually tested and was done by Jimmy\n> (cc-ed) and me. I'm not sure if we want a test case for that though.\n\nI am not sure that I see the value in that. I'd rather let the\nrequired service start and stop out of pg_rewind and not introduce\ndependencies with other binaries. This step can also take quite some\ntime depending on the amount of WAL to replay post-crash at recovery\nand the shutdown checkpoint which is required to reach a consistent\non-disk state.\n--\nMichael",
"msg_date": "Tue, 19 Mar 2019 15:18:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 2:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 19, 2019 at 02:09:03PM +0800, Paul Guo wrote:\n> > The first patch adds an option to automatically generate recovery conf\n> > contents in related files, following pg_basebackup. In the patch,\n> > GenerateRecoveryConf(), WriteRecoveryConf() and escape_quotes() are\n> almost\n> > same as them on pg_basebackup. The main difference is due to replication\n> > slot support and code (variables) limit. It seems that we could slightly\n> > refactor later to put some common code into another file after aligning\n> > pg_rewind with pg_basebackup. This was tested manually and was done by\n> > Jimmy (cc-ed), Ashiwin (cc-ed) and me.\n>\n>\n> Interesting. The two routines have really the same logic, I would\n> recommend to have a first patch which does the refactoring and have\n> pg_rewind use it, and then a second patch which writes recovery.conf\n> and uses the first patch to get the contents. Please note that the\n>\n\nThis is a good suggestion also. Will do it.\n\n\n> common routine needs to be version-aware as pg_basebackup requires\n> compatibility with past versions, but you could just pass the version\n> number from the connection, and have pg_rewind pass the compiled-in\n> version value.\n>\n> > Another patch does automatic clean shutdown by running a single mode\n> > postgres instance if the target was not clean shut down since that is\n> > required by pg_rewind. This was manually tested and was done by Jimmy\n> > (cc-ed) and me. I'm not sure if we want a test case for that though.\n>\n> I am not sure that I see the value in that. I'd rather let the\n> required service start and stop out of pg_rewind and not introduce\n> dependencies with other binaries. This step can also take quite some\n>\n\nThis makes recovery more automatically. 
Yes, it will add the dependency on\nthe postgres\nbinary, but it seems that most time pg_rewind should be shipped as postgres\nin the same install directory. From my experience of manually testing\npg_rewind,\nI feel that this besides auto-recovery-conf writing really alleviate my\nburden. I'm not sure how\nother users usually do before running pg_rewind when the target is not\ncleanly shut down,\nbut probably we can add an argument to pg_rewind to give those people who\nwant to\nhandle target separately another chance? default on or off whatever.\n\n\n> time depending on the amount of WAL to replay post-crash at recovery\n> and the shutdown checkpoint which is required to reach a consistent\n> on-disk state.\n>\n\nThe time is still required for people who want to make the target ready for\npg_rewind in another way.\n\nThanks.",
"msg_date": "Wed, 20 Mar 2019 12:48:52 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 12:48:52PM +0800, Paul Guo wrote:\n> This is a good suggestion also. Will do it.\n\nPlease note also that we don't care about recovery.conf since v12 as\nrecovery parameters are now GUCs. I would suggest appending those\nextra parameters to postgresql.auto.conf, which is what pg_basebackup\ndoes.\n--\nMichael",
"msg_date": "Wed, 20 Mar 2019 14:20:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 1:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Mar 20, 2019 at 12:48:52PM +0800, Paul Guo wrote:\n> > This is a good suggestion also. Will do it.\n>\n> Please note also that we don't care about recovery.conf since v12 as\n> recovery parameters are now GUCs. I would suggest appending those\n> extra parameters to postgresql.auto.conf, which is what pg_basebackup\n> does.\n>\nYes, the recovery conf patch in the first email did like this, i.e. writing\npostgresql.auto.conf & standby.signal",
"msg_date": "Wed, 20 Mar 2019 13:23:36 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
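On v12 and later, the generated "recovery configuration" discussed above amounts to two artifacts in the target data directory; a sketch of the equivalent manual steps (the directory path and `primary_conninfo` value are examples, and the exact parameters the patch writes may differ):

```shell
PGDATA=./target_data
mkdir -p "$PGDATA"

# Append recovery parameters (plain GUCs since v12) to postgresql.auto.conf,
# mirroring what pg_basebackup -R does.
cat >> "$PGDATA/postgresql.auto.conf" <<'EOF'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
EOF

# An empty standby.signal file tells the server to start as a standby.
touch "$PGDATA/standby.signal"

ls "$PGDATA"
```

The point of having pg_rewind write these itself is that the rewound target can then be started directly as a standby without extra scripting.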
{
"msg_contents": "Hi Michael,\n\nI updated the patches as attached following previous discussions.\n\nThe two patches:\nv2-0001-Extact-common-functions-from-pg_basebackup-into-s.patch\nv2-0002-Add-option-to-write-recovery-configuration-inform.patch\n\n1. 0001 does move those common functions & variables to two new files (one\n.c and one .h) for both pg_rewind and pg_basebackup use,\nnote the functions are slightly modified (e.g. because conn is probably\nNULL on pg_rewind). I do not know where is more proper to put the\nnew files. Currently, they are under pg_basebackup and are used in\npg_rewind (Makefile modified to support that).\n\n2. 0002 adds the option to write recovery conf.\n\nThe below patch runs single mode Postgres if needed to make sure the target\nis cleanly shutdown. A new option is added (off by default).\nv2-0001-Ensure-target-clean-shutdown-at-beginning-of-pg_r.patch\n\nI've manually tested them and installcheck passes.\n\nThanks.\n\nOn Wed, Mar 20, 2019 at 1:23 PM Paul Guo <pguo@pivotal.io> wrote:\n\n>\n>\n> On Wed, Mar 20, 2019 at 1:20 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Wed, Mar 20, 2019 at 12:48:52PM +0800, Paul Guo wrote:\n>> > This is a good suggestion also. Will do it.\n>>\n>> Please note also that we don't care about recovery.conf since v12 as\n>> recovery parameters are now GUCs. I would suggest appending those\n>> extra parameters to postgresql.auto.conf, which is what pg_basebackup\n>> does.\n>>\n> Yes, the recovery conf patch in the first email did like this, i.e.\n> writing postgresql.auto.conf & standby.signal\n>\n>",
"msg_date": "Fri, 19 Apr 2019 11:40:04 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 3:41 PM Paul Guo <pguo@pivotal.io> wrote:\n> I updated the patches as attached following previous discussions.\n\nHi Paul,\n\nCould we please have a fresh rebase now that the CF is here?\n\nThanks,\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Jul 2019 23:34:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Apr-19, Paul Guo wrote:\n\n> The below patch runs single mode Postgres if needed to make sure the target\n> is cleanly shutdown. A new option is added (off by default).\n> v2-0001-Ensure-target-clean-shutdown-at-beginning-of-pg_r.patch\n\nWhy do we need an option for this? Is there a reason not to do this\nunconditionally?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Jul 2019 11:57:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "Rebased, aligned with recent changes in pg_rewind/pg_basebackup and then\nretested. Thanks.\n\nOn Mon, Jul 1, 2019 at 7:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Fri, Apr 19, 2019 at 3:41 PM Paul Guo <pguo@pivotal.io> wrote:\n> > I updated the patches as attached following previous discussions.\n>\n> Hi Paul,\n>\n> Could we please have a fresh rebase now that the CF is here?\n>\n> Thanks,\n>\n> --\n> Thomas Munro\n>\n> https://enterprisedb.com\n>",
"msg_date": "Tue, 2 Jul 2019 13:46:21 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 12:35 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Apr-19, Paul Guo wrote:\n>\n> > The below patch runs single mode Postgres if needed to make sure the\n> target\n> > is cleanly shutdown. A new option is added (off by default).\n> > v2-0001-Ensure-target-clean-shutdown-at-beginning-of-pg_r.patch\n>\n> Why do we need an option for this? Is there a reason not to do this\n> unconditionally?\n>\n\nThere is concern about this (see previous emails in this thread). On\ngreenplum (MPP DB based on Postgres),\nwe unconditionally do this. I'm not sure about usually how Postgres users\ndo this when there is an unclean shutdown,\nbut providing an option seem to be safer to avoid breaking existing\nscript/service whatever. If many people\nthink this option is unnecessary, I'm fine to remove the option and keep\nthe code logic.",
"msg_date": "Tue, 2 Jul 2019 13:54:45 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 5:46 PM Paul Guo <pguo@pivotal.io> wrote:\n> Rebased, aligned with recent changes in pg_rewind/pg_basebackup and then retested. Thanks.\n\nHi Paul,\n\nA minor build problem on Windows:\n\nsrc/bin/pg_rewind/pg_rewind.c(32): fatal error C1083: Cannot open\ninclude file: 'backup_common.h': No such file or directory\n[C:\\projects\\postgresql\\pg_rewind.vcxproj]\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46422\nhttp://cfbot.cputube.org/paul-guo.html\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 10:54:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "Yes, the patches changed Makefile so that pg_rewind and pg_basebackup could\nuse some common code, but for the Windows build, I'm not sure where those\nWindows build files are. Does anyone know about that? Thanks.\n\nOn Tue, Jul 9, 2019 at 6:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Tue, Jul 2, 2019 at 5:46 PM Paul Guo <pguo@pivotal.io> wrote:\n> > Rebased, aligned with recent changes in pg_rewind/pg_basebackup and then\n> retested. Thanks.\n>\n> Hi Paul,\n>\n> A minor build problem on Windows:\n>\n> src/bin/pg_rewind/pg_rewind.c(32): fatal error C1083: Cannot open\n> include file: 'backup_common.h': No such file or directory\n> [C:\\projects\\postgresql\\pg_rewind.vcxproj]\n>\n> https://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46422\n> http://cfbot.cputube.org/paul-guo.html\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>",
"msg_date": "Tue, 9 Jul 2019 22:48:49 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 10:48:49PM +0800, Paul Guo wrote:\n> Yes, the patches changed Makefile so that pg_rewind and pg_basebackup could\n> use some common code, but for Windows build, I'm not sure where are those\n> window build files. Does anyone know about that? Thanks.\n\nThe VS scripts are located in src/tools/msvc/. You will likely need\nto tweak things like $frontend_extraincludes or variables in the same\narea for this patch (please see Mkvcbuild.pm).\n--\nMichael",
"msg_date": "Wed, 10 Jul 2019 16:28:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 09, 2019 at 10:48:49PM +0800, Paul Guo wrote:\n> > Yes, the patches changed Makefile so that pg_rewind and pg_basebackup\n> could\n> > use some common code, but for Windows build, I'm not sure where are those\n> > window build files. Does anyone know about that? Thanks.\n>\n> The VS scripts are located in src/tools/msvc/. You will likely need\n> to tweak things like $frontend_extraincludes or variables in the same\n> area for this patch (please see Mkvcbuild.pm).\n>\n\nThanks. Both Mkvcbuild.pm and pg_rewind/Makefile are modified to make\nWindows build pass in a\nlocal environment (Hopefully this passes the CI testing), also now\npg_rewind/Makefile does not\ncreate soft link for backup_common.h anymore. Instead -I is used to specify\nthe header directory.\n\nI also noticed that doc change is needed so modified documents for the two\nnew options accordingly.\nPlease see the attached new patches.",
"msg_date": "Mon, 15 Jul 2019 16:52:14 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Jul-15, Paul Guo wrote:\n\n> Thanks. Both Mkvcbuild.pm and pg_rewind/Makefile are modified to make\n> Windows build pass in a\n> local environment (Hopefully this passes the CI testing), also now\n> pg_rewind/Makefile does not\n> create soft link for backup_common.h anymore. Instead -I is used to specify\n> the header directory.\n\nIt seems there's minor breakage in the build, per CFbot. Can you\nplease rebase this?\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 18:39:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n> It seems there's minor breakage in the build, per CFbot. Can you\n> please rebase this?\n>\n\nThere is a code conflict. See attached for the new version. Thanks.",
"msg_date": "Thu, 5 Sep 2019 15:41:14 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "Thanks for rebasing.\n\nI didn't like 0001 very much.\n\n* It seems now would be the time to stop pretending we're using a file\ncalled recovery.conf; I know we still support older server versions that\nuse that file, but it sounds like we should take the opportunity to\nrename the function to be less misleading once those versions vanish out\nof existence.\n\n* disconnect_and_exit seems a bit out of place compared to the other\nparts of this new module. I think you only put it there so that the\n'conn' can be a global, and that you can stop passing 'conn' as a\nvariable to GenerateRecoveryConf. It seems more modular to me to keep\nit as a separate variable in each program and have it passed down to the\nroutine that writes the file.\n\n* From modularity also seems better to me to avoid a global variable\n'recoveryconfcontents' and instead return the string from\nGenerateRecoveryConf to pass as a param to WriteRecoveryConf.\n(In fact, I wonder why the current structure is like it is, namely to\nhave ReceiveAndUnpackTarFile write the file; why wouldn't its caller\nbe responsible for writing it?)\n\nI wonder about putting this new file in src/fe_utils instead of keeping\nit in pg_basebackup and symlinking to pg_rewind. Maybe if we make it a\ntrue module (recovery_config_gen.c) it makes more sense there.\n\n0002 seems okay as far as it goes.\n\n\n0003:\n\nI still don't understand why we need a command-line option to do this.\nWhy should it not be the default behavior?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 5 Sep 2019 09:23:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n> Thanks for rebasing.\n>\n> I didn't like 0001 very much.\n>\n> * It seems now would be the time to stop pretending we're using a file\n> called recovery.conf; I know we still support older server versions that\n> use that file, but it sounds like we should take the opportunity to\n> rename the function to be less misleading once those versions vanish out\n> of existence.\n>\n\nHow about renaming the function names to\nGenerateRecoveryConf -> GenerateRecoveryConfContents\nWriteRecoveryConf -> WriteRecoveryConfInfo <- it writes standby.signal\non pg12+, so function name WriteRecoveryConfContents is not accurate.\nand\nvariable writerecoveryconf -> write_recovery_conf_info?\n\n\n> * disconnect_and_exit seems a bit out of place compared to the other\n> parts of this new module. I think you only put it there so that the\n> 'conn' can be a global, and that you can stop passing 'conn' as a\n> variable to GenerateRecoveryConf. It seems more modular to me to keep\n> it as a separate variable in each program and have it passed down to the\n> routine that writes the file.\n>\n> * From modularity also seems better to me to avoid a global variable\n> 'recoveryconfcontents' and instead return the string from\n> GenerateRecoveryConf to pass as a param to WriteRecoveryConf.\n> (In fact, I wonder why the current structure is like it is, namely to\n> have ReceiveAndUnpackTarFile write the file; why wouldn't its caller\n> be responsible for writing it?)\n>\n\nReasonable to make common code include fewer variables. I can try modifying\nthe patches to remove the previously added variables below in the common\ncode.\n\n+/* Contents of configuration file to be generated */\n+extern PQExpBuffer recoveryconfcontents;\n+\n+extern bool writerecoveryconf;\n+extern char *replication_slot;\n+PGconn *conn;\n\n\n>\n> I wonder about putting this new file in src/fe_utils instead of keeping\n> it in pg_basebackup and symlinking to pg_rewind. Maybe if we make it a\n> true module (recovery_config_gen.c) it makes more sense there.\n>\n\nI thought some about where to put the common code also. It seems pg_rewind\nand pg_basebackup are the only consumers of the small common code. I doubt\nit deserves a separate file under src/fe_utils.\n\n\n>\n> 0003:\n>\n> I still don't understand why we need a command-line option to do this.\n> Why should it not be the default behavior?\n>\n\nThis was discussed but frankly speaking I do not know how other postgres\nusers or enterprise providers handle this (probably some have own scripts?).\nI could easily remove the option code if more and more people agree on that,\nor at least we could turn it on by default?\n\nThanks",
"msg_date": "Mon, 9 Sep 2019 22:18:49 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Sep-09, Paul Guo wrote:\n\n> >\n> > Thanks for rebasing.\n> >\n> > I didn't like 0001 very much.\n> >\n> > * It seems now would be the time to stop pretending we're using a file\n> > called recovery.conf; I know we still support older server versions that\n> > use that file, but it sounds like we should take the opportunity to\n> > rename the function to be less misleading once those versions vanish out\n> > of existence.\n> \n> How about renaming the function names to\n> GenerateRecoveryConf -> GenerateRecoveryConfContents\n> WriteRecoveryConf -> WriteRecoveryConfInfo <- it writes standby.signal\n> on pg12+, so function name WriteRecoveryConfContents is not accurate.\n\nGenerateRecoveryConfig / WriteRecoveryConfig ?\n\n> > I wonder about putting this new file in src/fe_utils instead of keeping\n> > it in pg_basebackup and symlinking to pg_rewind. Maybe if we make it a\n> > true module (recovery_config_gen.c) it makes more sense there.\n> >\n> I thought some about where to put the common code also. It seems pg_rewind\n> and pg_basebackup are the only consumers of the small common code. I doubt\n> it deserves a separate file under src/fe_utils.\n\nHmm, but other things there are also used by only two programs, say\npsqlscan.l and conditional.c are just for psql and pgbench.\n\n> > 0003:\n> >\n> > I still don't understand why we need a command-line option to do this.\n> > Why should it not be the default behavior?\n> \n> This was discussed but frankly speaking I do not know how other postgres\n> users or enterprise providers handle this (probably some have own scripts?).\n> I could easily remove the option code if more and more people agree on that\n> or at least we could turn it on by default?\n\nWell, I've seen no contrary votes, and frankly I see no use for the\nopposite (current) behavior.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 10 Sep 2019 14:52:19 -0300",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "I've updated the patch series following the suggestions. Thanks.",
"msg_date": "Thu, 19 Sep 2019 21:21:04 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "The patch series failed on Windows CI. I modified the Windows build file to\nfix that. See attached for the v7 version.",
"msg_date": "Fri, 20 Sep 2019 15:33:13 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Sep-20, Paul Guo wrote:\n\n> The patch series failed on Windows CI. I modified the Windows build file to\n> fix that. See attached for the v7 version.\n\nThanks.\n\nQuestion about 0003. If we specify --skip-clean-shutdown and the cluster\nwas not cleanly shut down, shouldn't we error out instead of trying to\npress on?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 24 Sep 2019 17:12:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n> On 2019-Sep-20, Paul Guo wrote:\n>\n> > The patch series failed on Windows CI. I modified the Windows build file\n> to\n> > fix that. See attached for the v7 version.\n>\n> Thanks.\n>\n> Question about 0003. If we specify --skip-clean-shutdown and the cluster\n> was not cleanly shut down, shouldn't we error out instead of trying to\n> press on?\n\n\npg_rewind would error out in this case, see sanityChecks().\nUsers are expected to clean up themselves if they use this argument.",
"msg_date": "Wed, 25 Sep 2019 10:03:44 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "CC Alexey for reasons that become clear below.\n\nOn 2019-Sep-25, Paul Guo wrote:\n\n> > Question about 0003. If we specify --skip-clean-shutdown and the cluster\n> > was not cleanly shut down, shouldn't we error out instead of trying to\n> > press on?\n> \n> pg_rewind would error out in this case, see sanityChecks().\n> Users are expected to clean up themselves if they use this argument.\n\nAh, good. We should have a comment about that below the relevant\nstanza, I suggest. (Or maybe in the same comment that ends in line\n272).\n\nI pushed 0001 with a few tweaks. Nothing really substantial, just my\nOCD that doesn't leave me alone ... but this means your subsequent\npatches need to be adjusted. One thing is that that patch touched\npg_rewind for no reason (those changes should have been in 0002) --\ndropped those.\n\nAnother thing in 0002 is that you're adding a \"-R\" switch to pg_rewind,\nbut we have another patch in the commitfest using the same switch for a\ndifferent purpose. Maybe you guys need to get to an agreement over who\nuses the letter :-) Also, it would be super helpful if you review\nAlexey's patch: https://commitfest.postgresql.org/24/1849/\n\n\nThis line is far too long:\n\n+ printf(_(\" -s, --skip-clean-shutdown skip running single-mode postgres if needed to make sure target is clean shutdown\\n\"));\n\nCan we make the description more concise?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 14:48:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Wed, 2019-09-25 at 14:48 -0300, Alvaro Herrera wrote:\n> Another thing in 0002 is that you're adding a \"-R\" switch to\n> pg_rewind, but we have another patch in the commitfest using the same\n> switch for a different purpose. Maybe you guys need to get to an\n> agreement over who uses the letter :-) Also, it would be super\n> helpful if you review Alexey's patch:\n> https://commitfest.postgresql.org/24/1849/\n\nI believe that -R should be reserved for creating recovery.conf,\nsimilar to pg_basebackup.\n\nEverything else would be confusing.\n\nI've been missing pg_rewind -R!\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 25 Sep 2019 21:26:54 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-09-25 20:48, Alvaro Herrera wrote:\n> CC Alexey for reasons that become clear below.\n> \n> Another thing in 0002 is that you're adding a \"-R\" switch to pg_rewind,\n> but we have another patch in the commitfest using the same switch for a\n> different purpose. Maybe you guys need to get to an agreement over who\n> uses the letter :-)\n> \n\nThank you for mentioning me. I've been silently monitoring this thread \nand was ready to modify my patch if this one proceeds faster. It \nseems like it's time :)\n\nOn 2019-09-25 22:26, Laurenz Albe wrote:\n> \n> I believe that -R should be reserved for creating recovery.conf,\n> similar to pg_basebackup.\n> \n> Everything else would be confusing.\n> \n> I've been missing pg_rewind -R!\n> \n\nYes, -R is already used in pg_basebackup for the same functionality, so \nit seems natural to use it here as well for consistency.\n\nI will review options naming in my own patch and update it accordingly. \nMaybe -w/-W or -a/-A options will be good, since it's about WALs \nretrieval from archive.\n\n\nRegards\n--\nAlexey\n\nP.S. Just noticed that in v12 the fullname of the -R option in pg_basebackup is \nstill --write-recovery-conf, which is good for backward compatibility, \nbut looks a little bit awkward, since recovery.conf doesn't exist \nanymore, does it? However, one may read it as \n'write-recovery-configuration', then it seems fine.\n\n\n\n",
"msg_date": "Wed, 25 Sep 2019 23:22:35 +0300",
"msg_from": "a.kondratov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 1:48 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> CC Alexey for reasons that become clear below.\n>\n> On 2019-Sep-25, Paul Guo wrote:\n>\n> > > Question about 0003. If we specify --skip-clean-shutdown and the\n> cluster\n> > > was not cleanly shut down, shouldn't we error out instead of trying to\n> > > press on?\n> >\n> > pg_rewind would error out in this case, see sanityChecks().\n> > Users are expected to clean up themselves if they use this argument.\n>\n> Ah, good. We should have a comment about that below the relevant\n> stanza, I suggest. (Or maybe in the same comment that ends in line\n> 272).\n>\n> I pushed 0001 with a few tweaks. Nothing really substantial, just my\n> OCD that doesn't leave me alone ... but this means your subsequent\n> patches need to be adjusted. One thing is that that patch touched\n> pg_rewind for no reason (those changes should have been in 0002) --\n> dropped those.\n>\n> Another thing in 0002 is that you're adding a \"-R\" switch to pg_rewind,\n> but we have another patch in the commitfest using the same switch for a\n> different purpose. Maybe you guys need to get to an agreement over who\n> uses the letter :-) Also, it would be super helpful if you review\n> Alexey's patch: https://commitfest.postgresql.org/24/1849/\n>\n>\n> This line is far too long:\n>\n> + printf(_(\" -s, --skip-clean-shutdown skip running single-mode\n> postgres if needed to make sure target is clean shutdown\\n\"));\n>\n> Can we make the description more concise?\n>\n\nThanks. I've updated the remaining two patches and attached as v8.\n\nNote in the 2nd patch, the long option is changed as below. Both the option\nand description\nnow seem to be more concise since we want db state as either DB_SHUTDOWNED\nor\nDB_SHUTDOWNED_IN_RECOVERY.\n\n\"-s, --no-ensure-shutdowned do not auto-fix unclean shutdown\"",
"msg_date": "Thu, 26 Sep 2019 22:05:16 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n>\n> Yes, -R is already used in pg_basebackup for the same functionality, so\n> it seems natural to use it here as well for consistency.\n>\n> I will review options naming in my own patch and update it accordingly.\n> Maybe -w/-W or -a/-A options will be good, since it's about WALs\n> retrieval from archive.\n>\n>\nThanks\n\n>\n> Regards\n> --\n> Alexey\n>\n> P.S. Just noticed that in v12 fullname of -R option in pg_basebackup is\n> still --write-recovery-conf, which is good for a backward compatibility,\n> but looks a little bit awkward, since recovery.conf doesn't exist\n> already, doesn't it? However, one may read it as\n> 'write-recovery-configuration', then it seems fine.\n>\n>\nYes, here is the description\n\"--write-recovery-conf write configuration for replication\"\nSo we do not mention that it is the file recovery.conf. People who do not know\nabout the recovery.conf history might not really be confused since\npostgresql has various configuration files.",
"msg_date": "Thu, 26 Sep 2019 22:09:46 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "\n> Thanks. I've updated the remaining two patches and attached as v8.\n\nGreat, thanks.\n\n> Note in the 2nd patch, the long option is changed as below. Both the option\n> and description\n> now seem to be more concise since we want db state as either DB_SHUTDOWNED\n> or\n> DB_SHUTDOWNED_IN_RECOVERY.\n> \n> \"-s, --no-ensure-shutdowned do not auto-fix unclean shutdown\"\n\nNote that \"shutdowned\" is incorrect English; we've let\nit live in the code because it's not user-visible, but we should\ncertainly not immortalize it where it becomes so. I suppose\n\"--no-ensure-shutdown\" is okay, although I think some may prefer\n\"--no-ensure-shut-down\". Opinions from native speakers would be\nwelcome. Also, let's expand \"auto-fix\" to \"automatically fix\" (or\n\"repair\" if there's room in the line? Not sure. Can be bikeshedded to\ndeath I guess.)\n\nSecondarily, I see no reason to test connstr_source rather than just\n\"conn\" in the other patch; doing it the other way is more natural, since\nit's that thing that's tested as an argument.\n\npg_rewind.c: Please put the new #include line keeping the alphabetical\norder.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 26 Sep 2019 11:51:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n>\n> > Note in the 2nd patch, the long option is changed as below. Both the\n> option\n> > and description\n> > now seem to be more concise since we want db state as either\n> DB_SHUTDOWNED\n> > or\n> > DB_SHUTDOWNED_IN_RECOVERY.\n> >\n> > \"-s, --no-ensure-shutdowned do not auto-fix unclean shutdown\"\n>\n> Note that \"shutdowned\" is incorrect English; we've let\n> it live in the code because it's not user-visible, but we should\n> certainly not immortalize it where it becomes so. I suppose\n> \"--no-ensure-shutdown\" is okay, although I think some may prefer\n> \"--no-ensure-shut-down\". Opinions from native speakers would be\n> welcome. Also, let's expand \"auto-fix\" to \"automatically fix\" (or\n> \"repair\" if there's room in the line? Not sure. Can be bikeshedded to\n> death I guess.)\n>\n\nI chose that one from the three below.\n\n--no-ensure-shutdown\n--no-ensure-shutdowned\n--no-ensure-clean-shutdown\n\nNow I agree for user experience we should not use the 2nd one. For\n--no-ensure-clean-shutdown or --no-ensure-shut-down, there seem to be too many dashes.\n\nI'm using --no-ensure-shutdown in the new version unless there are better\nsuggestions.\n\n\n>\n> Secondarily, I see no reason to test connstr_source rather than just\n> \"conn\" in the other patch; doing it the other way is more natural, since\n> it's that thing that's tested as an argument.\n>\n> pg_rewind.c: Please put the new #include line keeping the alphabetical\n> order.\n>\n\nAgreed to the above suggestions. I attached the v9.\n\nThanks.",
"msg_date": "Fri, 27 Sep 2019 11:27:56 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 27.09.2019 6:27, Paul Guo wrote:\n>\n>\n> Secondarily, I see no reason to test connstr_source rather than just\n> \"conn\" in the other patch; doing it the other way is more natural,\n> since\n> it's that thing that's tested as an argument.\n>\n> pg_rewind.c: Please put the new #include line keeping the alphabetical\n> order.\n>\n>\n> Agreed to the above suggestions. I attached the v9.\n>\n\nI went through the remaining two patches and they seem to be very clear \nand concise. However, there are two points I could complain about:\n\n1) Maybe I've missed it somewhere in the thread above, but currently \npg_rewind allows to run itself with -R and --source-pgdata. In that case \n-R option is just swallowed and neither standby.signal, nor \npostgresql.auto.conf is written, which is reasonable though. Should it \nbe stated somehow in the docs that -R option always has to go altogether \nwith --source-server? Or should pg_rewind notify user that options are \nincompatible and no recovery configuration will be written?\n\n2) Are you going to leave -R option completely without tap-tests? \nAttached is a small patch, which tests -R option along with the existing \n'remote' case. If needed it may be split into two separate cases. First, \nit tests that pg_rewind is able to succeed with minimal permissions \naccording to the Michael's patch d9f543e [1]. Next, it checks presence \nof standby.signal and adds REPLICATION permission to rewind_user to test \nthat new standby is able to start with generated recovery configuration.\n\n[1] \nhttps://github.com/postgres/postgres/commit/d9f543e9e9be15f92abdeaf870e57ef289020191\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Fri, 27 Sep 2019 15:18:59 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Sep-27, Alexey Kondratov wrote:\n\n> 1) Maybe I've missed it somewhere in the thread above, but currently\n> pg_rewind allows to run itself with -R and --source-pgdata. In that case -R\n> option is just swallowed and neither standby.signal, nor\n> postgresql.auto.conf is written, which is reasonable though. Should it be\n> stated somehow in the docs that -R option always has to go altogether with\n> --source-server? Or should pg_rewind notify user that options are\n> incompatible and no recovery configuration will be written?\n\nHmm I think it should throw an error, yeah. Ignoring options is not\ngood.\n\n> +\t\t# Now, when pg_rewind apparently succeeded with minimal permissions,\n> +\t\t# add REPLICATION privilege. So we could test that new standby\n> +\t\t# is able to connect to the new master with generated config.\n> +\t\t$node_standby->psql(\n> +\t\t\t'postgres', \"ALTER ROLE rewind_user WITH REPLICATION;\");\n\nI think this better use safe_psql.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 11:28:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Sep-27, Paul Guo wrote:\n\n> I'm using --no-ensure-shutdown in the new version unless there are better\n> suggestions.\n\nThat sounds sufficiently good. I pushed this patch, after fixing a few\nsmallish problems, such as an assertion failure because of the\nterminating \\n in the error message when \"postgres --single\" fails\n(which I tested by introducing a typo in the command). I also removed\nthe short option, because I doubt that this option is useful enough to\nwarrant using up such an important shorthand (Maybe if it had been\n-\\ or -% or -& I would have let it through, since I doubt anybody would\nhave wanted to use those for anything else). But if somebody disagrees,\nthey can send a patch to restore it, and we can then discuss the merits\nof individual chars to use.\n\nI also added quotes to DEVNULL, because we do that everywhere. Maybe\nthere exists a system somewhere that requires this ... !!??\n\nFinally, I split out the command in the error message in case it fails.\n\nThanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Sep 2019 16:52:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n>\n> I went through the remaining two patches and they seem to be very clear\n> and concise. However, there are two points I could complain about:\n>\n> 1) Maybe I've missed it somewhere in the thread above, but currently\n> pg_rewind allows to run itself with -R and --source-pgdata. In that case\n> -R option is just swallowed and neither standby.signal, nor\n> postgresql.auto.conf is written, which is reasonable though. Should it\n> be stated somehow in the docs that -R option always has to go altogether\n> with --source-server? Or should pg_rewind notify user that options are\n> incompatible and no recovery configuration will be written?\n>\n\nI modified code & doc to address this. In code, pg_rewind will error out\nfor the local case.\n\n\n> 2) Are you going to leave -R option completely without tap-tests?\n> Attached is a small patch, which tests -R option along with the existing\n> 'remote' case. If needed it may be split into two separate cases. First,\n> it tests that pg_rewind is able to succeed with minimal permissions\n> according to the Michael's patch d9f543e [1]. Next, it checks presence\n> of standby.signal and adds REPLICATION permission to rewind_user to test\n> that new standby is able to start with generated recovery configuration.\n>\n> [1]\n>\n> https://github.com/postgres/postgres/commit/d9f543e9e9be15f92abdeaf870e57ef289020191\n>\n>\nIt seems that we could further disabling recovery info setting code for the\n'remote' test case?\n\n- my $port_standby = $node_standby->port;\n- $node_master->append_conf(\n- 'postgresql.conf', qq(\n-primary_conninfo='port=$port_standby'\n-));\n+ if ($test_mode ne \"remote\")\n+ {\n+ my $port_standby = $node_standby->port;\n+ $node_master->append_conf(\n+ 'postgresql.conf',\n+ qq(primary_conninfo='port=$port_standby'));\n\n- $node_master->set_standby_mode();\n+ $node_master->set_standby_mode();\n+ }\n\nThanks.",
"msg_date": "Mon, 30 Sep 2019 15:07:48 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 27.09.2019 17:28, Alvaro Herrera wrote:\n>\n>> +\t\t# Now, when pg_rewind apparently succeeded with minimal permissions,\n>> +\t\t# add REPLICATION privilege. So we could test that new standby\n>> +\t\t# is able to connect to the new master with generated config.\n>> +\t\t$node_standby->psql(\n>> +\t\t\t'postgres', \"ALTER ROLE rewind_user WITH REPLICATION;\");\n> I think this better use safe_psql.\n>\n\nYes, indeed.\n\nOn 30.09.2019 10:07, Paul Guo wrote:\n>\n> 2) Are you going to leave -R option completely without tap-tests?\n> Attached is a small patch, which tests -R option along with the\n> existing\n> 'remote' case. If needed it may be split into two separate cases.\n> First,\n> it tests that pg_rewind is able to succeed with minimal permissions\n> according to the Michael's patch d9f543e [1]. Next, it checks\n> presence\n> of standby.signal and adds REPLICATION permission to rewind_user\n> to test\n> that new standby is able to start with generated recovery\n> configuration.\n>\n> [1]\n> https://github.com/postgres/postgres/commit/d9f543e9e9be15f92abdeaf870e57ef289020191\n>\n> It seems that we could further disabling recovery info setting code \n> for the 'remote' test case?\n>\n> - my $port_standby = $node_standby->port;\n> - $node_master->append_conf(\n> - 'postgresql.conf', qq(\n> -primary_conninfo='port=$port_standby'\n> -));\n> + if ($test_mode ne \"remote\")\n> + {\n> + my $port_standby = $node_standby->port;\n> + $node_master->append_conf(\n> + 'postgresql.conf',\n> + qq(primary_conninfo='port=$port_standby'));\n>\n> - $node_master->set_standby_mode();\n> + $node_master->set_standby_mode();\n> + }\n>\n>\n\nYeah, it makes sense. It is excessive for remote if we add '-R' there. \nI've updated and attached my test adding patch.\n\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Mon, 30 Sep 2019 11:51:25 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "OK, I pushed this patch as well as Alexey's test patch. It all works\nfor me, and the coverage report shows that we're doing the new thing ...\nthough only in the case that rewind *is* required. There is no test to\nverify the case where rewind is *not* required. I guess it'd also be\ngood to test the case when we throw the new error, if only for\ncompleteness ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Sep 2019 14:13:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 2019-Mar-19, Paul Guo wrote:\n\n> Hello, Postgres hackers,\n> \n> Please see the attached patches.\n\nBTW in the future if you have two separate patches, please post them in\nseparate threads and use separate commitfest items for each, even if\nthey have minor conflicts.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Sep 2019 14:15:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": ">\n> BTW in the future if you have two separate patches, please post them in\n> separate threads and use separate commitfest items for each, even if\n> they have minor conflicts.\n>\n\nSure. Thanks.",
"msg_date": "Tue, 1 Oct 2019 10:08:10 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "Hi Alvaro,\n\nOn 30.09.2019 20:13, Alvaro Herrera wrote:\n> OK, I pushed this patch as well as Alexey's test patch. It all works\n> for me, and the coverage report shows that we're doing the new thing ...\n> though only in the case that rewind *is* required. There is no test to\n> verify the case where rewind is *not* required. I guess it'd also be\n> good to test the case when we throw the new error, if only for\n> completeness ...\n\nI've directly followed your guess and tried to elaborate pg_rewind test \ncases and... It seems I've caught a few bugs:\n\n1) --dry-run actually wasn't completely 'dry'. It did update target \ncontrolfile, which could cause repetitive pg_rewind calls to fail after \ndry-run ones.\n\n2) --no-ensure-shutdown flag was broken, it simply didn't turn off this \nnew feature.\n\n3) --write-recovery-conf didn't obey the --dry-run flag.\n\nThus, it was definitely a good idea to add new tests. Two patches are \nattached:\n\n1) First one fixes all the issues above;\n\n2) Second one slightly increases pg_rewind overall code coverage from \n74% to 78.6%.\n\nShould I put this fix on the next commitfest?\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\nP.S. My apologies that I've missed two of these bugs during review.",
"msg_date": "Wed, 2 Oct 2019 20:28:09 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Wed, Oct 02, 2019 at 08:28:09PM +0300, Alexey Kondratov wrote:\n> I've directly followed your guess and tried to elaborate pg_rewind test\n> cases and... It seems I've caught a few bugs:\n> \n> 1) --dry-run actually wasn't completely 'dry'. It did update target\n> controlfile, which could cause repetitive pg_rewind calls to fail after\n> dry-run ones.\n\nI have just paid attention to this thread, but this is a bug which\ngoes down to 12 actually so let's treat it independently of the rest.\nThe control file was not written thanks to the safeguards in\nwrite_target_range() in past versions, but the recent refactoring\naround control file handling broke that promise. Another thing which\nis not completely exact is the progress reporting which should be\nreported even if the dry-run mode runs. That's less critical, but\nlet's make things consistent.\n\nPatch 0001 also forgot that recovery.conf should not be written either\nwhen no rewind is needed.\n\nI have reworked your first patch as per the attached. What do you\nthink about it? The part with the control file needs to go down to\nv12, and I would likely split that into two commits on HEAD: one for\nthe control file and a second for the recovery.conf portion with the\nfix for --no-ensure-shutdown to keep a cleaner history.\n\n+ # Check that incompatible options error out.\n+ command_fails(\n+ [\n+ 'pg_rewind', \"--debug\",\n+ \"--source-pgdata=$standby_pgdata\",\n+ \"--target-pgdata=$master_pgdata\", \"-R\",\n+ \"--no-ensure-shutdown\"\n+ ],\n+ 'pg_rewind local with -R');\nIncompatible options had better be checked within a separate perl\nscript? We generally do that for the other binaries.\n--\nMichael",
"msg_date": "Thu, 3 Oct 2019 12:07:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 03.10.2019 6:07, Michael Paquier wrote:\n> On Wed, Oct 02, 2019 at 08:28:09PM +0300, Alexey Kondratov wrote:\n>> I've directly followed your guess and tried to elaborate pg_rewind test\n>> cases and... It seems I've caught a few bugs:\n>>\n>> 1) --dry-run actually wasn't completely 'dry'. It did update target\n>> controlfile, which could cause repetitive pg_rewind calls to fail after\n>> dry-run ones.\n> I have just paid attention to this thread, but this is a bug which\n> goes down to 12 actually so let's treat it independently of the rest.\n> The control file was not written thanks to the safeguards in\n> write_target_range() in past versions, but the recent refactoring\n> around control file handling broke that promise. Another thing which\n> is not completely exact is the progress reporting which should be\n> reported even if the dry-run mode runs. That's less critical, but\n> let's make things consistent.\n\nI also thought about v12, though didn't check whether it's affected.\n\n> Patch 0001 also forgot that recovery.conf should not be written either\n> when no rewind is needed.\n\nYes, definitely, I forgot this code path, thanks.\n\n> I have reworked your first patch as per the attached. What do you\n> think about it? The part with the control file needs to go down to\n> v12, and I would likely split that into two commits on HEAD: one for\n> the control file and a second for the recovery.conf portion with the\n> fix for --no-ensure-shutdown to keep a cleaner history.\n\nIt looks fine for me excepting the progress reporting part. It now adds \nPG_CONTROL_FILE_SIZE to fetch_done. However, I cannot find that control \nfile is either included into filemap and fetch_size or counted during \ncalculate_totals(). 
Maybe I've missed something, but now it looks like \nwe report something that wasn't planned for progress reporting, doesn't it?\n\n> + # Check that incompatible options error out.\n> + command_fails(\n> + [\n> + 'pg_rewind', \"--debug\",\n> + \"--source-pgdata=$standby_pgdata\",\n> + \"--target-pgdata=$master_pgdata\", \"-R\",\n> + \"--no-ensure-shutdown\"\n> + ],\n> + 'pg_rewind local with -R');\n> Incompatible options had better be checked within a separate perl\n> script? We generally do that for the other binaries.\n\nYes, it makes sense. I've reworked the patch with tests and added a \ncouple of extra cases.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 3 Oct 2019 12:43:37 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Thu, Oct 03, 2019 at 12:43:37PM +0300, Alexey Kondratov wrote:\n> On 03.10.2019 6:07, Michael Paquier wrote:\n>> I have reworked your first patch as per the attached. What do you\n>> think about it? The part with the control file needs to go down to\n>> v12, and I would likely split that into two commits on HEAD: one for\n>> the control file and a second for the recovery.conf portion with the\n>> fix for --no-ensure-shutdown to keep a cleaner history.\n> \n> It looks fine for me excepting the progress reporting part. It now adds\n> PG_CONTROL_FILE_SIZE to fetch_done. However, I cannot find that control file\n> is either included into filemap and fetch_size or counted during\n> calculate_totals(). Maybe I've missed something, but now it looks like we\n> report something that wasn't planned for progress reporting, doesn't\n> it?\n\nRight. The pre-12 code actually handles that incorrecly as it assumed\nthat any files written through file_ops.c should be part of the\nprogress. So I went with the simplest solution, and backpatched this\npart with 6f3823b. I have also committed the set of fixes for the new\noptions so as we have a better base of work than what's on HEAD\ncurrently.\n\n>> + # Check that incompatible options error out.\n>> + command_fails(\n>> + [\n>> + 'pg_rewind', \"--debug\",\n>> + \"--source-pgdata=$standby_pgdata\",\n>> + \"--target-pgdata=$master_pgdata\", \"-R\",\n>> + \"--no-ensure-shutdown\"\n>> + ],\n>> + 'pg_rewind local with -R');\n>> Incompatible options had better be checked within a separate perl\n>> script? We generally do that for the other binaries.\n> \n> Yes, it makes sense. 
I've reworked the patch with tests and added a couple\n> of extra cases.\n\nRegarding the tests, adding a --dry-run command is a good idea.\nHowever I think that there is more value to automate the use of the\nsingle user mode automatically in the tests as that's more critical\nfrom the point of view of rewind run, and stopping the cluster with\nimmediate mode causes, as expected, the next --dry-run command to\nfail.\n\nAnother thing is that I think that we should use -F with --single.\nThis makes recovery faster, and the target data folder is synced\nat the end of pg_rewind anyway.\n\nUsing the long option names makes the tests easier to follow in this\ncase, so I have switched -R to --write-recovery-conf.\n\nSome comments and the docs have been using some confusing wording, so\nI have reworked what I found (like many \"it\" in a single sentence\nreferring different things).\n\n+command_fails(\n+ [\n+ 'pg_rewind', \"--debug\",\n+ \"--source-pgdata=$standby_pgdata\",\n+ \"--target-pgdata=$master_pgdata\",\n+ \"--no-ensure-shutdown\"\n+ ],\n+ 'pg_rewind local without source shutdown');\nRegarding all the set of incompatible options, we have much more of\nthat after the initial option parsing so I think that we should group\nall the cheap ones together. Let's tackle that as a separate patch.\nWe can also just check after --no-ensure-shutdown directly in\nRewindTest.pm as I have switched the cluster to not be cleanly shut\ndown anymore to stress the automatic recovery path, and trigger that\nbefore running pg_rewind for the local and remote mode. \n\nAttached is an updated patch with all I found. What do you think?\n--\nMichael",
"msg_date": "Fri, 4 Oct 2019 17:37:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 04.10.2019 11:37, Michael Paquier wrote:\n> On Thu, Oct 03, 2019 at 12:43:37PM +0300, Alexey Kondratov wrote:\n>> On 03.10.2019 6:07, Michael Paquier wrote:\n>>> I have reworked your first patch as per the attached. What do you\n>>> think about it? The part with the control file needs to go down to\n>>> v12, and I would likely split that into two commits on HEAD: one for\n>>> the control file and a second for the recovery.conf portion with the\n>>> fix for --no-ensure-shutdown to keep a cleaner history.\n>> It looks fine for me excepting the progress reporting part. It now adds\n>> PG_CONTROL_FILE_SIZE to fetch_done. However, I cannot find that control file\n>> is either included into filemap and fetch_size or counted during\n>> calculate_totals(). Maybe I've missed something, but now it looks like we\n>> report something that wasn't planned for progress reporting, doesn't\n>> it?\n> Right. The pre-12 code actually handles that incorrecly as it assumed\n> that any files written through file_ops.c should be part of the\n> progress. So I went with the simplest solution, and backpatched this\n> part with 6f3823b. 
I have also committed the set of fixes for the new\n> options so as we have a better base of work than what's on HEAD\n> currently.\n\nGreat, thanks.\n\n>\n> Regarding the tests, adding a --dry-run command is a good idea.\n> However I think that there is more value to automate the use of the\n> single user mode automatically in the tests as that's more critical\n> from the point of view of rewind run, and stopping the cluster with\n> immediate mode causes, as expected, the next --dry-run command to\n> fail.\n>\n> Another thing is that I think that we should use -F with --single.\n> This makes recovery faster, and the target data folder is synced\n> at the end of pg_rewind anyway.\n>\n> Using the long option names makes the tests easier to follow in this\n> case, so I have switched -R to --write-recovery-conf.\n>\n> Some comments and the docs have been using some confusing wording, so\n> I have reworked what I found (like many \"it\" in a single sentence\n> referring different things).\n\nI agree with all the points. Shutting down target server using \n'immediate' mode is a good way to test ensureCleanShutdown automatically.\n\n> Regarding all the set of incompatible options, we have much more of\n> that after the initial option parsing so I think that we should group\n> all the cheap ones together. Let's tackle that as a separate patch.\n> We can also just check after --no-ensure-shutdown directly in\n> RewindTest.pm as I have switched the cluster to not be cleanly shut\n> down anymore to stress the automatic recovery path, and trigger that\n> before running pg_rewind for the local and remote mode.\n>\n> Attached is an updated patch with all I found. What do you think?\n\nI've checked your patch, but it seems that it cannot be applied as is, \nsince it e.g. adds a comment to 005_same_timeline.pl without actually \nchanging the test. So I've slightly modified your patch and tried to fit \nboth dry-run and ensureCleanShutdown testing together. 
It works just \nfine and fails immediately if any of the recent fixes is reverted. I still \nthink that dry-run testing is worth adding, since it helped to catch \nthis v12 refactoring issue, but feel free to throw it away if it isn't \ncommittable right now, of course.\n\nAs for incompatible options and sanity checks testing, yes, I agree that \nit is a matter of a different patch. I attached it as a separate WIP patch \njust for history. Maybe I will try to gather more cases there later.\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Fri, 4 Oct 2019 17:21:25 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Fri, Oct 04, 2019 at 05:21:25PM +0300, Alexey Kondratov wrote:\n> I've checked your patch, but it seems that it cannot be applied as is, since\n> it e.g. adds a comment to 005_same_timeline.pl without actually changing the\n> test. So I've slightly modified your patch and tried to fit both dry-run and\n> ensureCleanShutdown testing together. It works just fine and fails\n> immediately if any of recent fixes is reverted. I still think that dry-run\n> testing is worth adding, since it helped to catch this v12 refactoring\n> issue, but feel free to throw it way if it isn't commitable right now, of\n> course.\n\nI can guarantee the last patch I sent can be applied on top of HEAD:\nhttps://www.postgresql.org/message-id/20191004083721.GA1829@paquier.xyz\n\nIt would be nice to add the --dry-run part, though I think that we\ncould just make that part of one of the existing tests, and stop the\ntarget server first (got to think about that part, please see below).\n\n> As for incompatible options and sanity checks testing, yes, I agree that it\n> is a matter of different patch. I attached it as a separate WIP patch just\n> for history. Maybe I will try to gather more cases there later.\n\nThanks. I have applied the first patch for the various improvements\naround --no-ensure-shutdown.\n\nRegarding the rest, I have hacked my way through as per the attached.\nThe previous set of patches did the following, which looked either\noverkill or not necessary:\n- Why running test 005 with the remote mode?\n- --dry-run coverage is basically the same with the local and remote\nmodes, so it seems like a waste of resource to run it for all the\ntests and all the modes. 
I tend to think that this would live better\nas part of another existing test, only running for say the local mode.\nIt is also possible to group all your tests from patch 2 and\n006_actions.pl in this area.\n- There is no need for the script checking for options combinations to\ninitialize a data folder. It is important to design the tests to be\ncheap and meaningful.\n\nPatch v3-0002 also had a test to make sure that the source server is\nshut down cleanly before using it. I have included that part as\nwell, as the flow feels right.\n\nSo, Alexey, what do you think?\n--\nMichael",
"msg_date": "Mon, 7 Oct 2019 10:06:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On 07.10.2019 4:06, Michael Paquier wrote:\n> On Fri, Oct 04, 2019 at 05:21:25PM +0300, Alexey Kondratov wrote:\n>> I've checked your patch, but it seems that it cannot be applied as is, since\n>> it e.g. adds a comment to 005_same_timeline.pl without actually changing the\n>> test. So I've slightly modified your patch and tried to fit both dry-run and\n>> ensureCleanShutdown testing together. It works just fine and fails\n>> immediately if any of recent fixes is reverted. I still think that dry-run\n>> testing is worth adding, since it helped to catch this v12 refactoring\n>> issue, but feel free to throw it way if it isn't commitable right now, of\n>> course.\n> I can guarantee the last patch I sent can be applied on top of HEAD:\n> https://www.postgresql.org/message-id/20191004083721.GA1829@paquier.xyz\n\nYes, it did, but my comment was about these lines:\n\ndiff --git a/src/bin/pg_rewind/t/005_same_timeline.pl \nb/src/bin/pg_rewind/t/005_same_timeline.pl\nindex 40dbc44caa..df469d3939 100644\n--- a/src/bin/pg_rewind/t/005_same_timeline.pl\n+++ b/src/bin/pg_rewind/t/005_same_timeline.pl\n@@ -1,3 +1,7 @@\n+#\n+# Test that running pg_rewind with the source and target clusters\n+# on the same timeline runs successfully.\n+#\n\nYou have added this new comment section, but kept the old one, which was \npretty much the same [1].\n\n> Regarding the rest, I have hacked my way through as per the attached.\n> The previous set of patches did the following, which looked either\n> overkill or not necessary:\n> - Why running test 005 with the remote mode?\n\nOK, it was definitely an overkill, since remote control file fetch will \nbe also tested in any other remote test case.\n\n> - --dry-run coverage is basically the same with the local and remote\n> modes, so it seems like a waste of resource to run it for all the\n> tests and all the modes.\n\nMy point was to test --dry-run + --write-recover-conf in remote, since \nthe last one may cause recovery 
configuration write without doing any \nactual work, due to some wrong refactoring for example.\n\n> - There is no need for the script checking for options combinations to\n> initialize a data folder. It is important to design the tests to be\n> cheap and meaningful.\n\nYes, I agree, moving some of those tests to just a 001_basic seems to be \na proper optimization.\n\n> Patch v3-0002 also had a test to make sure that the source server is\n> shut down cleanly before using it. I have included that part as\n> well, as the flow feels right.\n>\n> So, Alexey, what do you think?\n\nIt looks good for me. Two minor remarks:\n\n+    # option combinations.  As the code paths taken by those tests\n+    # does not change for the \"local\" and \"remote\" modes, just run them\n\nI am far from being fluent in English, but should it be 'do not change' \ninstead?\n\n+command_fails(\n+    [\n+        'pg_rewind',     '--target-pgdata',\n+        $primary_pgdata, '--source-pgdata',\n+        $standby_pgdata, 'extra_arg1'\n+    ],\n\nHere and below I would prefer traditional options ordering \"'--key', \n'value'\". It should be easier to recognize from the reader perspective:\n\n+command_fails(\n+    [\n+        'pg_rewind',\n+        '--target-pgdata', $primary_pgdata,\n+        '--source-pgdata', $standby_pgdata,\n+        'extra_arg1'\n+    ],\n\n\n[1] \nhttps://github.com/postgres/postgres/blob/caa078353ecd1f3b3681c0d4fa95ad4bb8c2308a/src/bin/pg_rewind/t/005_same_timeline.pl#L15\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Mon, 7 Oct 2019 15:31:45 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
},
{
"msg_contents": "On Mon, Oct 07, 2019 at 03:31:45PM +0300, Alexey Kondratov wrote:\n> On 07.10.2019 4:06, Michael Paquier wrote:\n>> - --dry-run coverage is basically the same with the local and remote\n>> modes, so it seems like a waste of resource to run it for all the\n>> tests and all the modes.\n> \n> My point was to test --dry-run + --write-recover-conf in remote, since the\n> last one may cause recovery configuration write without doing any actual\n> work, due to some wrong refactoring for example.\n\nYes, that's possible. I agree that it would be nice to have an extra\ntest for that, still I would avoid making that run in all the tests.\n\n>> Patch v3-0002 also had a test to make sure that the source server is\n>> shut down cleanly before using it. I have included that part as\n>> well, as the flow feels right.\n>> \n>> So, Alexey, what do you think?\n> \n> It looks good for me. Two minor remarks:\n> \n> + # option combinations. As the code paths taken by those tests\n> + # does not change for the \"local\" and \"remote\" modes, just run them\n> \n> I am far from being fluent in English, but should it be 'do not change'\n> instead?\n\nThat was wrong, fixed.\n\n> +command_fails(\n> + [\n> + 'pg_rewind', '--target-pgdata',\n> + $primary_pgdata, '--source-pgdata',\n> + $standby_pgdata, 'extra_arg1'\n> + ],\n> \n> Here and below I would prefer traditional options ordering \"'--key',\n> 'value'\". It should be easier to recognizefrom the reader perspective:\n\nWhile I agree with you, the perl indentation we use has formatted the\ncode this way. There is also an argument for keeping it at the end\nfor clarity (I recall that Windows also requires extra args to be\nlast when parsing options). Anyway, I have used a trick by adding\n--debug to reach command, which is still useful, so the order of the\noptions is better at the end.\n--\nMichael",
"msg_date": "Tue, 8 Oct 2019 11:51:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Two pg_rewind patches (auto generate recovery conf and ensure\n clean shutdown)"
}
] |
[
{
"msg_contents": "Hi,all\r\n\r\nOn website: https://wiki.postgresql.org/wiki/Todo#libpq\r\nI found that in libpq module,there is a TODO case:\r\n-------------------------------------------------------------------------------\r\nConsider disallowing multiple queries in PQexec() as an additional barrier to SQL injection attacks\r\n-------------------------------------------------------------------------------\r\nI am interested in this one. So ,Had it be fixed?\r\nIf not, I am willing to do so.\r\nIn manual, I found that:\r\n-----------------------------------------------------------------------------\r\nUnlike PQexec, PQexecParams allows at most one SQL command in the given string. (There can be\r\nsemicolons in it, but not more than one nonempty command.) This is a limitation of the underlying\r\nprotocol, but has some usefulness as an extra defense against SQL-injection attacks.\r\n\r\n-------------------------------------------------------------------------------\r\nMaybe we can fix PQexec() just likes PQexecParams()?\r\n\r\nI will try to fix it~\r\n\r\n\r\n--\r\nBest Regards\r\n-----------------------------------------------------\r\nWu Fei\r\nDX3\r\nSoftware Division III\r\nNanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China\r\nTEL : +86+25-86630566-9356\r\nCOINS: 7998-9356\r\nFAX: +86+25-83317685\r\nMAIL:wufei.fnst@cn.fujitsu.com\r\nhttp://www.fujitsu.com/cn/fnst/\r\n---------------------------------------------------",
"msg_date": "Tue, 19 Mar 2019 08:18:23 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Hello.\n\nAt Tue, 19 Mar 2019 08:18:23 +0000, \"Wu, Fei\" <wufei.fnst@cn.fujitsu.com> wrote in <52E6E0843B9D774C8C73D6CF64402F05621F0FFC@G08CNEXMBPEKD02.g08.fujitsu.local>\n> Hi, all\n> \n> On website: https://wiki.postgresql.org/wiki/Todo#libpq\n> I found that in the libpq module, there is a TODO case:\n> -------------------------------------------------------------------------------\n> Consider disallowing multiple queries in PQexec() as an additional barrier to SQL injection attacks\n> -------------------------------------------------------------------------------\n> I am interested in this one. So, has it been fixed?\n> If not, I am willing to do so.\n> In the manual, I found that:\n> -----------------------------------------------------------------------------\n> Unlike PQexec, PQexecParams allows at most one SQL command in the given string. (There can be\n> semicolons in it, but not more than one nonempty command.) This is a limitation of the underlying\n> protocol, but has some usefulness as an extra defense against SQL-injection attacks.\n> \n> -------------------------------------------------------------------------------\n> Maybe we can fix PQexec() just like PQexecParams()?\n> \n> I will try to fix it~\n\nI don't oppose that, but as the discussion linked from there [1],\npsql already has a feature that sends multiple statements by one\nPQexec() in two ways. Fixing it means making the features\nobsolete.\n\npsql db -c 'select 1; select 1;'\n\nbash> psql db\ndb=> select 1\\; select 1;\n\n\nI couldn't find the documentation about the behavior..\n\n[1] https://www.postgresql.org/message-id/9236.1167968298@sss.pgh.pa.us\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 19 Mar 2019 19:47:05 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:\n> At Tue, 19 Mar 2019 08:18:23 +0000, \"Wu, Fei\" <wufei.fnst@cn.fujitsu.com> wrote in <52E6E0843B9D774C8C73D6CF64402F05621F0FFC@G08CNEXMBPEKD02.g08.fujitsu.local>\n>> I will try to fix it~\n\n> I don't oppose that, but as the discussion linked from there [1],\n> psql already has a feature that sends multiple statements by one\n> PQexec() in two ways. Fixing it means making the features\n> obsolete.\n\nYeah, the problem here is that a lot of people think that that's\na feature not a bug. You certainly can't get away with just summarily\nchanging the behavior of PQexec without any recourse. Maybe there\nwould be acceptance for either of\n\n(1) a different function that is like PQexec but restricts the\nquery string\n\n(2) a connection option or state variable that affects PQexec's\nbehavior --- but it probably still has to default to permissive.\n\nUnfortunately, if the default behavior doesn't change, then there's little\nargument for doing this at all. The security reasoning behind doing\nanything in this area would be to provide an extra measure of protection\nagainst SQL-injection attacks on carelessly-written clients, and of course\nit's unlikely that a carelessly-written client would get changed to make\nuse of a non-default feature.\n\nSo that's why nothing has been done about this for umpteen years.\nIf somebody can think of a way to resolve this tension, maybe the\nitem will get finished; but it's not just a matter of writing some\ncode.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Mar 2019 10:30:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 10:30:45AM -0400, Tom Lane wrote:\n> Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> writes:\n> > At Tue, 19 Mar 2019 08:18:23 +0000, \"Wu, Fei\" <wufei.fnst@cn.fujitsu.com> wrote in <52E6E0843B9D774C8C73D6CF64402F05621F0FFC@G08CNEXMBPEKD02.g08.fujitsu.local>\n> >> I will try to fix it~\n> \n> > I don't oppose that, but as the discussion linked from there [1],\n> > psql already has a feature that sends multiple statements by one\n> > PQexec() in two ways. Fixing it means making the features\n> > obsolete.\n> \n> Yeah, the problem here is that a lot of people think that that's\n> a feature not a bug. You certainly can't get away with just summarily\n> changing the behavior of PQexec without any recourse. Maybe there\n> would be acceptance for either of\n> \n> (1) a different function that is like PQexec but restricts the\n> query string\n> \n> (2) a connection option or state variable that affects PQexec's\n> behavior --- but it probably still has to default to permissive.\n> \n> Unfortunately, if the default behavior doesn't change, then there's little\n> argument for doing this at all. The security reasoning behind doing\n> anything in this area would be to provide an extra measure of protection\n> against SQL-injection attacks on carelessly-written clients, and of course\n> it's unlikely that a carelessly-written client would get changed to make\n> use of a non-default feature.\n\nIt's also unlikely that writers and maintainers of carelessly-written\nclients are our main user base. Quite the opposite, in fact. Do we\nreally need to set their failure to make an effort as a higher\npriority than getting this fixed?\n\nI think the answer is \"no,\" and we should deprecate this misfeature.\nIt's bad enough that we'll be supporting it for five years after\ndeprecating it, but it's worse to leave it hanging around our necks\nforever. 
https://en.wikipedia.org/wiki/Albatross_(metaphor)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Tue, 19 Mar 2019 17:43:46 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> I think the answer is \"no,\" and we should deprecate this misfeature.\n> It's bad enough that we'll be supporting it for five years after\n> deprecating it, but it's worse to leave it hanging around our necks\n> forever. https://en.wikipedia.org/wiki/Albatross_(metaphor)\n\nThe problem with that approach is that not everybody agrees that\nit's a misfeature.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Mar 2019 12:51:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-19 12:51:39 -0400, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > I think the answer is \"no,\" and we should deprecate this misfeature.\n> > It's bad enough that we'll be supporting it for five years after\n> > deprecating it, but it's worse to leave it hanging around our necks\n> > forever. https://en.wikipedia.org/wiki/Albatross_(metaphor)\n> \n> The problem with that approach is that not everybody agrees that\n> it's a misfeature.\n\nYea, it's extremely useful to just be able to send a whole script to the\nserver. Otherwise every application wanting to do so needs to be able to\nsplit SQL statements, not exactly a trivial task. And the result will be\nslower, due to increased rountrips.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Mar 2019 09:55:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "On 2019-Mar-19, Andres Freund wrote:\n\n> Hi,\n> \n> On 2019-03-19 12:51:39 -0400, Tom Lane wrote:\n> > David Fetter <david@fetter.org> writes:\n> > > I think the answer is \"no,\" and we should deprecate this misfeature.\n> > > It's bad enough that we'll be supporting it for five years after\n> > > deprecating it, but it's worse to leave it hanging around our necks\n> > > forever. https://en.wikipedia.org/wiki/Albatross_(metaphor)\n> > \n> > The problem with that approach is that not everybody agrees that\n> > it's a misfeature.\n> \n> Yea, it's extremely useful to just be able to send a whole script to the\n> server. Otherwise every application wanting to do so needs to be able to\n> split SQL statements, not exactly a trivial task. And the result will be\n> slower, due to increased rountrips.\n\nI suppose it can be argued that for the cases where they want that, it\nis not entirely ridiculous to have it be done with a different API call,\nsay PQexecMultiple.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 19 Mar 2019 13:59:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 01:59:34PM -0300, Alvaro Herrera wrote:\n> On 2019-Mar-19, Andres Freund wrote:\n> \n> > Hi,\n> > \n> > On 2019-03-19 12:51:39 -0400, Tom Lane wrote:\n> > > David Fetter <david@fetter.org> writes:\n> > > > I think the answer is \"no,\" and we should deprecate this misfeature.\n> > > > It's bad enough that we'll be supporting it for five years after\n> > > > deprecating it, but it's worse to leave it hanging around our necks\n> > > > forever. https://en.wikipedia.org/wiki/Albatross_(metaphor)\n> > > \n> > > The problem with that approach is that not everybody agrees that\n> > > it's a misfeature.\n> > \n> > Yea, it's extremely useful to just be able to send a whole script to the\n> > server. Otherwise every application wanting to do so needs to be able to\n> > split SQL statements, not exactly a trivial task. And the result will be\n> > slower, due to increased rountrips.\n> \n> I suppose it can be argued that for the cases where they want that, it\n> is not entirely ridiculous to have it be done with a different API call,\n> say PQexecMultiple.\n\nRenaming it to emphasize that it's a non-default choice seems like a\nlarge step in the right direction.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Tue, 19 Mar 2019 18:02:23 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-19 13:59:34 -0300, Alvaro Herrera wrote:\n> On 2019-Mar-19, Andres Freund wrote:\n> > On 2019-03-19 12:51:39 -0400, Tom Lane wrote:\n> > > David Fetter <david@fetter.org> writes:\n> > > > I think the answer is \"no,\" and we should deprecate this misfeature.\n> > > > It's bad enough that we'll be supporting it for five years after\n> > > > deprecating it, but it's worse to leave it hanging around our necks\n> > > > forever. https://en.wikipedia.org/wiki/Albatross_(metaphor)\n> > > \n> > > The problem with that approach is that not everybody agrees that\n> > > it's a misfeature.\n> > \n> > Yea, it's extremely useful to just be able to send a whole script to the\n> > server. Otherwise every application wanting to do so needs to be able to\n> > split SQL statements, not exactly a trivial task. And the result will be\n> > slower, due to increased rountrips.\n> \n> I suppose it can be argued that for the cases where they want that, it\n> is not entirely ridiculous to have it be done with a different API call,\n> say PQexecMultiple.\n\nSure, but what'd the gain be? Using PQexecParams() already enforces that\nthere's only a single command. Sure, explicit is better than implicit\nand all that, but is that justification for breaking a significant\nnumber of applications?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Mar 2019 10:02:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "On 2019-03-19 10:02:33 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-03-19 13:59:34 -0300, Alvaro Herrera wrote:\n> > On 2019-Mar-19, Andres Freund wrote:\n> > > On 2019-03-19 12:51:39 -0400, Tom Lane wrote:\n> > > > David Fetter <david@fetter.org> writes:\n> > > > > I think the answer is \"no,\" and we should deprecate this misfeature.\n> > > > > It's bad enough that we'll be supporting it for five years after\n> > > > > deprecating it, but it's worse to leave it hanging around our necks\n> > > > > forever. https://en.wikipedia.org/wiki/Albatross_(metaphor)\n> > > > \n> > > > The problem with that approach is that not everybody agrees that\n> > > > it's a misfeature.\n> > > \n> > > Yea, it's extremely useful to just be able to send a whole script to the\n> > > server. Otherwise every application wanting to do so needs to be able to\n> > > split SQL statements, not exactly a trivial task. And the result will be\n> > > slower, due to increased rountrips.\n> > \n> > I suppose it can be argued that for the cases where they want that, it\n> > is not entirely ridiculous to have it be done with a different API call,\n> > say PQexecMultiple.\n> \n> Sure, but what'd the gain be? Using PQexecParams() already enforces that\n> there's only a single command. Sure, explicit is better than implicit\n> and all that, but is that justification for breaking a significant\n> number of applications?\n\nIn short: I think we should just remove this todo entry. If somebody\nfeels like we should do something, I guess making the dangers of\nPQexec() vs PQexecPrepared() even clearer would be the best thing to\ndo. Although I actually find it easy enough, it's not like we're holding\nback:\n\nhttps://www.postgresql.org/docs/devel/libpq-exec.html\n\nPQexec():\n\nThe command string can include multiple SQL commands (separated by semicolons). 
Multiple queries sent in a single PQexec call are processed in a single transaction, unless there are explicit BEGIN/COMMIT commands included in the query string to divide it into multiple transactions. (See Section 52.2.2.1 for more details about how the server handles multi-query strings.) Note however that the returned PGresult structure describes only the result of the last command executed from the string. Should one of the commands fail, processing of the string stops with it and the returned PGresult describes the error condition.\n\nPQexecParams():\nUnlike PQexec, PQexecParams allows at most one SQL command in the given string. (There can be semicolons in it, but not more than one nonempty command.) This is a limitation of the underlying protocol, but has some usefulness as an extra defense against SQL-injection attacks.\n\n",
"msg_date": "Tue, 19 Mar 2019 10:05:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-19 13:59:34 -0300, Alvaro Herrera wrote:\n>> I suppose it can be argued that for the cases where they want that, it\n>> is not entirely ridiculous to have it be done with a different API call,\n>> say PQexecMultiple.\n\n> Sure, but what'd the gain be? Using PQexecParams() already enforces that\n> there's only a single command. Sure, explicit is better than implicit\n> and all that, but is that justification for breaking a significant\n> number of applications?\n\nRight, the tradeoff here comes down to breaking existing apps vs.\nadding security for poorly-written apps. Whether you think it's\nworthwhile to break stuff depends on your estimate of how common\npoorly-written apps are. To that point, I'd be inclined to throw\nDavid's previous comment back at him: they're likely not that\ncommon. A well-written app should probably be treating insecure\ninputs as parameters in PQexecParams anyhow, making this whole\ndiscussion moot.\n\nHaving said that ... a better argument for a new API is that it\ncould be explicitly designed to handle multiple queries, and in\nparticular make some provision for returning multiple PGresults.\nMaybe if we had that there would be more support for deprecating\nthe ability to send multiple queries in plain PQexec. It'd still\nbe a long time before we could turn it off though, at least by\ndefault.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Mar 2019 13:18:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-19 13:18:25 -0400, Tom Lane wrote:\n> Having said that ... a better argument for a new API is that it\n> could be explicitly designed to handle multiple queries, and in\n> particular make some provision for returning multiple PGresults.\n\nOh, I completely agree, that'd be hugely useful.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Mar 2019 10:24:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 01:18:25PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-03-19 13:59:34 -0300, Alvaro Herrera wrote:\n> >> I suppose it can be argued that for the cases where they want that, it\n> >> is not entirely ridiculous to have it be done with a different API call,\n> >> say PQexecMultiple.\n> \n> > Sure, but what'd the gain be? Using PQexecParams() already enforces that\n> > there's only a single command. Sure, explicit is better than implicit\n> > and all that, but is that justification for breaking a significant\n> > number of applications?\n> \n> Right, the tradeoff here comes down to breaking existing apps vs.\n> adding security for poorly-written apps. Whether you think it's\n> worthwhile to break stuff depends on your estimate of how common\n> poorly-written apps are. To that point, I'd be inclined to throw\n> David's previous comment back at him: they're likely not that\n> common. A well-written app should probably be treating insecure\n> inputs as parameters in PQexecParams anyhow, making this whole\n> discussion moot.\n> \n> Having said that ... a better argument for a new API is that it\n> could be explicitly designed to handle multiple queries, and in\n> particular make some provision for returning multiple PGresults.\n\nThat sounds like it'd be *really* handy if one were building a\nclient-side retry framework. People will be doing (the equivalent of)\nthis as the vulnerabilities inherent in isolation levels lower than\nSERIALIZABLE become better known.\nhttps://www.cockroachlabs.com/blog/acid-rain/\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Tue, 19 Mar 2019 18:28:08 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-19 13:18:25 -0400, Tom Lane wrote:\n>> Having said that ... a better argument for a new API is that it\n>> could be explicitly designed to handle multiple queries, and in\n>> particular make some provision for returning multiple PGresults.\n\n> Oh, I completely agree, that'd be hugely useful.\n\nOf course, you can do that already with PQsendQuery + a loop\naround PQgetResult. So the question here is whether that can\nbe wrapped up into something easier-to-use. I'm not entirely\nsure what that might look like.\n\nWe should also keep in mind that there's a perfectly valid\nuse-case for wanting to send a big script of commands and\njust check for overall success or failure. So it's not like\nPQexec's current behavior has *no* valid uses.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Mar 2019 13:46:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "\tTom Lane wrote:\n\n> Unfortunately, if the default behavior doesn't change, then there's little\n> argument for doing this at all.  The security reasoning behind doing\n> anything in this area would be to provide an extra measure of protection\n> against SQL-injection attacks on carelessly-written clients, and of course\n> it's unlikely that a carelessly-written client would get changed to make\n> use of a non-default feature.\n\nA patch introducing an \"allow_multiple_queries\" GUC to\ncontrol this was proposed and eventually rejected for lack of\nconsensus some time ago (also there were some concerns about\nthe implementation that might have played against it too):\n\nhttps://www.postgresql.org/message-id/CALAY4q_eHUx%3D3p1QUOvabibwBvxEWGm-bzORrHA-itB7MBtd5Q%40mail.gmail.com\n\nAbout the effectiveness of this feature, there's a valid use case in\nwhich it's not the developers who decide to set this GUC, but the DBA\nor the organization deploying the application. That applies to\napplications that of course do not intentionally use multiple queries\nper command.\nThat would provide a certain level of protection against SQL\ninjections, without changing the application or libpq or breaking\nbackward compatibility, being optional.\n\nBut both in this thread and the other thread, the reasoning about the\nGUC seems to make the premise that applications would\nneed to be updated or developers need to be aware of it,\nas if they _had_ to issue SET allow_multiple_queries TO off/on,\nrather than being submitted to it, as imposed upon them by\npostgresql.conf or the database settings.\n\nIf we compare this to, say, lo_compat_privileges. An application\ntypically doesn't get to decide whether it's \"on\". It's for a\nsuperuser to decide which databases or which users must operate with\nthis setting to \"on\".\nWhy wouldn't that model work for disallowing multiple queries per command?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n",
"msg_date": "Tue, 19 Mar 2019 20:08:13 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Hi, thanks for all the replies.\r\nAccording to all your discussions, maybe the problems are:\r\n1) keeping modifications just on the client side;\r\n2) modifications vs. clients' current applications\r\n\r\nMaybe we could create a new function (maybe called PQexecSafe()) just like PQexec() but with an additional input argument (maybe called issafe) to switch whether to allow at most one SQL command.\r\nIn that way, clients who want the safe feature just use the new function PQexecSafe() with issafe set to true.\r\nThe others can choose to:\r\n1) just use the old version PQexec(), or\r\n2) use PQexecSafe() with issafe set to false\r\n\r\nThen, we would strongly recommend using PQexecSafe(), and PQexec() would stay in use but be labeled deprecated in the documentation. In other words, give clients the time to choose and modify their applications if they want to use the safe feature.\r\n\r\nOf course, we should admit that it is not just a coding problem.\r\n\r\n\r\n-----Original Message-----\r\nFrom: Daniel Verite [mailto:daniel@manitou-mail.org] \r\nSent: Wednesday, March 20, 2019 3:08 AM\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>; Wu, Fei/吴 非 <wufei.fnst@cn.fujitsu.com>; pgsql-hackers@postgresql.org\r\nSubject: Re: Willing to fix a PQexec() in libpq module\r\n\r\n\tTom Lane wrote:\r\n\r\n> Unfortunately, if the default behavior doesn't change, then there's \r\n> little argument for doing this at all.  The security reasoning behind \r\n> doing anything in this area would be to provide an extra measure of \r\n> protection against SQL-injection attacks on carelessly-written \r\n> clients, and of course it's unlikely that a carelessly-written client \r\n> would get changed to make use of a non-default feature.\r\n\r\nA patch introducing an \"allow_multiple_queries\" GUC to control this was proposed and eventually rejected for lack of consensus some time ago (also there were some concerns about the implementation that might have played against it too):\r\n\r\nhttps://www.postgresql.org/message-id/CALAY4q_eHUx%3D3p1QUOvabibwBvxEWGm-bzORrHA-itB7MBtd5Q%40mail.gmail.com\r\n\r\nAbout the effectiveness of this feature, there's a valid use case in which it's not the developers who decide to set this GUC, but the DBA or the organization deploying the application. That applies to applications that of course do not intentionally use multiple queries per command.\r\nThat would provide a certain level of protection against SQL injections, without changing the application or libpq or breaking backward compatibility, being optional.\r\n\r\nBut both in this thread and the other thread, the reasoning about the GUC seems to make the premise that applications would need to be updated or developers need to be aware of it, as if they _had_ to issue SET allow_multiple_queries TO off/on, rather than being submitted to it, as imposed upon them by postgresql.conf or the database settings.\r\n\r\nIf we compare this to, say, lo_compat_privileges. An application typically doesn't get to decide whether it's \"on\". It's for a superuser to decide which databases or which users must operate with this setting to \"on\".\r\nWhy wouldn't that model work for disallowing multiple queries per command?\r\n\r\n\r\nBest regards,\r\n--\r\nDaniel Vérité\r\nPostgreSQL-powered mailer: http://www.manitou-mail.org\r\nTwitter: @DanielVerite\r\n\r\n\r\n\r\n\n\n",
"msg_date": "Wed, 20 Mar 2019 02:19:54 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Willing to fix a PQexec() in libpq module"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-20 02:19:54 +0000, Wu, Fei wrote:\n> Hi, thanks for all the replies.\n> According to all your discussions, maybe the problems are:\n> 1) keeping modifications just on the client side;\n> 2) modifications vs. clients' current applications\n> \n> Maybe we could create a new function (maybe called PQexecSafe()) just like PQexec() but with an additional input argument (maybe called issafe) to switch whether to allow at most one SQL command.\n> In that way, clients who want the safe feature just use the new function PQexecSafe() with issafe set to true.\n> The others can choose to:\n> 1) just use the old version PQexec(), or\n> 2) use PQexecSafe() with issafe set to false\n> \n> Then, we would strongly recommend using PQexecSafe(), and PQexec() would stay in use but be labeled deprecated in the documentation. In other words, give clients the time to choose and modify their applications if they want to use the safe feature.\n\nWe already have PQexecParams(). And there's already comments explaining\nthe multi-statement behaviour in the docs. Do you see an additional\nadvantage in your proposal?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 19 Mar 2019 19:22:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Willing to fix a PQexec() in libpq module"
}
] |
[
{
"msg_contents": "Greetings, i am interested in databases and would like to make a contribution to the PostgreSQL by participating in GSoC 2019. Currently i am studying in HSE[1], doing last year of master's program that mostly build on top of collaboration with ISP RAS[2]. In the previous year i have been working on llvm_jit extension for PostgreSQL 9.6, that was developed in ISP RAS and presented at PGCON[3]. Specifically, my work consisted of adding support for several missing nodes(bitmapscan, mergejoin, subqueryscan, etc) by rewriting them with LLVM API, as well as other functionality(e.g. distinct in group by) that is required to fully support TPC-H of SCALE 100. Originally i wanted to pursue \"TOAST\" tasks from ideas list, but noticed that couple of students have already mentioned them in mailing list. So, instead of increasing the queue for single possible idea, i would like to offer other ones, that sound interesting to me and can potentially be useful for PostgreSQL and community: 1) The so-called Adaptive join, that exists in modern Oracle[4] and MSSQL[5] versions. This type of node is designed to mitigate cardinality estimation errors in queries that are somewhere inbetween NL(nested loop with indexscan) and HJ(hashjoin). One possible implementation of that is to start execution in HJ fashion, by accumulating rows in hashtable with certain threshold. If threshold is not exceeded, then continue with indexscan, otherwise switch to usual HJ. 2) Changing buffer manager strategy. Somewhere in 2016 Andres Freund made a presentation[6] of possible improvements that can be done in buffer manager. I find the idea of changing hashtable to trees of radix trees[7] promising. Most likely, taking into account program's time constraints, this task won't be done as \"ready to deploy\" solution. Instead, some kind of prototype can be implemented and benchmarked. 3) Improvements in jit component. Great progress has been made in this direction in 10 and 11 versions, but still there's a lot to be done. Possible subareas: compiled code caching/sharing, cost-based optimizer improvements, push-based execution with bytecode transformation, compiling plpgsql, etc. At this stage i would like to receive some feedback from the community, which of those ideas are more useful for the near future of PostgreSQL and more suitable for GSoC itself. With that information i can dive into particular topic, extract additional information and prepare required proposal. p.s. my preferred order: 2,1,3 -------------------------------------------------------------------------------- [1] https://www.hse.ru/en/ma/sp [2] http://www.ispras.ru/en/ [3] http://www.pgcon.org/2017/schedule/events/1092.en.html [4] https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf [5] https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/04/19/introducing-batch-mode-adaptive-joins/ [6] https://pgconf.ru/media/2016/05/13/1io.pdf [7] http://events17.linuxfoundation.org/sites/events/files/slides/LinuxConNA2016%20-%20Radix%20Tree.pdf Best regards, Michael.",
"msg_date": "Tue, 19 Mar 2019 12:53:49 +0300",
"msg_from": "pantilimonov misha <pantlimon@yandex.ru>",
"msg_from_op": true,
"msg_subject": "[GSoC] application ideas"
},
{
"msg_contents": "Excuse me for the previous letter, should be fixed now by using simple html.\n\n---\n\nGreetings,\n\ni am interested in databases and would like to make a contribution to the\nPostgreSQL by participating in GSoC 2019. Currently i am studying in HSE[1],\ndoing last year of master's program that mostly build on top of collaboration\nwith ISP RAS[2].\n\nIn the previous year i have been working on llvm_jit extension for\nPostgreSQL 9.6, that was developed in ISP RAS and presented at PGCON[3].\nSpecifically, my work consisted of adding support for several missing\nnodes(bitmapscan, mergejoin, subqueryscan, etc)\nby rewriting them with LLVM API, as well as other functionality(e.g. distinct in group by)\nthat is required to fully support TPC-H of SCALE 100.\n\nOriginally i wanted to pursue \"TOAST\" tasks from ideas list, but noticed\nthat couple of students have already mentioned them in mailing list. So, instead\nof increasing the queue for single possible idea, i would like to offer other\nones, that sound interesting to me and can potentially be useful for PostgreSQL\nand community:\n\n1) The so-called Adaptive join, that exists in modern Oracle[4] and MSSQL[5]\nversions. This type of node is designed to mitigate cardinality estimation\nerrors in queries that are somewhere inbetween NL(nested loop with indexscan)\nand HJ(hashjoin).\n\nOne possible implementation of that is to start execution in HJ fashion, by accumulating\nrows in hashtable with certain threshold. If threshold is not exceeded, then\ncontinue with indexscan, otherwise switch to usual HJ.\n\n2) Changing buffer manager strategy.\nSomewhere in 2016 Andres Freund made a presentation[6] of possible improvements\nthat can be done in buffer manager. I find the idea of changing hashtable to\ntrees of radix trees[7] promising. Most likely, taking into account program's\ntime constraints, this task won't be done as \"ready to deploy\" solution.\nInstead, some kind of prototype can be implemented and benchmarked. \n\n3) Improvements in jit component.\nGreat progress has been made in this direction in 10 and 11 versions, but\nstill there's a lot to be done. Possible subareas: compiled code caching/sharing,\ncost-based optimizer improvements, push-based execution with bytecode\ntransformation, compiling plpgsql, etc.\n\nAt this stage i would like to receive some feedback from the community,\nwhich of those ideas are more useful for the near future of PostgreSQL and\nmore suitable for GSoC itself. With that information i can dive into particular\n topic, extract additional information and prepare required proposal.\n\np.s. my preferred order: 2,1,3\n\n--------------------------------------------------------------------------------\n[1] https://www.hse.ru/en/ma/sp\n[2] http://www.ispras.ru/en/\n[3] http://www.pgcon.org/2017/schedule/events/1092.en.html\n[4] https://www.oracle.com/technetwork/database/bi-datawarehousing/twp-optimizer-with-oracledb-12c-1963236.pdf\n[5] https://blogs.msdn.microsoft.com/sqlserverstorageengine/2017/04/19/introducing-batch-mode-adaptive-joins/\n[6] https://pgconf.ru/media/2016/05/13/1io.pdf\n[7] http://events17.linuxfoundation.org/sites/events/files/slides/LinuxConNA2016%20-%20Radix%20Tree.pdf\n\nBest regards,\n\nMichael.\n\n\n",
"msg_date": "Wed, 20 Mar 2019 14:25:23 +0300",
"msg_from": "pantilimonov misha <pantlimon@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: [GSoC] application ideas"
},
{
"msg_contents": "Hi, Michael!\n\n> On 19 Mar 2019, at 14:53, pantilimonov misha <pantlimon@yandex.ru> wrote:\n> \n> 2) Changing buffer manager strategy.\n> Somewhere in 2016 Andres Freund made a presentation[6] of possible improvements\n> that can be done in buffer manager. I find the idea of changing hashtable to\n> trees of radix trees[7] promising. Most likely, taking into account program's\n> time constraints, this task won't be done as \"ready to deploy\" solution.\n> Instead, some kind of prototype can be implemented and benchmarked. \n\nI like the idea of more efficient BufferTag->Page data structure. I'm not sure cache locality is a real problem there, but I believe this idea deserves giving it a shot.\nI'd happily review your proposal and co-mentor project, if it will be chosen for GSoC.\n\nAlso, plz check some work of my students in related area [0].\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/89A121E3-B593-4D65-98D9-BBC210B87268%40yandex-team.ru\n",
"msg_date": "Sun, 24 Mar 2019 14:12:24 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC] application ideas"
},
{
"msg_contents": "Andrey, thank you for your reply.\n\n> 24.03.2019, 12:12, \"Andrey Borodin\" <x4mmm@yandex-team.ru>:\n>\n> I like the idea of more efficient BufferTag->Page data structure. I'm not sure cache locality is a real problem there, but I believe this idea deserves giving it a shot.\n> I'd happily review your proposal and co-mentor project, if it will be chosen for GSoC.\n\nHere it is:\n\nhttps://docs.google.com/document/d/1HmhOs07zE8Q1TX1pOdtjxHSjjAUjaO2tp9NSmry8muY/edit?usp=sharing\n\n> Also, plz check some work of my students in related area [0].\n\nThis is definitely helpful! Also found a couple of relevant discussions, putting information together...\n\n-- \nBest regards,\n\nMichael.\n\n\n",
"msg_date": "Wed, 03 Apr 2019 00:53:31 +0300",
"msg_from": "pantilimonov misha <pantlimon@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Re: [GSoC] application ideas"
},
{
"msg_contents": "Hi!\n\nWe are discussing GSoC details offlist, but I'll put some recommendations on your proposal to the list.\n\n> On 3 Apr 2019, at 2:53, pantilimonov misha <pantlimon@yandex.ru> wrote:\n> \n> Andrey, thank you for your reply.\n> \n>> 24.03.2019, 12:12, \"Andrey Borodin\" <x4mmm@yandex-team.ru>:\n>> \n>> I like the idea of more efficient BufferTag->Page data structure. I'm not sure cache locality is a real problem there, but I believe this idea deserves giving it a shot.\n>> I'd happily review your proposal and co-mentor project, if it will be chosen for GSoC.\n> \n> Here it is:\n> \n> https://docs.google.com/document/d/1HmhOs07zE8Q1TX1pOdtjxHSjjAUjaO2tp9NSmry8muY/edit?usp=sharing\n\nWhile your project is clearly research-oriented, it is planned within the PostgreSQL development process. Can you please add some information about which patches you are going to put on the commitfest?\nAlso, please plan to review one or more patches. This is important for integrating into the community.\n\nBTW, there is a somewhat related IntegerSet data structure added recently [0]. In my version it was implemented as a radix tree. I think it is a good example of how a generic data structure can be presented and then reused by the BufferManager.\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/postgres/postgres/commit/df816f6ad532ad685a3897869a2e64d3a53fe312\n\n",
"msg_date": "Fri, 5 Apr 2019 10:14:31 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC] application ideas"
}
]
[
{
"msg_contents": "PostgreSQL pollutes the file system with lots of binaries for which it is\nnot obvious that they belong to PostgreSQL.\n\nSuch as \"/usr/bin/createdb\", etc.\n\nIt would be better if these files were renamed to be prefixed with\npg_, such as pg_createdb.\nOr, even better, renamed to postgresql-createdb and then made reachable through a\n\"postgresql\" wrapper script.\n\n",
"msg_date": "Tue, 19 Mar 2019 11:19:33 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/19/19 11:19 AM, Fred .Flintstone wrote:\n> PostgreSQL pollutes the file system with lots of binaries that it is\n> not obvious that they belong to PostgreSQL.\n> \n> Such as \"/usr/bin/createdb\", etc.\n> \n> It would be better if these files were renamed to be prefixed with\n> pg_, such as pg_createdb.\n> Or even better postgresql-createdb then be reachable by through a\n> \"postgresql\" wrapper script.\n\nHi,\n\nThis topic has been discussed before e.g. in 2008 in \nhttps://www.postgresql.org/message-id/47EA5CC0.8040102%40sun.com and \nalso more recently but I cannot find it in the archives right now.\n\nI am personally in favor of renaming e.g. createdb to pg_createdb, since \nit is not obvious that createdb belongs to PostgreSQL when reading a \nscript or looking in /usr/bin, but we would need some kind of \ndeprecation cycle here or we would suddenly break tons of people's scripts.\n\nAnd as for the git-like solution with a wrapper script, that seems to be \nthe modern way to do things but would be an even larger breakage and I \nam not convinced the advantage would be worth it especially since our \nexecutables are not as closely related and consistent as for example git's.\n\nAndreas\n\n",
"msg_date": "Wed, 20 Mar 2019 11:05:53 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "It seems nothing came out of the discussion in 2008.\nI feel the topic should be revisited.\n\nI am in favor of doing so too. The deprecation cycle could involve\nsymlinks for a brief period of time or a couple of versions.\n\nYes, the wrapper script approach is used by Git as well as the \"dotnet\" command.\nThe wrapper script addition doesn't mean executing the commands\ndirectly without the wrapper won't be possible. So one doesn't exclude\nthe other.\nIt would be a welcome addition.\n\nOn Wed, Mar 20, 2019 at 11:05 AM Andreas Karlsson <andreas@proxel.se> wrote:\n>\n> On 3/19/19 11:19 AM, Fred .Flintstone wrote:\n> > PostgreSQL pollutes the file system with lots of binaries that it is\n> > not obvious that they belong to PostgreSQL.\n> >\n> > Such as \"/usr/bin/createdb\", etc.\n> >\n> > It would be better if these files were renamed to be prefixed with\n> > pg_, such as pg_createdb.\n> > Or even better postgresql-createdb then be reachable by through a\n> > \"postgresql\" wrapper script.\n>\n> Hi,\n>\n> This topic has been discussed before e.g. in 2008 in\n> https://www.postgresql.org/message-id/47EA5CC0.8040102%40sun.com and\n> also more recently but I cannot find it in the archives right now.\n>\n> I am personally in favor of renaming e.g. createdb to pg_createdb, since\n> it is not obvious that createdb belongs to PostgreSQL when reading a\n> script or looking in /usr/bin, but we would need a some kind of\n> deprecation cycle here or we would suddenly break tons of people's scripts.\n>\n> And as for the git-like solution with a wrapper script, that seems to be\n> the modern way to do things but would be an even larger breakage and I\n> am not convinced the advantage would be worth it especially since our\n> executables are not as closely related and consistent as for example git's.\n>\n> Andreas\n\n",
"msg_date": "Wed, 20 Mar 2019 11:43:24 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 11:06 AM Andreas Karlsson <andreas@proxel.se> wrote:\n\n> On 3/19/19 11:19 AM, Fred .Flintstone wrote:\n> > PostgreSQL pollutes the file system with lots of binaries that it is\n> > not obvious that they belong to PostgreSQL.\n> >\n> > Such as \"/usr/bin/createdb\", etc.\n> >\n> > It would be better if these files were renamed to be prefixed with\n> > pg_, such as pg_createdb.\n> > Or even better postgresql-createdb then be reachable by through a\n> > \"postgresql\" wrapper script.\n>\n> Hi,\n>\n> This topic has been discussed before e.g. in 2008 in\n> https://www.postgresql.org/message-id/47EA5CC0.8040102%40sun.com and\n> also more recently but I cannot find it in the archives right now.\n>\n> I am personally in favor of renaming e.g. createdb to pg_createdb, since\n> it is not obvious that createdb belongs to PostgreSQL when reading a\n> script or looking in /usr/bin, but we would need a some kind of\n> deprecation cycle here or we would suddenly break tons of people's scripts\n\n\nI wouldn't be opposed to this, but I would note two points on a deprecation\ncycle:\n1 Given that people may have tools that work with all supported versions\nof PostgreSQL, this needs to be a long cycle, and\n2. 
Managing that cycle makes it a little bit of a tough sell.\n\n> .\n>\n> And as for the git-like solution with a wrapper script, that seems to be\n> the modern way to do things but would be an even larger breakage and I\n> am not convinced the advantage would be worth it especially since our\n> executables are not as closely related and consistent as for example git's.\n>\n\nGit commands may be related, but I would actually argue that git commands\nhave a lot of inconsistency because of this structure,\n\nSee, for example, http://stevelosh.com/blog/2013/04/git-koans/\n\n\n>\n> Andreas\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n",
"msg_date": "Wed, 20 Mar 2019 11:55:38 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "\nAnother pattern is to have a separate bin path for\nvarious software packages: /opt/postgres/bin for example.\n\nThat doesn't directly answer \"what is createdb?\" but it\ndoes give a quicker indication via the 'which' command.\n\n\n\n\nOn 3/20/19 5:43 AM, Fred .Flintstone wrote:\n> It seems nothing came out of the discussion in 2008.\n> I feel the topic should be revisited.\n>\n> I am in favor of doing so too. The deprecation cycle could involve\n> symlinks for a brief period of time or a couple of versions.\n>\n> Yes, the wrapper script approach is used by Git as well as the \"dotnet\" command.\n> The wrapper script addition doesn't mean executing the commands\n> directly without the wrapper won't be possible. So one doesn't exclude\n> the other.\n> It would be a welcome addition.\n>\n> On Wed, Mar 20, 2019 at 11:05 AM Andreas Karlsson <andreas@proxel.se> wrote:\n>> On 3/19/19 11:19 AM, Fred .Flintstone wrote:\n>>> PostgreSQL pollutes the file system with lots of binaries that it is\n>>> not obvious that they belong to PostgreSQL.\n>>>\n>>> Such as \"/usr/bin/createdb\", etc.\n>>>\n>>> It would be better if these files were renamed to be prefixed with\n>>> pg_, such as pg_createdb.\n>>> Or even better postgresql-createdb then be reachable by through a\n>>> \"postgresql\" wrapper script.\n>> Hi,\n>>\n>> This topic has been discussed before e.g. in 2008 in\n>> https://www.postgresql.org/message-id/47EA5CC0.8040102%40sun.com and\n>> also more recently but I cannot find it in the archives right now.\n>>\n>> I am personally in favor of renaming e.g. 
createdb to pg_createdb, since\n>> it is not obvious that createdb belongs to PostgreSQL when reading a\n>> script or looking in /usr/bin, but we would need a some kind of\n>> deprecation cycle here or we would suddenly break tons of people's scripts.\n>>\n>> And as for the git-like solution with a wrapper script, that seems to be\n>> the modern way to do things but would be an even larger breakage and I\n>> am not convinced the advantage would be worth it especially since our\n>> executables are not as closely related and consistent as for example git's.\n>>\n>> Andreas\n>\n>\n\n\n",
"msg_date": "Wed, 20 Mar 2019 09:17:29 -0500",
"msg_from": "Chris Howard <chris@elfpen.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Chris Travers <chris.travers@adjust.com> writes:\n> On Wed, Mar 20, 2019 at 11:06 AM Andreas Karlsson <andreas@proxel.se> wrote:\n>> On 3/19/19 11:19 AM, Fred .Flintstone wrote:\n>>> It would be better if these files were renamed to be prefixed with\n>>> pg_, such as pg_createdb.\n>>> Or even better postgresql-createdb then be reachable by through a\n>>> \"postgresql\" wrapper script.\n\n>> This topic has been discussed before e.g. in 2008 in\n>> https://www.postgresql.org/message-id/47EA5CC0.8040102%40sun.com and\n>> also more recently but I cannot find it in the archives right now.\n\nAnd also before that, eg\nhttps://www.postgresql.org/message-id/flat/199910091253.IAA10670%40candle.pha.pa.us\n\n> I wouldn't be opposed to this, but I would note two points on a deprecation\n> cycle:\n> 1 Given that people may have tools that work with all supported versions\n> of PostgreSQL, this needs to be a long cycle, and\n> 2. Managing that cycle makes it a little bit of a tough sell.\n\nIf we didn't pull the trigger twenty years ago, nor ten years ago,\nwe're not likely to do so now. Yeah, it's a mess and we'd certainly\ndo it differently if we were starting from scratch, but we're not\nstarting from scratch. There are decades worth of scripts out there\nthat know these program names, most of them not under our control.\n\nEvery time this has been looked at, we've concluded that the\ndistributed costs of getting rid of these program names would exceed\nthe value; and that tradeoff gets worse, not better, as more years\ngo by. I don't foresee it happening.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 10:19:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": ">>> This topic has been discussed before e.g. in 2008 in\n>>> https://www.postgresql.org/message-id/47EA5CC0.8040102%40sun.com and\n>>> also more recently but I cannot find it in the archives right now.\n> \n> And also before that, eg\n> https://www.postgresql.org/message-id/flat/199910091253.IAA10670%40candle.pha.pa.us\n> \n>> I wouldn't be opposed to this, but I would note two points on a deprecation\n>> cycle:\n>> 1 Given that people may have tools that work with all supported versions\n>> of PostgreSQL, this needs to be a long cycle, and\n>> 2. Managing that cycle makes it a little bit of a tough sell.\n> \n> If we didn't pull the trigger twenty years ago, nor ten years ago,\n> we're not likely to do so now. Yeah, it's a mess and we'd certainly\n> do it differently if we were starting from scratch, but we're not\n> starting from scratch. There are decades worth of scripts out there\n> that know these program names, most of them not under our control.\n> \n> Every time this has been looked at, we've concluded that the\n> distributed costs of getting rid of these program names would exceed\n> the value; and that tradeoff gets worse, not better, as more years\n> go by. I don't foresee it happening.\n\n+1. As one of third party PostgreSQL tool developers, I am afraid\nchanging names of PostgreSQL commands would give us lots of pain: for\nexample checking PostgreSQL version to decide to use command \"foo\" not\n\"pg_foo\".\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Wed, 20 Mar 2019 23:39:27 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 3:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we didn't pull the trigger twenty years ago, nor ten years ago,\n> we're not likely to do so now. Yeah, it's a mess and we'd certainly\n> do it differently if we were starting from scratch, but we're not\n> starting from scratch. There are decades worth of scripts out there\n> that know these program names, most of them not under our control.\n>\n> Every time this has been looked at, we've concluded that the\n> distributed costs of getting rid of these program names would exceed\n> the value; and that tradeoff gets worse, not better, as more years\n> go by. I don't foresee it happening.\n\nEven just creating symlinks would be a welcome change.\nSo the real binary is pg_foo and foo is a symbolic link that points to pg_foo.\nThen at least I can type pg_<tab> and use tab auto-completion to find\neverything related to PostgreSQL.\n\n",
"msg_date": "Wed, 20 Mar 2019 16:30:33 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 11:39 AM, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> +1. As one of third party PostgreSQL tool developers, I am afraid\n> changing names of PostgreSQL commands would give us lots of pain: for\n> example checking PostgreSQL version to decide to use command \"foo\" not\n> \"pg_foo\".\n>\ncreatedb, dropdb, createuser, dropuser, reindexdb are binaries that\nconfuse most newbies. Which tool are these binaries from? The names\ndo not give a hint. How often are those confusingly named tools used?\nAFAICS a graphical tool or psql is used to create roles and databases.\npsql -c \"stmt\" can replace createdb, dropdb, createuser and dropuser.\nWhat about deprecating them (and removing them after a support cycle)?\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 20 Mar 2019 14:25:30 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
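[Editor's note: Euler's observation that `psql -c "stmt"` can replace these wrapper binaries can be sketched concretely. This is a hedged illustration, not a proposed implementation: the `pg_`-prefixed function names are hypothetical stand-ins, and `run_sql` merely prints the SQL for demonstration where a real wrapper would hand it to `psql -c`. The SQL mappings themselves are real: createdb issues CREATE DATABASE, and createuser creates a role with LOGIN.]

```shell
# Hypothetical thin wrappers: each deprecated binary reduces to one SQL
# statement handed to psql.  run_sql is a stand-in that just prints the
# SQL here; a real wrapper would instead do:  psql -c "$1"
run_sql() { echo "$1"; }

pg_createdb()   { run_sql "CREATE DATABASE \"$1\";"; }
pg_dropdb()     { run_sql "DROP DATABASE \"$1\";"; }
pg_createuser() { run_sql "CREATE ROLE \"$1\" LOGIN;"; }
pg_dropuser()   { run_sql "DROP ROLE \"$1\";"; }

pg_createdb mydb    # prints: CREATE DATABASE "mydb";
```

(The real createdb also accepts options such as -O/--owner, which map onto additional clauses of CREATE DATABASE, so full compatibility needs more than a one-liner.)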
{
"msg_contents": "Em qua, 20 de mar de 2019 às 14:22, Fred .Flintstone\n<eldmannen@gmail.com> escreveu:\n>\n> Even just creating symlinks would be a welcome change.\n> So the real binary is pg_foo and foo is a symoblic link that points to pg_foo.\n> Then at least I can type pg_<tab> and use tab auto-completion to find\n> everything related to PostgreSQL.\n>\nThere are Postgres binaries that do not start with 'pg_' (for example,\npgbench and ecpg) and do not confuse newbies or conflict with OS\nbinary names.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 20 Mar 2019 14:32:10 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-20, Fred .Flintstone wrote:\n\n> Even just creating symlinks would be a welcome change.\n> So the real binary is pg_foo and foo is a symbolic link that points to pg_foo.\n> Then at least I can type pg_<tab> and use tab auto-completion to find\n> everything related to PostgreSQL.\n\nThere is merit to this argument; if the starting point is an unknown\nfile /usr/bin/foo, then having it be a symlink to /usr/bin/pg_foo makes\nit clear which package it belongs to. We don't *have to* get rid of the\nsymlinks any time soon, but installing as symlinks now will allow Skynet\nto get rid of them some decades from now.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Wed, 20 Mar 2019 14:32:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
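[Editor's note: the rename-plus-symlink scheme discussed above can be sketched in shell. A minimal illustration under stated assumptions: the `pg_`-prefixed binaries are hypothetical (they do not exist in any release), a temporary directory stands in for the install prefix, and tiny stub scripts stand in for the real binaries.]

```shell
# Sketch of the proposed layout: the real binaries carry a pg_ prefix,
# and the historical names are installed as symlinks pointing at them,
# so existing scripts keep working until the links are someday removed.
bindir=$(mktemp -d)

for tool in createdb dropdb createuser dropuser; do
    # Stand-in for the hypothetical renamed binary.
    printf '#!/bin/sh\necho running %s\n' "pg_$tool" > "$bindir/pg_$tool"
    chmod +x "$bindir/pg_$tool"
    # Old name resolves through the symlink to the new one.
    ln -s "pg_$tool" "$bindir/$tool"
done

"$bindir/createdb"    # prints: running pg_createdb
```

With this layout, tab-completing `pg_` finds everything, while invoking the old names still works; a packager could ship the symlinks in a separate compatibility package.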
{
"msg_contents": "On Wed, Mar 20, 2019 at 6:25 PM Euler Taveira <euler@timbira.com.br> wrote:\n>\n> createdb, dropdb, createuser, dropuser, reindexdb are binaries that\n> confuse most newbies. Which tool is theses binaries from? The names\n> does not give a hint. How often those confusing name tools are used?\n\ninitdb is probably an order of magnitude worse name than all of these.\n\n",
"msg_date": "Wed, 20 Mar 2019 18:36:13 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Wed, Mar 20, 2019 at 6:25 PM Euler Taveira <euler@timbira.com.br> wrote:\n>> createdb, dropdb, createuser, dropuser, reindexdb are binaries that\n>> confuse most newbies. Which tool is theses binaries from? The names\n>> does not give a hint. How often those confusing name tools are used?\n\n> initdb is probably an order of magnitude worse name than all of these.\n\nMeh. The ones with \"db\" in the name don't strike me as mortal sins;\neven if you don't recognize them as referring to a \"database\", you're\nnot likely to guess wrongly that you know what they do. The two that\nseem the worst to me are createuser and dropuser, which not only have\nno visible connection to \"postgres\" or \"database\" but could easily\nbe mistaken for utilities for managing operating-system accounts.\n\nWe managed to get rid of createlang and droplang in v10, and there\nhasn't been that much push-back about it. So maybe there could be\na move to remove createuser/dropuser? Or at least rename them to\npg_createuser and pg_dropuser. But I think this was discussed\n(again) during the v10 cycle, and we couldn't agree to do more than\nget rid of createlang/droplang.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 13:56:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Em qua, 20 de mar de 2019 às 14:57, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> We managed to get rid of createlang and droplang in v10, and there\n> hasn't been that much push-back about it. So maybe there could be\n> a move to remove createuser/dropuser? Or at least rename them to\n> pg_createuser and pg_dropuser. But I think this was discussed\n> (again) during the v10 cycle, and we couldn't agree to do more than\n> get rid of createlang/droplang.\n>\nVotes? +1 to remove createuser/dropuser (and also createdb/dropdb as I\nsaid in the other email). However, if we don't have sufficient votes,\nlet's at least consider a 'pg_' prefix.\n\n\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Wed, 20 Mar 2019 15:02:56 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-20, Euler Taveira wrote:\n\n> On Wed, Mar 20, 2019 at 2:57 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > We managed to get rid of createlang and droplang in v10, and there\n> > hasn't been that much push-back about it. So maybe there could be\n> > a move to remove createuser/dropuser? Or at least rename them to\n> > pg_createuser and pg_dropuser. But I think this was discussed\n> > (again) during the v10 cycle, and we couldn't agree to do more than\n> > get rid of createlang/droplang.\n\nPrevious discussion: \nhttps://postgr.es/m/CABUevExPrfPH5K5qM=zsT7tvfyACe+i5qjA6bfWCKKYrh8MJLw@mail.gmail.com\n\n> Votes? +1 to remove createuser/dropuser (and also createdb/dropdb as I\n> said in the other email). However, if we don't have sufficient votes,\n> let's at least consider a 'pg_' prefix.\n\nI vote to rename these utilities to have a pg_ prefix and to\nsimultaneously install symlinks for their current names, so that nothing\nbreaks.\n\n\n[In a couple of releases we could patch them so that they print a\ndeprecation warning to stderr if they're invoked without the prefix.]\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Wed, 20 Mar 2019 15:08:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, 20 Mar 2019 13:56:55 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Wed, Mar 20, 2019 at 6:25 PM Euler Taveira <euler@timbira.com.br>\n> > wrote: \n> >> createdb, dropdb, createuser, dropuser, reindexdb are binaries that\n> >> confuse most newbies. Which tool is theses binaries from? The names\n> >> does not give a hint. How often those confusing name tools are used? \n> \n> > initdb is probably an order of magnitude worse name than all of these. \n> \n> Meh. The ones with \"db\" in the name don't strike me as mortal sins;\n> even if you don't recognize them as referring to a \"database\", you're\n> not likely to guess wrongly that you know what they do. The two that\n> seem the worst to me are createuser and dropuser, which not only have\n> no visible connection to \"postgres\" or \"database\" but could easily\n> be mistaken for utilities for managing operating-system accounts.\n> \n> We managed to get rid of createlang and droplang in v10, and there\n> hasn't been that much push-back about it. So maybe there could be\n> a move to remove createuser/dropuser? Or at least rename them to\n> pg_createuser and pg_dropuser.\n\nIf you rename them, rename as pg_createrole and pg_droprole :)\n\nI teach people not to use \"CREATE USER/GROUP\", but each time I have to tell\nthem \"Yes, we kept createuser since 8.1 where roles has been introduced for\nbackward compatibility. No, there's no createrole\".\n\n++\n\n",
"msg_date": "Wed, 20 Mar 2019 19:10:25 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "\"Fred .Flintstone\" <eldmannen@gmail.com> writes:\n> Even just creating symlinks would be a welcome change.\n> So the real binary is pg_foo and foo is a symoblic link that points to pg_foo.\n> Then at least I can type pg_<tab> and use tab auto-completion to find\n> everything related to PostgreSQL.\n\nYou'd miss psql. I think the odds of renaming psql are not\ndistinguishable from zero: whatever arguments you might want to make\nabout, say, renaming initdb perhaps not affecting too many scripts\nare surely not going to fly for psql. So that line of argument\nisn't too convincing.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 14:11:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/20/19 9:32 PM, Alvaro Herrera wrote:\n> On 2019-Mar-20, Fred .Flintstone wrote:\n> \n>> Even just creating symlinks would be a welcome change.\n>> So the real binary is pg_foo and foo is a symoblic link that points to pg_foo.\n>> Then at least I can type pg_<tab> and use tab auto-completion to find\n>> everything related to PostgreSQL.\n> \n> There is merit to this argument; if the starting point is an unknown\n> file /usr/bin/foo, then having it be a symlink to /usr/bin/pg_foo makes\n> it clear which package it belongs to. We don't *have to* get rid of the\n> symlinks any time soon, but installing as symlinks now will allow Skynet\n> to get rid of them some decades from now.\n\n+1 to tasking Skynet with removing deprecated features. Seems like it \nwould save a lot of arguing.\n\n-- \n-David\ndavid@pgmasters.net\n\n",
"msg_date": "Wed, 20 Mar 2019 22:30:48 +0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Then someone who doesn't want the symlinks could delete them.\nOr the symlinks could ship in an optional postgresql-legacy-symlinks package.\n\nOn Wed, Mar 20, 2019 at 6:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-20, Fred .Flintstone wrote:\n>\n> > Even just creating symlinks would be a welcome change.\n> > So the real binary is pg_foo and foo is a symbolic link that points to pg_foo.\n> > Then at least I can type pg_<tab> and use tab auto-completion to find\n> > everything related to PostgreSQL.\n>\n> There is merit to this argument; if the starting point is an unknown\n> file /usr/bin/foo, then having it be a symlink to /usr/bin/pg_foo makes\n> it clear which package it belongs to. We don't *have to* get rid of the\n> symlinks any time soon, but installing as symlinks now will allow Skynet\n> to get rid of them some decades from now.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Mar 2019 20:09:59 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/20/19 2:08 PM, Alvaro Herrera wrote:\n> On 2019-Mar-20, Euler Taveira wrote:\n> \n>> Em qua, 20 de mar de 2019 às 14:57, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>>>\n>>> We managed to get rid of createlang and droplang in v10, and there\n>>> hasn't been that much push-back about it. So maybe there could be\n>>> a move to remove createuser/dropuser? Or at least rename them to\n>>> pg_createuser and pg_dropuser. But I think this was discussed\n>>> (again) during the v10 cycle, and we couldn't agree to do more than\n>>> get rid of createlang/droplang.\n> \n> Previous discussion: \n> https://postgr.es/m/CABUevExPrfPH5K5qM=zsT7tvfyACe+i5qjA6bfWCKKYrh8MJLw@mail.gmail.com\n> \n>> Votes? +1 to remove createuser/dropuser (and also createdb/dropdb as I\n>> said in the other email). However, if we don't have sufficient votes,\n>> let's at least consider a 'pg_' prefix.\n> \n> I vote to keep these rename these utilities to have a pg_ prefix and to\n> simultaneously install symlinks for their current names, so that nothing\n> breaks.\n\nThis sounds like a reasonable plan, pending which binaries we feel to do\nthat with.\n\nPardon this naive question as I have not used such systems in awhile,\nbut would this work on systems that do not support symlinks?\n\nJonathan",
"msg_date": "Wed, 20 Mar 2019 15:13:00 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "I would be fine with that.\nWe can make an exception for psql.\n\nAs long as we get rid of:\n* clusterdb\n* createdb\n* createuser\n* dropdb\n* dropuser\n* reindexdb\n* vacuumdb\n\nOn Wed, Mar 20, 2019 at 7:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Fred .Flintstone\" <eldmannen@gmail.com> writes:\n> > Even just creating symlinks would be a welcome change.\n> > So the real binary is pg_foo and foo is a symoblic link that points to pg_foo.\n> > Then at least I can type pg_<tab> and use tab auto-completion to find\n> > everything related to PostgreSQL.\n>\n> You'd miss psql. I think the odds of renaming psql are not\n> distinguishable from zero: whatever arguments you might want to make\n> about, say, renaming initdb perhaps not affecting too many scripts\n> are surely not going to fly for psql. So that line of argument\n> isn't too convincing.\n>\n> regards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 20:14:04 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/20/19 2:11 PM, Tom Lane wrote:\n> \"Fred .Flintstone\" <eldmannen@gmail.com> writes:\n>> Even just creating symlinks would be a welcome change.\n>> So the real binary is pg_foo and foo is a symoblic link that points to pg_foo.\n>> Then at least I can type pg_<tab> and use tab auto-completion to find\n>> everything related to PostgreSQL.\n> \n> You'd miss psql. I think the odds of renaming psql are not\n> distinguishable from zero: whatever arguments you might want to make\n> about, say, renaming initdb perhaps not affecting too many scripts\n> are surely not going to fly for psql. So that line of argument\n> isn't too convincing.\n\nTo add to that, for better or worse, many people associate the\nPostgreSQL database itself as \"psql\" or \"pgsql\" (\"I use psql, it's my\nfavorite database!\").\n\nIf we are evaluating this whole symlink / renaming thing, there could be\narguments for a \"pgsql\" alias to psql (or vice versa), but I don't think\n\"pg_sql\" makes any sense and could be fairly confusing.\n\nJonathan",
"msg_date": "Wed, 20 Mar 2019 15:15:02 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "The binaries:\n* clusterdb\n* createdb\n* createuser\n* dropdb\n* dropuser\n* reindexdb\n* vacuumdb\n\nOn Wed, Mar 20, 2019 at 8:13 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 3/20/19 2:08 PM, Alvaro Herrera wrote:\n> > On 2019-Mar-20, Euler Taveira wrote:\n> >\n> >> Em qua, 20 de mar de 2019 às 14:57, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n> >>>\n> >>> We managed to get rid of createlang and droplang in v10, and there\n> >>> hasn't been that much push-back about it. So maybe there could be\n> >>> a move to remove createuser/dropuser? Or at least rename them to\n> >>> pg_createuser and pg_dropuser. But I think this was discussed\n> >>> (again) during the v10 cycle, and we couldn't agree to do more than\n> >>> get rid of createlang/droplang.\n> >\n> > Previous discussion:\n> > https://postgr.es/m/CABUevExPrfPH5K5qM=zsT7tvfyACe+i5qjA6bfWCKKYrh8MJLw@mail.gmail.com\n> >\n> >> Votes? +1 to remove createuser/dropuser (and also createdb/dropdb as I\n> >> said in the other email). However, if we don't have sufficient votes,\n> >> let's at least consider a 'pg_' prefix.\n> >\n> > I vote to keep these rename these utilities to have a pg_ prefix and to\n> > simultaneously install symlinks for their current names, so that nothing\n> > breaks.\n>\n> This sounds like a reasonable plan, pending which binaries we feel to do\n> that with.\n>\n> Pardon this naive question as I have not used such systems in awhile,\n> but would this work on systems that do not support symlinks?\n>\n> Jonathan\n>\n\n",
"msg_date": "Wed, 20 Mar 2019 20:16:51 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-03-20 15:13:00 -0400, Jonathan S. Katz wrote:\n> Pardon this naive question as I have not used such systems in awhile,\n> but would this work on systems that do not support symlinks?\n\nWe can just copy the binaries there, they're not that big anyway.\n\n",
"msg_date": "Wed, 20 Mar 2019 12:17:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-20 15:15:02 -0400, Jonathan S. Katz wrote:\n> If we are evaluating this whole symlink / renaming thing, there could be\n> arguments for a \"pgsql\" alias to psql (or vice versa), but I don't think\n> \"pg_sql\" makes any sense and could be fairly confusing.\n\nI don't care much about createdb etc, but I'm *strongly* against\nrenaming psql and/or adding symlinks. That's like 95% of all\ninteractions people have with postgres binaries, making that more\nconfusing would be an enterily unnecessary self own.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Wed, 20 Mar 2019 12:19:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/20/19 3:19 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-03-20 15:15:02 -0400, Jonathan S. Katz wrote:\n>> If we are evaluating this whole symlink / renaming thing, there could be\n>> arguments for a \"pgsql\" alias to psql (or vice versa), but I don't think\n>> \"pg_sql\" makes any sense and could be fairly confusing.\n> \n> I don't care much about createdb etc, but I'm *strongly* against\n> renaming psql and/or adding symlinks. That's like 95% of all\n> interactions people have with postgres binaries, making that more\n> confusing would be an enterily unnecessary self own.\n\nYeah I agree. The only one I would entertain is \"pgsql\" given enough\npeople refer to PostgreSQL as such, but note I use the term \"entertain\"\nin a similar way to when I knowingly watch terrible movies.\n\nJonathan",
"msg_date": "Wed, 20 Mar 2019 15:21:34 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-20, Andres Freund wrote:\n\n> On 2019-03-20 15:15:02 -0400, Jonathan S. Katz wrote:\n> > If we are evaluating this whole symlink / renaming thing, there could be\n> > arguments for a \"pgsql\" alias to psql (or vice versa), but I don't think\n> > \"pg_sql\" makes any sense and could be fairly confusing.\n> \n> I don't care much about createdb etc, but I'm *strongly* against\n> renaming psql and/or adding symlinks.\n\n+1.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Mar 2019 16:53:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/20/19 7:08 PM, Alvaro Herrera wrote:\n> On 2019-Mar-20, Euler Taveira wrote:\n> \n>> Em qua, 20 de mar de 2019 às 14:57, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>>>\n>>> We managed to get rid of createlang and droplang in v10, and there\n>>> hasn't been that much push-back about it. So maybe there could be\n>>> a move to remove createuser/dropuser? Or at least rename them to\n>>> pg_createuser and pg_dropuser. But I think this was discussed\n>>> (again) during the v10 cycle, and we couldn't agree to do more than\n>>> get rid of createlang/droplang.\n> \n> Previous discussion: \n> https://postgr.es/m/CABUevExPrfPH5K5qM=zsT7tvfyACe+i5qjA6bfWCKKYrh8MJLw@mail.gmail.com\n> \n>> Votes? +1 to remove createuser/dropuser (and also createdb/dropdb as I\n>> said in the other email). However, if we don't have sufficient votes,\n>> let's at least consider a 'pg_' prefix.\n> \n> I vote to keep these rename these utilities to have a pg_ prefix and to\n> simultaneously install symlinks for their current names, so that nothing\n> breaks.\n> \n\nI don't really understand what issue are we trying to solve here.\n\nCan someone describe a scenario where this (name of the binary not\nclearly indicating it's related postgres) causes issues in practice? On\nmy system, there are ~1400 binaries in /usr/bin, and for the vast\nmajority of them it's rather unclear where do they come from.\n\nBut it's not really an issue, because we have tools to do that\n\n1) man\n\n2) -h/--help\n\n3) rpm -qf $file (and similarly for other packagers)\n\n4) set --prefix to install binaries so separate directory (which some\ndistros already do anyway)\n\nSo to me this seems like a fairly invasive change (potentially breaking\nquite a few scripts/tools) just to address a minor inconvenience.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Wed, 20 Mar 2019 23:22:44 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": ">> +1. As one of third party PostgreSQL tool developers, I am afraid\n>> changing names of PostgreSQL commands would give us lots of pain: for\n>> example checking PostgreSQL version to decide to use command \"foo\" not\n>> \"pg_foo\".\n>>\n> createdb, dropdb, createuser, dropuser, reindexdb are binaries that\n> confuse most newbies. Which tool is theses binaries from? The names\n> does not give a hint. How often those confusing name tools are used?\n> AFAICS a graphical tool or psql is used to create roles and databases.\n> psql -c \"stmt\" can replace createdb, dropdb, createuser and dropuser.\n> What about deprecate them (and remove after a support cycle)?\n\nAt least psql, initdb, pg_config, pgbench and pg_ctl for now. But I\ndon't want to say that renaming other commands would be fine for me\nbecause I would like to take a liberty to extend my tool for my users.\n\nBTW, a strange thing in the whole discussion is, installing those\nPostgreSQL commands in /usr/bin is done by packagers, not PostgreSQL\ncore project itself. The default installation directory has been\n/usr/local/pgsql/bin in the source code of PostgreSQL since it was\nborn, and I love the place. Forcing to install everything into\n/usr/bin is distributions' policy, not PostgreSQL core project's as\nfar as I know. So I wonder why people don't ask the renaming request\nto packagers, rather than PostgreSQL core project itself.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Thu, 21 Mar 2019 07:56:59 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "> I don't really understand what issue are we trying to solve here.\n> \n> Can someone describe a scenario where this (name of the binary not\n> clearly indicating it's related postgres) causes issues in practice? On\n> my system, there are ~1400 binaries in /usr/bin, and for the vast\n> majority of them it's rather unclear where do they come from.\n> \n> But it's not really an issue, because we have tools to do that\n> \n> 1) man\n> \n> 2) -h/--help\n> \n> 3) rpm -qf $file (and similarly for other packagers)\n> \n> 4) set --prefix to install binaries so separate directory (which some\n> distros already do anyway)\n> \n> So to me this seems like a fairly invasive change (potentially breaking\n> quite a few scripts/tools) just to address a minor inconvenience.\n\n+1.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Thu, 21 Mar 2019 08:41:32 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 08:41:32AM +0900, Tatsuo Ishii wrote:\n>> Can someone describe a scenario where this (name of the binary not\n>> clearly indicating it's related postgres) causes issues in practice? On\n>> my system, there are ~1400 binaries in /usr/bin, and for the vast\n>> majority of them it's rather unclear where do they come from.\n\nNaming conflict because our binary names are too generic? createdb\ncould for example be applied to any database, and not only Postgres.\n(I have 1600 entries in /usr/bin on a Debian installation.)\n\n>> \n>> But it's not really an issue, because we have tools to do that\n>> \n>> 1) man\n>> \n>> 2) -h/--help\n>> \n>> 3) rpm -qf $file (and similarly for other packagers)\n>> \n>> 4) set --prefix to install binaries so separate directory (which some\n>> distros already do anyway)\n>> \n>> So to me this seems like a fairly invasive change (potentially breaking\n>> quite a few scripts/tools) just to address a minor inconvenience.\n> \n> +1.\n\nYes, +1.\n--\nMichael",
"msg_date": "Thu, 21 Mar 2019 09:49:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "\n\nOn 3/21/19 1:49 AM, Michael Paquier wrote:\n> On Thu, Mar 21, 2019 at 08:41:32AM +0900, Tatsuo Ishii wrote:\n>>> Can someone describe a scenario where this (name of the binary not\n>>> clearly indicating it's related postgres) causes issues in practice? On\n>>> my system, there are ~1400 binaries in /usr/bin, and for the vast\n>>> majority of them it's rather unclear where do they come from.\n> \n> Naming conflict because our binary names are too generic? createdb\n> could for example be applied to any database, and not only Postgres.\n> (I have 1600 entries in /usr/bin on a Debian installation.)\n> \n\nMaybe. Do we actually know about such cases? Also, isn't setting\n--prefix a suitable solution? I mean, it's what we/packagers do to\nsupport installing multiple Pg versions (in which case it'll conflict no\nmatter how we rename stuff) anyway.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 21 Mar 2019 02:32:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On 3/21/19 1:49 AM, Michael Paquier wrote:\n>> On Thu, Mar 21, 2019 at 08:41:32AM +0900, Tatsuo Ishii wrote:\n>>> Can someone describe a scenario where this (name of the binary not\n>>> clearly indicating it's related postgres) causes issues in practice?\n\n>> Naming conflict because our binary names are too generic?\n\n> Maybe. Do we actually know about such cases?\n\nMore to the point, we have now got twenty+ years seniority on any other\npackage that might want those /usr/bin names. So a conflict is not\n*really* going to happen, or at least it's not going to be our problem\nif it does.\n\nThe whole thing is unfortunate, without a doubt, but it's still\nunclear that renaming those programs will buy anything that's worth\nthe conversion costs. I'd be happy to pay said costs if it were all\nfalling to this project to do so ... but most of the pain will be\nborne by other people.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 23:22:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "At 2019-03-20 23:22:44 +0100, tomas.vondra@2ndquadrant.com wrote:\n>\n> I don't really understand what issue are we trying to solve here.\n> \n> Can someone describe a scenario where this (name of the binary not\n> clearly indicating it's related postgres) causes issues in practice?\n> On my system, there are ~1400 binaries in /usr/bin, and for the vast\n> majority of them it's rather unclear where do they come from.\n\nIt sounds like a problem especially when described with charged terms\nlike \"pollutes\", but I agree with you and others that it just doesn't\nseem worth the effort to try to rename everything.\n\n-- Abhijit\n\n",
"msg_date": "Thu, 21 Mar 2019 09:36:17 +0530",
"msg_from": "Abhijit Menon-Sen <ams@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/20/19 8:19 PM, Andres Freund wrote:\n> On 2019-03-20 15:15:02 -0400, Jonathan S. Katz wrote:\n>> If we are evaluating this whole symlink / renaming thing, there could be\n>> arguments for a \"pgsql\" alias to psql (or vice versa), but I don't think\n>> \"pg_sql\" makes any sense and could be fairly confusing.\n> \n> I don't care much about createdb etc, but I'm *strongly* against\n> renaming psql and/or adding symlinks. That's like 95% of all\n> interactions people have with postgres binaries, making that more\n> confusing would be an enterily unnecessary self own.\n\n+1 \"psql\" as a tool for connecting to PostgreSQL is so well established \nthat renaming it would just confuse everyone.\n\nAndreas\n\n",
"msg_date": "Thu, 21 Mar 2019 07:04:25 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 1:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 21, 2019 at 08:41:32AM +0900, Tatsuo Ishii wrote:\n> >> Can someone describe a scenario where this (name of the binary not\n> >> clearly indicating it's related postgres) causes issues in practice? On\n> >> my system, there are ~1400 binaries in /usr/bin, and for the vast\n> >> majority of them it's rather unclear where do they come from.\n>\n> Naming conflict because our binary names are too generic? createdb\n> could for example be applied to any database, and not only Postgres.\n> (I have 1600 entries in /usr/bin on a Debian installation.)\n>\n\nI generally agree with Tom that there is sufficient precedence here that we\ndon't need to worry about these conflicts per se. However I would add two\npoints where we might want to think:\n\n1. createuser/dropuser are things that I don't consider good ways of\ncreating users anyway. I think we should just consider removing these\nbinaries. The SQL queries are better, more functional, and can be rolled\nback as a part of a larger transaction.\n\n2. 
initdb is not so much of a pressing issue but I think despite the\nlonger string, pg_ctl -D mydatadir init [options] would be clearer from a\nnew user perspective and pose less cognitive load.\n\n>\n> >>\n> >> But it's not really an issue, because we have tools to do that\n> >>\n> >> 1) man\n> >>\n> >> 2) -h/--help\n> >>\n> >> 3) rpm -qf $file (and similarly for other packagers)\n> >>\n> >> 4) set --prefix to install binaries so separate directory (which some\n> >> distros already do anyway)\n> >>\n> >> So to me this seems like a fairly invasive change (potentially breaking\n> >> quite a few scripts/tools) just to address a minor inconvenience.\n> >\n> > +1.\n>\n> Yes, +1.\n> --\n> Michael\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin\n\nOn Thu, Mar 21, 2019 at 1:49 AM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Mar 21, 2019 at 08:41:32AM +0900, Tatsuo Ishii wrote:\n>> Can someone describe a scenario where this (name of the binary not\n>> clearly indicating it's related postgres) causes issues in practice? On\n>> my system, there are ~1400 binaries in /usr/bin, and for the vast\n>> majority of them it's rather unclear where do they come from.\n\nNaming conflict because our binary names are too generic? createdb\ncould for example be applied to any database, and not only Postgres.\n(I have 1600 entries in /usr/bin on a Debian installation.)I generally agree with Tom that there is sufficient precedence here that we don't need to worry about these conflicts per se. However I would add two points where we might want to think:1. createuser/dropuser are things that I don't consider good ways of creating users anyway. I think we should just consider removing these binaries. The SQL queries are better, more functional, and can be rolled back as a part of a larger transaction.2. 
initdb is not so much of a pressing issue but I think despite the longer string, pg_ctl -D mydatadir init [options] would be clearer from a new user perspective and pose less cognitive load.\n\n>> \n>> But it's not really an issue, because we have tools to do that\n>> \n>> 1) man\n>> \n>> 2) -h/--help\n>> \n>> 3) rpm -qf $file (and similarly for other packagers)\n>> \n>> 4) set --prefix to install binaries so separate directory (which some\n>> distros already do anyway)\n>> \n>> So to me this seems like a fairly invasive change (potentially breaking\n>> quite a few scripts/tools) just to address a minor inconvenience.\n> \n> +1.\n\nYes, +1.\n--\nMichael\n-- Best Regards,Chris TraversHead of DatabaseTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com Saarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Thu, 21 Mar 2019 07:07:21 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-20, Tomas Vondra wrote:\n\n> So to me this seems like a fairly invasive change (potentially breaking\n> quite a few scripts/tools) just to address a minor inconvenience.\n\nI don't think anything would break, actually. What are you thinking\nwould break?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Thu, 21 Mar 2019 08:45:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/21/19 7:07 AM, Chris Travers wrote:\n> 1. createuser/dropuser are things that I don't consider good ways of \n> creating users anyway. I think we should just consider removing these \n> binaries. The SQL queries are better, more functional, and can be \n> rolled back as a part of a larger transaction.\n\nThose binaries are pretty convenient to use in scripts since they handle \nSQL escaping for you, but probably not convenient enough that we would \nhave added createuser today.\n\nCompare\n\ncreateuser \"$USER\"\n\nvs\n\necho 'CREATE ROLE :\"user\" LOGIN' | psql postgres -v \"user=$USER\"\n\nAndreas\n\n",
"msg_date": "Thu, 21 Mar 2019 13:12:23 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 3/21/19 7:07 AM, Chris Travers wrote:\n>> 1. createuser/dropuser are things that I don't consider good ways of \n>> creating users anyway.\n\n> Those binaries are pretty convenient to use in scripts since they handle \n> SQL escaping for you, but probably not convenient enough that we would \n> have added createuser today.\n\n> Compare\n> createuser \"$USER\"\n> vs\n> echo 'CREATE ROLE :\"user\" LOGIN' | psql postgres -v \"user=$USER\"\n\nHmm. That example is actually quite scary, because while nearly\nanybody who's ever done any shell scripting would get the first\none right, the second one requires a fair deal of specialized\nknowledge and creativity. I fear that 99% of people would have\ncoded it like\n\n\techo \"CREATE USER $USER\" | psql\n\nor some variant on that, and now they have a SQL-injection\nhazard that they didn't have before.\n\nSo there seems like a real risk that taking away createuser would\nresult in security holes, not just annoying-but-trivial script update\nwork. That puts me more in the camp of \"if we're going to do anything,\nrename it with a pg_ prefix\" than \"if we're going to do anything,\nremove it\".\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Mar 2019 10:02:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 10:02:40AM -0400, Tom Lane wrote:\n> So there seems like a real risk that taking away createuser would\n> result in security holes, not just annoying-but-trivial script update\n> work. That puts me more in the camp of \"if we're going to do anything,\n> rename it with a pg_ prefix\" than \"if we're going to do anything,\n> remove it\".\n\nRemoving it would be a bad idea as it is very easy to mess up with\nthings in such cases. As you mentioned, renaming the tools now would\ncreate more pain than actually solving things, so that's a bad idea\nanyway.\n\nI would be curious to hear the reason why such tool names have been\nchosen from the start. The tools have been switched to C in 9e0ab71\nfrom 2003, have been introduced by Peter Eisentraut as of 240e4c9 from\n1999, and I cannot spot the thread from the time where this was\ndiscussed.\n--\nMichael",
"msg_date": "Fri, 22 Mar 2019 09:36:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I would be curious to hear the reason why such tool names have been\n> chosen from the start. The tools have been switched to C in 9e0ab71\n> from 2003, have been introduced by Peter Eisentraut as of 240e4c9 from\n> 1999, and I cannot spot the thread from the time where this was\n> discussed.\n\ncreateuser, at least, dates back to Berkeley days: my copy of the\nPG v4r2 tarball contains a \"src/bin/createuser/createuser.sh\" file\ndated 1994-03-19. (The 1999 commit you mention just moved the\nfunctionality around; it was there before.) So I imagine the answer\nis that nobody at the time thought of fitting these scripts into a\nlarger ecosystem.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Mar 2019 22:05:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 22/03/19 3:05 PM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> I would be curious to hear the reason why such tool names have been\n>> chosen from the start. The tools have been switched to C in 9e0ab71\n>> from 2003, have been introduced by Peter Eisentraut as of 240e4c9 from\n>> 1999, and I cannot spot the thread from the time where this was\n>> discussed.\n> createuser, at least, dates back to Berkeley days: my copy of the\n> PG v4r2 tarball contains a \"src/bin/createuser/createuser.sh\" file\n> dated 1994-03-19. (The 1999 commit you mention just moved the\n> functionality around; it was there before.) So I imagine the answer\n> is that nobody at the time thought of fitting these scripts into a\n> larger ecosystem.\n\n\nFWIW the whole set is there in version 6.4.2:\n\nmarkir@vedavec:/download/postgres/src/postgresql-6.4.2/src/bin$ ls -l\ntotal 72\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 cleardbdir\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 createdb\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 createuser\ndrwxr-sr-x 2 markir adm 4096 Dec 31 1998 CVS\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 destroydb\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 destroyuser\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 initdb\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 initlocation\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 ipcclean\n-rw-r--r-- 1 markir adm 795 Dec 19 1998 Makefile\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pgaccess\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_dump\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_encoding\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_id\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_passwd\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pgtclsh\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_version\ndrwxr-sr-x 3 markir adm 4096 Dec 31 1998 psql\n\n--\n\nMark\n\n\n\n",
"msg_date": "Fri, 22 Mar 2019 15:13:45 +1300",
"msg_from": "Mark Kirkwood <mark.kirkwood@catalyst.net.nz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Many of these are gone in the modern PostgreSQL, a few remain.\nhttps://packages.ubuntu.com/disco/amd64/postgresql-client-11/filelist\n\n/usr/lib/postgresql/11/bin/clusterdb\n/usr/lib/postgresql/11/bin/createdb\n/usr/lib/postgresql/11/bin/createuser\n/usr/lib/postgresql/11/bin/dropdb\n/usr/lib/postgresql/11/bin/dropuser\n/usr/lib/postgresql/11/bin/pg_basebackup\n/usr/lib/postgresql/11/bin/pg_dump\n/usr/lib/postgresql/11/bin/pg_dumpall\n/usr/lib/postgresql/11/bin/pg_isready\n/usr/lib/postgresql/11/bin/pg_receivewal\n/usr/lib/postgresql/11/bin/pg_recvlogical\n/usr/lib/postgresql/11/bin/pg_restore\n/usr/lib/postgresql/11/bin/psql\n/usr/lib/postgresql/11/bin/reindexdb\n/usr/lib/postgresql/11/bin/vacuumdb\n\nCan we rename clusterdb, reindexdb and vacuumdb to carry the pg_ prefix?\n\nOn Fri, Mar 22, 2019 at 3:13 AM Mark Kirkwood\n<mark.kirkwood@catalyst.net.nz> wrote:\n>\n> On 22/03/19 3:05 PM, Tom Lane wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> >> I would be curious to hear the reason why such tool names have been\n> >> chosen from the start. The tools have been switched to C in 9e0ab71\n> >> from 2003, have been introduced by Peter Eisentraut as of 240e4c9 from\n> >> 1999, and I cannot spot the thread from the time where this was\n> >> discussed.\n> > createuser, at least, dates back to Berkeley days: my copy of the\n> > PG v4r2 tarball contains a \"src/bin/createuser/createuser.sh\" file\n> > dated 1994-03-19. (The 1999 commit you mention just moved the\n> > functionality around; it was there before.) 
So I imagine the answer\n> > is that nobody at the time thought of fitting these scripts into a\n> > larger ecosystem.\n>\n>\n> FWIW the whole set is there in version 6.4.2:\n>\n> markir@vedavec:/download/postgres/src/postgresql-6.4.2/src/bin$ ls -l\n> total 72\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 cleardbdir\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 createdb\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 createuser\n> drwxr-sr-x 2 markir adm 4096 Dec 31 1998 CVS\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 destroydb\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 destroyuser\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 initdb\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 initlocation\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 ipcclean\n> -rw-r--r-- 1 markir adm 795 Dec 19 1998 Makefile\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pgaccess\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_dump\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_encoding\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_id\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_passwd\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pgtclsh\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 pg_version\n> drwxr-sr-x 3 markir adm 4096 Dec 31 1998 psql\n>\n> --\n>\n> Mark\n>\n>\n\n\n",
"msg_date": "Wed, 27 Mar 2019 14:31:14 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 02:31:14PM +0100, Fred .Flintstone wrote:\n>Many of these are gone in the modern PostgreSQL, a few remain.\n>https://packages.ubuntu.com/disco/amd64/postgresql-client-11/filelist\n>\n>/usr/lib/postgresql/11/bin/clusterdb\n>/usr/lib/postgresql/11/bin/createdb\n>/usr/lib/postgresql/11/bin/createuser\n>/usr/lib/postgresql/11/bin/dropdb\n>/usr/lib/postgresql/11/bin/dropuser\n>/usr/lib/postgresql/11/bin/pg_basebackup\n>/usr/lib/postgresql/11/bin/pg_dump\n>/usr/lib/postgresql/11/bin/pg_dumpall\n>/usr/lib/postgresql/11/bin/pg_isready\n>/usr/lib/postgresql/11/bin/pg_receivewal\n>/usr/lib/postgresql/11/bin/pg_recvlogical\n>/usr/lib/postgresql/11/bin/pg_restore\n>/usr/lib/postgresql/11/bin/psql\n>/usr/lib/postgresql/11/bin/reindexdb\n>/usr/lib/postgresql/11/bin/vacuumdb\n>\n>Can we rename clusterdb, reindexdb and vacuumdb to carry the pg_ prefix?\n>\n\nI think the consensus in this thread (and the previous ancient ones) is\nthat it's not worth it. It's one thing to introduce new commands with the\npg_ prefix, and it's a completely different thing to rename existing ones.\nThat has inherent costs, and as Tom pointed out the burden would fall on\npeople using PostgreSQL (and that's rather undesirable).\n\nI personally don't see why having commands without pg_ prefix would be\nan issue. Especially when placed in a separate directory, which eliminates\nthe possibility of conflict with other commands.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 27 Mar 2019 14:51:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Hello,\n\nat the very least my Ubuntu Cosmic has createdb, createuser and createlang\nin user's space, and I had at least two cases when people were trying to\nuse createuser to create a new OS user.\n\nI would prefer them having pg_ prefix to have less confusion.\n\nOn Wed, Mar 27, 2019 at 4:51 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Mar 27, 2019 at 02:31:14PM +0100, Fred .Flintstone wrote:\n> >Many of these are gone in the modern PostgreSQL, a few remain.\n> >https://packages.ubuntu.com/disco/amd64/postgresql-client-11/filelist\n> >\n> >/usr/lib/postgresql/11/bin/clusterdb\n> >/usr/lib/postgresql/11/bin/createdb\n> >/usr/lib/postgresql/11/bin/createuser\n> >/usr/lib/postgresql/11/bin/dropdb\n> >/usr/lib/postgresql/11/bin/dropuser\n> >/usr/lib/postgresql/11/bin/pg_basebackup\n> >/usr/lib/postgresql/11/bin/pg_dump\n> >/usr/lib/postgresql/11/bin/pg_dumpall\n> >/usr/lib/postgresql/11/bin/pg_isready\n> >/usr/lib/postgresql/11/bin/pg_receivewal\n> >/usr/lib/postgresql/11/bin/pg_recvlogical\n> >/usr/lib/postgresql/11/bin/pg_restore\n> >/usr/lib/postgresql/11/bin/psql\n> >/usr/lib/postgresql/11/bin/reindexdb\n> >/usr/lib/postgresql/11/bin/vacuumdb\n> >\n> >Can we rename clusterdb, reindexdb and vacuumdb to carry the pg_ prefix?\n> >\n>\n> I think the consensus in this thread (and the previous ancient ones) is\n> that it's not worth it. It's one thing to introduce new commands with the\n> pg_ prefix, and it's a completely different thing to rename existing ones.\n> That has inherent costs, and as Tom pointed out the burden would fall on\n> people using PostgreSQL (and that's rather undesirable).\n>\n> I personally don't see why having commands without pg_ prefix would be\n> an issue. 
Especially when placed in a separate directory, which eliminates\n> the possibility of conflict with other commands.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n>\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Wed, 27 Mar 2019 16:56:02 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "It does not matter if they are in a different directory, because when\nI use tab-completion in the shell, then all commands show.\nI type \"create<tab>\" then \"createdb\" and \"createuser\" shows up. This\nis very confusing, and I don't know if this creates a Linux system\nuser account or a PostgreSQL account. Without knowing better, I would\nbe inclined to believe such a command would create a system account.\n\nIt gets even more confusing when a user have multiple database servers\ninstalled such as MySQL and PostgreSQL or MongoDB and PostgreSQL. Then\nit is very confusing what \"createdb\" does.\n\n\nOn Wed, Mar 27, 2019 at 2:51 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Mar 27, 2019 at 02:31:14PM +0100, Fred .Flintstone wrote:\n> >Many of these are gone in the modern PostgreSQL, a few remain.\n> >https://packages.ubuntu.com/disco/amd64/postgresql-client-11/filelist\n> >\n> >/usr/lib/postgresql/11/bin/clusterdb\n> >/usr/lib/postgresql/11/bin/createdb\n> >/usr/lib/postgresql/11/bin/createuser\n> >/usr/lib/postgresql/11/bin/dropdb\n> >/usr/lib/postgresql/11/bin/dropuser\n> >/usr/lib/postgresql/11/bin/pg_basebackup\n> >/usr/lib/postgresql/11/bin/pg_dump\n> >/usr/lib/postgresql/11/bin/pg_dumpall\n> >/usr/lib/postgresql/11/bin/pg_isready\n> >/usr/lib/postgresql/11/bin/pg_receivewal\n> >/usr/lib/postgresql/11/bin/pg_recvlogical\n> >/usr/lib/postgresql/11/bin/pg_restore\n> >/usr/lib/postgresql/11/bin/psql\n> >/usr/lib/postgresql/11/bin/reindexdb\n> >/usr/lib/postgresql/11/bin/vacuumdb\n> >\n> >Can we rename clusterdb, reindexdb and vacuumdb to carry the pg_ prefix?\n> >\n>\n> I think the consensus in this thread (and the previous ancient ones) is\n> that it's not worth it. 
It's one thing to introduce new commands with the\n> pg_ prefix, and it's a completely different thing to rename existing ones.\n> That has inherent costs, and as Tom pointed out the burden would fall on\n> people using PostgreSQL (and that's rather undesirable).\n>\n> I personally don't see why having commands without pg_ prefix would be\n> an issue. Especially when placed in a separate directory, which eliminates\n> the possibility of conflict with other commands.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n",
"msg_date": "Wed, 27 Mar 2019 14:57:03 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-27, Tomas Vondra wrote:\n\n> I think the consensus in this thread (and the previous ancient ones) is\n> that it's not worth it. It's one thing to introduce new commands with the\n> pg_ prefix, and it's a completely different thing to rename existing ones.\n> That has inherent costs, and as Tom pointed out the burden would fall on\n> people using PostgreSQL (and that's rather undesirable).\n\nI thought the consensus was to rename them, and install symlinks to the\nold names.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 11:00:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/27/19 2:51 PM, Tomas Vondra wrote:\n> I think the consensus in this thread (and the previous ancient ones) is\n> that it's not worth it. It's one thing to introduce new commands with the\n> pg_ prefix, and it's a completely different thing to rename existing ones.\n> That has inherent costs, and as Tom pointed out the burden would fall on\n> people using PostgreSQL (and that's rather undesirable).\n> \n> I personally don't see why having commands without pg_ prefix would be\n> an issue. Especially when placed in a separate directory, which eliminates\n> the possibility of conflict with other commands.\n\nI buy that it may not be worth breaking tens of thousands of scripts to \nfix this, but I disagree about it not being an issue. Most Linux \ndistributions add PostgreSQL's executables in to a directory which is in \nthe default $PATH (/usr/bin in the case of Debian). And even if it would \nbe installed into a separate directory there would still be a conflict \nas soon as that directory is added to $PATH.\n\nAnd I think that it is also relatively easy to confuse adduser and \ncreateuser when reading a script. Nothing about the name createuser \nindicates that it will create a role in an SQL database.\n\nAndreas\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:07:24 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 11:00:18AM -0300, Alvaro Herrera wrote:\n>On 2019-Mar-27, Tomas Vondra wrote:\n>\n>> I think the consensus in this thread (and the previous ancient ones) is\n>> that it's not worth it. It's one thing to introduce new commands with the\n>> pg_ prefix, and it's a completely different thing to rename existing ones.\n>> That has inherent costs, and as Tom pointed out the burden would fall on\n>> people using PostgreSQL (and that's rather undesirable).\n>\n>I thought the consensus was to rename them, and install symlinks to the\n>old names.\n>\n\nI know symlinks were mentioned/proposed, but I don't think there's a clear\nconsensus to do that. I might have missed that part of the discussion.\n\nThat being said, I'm not strongly opposed to doing that, although I still\ndon't see the need to do that ...\n\nregard\n\n>-- \n>�lvaro Herrera https://www.2ndQuadrant.com/\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:20:05 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Mar-27, Tomas Vondra wrote:\n>> I think the consensus in this thread (and the previous ancient ones) is\n>> that it's not worth it. It's one thing to introduce new commands with the\n>> pg_ prefix, and it's a completely different thing to rename existing ones.\n>> That has inherent costs, and as Tom pointed out the burden would fall on\n>> people using PostgreSQL (and that's rather undesirable).\n\n> I thought the consensus was to rename them, and install symlinks to the\n> old names.\n\nThe question is what's the endgame. We haven't actually fixed the\ncomplained-of confusion problem unless we eventually remove createuser\nand dropuser under those names. Are we prepared to force script\nbreakage of that sort, even over a multi-year deprecation cycle?\n\n(As a comparison point, I note that we still haven't removed the\n\"postmaster\" symlink, though it's been deprecated for at least a\ndozen years.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2019 10:23:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 03:07:24PM +0100, Andreas Karlsson wrote:\n>On 3/27/19 2:51 PM, Tomas Vondra wrote:\n>>I think the consensus in this thread (and the previous ancient ones) is\n>>that it's not worth it. It's one thing to introduce new commands with the\n>>pg_ prefix, and it's a completely different thing to rename existing ones.\n>>That has inherent costs, and as Tom pointed out the burden would fall on\n>>people using PostgreSQL (and that's rather undesirable).\n>>\n>>I personally don't see why having commands without pg_ prefix would be\n>>an issue. Especially when placed in a separate directory, which eliminates\n>>the possibility of conflict with other commands.\n>\n>I buy that it may not be worth breaking tens of thousands of scripts \n>to fix this, but I disagree about it not being an issue. Most Linux \n>distributions add PostgreSQL's executables in to a directory which is \n>in the default $PATH (/usr/bin in the case of Debian). And even if it \n>would be installed into a separate directory there would still be a \n>conflict as soon as that directory is added to $PATH.\n>\n\nThat is true, of course. But are there actual examples of such conflicts\nin practice? I mean, are there tools/packages that provide commands with\na conflicting name? I'm not aware of any, and as was pointed before, we'd\nhave ~20 years of history on any new ones.\n\n>And I think that it is also relatively easy to confuse adduser and \n>createuser when reading a script. Nothing about the name createuser \n>indicates that it will create a role in an SQL database.\n>\n\nSure, and I've confused those tools too in the past. But that's not\nsomething you'll hit in a script, at least not if you test it before\nrunning it on production system. And if you're running untested scripts,\nthis is likely the least of your problems ...\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:26:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/27/19 3:26 PM, Tomas Vondra wrote:\n> That is true, of course. But are there actual examples of such conflicts\n> in practice? I mean, are there tools/packages that provide commands with\n> a conflicting name? I'm not aware of any, and as was pointed before, we'd\n> have ~20 years of history on any new ones.\n\nThat is a fair argument. Since we squatted those names back in the \nmid-90s I think the risk of collision is low.\n\nAndreas\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:36:07 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 3/27/19 3:26 PM, Tomas Vondra wrote:\n>> That is true, of course. But are there actual examples of such conflicts\n>> in practice? I mean, are there tools/packages that provide commands with\n>> a conflicting name? I'm not aware of any, and as was pointed before, we'd\n>> have ~20 years of history on any new ones.\n\n> That is a fair argument. Since we squatted those names back in the \n> mid-90s I think the risk of collision is low.\n\nRight. I think there is a fair argument to be made for user confusion\n(not actual conflict) with respect to createuser and dropuser. The\nargument for renaming any of the other tools is much weaker, IMO.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2019 10:41:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-27, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Mar-27, Tomas Vondra wrote:\n> >> I think the consensus in this thread (and the previous ancient ones) is\n> >> that it's not worth it. It's one thing to introduce new commands with the\n> >> pg_ prefix, and it's a completely different thing to rename existing ones.\n> >> That has inherent costs, and as Tom pointed out the burden would fall on\n> >> people using PostgreSQL (and that's rather undesirable).\n> \n> > I thought the consensus was to rename them, and install symlinks to the\n> > old names.\n> \n> The question is what's the endgame. We haven't actually fixed the\n> complained-of confusion problem unless we eventually remove createuser\n> and dropuser under those names.\n\nWell, partly we have, because there mere act of having a symlink\ndocuments the command via the symlink target.\n\nSomebody proposed to rename createuser not to pg_createuser, though, but\nrather to pg_createrole; ditto dropuser. That seems to make sense.\n\nI additionally proposed (nobody replied to this part) that we could have\nthe command print a WARNING if the argv[0] is shown to be the old name.\nNot necessarily in pg12; maybe we can have them print such a warning in\npg13, and then remove the old names three years from now, or something\nlike that.\n\nI suppose that if you're a Postgres developer, you naturally expect that\n\"createdb\" creates a Postgres DB. What if you use multiple database\nsystems, and then only occasionally have to do DBA tasks? 
I find this\nPOV that createdb doesn't need renaming a bit self-centered.\n\n> Are we prepared to force script breakage of that sort, even over a\n> multi-year deprecation cycle?\n\nWhy not?\n\n> (As a comparison point, I note that we still haven't removed the\n> \"postmaster\" symlink, though it's been deprecated for at least a\n> dozen years.)\n\nI don't think that change was because of executable namespace pollution\nor user confusion. (Commit 5266f221a2e1, can't find the discussion\nthough.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 11:52:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Wed, 2019-03-27 at 15:07 +0100, Andreas Karlsson wrote:\r\n> [EXTERNAL SOURCE]\r\n> \r\n> \r\n> \r\n> On 3/27/19 2:51 PM, Tomas Vondra wrote:\r\n> > I think the consensus in this thread (and the previous ancient ones) is\r\n> > that it's not worth it. It's one thing to introduce new commands with the\r\n> > pg_ prefix, and it's a completely different thing to rename existing ones.\r\n> > That has inherent costs, and as Tom pointed out the burden would fall on\r\n> > people using PostgreSQL (and that's rather undesirable).\r\n> > \r\n> > I personally don't see why having commands without pg_ prefix would be\r\n> > an issue. Especially when placed in a separate directory, which eliminates\r\n> > the possibility of conflict with other commands.\r\n> \r\n> I buy that it may not be worth breaking tens of thousands of scripts to\r\n> fix this, but I disagree about it not being an issue. Most Linux\r\n> distributions add PostgreSQL's executables in to a directory which is in\r\n> the default $PATH (/usr/bin in the case of Debian). And even if it would\r\n> be installed into a separate directory there would still be a conflict\r\n> as soon as that directory is added to $PATH.\r\n> \r\n> And I think that it is also relatively easy to confuse adduser and\r\n> createuser when reading a script. Nothing about the name createuser\r\n> indicates that it will create a role in an SQL database.\r\n> \r\n> Andreas\r\n> \r\n\r\ntheres nothing about createuser or adduser( useradd on my system,\r\nadduser doesn't exist on mine ) that indicates that either would/should\r\ncreate a user in the system either. That's what man and -h/--help are\r\nfor. If you don't know what an executable does, don't invoke it until\r\nyou do. That's a basic premise for any executable.\r\n\r\nreid\r\n\r\n",
"msg_date": "Wed, 27 Mar 2019 15:02:29 +0000",
"msg_from": "Reid Thompson <Reid.Thompson@omnicell.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I suppose that if you're a Postgres developer, you naturally expect that\n> \"createdb\" creates a Postgres DB. What if you use multiple database\n> systems, and then only occasionally have to do DBA tasks? I find this\n> POV that createdb doesn't need renaming a bit self-centered.\n\nNobody is defending the existing names as being something we'd pick\nif we were picking them today. The question is whether changing them\nis worth the pain. (And, one more time, may I point out that most\nof the pain will be borne by people not on this mailing list, hence\nunable to vote here.) I don't think there is any reasonable argument\nthat said pain will be justified for any of them except maybe createuser\nand dropuser.\n\n>> \"postmaster\" symlink, though it's been deprecated for at least a\n>> dozen years.)\n\n> I don't think that change was because of executable namespace pollution\n> or user confusion. (Commit 5266f221a2e1, can't find the discussion\n> though.)\n\nMy recollection of the discussion is that people argued that \"postmaster\"\nmight be taken to have something to do with an e-mail server, and\ntherefore we needed to stop using that name. The lack of either follow-on\ncomplaints or follow-on action doesn't make me too well disposed to\nwhat is essentially that same argument over again.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2019 13:09:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 27/03/2019 15:26, Tomas Vondra wrote:\n> On Wed, Mar 27, 2019 at 03:07:24PM +0100, Andreas Karlsson wrote:\n>> On 3/27/19 2:51 PM, Tomas Vondra wrote:\n>>> I think the consensus in this thread (and the previous ancient ones) is\n>>> that it's not worth it. It's one thing to introduce new commands with\n>>> the\n>>> pg_ prefix, and it's a completely different thing to rename existing\n>>> ones.\n>>> That has inherent costs, and as Tom pointed out the burden would fall on\n>>> people using PostgreSQL (and that's rather undesirable).\n>>>\n>>> I personally don't see why having commands without pg_ prefix would be\n>>> an issue. Especially when placed in a separate directory, which\n>>> eliminates\n>>> the possibility of conflict with other commands.\n>>\n>> I buy that it may not be worth breaking tens of thousands of scripts\n>> to fix this, but I disagree about it not being an issue. Most Linux\n>> distributions add PostgreSQL's executables in to a directory which is\n>> in the default $PATH (/usr/bin in the case of Debian). And even if it\n>> would be installed into a separate directory there would still be a\n>> conflict as soon as that directory is added to $PATH.\n>>\n> \n> That is true, of course.\n\nIt's only partially true, for example on my systems:\n\nDebian/Ubuntu:\n$ readlink -f /usr/bin/createuser\n/usr/share/postgresql-common/pg_wrapper\n\nCentos (PGDG package):\nreadlink -f /usr/bin/createdb\n/usr/pgsql-11/bin/createdb\n\nThis also means that the idea about symlinks is something packages\nalready do.\n\n-- \n Petr Jelinek http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 18:26:11 +0100",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Symlinks would be great, because then the symlinks could be packaged\nas an optional package.\nsuch as;\n- postgresql-11\n- postgresql-client-11\n- postgresql-client-symlinks-11\n- postgresql-client-common\n- postgresql-common\n\nThen one might chose to not install the symlinks package or uninstall it.\n\nAnd it would ease discoverability, predictability, intuitiveness, and\nease-of-use so much by just being able to type pg_<tab> to discover\nall the PostgreSQL-related commands.\n\nOn Wed, Mar 27, 2019 at 6:26 PM Petr Jelinek\n<petr.jelinek@2ndquadrant.com> wrote:\n>\n> On 27/03/2019 15:26, Tomas Vondra wrote:\n> > On Wed, Mar 27, 2019 at 03:07:24PM +0100, Andreas Karlsson wrote:\n> >> On 3/27/19 2:51 PM, Tomas Vondra wrote:\n> >>> I think the consensus in this thread (and the previous ancient ones) is\n> >>> that it's not worth it. It's one thing to introduce new commands with\n> >>> the\n> >>> pg_ prefix, and it's a completely different thing to rename existing\n> >>> ones.\n> >>> That has inherent costs, and as Tom pointed out the burden would fall on\n> >>> people using PostgreSQL (and that's rather undesirable).\n> >>>\n> >>> I personally don't see why having commands without pg_ prefix would be\n> >>> an issue. Especially when placed in a separate directory, which\n> >>> eliminates\n> >>> the possibility of conflict with other commands.\n> >>\n> >> I buy that it may not be worth breaking tens of thousands of scripts\n> >> to fix this, but I disagree about it not being an issue. Most Linux\n> >> distributions add PostgreSQL's executables in to a directory which is\n> >> in the default $PATH (/usr/bin in the case of Debian). 
And even if it\n> >> would be installed into a separate directory there would still be a\n> >> conflict as soon as that directory is added to $PATH.\n> >>\n> >\n> > That is true, of course.\n>\n> It's only partially true, for example on my systems:\n>\n> Debian/Ubuntu:\n> $ readlink -f /usr/bin/createuser\n> /usr/share/postgresql-common/pg_wrapper\n>\n> Centos (PGDG package):\n> readlink -f /usr/bin/createdb\n> /usr/pgsql-11/bin/createdb\n>\n> This also means that the idea about symlinks is something packages\n> already do.\n>\n> --\n> Petr Jelinek http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 18:40:05 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-27, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I suppose that if you're a Postgres developer, you naturally expect that\n> > \"createdb\" creates a Postgres DB. What if you use multiple database\n> > systems, and then only occasionally have to do DBA tasks? I find this\n> > POV that createdb doesn't need renaming a bit self-centered.\n> \n> Nobody is defending the existing names as being something we'd pick\n> if we were picking them today. The question is whether changing them\n> is worth the pain. (And, one more time, may I point out that most\n> of the pain will be borne by people not on this mailing list, hence\n> unable to vote here.) I don't think there is any reasonable argument\n> that said pain will be justified for any of them except maybe createuser\n> and dropuser.\n\nThe implicit argument here is that existing users are a larger\npopulation than future users. I, for one, don't believe that. I think\ntaking no action is a disservice to future users. Also, that modifying\nthe code will be utterly painful and that less administrative code will be\nwritten in the future than has already been written.\n\nWe *could* run a poll on twitter/slack/website to get a feeling on a\nwider population. That would still reach mostly existing Postgres\nusers, but at least it would be much more diverse than this group.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:03:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 28/03/2019 03:07, Andreas Karlsson wrote:\n> On 3/27/19 2:51 PM, Tomas Vondra wrote:\n>> I think the consensus in this thread (and the previous ancient ones) is\n>> that it's not worth it. It's one thing to introduce new commands with \n>> the\n>> pg_ prefix, and it's a completely different thing to rename existing \n>> ones.\n>> That has inherent costs, and as Tom pointed out the burden would fall on\n>> people using PostgreSQL (and that's rather undesirable).\n>>\n>> I personally don't see why having commands without pg_ prefix would be\n>> an issue. Especially when placed in a separate directory, which \n>> eliminates\n>> the possibility of conflict with other commands.\n>\n> I buy that it may not be worth breaking tens of thousands of scripts \n> to fix this, but I disagree about it not being an issue. Most Linux \n> distributions add PostgreSQL's executables in to a directory which is \n> in the default $PATH (/usr/bin in the case of Debian). And even if it \n> would be installed into a separate directory there would still be a \n> conflict as soon as that directory is added to $PATH.\n>\n> And I think that it is also relatively easy to confuse adduser and \n> createuser when reading a script. Nothing about the name createuser \n> indicates that it will create a role in an SQL database.\n>\n> Andreas\n>\n>\nExisting users would feel some pain, but continued use of commands \n'creatuser' rather than pg_createuser (better still pg_createrole, as \nsuggested elsewhere) create confusion and display unintended arrogance.\n\nThere is a suggestion to use aliases, and I think that is a good interim \nstep, to introduce the 'pg_' variants. Possible with an option at \ninstall time to force only 'pg_' variants (with the possible exception \nof psql).\n\nThe only command, that I think warrants a permanent alias is psql, which \nI think is not ambiguous, but having a pg_sql for consistency would be good.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Thu, 28 Mar 2019 09:57:41 +1300",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 28/03/2019 03:41, Tom Lane wrote:\n> Andreas Karlsson <andreas@proxel.se> writes:\n>> On 3/27/19 3:26 PM, Tomas Vondra wrote:\n>>> That is true, of course. But are there actual examples of such conflicts\n>>> in practice? I mean, are there tools/packages that provide commands with\n>>> a conflicting name? I'm not aware of any, and as was pointed before, we'd\n>>> have ~20 years of history on any new ones.\n>> That is a fair argument. Since we squatted those names back in the\n>> mid-90s I think the risk of collision is low.\n> Right. I think there is a fair argument to be made for user confusion\n> (not actual conflict) with respect to createuser and dropuser. The\n> argument for renaming any of the other tools is much weaker, IMO.\n>\n> \t\t\tregards, tom lane\n>\n>\nI think the consistency of having all PostgreSQL commands start with \n'pg_' would make them both easier to find and to learn.\n\nAlthough I think we should keep the psql command name, in addition to \nthe pg_sql variant - the latter needed for consistency.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Thu, 28 Mar 2019 10:01:46 +1300",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-03-27 18:09, Tom Lane wrote:\n> My recollection of the discussion is that people argued that \"postmaster\"\n> might be taken to have something to do with an e-mail server, and\n> therefore we needed to stop using that name. The lack of either follow-on\n> complaints or follow-on action doesn't make me too well disposed to\n> what is essentially that same argument over again.\n\nThe reason there was that the distinction was mostly useless and the\ndifferent command-line option parsing was confusing. The name itself\nwas confusing but not in conflict with anything.\n\nHowever, we do know that we are very bad at actually getting rid of\ndeprecated things.\n\nHow about we compromise in this thread and remove postmaster and leave\neverything else as is. ;-)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 22:20:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "There would be no need to remove anything if we just renamed the\nexecutable and created symlinks for them.\n\nOn Wed, Mar 27, 2019 at 10:20 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-03-27 18:09, Tom Lane wrote:\n> > My recollection of the discussion is that people argued that \"postmaster\"\n> > might be taken to have something to do with an e-mail server, and\n> > therefore we needed to stop using that name. The lack of either follow-on\n> > complaints or follow-on action doesn't make me too well disposed to\n> > what is essentially that same argument over again.\n>\n> The reason there was that the distinction was mostly useless and the\n> different command-line option parsing was confusing. The name itself\n> was confusing but not in conflict with anything.\n>\n> However, we do know that we are very bad at actually getting rid of\n> deprecated things.\n>\n> How about we compromise in this thread and remove postmaster and leave\n> everything else as is. ;-)\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 14:18:27 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Thursday, March 28, 2019, Fred .Flintstone <eldmannen@gmail.com> wrote:\n\n> There would be no need to remove anything if we just renamed the\n> executable and created symlinks for them.\n\n\nWill there still be man pages for both commands?\n\nman pg_createuser\nman createuser\n\n?\n\n\n>\n> On Wed, Mar 27, 2019 at 10:20 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2019-03-27 18:09, Tom Lane wrote:\n> > > My recollection of the discussion is that people argued that\n> \"postmaster\"\n> > > might be taken to have something to do with an e-mail server, and\n> > > therefore we needed to stop using that name. The lack of either\n> follow-on\n> > > complaints or follow-on action doesn't make me too well disposed to\n> > > what is essentially that same argument over again.\n> >\n> > The reason there was that the distinction was mostly useless and the\n> > different command-line option parsing was confusing. The name itself\n> > was confusing but not in conflict with anything.\n> >\n> > However, we do know that we are very bad at actually getting rid of\n> > deprecated things.\n> >\n> > How about we compromise in this thread and remove postmaster and leave\n> > everything else as is. ;-)\n> >\n> > --\n> > Peter Eisentraut http://www.2ndQuadrant.com/\n> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n",
"msg_date": "Thu, 28 Mar 2019 07:31:53 -0600",
"msg_from": "Abel Abraham Camarillo Ojeda <acamari@verlet.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-28, Abel Abraham Camarillo Ojeda wrote:\n\n> On Thursday, March 28, 2019, Fred .Flintstone <eldmannen@gmail.com> wrote:\n> \n> > There would be no need to remove anything if we just renamed the\n> > executable and created symlinks for them.\n> \n> Will there still be man pages for both commands?\n> \n> man pg_createuser\n> man createuser\n\nThere are provisions in the manpage system to have some pages be\nsymlinks to other pages. We don't currently use that anywhere, but I\nsee no reason why we couldn't just do that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 10:50:46 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> There are provisions in the manpage system to have some pages be\n> symlinks to other pages. We don't currently use that anywhere,\n\nActually we do, eg WITH is a link to SELECT.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2019 09:52:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "So what we could do is:\n* Rename executables to be prefixed with pg_. Symlink old names to\nrenamed executables. This while remaining 100% backwards\ncompatibility, not breaking anything legacy.\n* Print warnings when the executables are executed using the symlink.\n* Have the option to have the symlinks in a different optional package.\n* At later time in the future be able to chose to remove the symlinks.\n\nOn Thu, Mar 28, 2019 at 2:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > There are provisions in the manpage system to have some pages be\n> > symlinks to other pages. We don't currently use that anywhere,\n>\n> Actually we do, eg WITH is a link to SELECT.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2019 15:05:53 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "> Andreas Karlsson <andreas@proxel.se> writes:\n>> On 3/27/19 3:26 PM, Tomas Vondra wrote:\n>>> That is true, of course. But are there actual examples of such conflicts\n>>> in practice? I mean, are there tools/packages that provide commands with\n>>> a conflicting name? I'm not aware of any, and as was pointed before, we'd\n>>> have ~20 years of history on any new ones.\n> \n>> That is a fair argument. Since we squatted those names back in the \n>> mid-90s I think the risk of collision is low.\n> \n> Right. I think there is a fair argument to be made for user confusion\n> (not actual conflict) with respect to createuser and dropuser. The\n> argument for renaming any of the other tools is much weaker, IMO.\n\nIf we were to invent new command names, what about doing similar to\ngit? I mean something like:\n\npgsql createdb ....\n\nHere, \"pgsql\" is the new command name and \"createdb\" is a subcommand name\nto create a database.\n\nThis way, we would be free from the command name conflict problem and\nplus, we could do:\n\npgsql --help\n\nwhich will print subcommand names when a user is not sure what the\nsubcommand name is.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 29 Mar 2019 10:04:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "I think that would be amazing! It would be great!\n\nOn Fri, Mar 29, 2019 at 4:01 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> > Andreas Karlsson <andreas@proxel.se> writes:\n> >> On 3/27/19 3:26 PM, Tomas Vondra wrote:\n> >>> That is true, of course. But are there actual examples of such conflicts\n> >>> in practice? I mean, are there tools/packages that provide commands with\n> >>> a conflicting name? I'm not aware of any, and as was pointed before, we'd\n> >>> have ~20 years of history on any new ones.\n> >\n> >> That is a fair argument. Since we squatted those names back in the\n> >> mid-90s I think the risk of collision is low.\n> >\n> > Right. I think there is a fair argument to be made for user confusion\n> > (not actual conflict) with respect to createuser and dropuser. The\n> > argument for renaming any of the other tools is much weaker, IMO.\n>\n> If we were to invent new command names, what about doing similar to\n> git? I mean something like:\n>\n> pgsql createdb ....\n>\n> Here, \"pgsql\" is new command name and \"createdb\" is a sub command name\n> to create a database.\n>\n> This way, we would be free from the command name conflict problem and\n> plus, we could do:\n>\n> pgsql --help\n>\n> which will prints subscommand names when a user is not sure what is\n> the sub command name.\n>\n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:29:07 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Re: Tatsuo Ishii 2019-03-29 <20190329.100407.1159785913847835944.t-ishii@sraoss.co.jp>\n> If we were to invent new command names, what about doing similar to\n> git? I mean something like:\n> \n> pgsql createdb ....\n\nThat is pretty close to \"psql\" and it will be utterly confusing for\nnew users. And everyone will have a hard time when talking about the\ntools, imagine someone saying \"please run psql appdbname\" or \"please\nrun pgsql createdb\". The difference is just too small.\n\nWhat might possibly make sense is to add options to psql to\nfacilitate common tasks:\n\npsql --createdb foo\npsql --createuser bar --superuser\npsql --reindex foo\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 16:25:57 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> What might possibly make sense is to add options to psql to\n> facilitate common tasks:\n\n> psql --createdb foo\n> psql --createuser bar --superuser\n> psql --reindex foo\n\nThat's a thought. Or perhaps better, allow pg_ctl to grow new\nsubcommands for those tasks?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:41:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Friday, March 29, 2019 4:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Christoph Berg myon@debian.org writes:\n>\n> > What might possibly make sense is to add options to psql to\n> > facilitate common tasks:\n>\n> > psql --createdb foo\n> > psql --createuser bar --superuser\n> > psql --reindex foo\n>\n> That's a thought. Or perhaps better, allow pg_ctl to grow new\n> subcommands for those tasks?\n\n+1 on using pg_ctl rather than psql, should we go down this path.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:44:05 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/29/19 11:44 AM, Daniel Gustafsson wrote:\n> On Friday, March 29, 2019 4:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> Christoph Berg myon@debian.org writes:\n>>\n>> > What might possibly make sense is to add options to psql to\n>> > facilitate common tasks:\n>>\n>> > psql --createdb foo\n>> > psql --createuser bar --superuser\n>> > psql --reindex foo\n>>\n>> That's a thought. Or perhaps better, allow pg_ctl to grow new\n>> subcommands for those tasks?\n> \n> +1 on using pg_ctl rather than psql, should we go down this path.\n\n\nAgreed -- another +1 here\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:48:47 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-29 11:41:26 -0400, Tom Lane wrote:\n> Or perhaps better, allow pg_ctl to grow new subcommands for those\n> tasks?\n\nWe'd need to be careful to somehow delineate commands that need access\nto the data directory / run locally on the server from the ones that\njust needs a client connection.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 29 Mar 2019 08:51:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-29 11:41:26 -0400, Tom Lane wrote:\n>> Or perhaps better, allow pg_ctl to grow new subcommands for those\n>> tasks?\n\n> We'd need to be careful to somehow delineate commands that need access\n> to the data directory / run locally on the server from the ones that\n> just needs a client connection.\n\nHmm, that's a good point: to put it in terms that make sense to a\npackager, it'd no longer be clear whether pg_ctl belongs in the\nserver package or the client package.\n\nI'm still not thrilled with wedging in these things as options\nto psql though: its command line semantics are overly complicated\nalready, when you consider things like multiple -c and -f options.\nI mean, somebody might think it's a feature to be able to do\n\n psql --createuser alice --createuser bob -c 'some command' -f somefile\n\nbut I don't.\n\nMaybe if we want to merge these things into one executable,\nit should be a new one. \"pg_util createrole bob\" ?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Mar 2019 12:25:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Mar-29, Tom Lane wrote:\n\n> Christoph Berg <myon@debian.org> writes:\n> > What might possibly make sense is to add options to psql to\n> > facilitate common tasks:\n> \n> > psql --createdb foo\n> > psql --createuser bar --superuser\n> > psql --reindex foo\n> \n> That's a thought. Or perhaps better, allow pg_ctl to grow new\n> subcommands for those tasks?\n\n+1, as I proposed in 2016:\nhttps://www.postgresql.org/message-id/20160826202911.GA320593@alvherre.pgsql\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 13:35:18 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Re: Tom Lane 2019-03-29 <19517.1553876700@sss.pgh.pa.us>\n> >> Or perhaps better, allow pg_ctl to grow new subcommands for those\n> >> tasks?\n> \n> > We'd need to be careful to somehow delineate commands that need access\n> > to the data directory / run locally on the server from the ones that\n> > just needs a client connection.\n> \n> Hmm, that's a good point: to put it in terms that make sense to a\n> packager, it'd no longer be clear whether pg_ctl belongs in the\n> server package or the client package.\n\nTrue, and putting end-user commands like \"create database\" into the\nsame admin tool like \"stop\", \"promote\", and \"kill\" feels both wrong\nand dangerous to me. It would also cause people to wonder why \"pg_ctl\n-h remotehost stop\" doesn't work.\n\n> I'm still not thrilled with wedging in these things as options\n> to psql though: its command line semantics are overly complicated\n> already, when you consider things like multiple -c and -f options.\n> I mean, somebody might think it's a feature to be able to do\n> \n> psql --createuser alice --createuser bob -c 'some command' -f somefile\n> \n> but I don't.\n\nAck. (Otoh, just processing all arguments after another might be\nwell-defined, and not too hard?)\n\n> Maybe if we want to merge these things into one executable,\n> it should be a new one. \"pg_util createrole bob\" ?\n\n\"pg\" is unfortunately already taken :(\n\nFwiw, let's please keep supporting \"createuser\". Creating login roles\nis more common than non-login ones, and having to type \"createrole\n--login bob\" is cumbersome and will cause endless support requests by\nconfused users.\n\nOther idea: If we don't want to reinvent a new tool, how about\nsupporting prepared statements in psql?\n\n psql -c 'create user %i' --args 'bob w. space'\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 19:50:31 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 7:50 PM Christoph Berg <myon@debian.org> wrote:\n\n> Re: Tom Lane 2019-03-29 <19517.1553876700@sss.pgh.pa.us>\n> > >> Or perhaps better, allow pg_ctl to grow new subcommands for those\n> > >> tasks?\n> >\n> > > We'd need to be careful to somehow delineate commands that need access\n> > > to the data directory / run locally on the server from the ones that\n> > > just needs a client connection.\n> >\n> > Hmm, that's a good point: to put it in terms that make sense to a\n> > packager, it'd no longer be clear whether pg_ctl belongs in the\n> > server package or the client package.\n>\n> True, and putting end-user commands like \"create database\" into the\n> same admin tool like \"stop\", \"promote\", and \"kill\" feels both wrong\n> and dangerous to me. It would also cause people to wonder why \"pg_ctl\n> -h remotehost stop\" doesn't work.\n>\n> > I'm still not thrilled with wedging in these things as options\n> > to psql though: its command line semantics are overly complicated\n> > already, when you consider things like multiple -c and -f options.\n> > I mean, somebody might think it's a feature to be able to do\n> >\n> > psql --createuser alice --createuser bob -c 'some command' -f\n> somefile\n> >\n> > but I don't.\n>\n> Ack. (Otoh, just processing all arguments after another might be\n> well-defined, and not too hard?)\n>\n> > Maybe if we want to merge these things into one executable,\n> > it should be a new one. \"pg_util createrole bob\" ?\n>\n> \"pg\" is unfortunately already taken :(\n>\n> Fwiw, let's please keep supporting \"createuser\". Creating login roles\n> is more common than non-login ones, and having to type \"createrole\n> --login bob\" is cumbersome and will cause endless support requests by\n> confused users.\n>\n> Other idea: If we don't want to reinvent a new tool, how about\n> supporting prepared statements in psql?\n>\n> psql -c 'create user %i' --args 'bob w. space'\n>\n\nPrepared statements cannot be DDL commands.\n\nBut psql has safe escaping via :\"xxx\" notation. So something like\n\npsql -c 'create role :\"role\"' -v role='my role' ...\n\nBut as far as I know, psql variables are not evaluated for a -c query.\n\nPavel\n\n\n>\n> Christoph\n>\n>\n>\n",
"msg_date": "Fri, 29 Mar 2019 20:01:30 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/29/19 3:01 PM, Pavel Stehule wrote:\n> But psql has safe escaping via :\"xxx\" notation. So some like\n> \n> psql -c 'create role :\"role\"' -v role='my role' ...\n> \n> But what I know the psql variables are not evaluated for -c query\n\nYou can do this:\necho \"create role :\\\"role\\\"\" | psql -v role='my role'\nCREATE ROLE\n\necho \"\\password :\\\"role\\\"\" | psql -v role='my role'\nEnter new password:\nEnter it again:\n\nThat said, this is kind of off the topic of this thread.\nI like Tom's last suggestion of:\n\n pg_util <command> <options>\n\nOf course that does not lend itself to symlinking for backward\ncompatibility, does it? If there is a way I am not familiar with it.\n\nI guess the alternative would be an alias, but can packages install an\nalias? Or something else I am not thinking about?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:32:57 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Re: Pavel Stehule 2019-03-29 <CAFj8pRAFpZcDGL5i1wMQiHZ43y5Qd=22=+2vTCFOrak_mrUxjw@mail.gmail.com>\n> > Other idea: If we don't want to reinvent a new tool, how about\n> > supporting prepared statements in psql?\n> >\n> > psql -c 'create user %i' --args 'bob w. space'\n> >\n> \n> Prepared statements cannot be DDL commands.\n\n\"Prepared\" in the sense of what format() does. (I should have used %I.)\n\n> But psql has safe escaping via :\"xxx\" notation. So some like\n> \n> psql -c 'create role :\"role\"' -v role='my role' ...\n\nThat's totally horrible to write, get correct, and to read again\nlater. We need something that people can actually use.\n\n> But what I know the psql variables are not evaluated for -c query\n\nI hate -c. It has so many caveats.\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 20:38:54 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/29/19 3:32 PM, Joe Conway wrote:\n> pg_util <command> <options>\n> \n> Of course that does not lend itself to symlinking for backward\n> compatibility, does it? If there is a way I am not familiar with it.\n\nOn Unix-like systems, you can have pg_util look at argv[0] to see\nif it was called createuser or what not.\n\nNot sure how translatable that is to other systems.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:41:48 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Re: Joe Conway 2019-03-29 <48e5efaf-7ea2-ed70-a803-949bbfec8f6b@joeconway.com>\n> echo \"\\password :\\\"role\\\"\" | psql -v role='my role'\n> Enter new password:\n> Enter it again:\n> \n> That said, this is kind of off the topic of this thread.\n\nIt is on-topic because the reason we can't just tell people to replace\n createuser $foo\nwith\n psql -c \"create user $foo\"\nis because $foo might need escaping.\n\nIMHO if we find a way to do that which is acceptable for sh scripts,\nthe createuser/... commands could go.\n\n> I like Tom's last suggestion of:\n> \n> pg_util <command> <options>\n> \n> Of course that does not lend itself to symlinking for backward\n> compatibility, does it? If there is a way I am not familiar with it.\n\nWe could symlink createuser -> pg_util. It is pretty common for\ncommands to act differently based on the name they were invoked as.\n\n> I guess the alternative would be an alias, but can packages install an\n> alias? Or something else I am not thinking about?\n\nAliases won't work for non-interactive shell scripts.\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 20:43:30 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 3/29/19 3:43 PM, Christoph Berg wrote:\n> Re: Joe Conway 2019-03-29 <48e5efaf-7ea2-ed70-a803-949bbfec8f6b@joeconway.com>\n>> echo \"\\password :\\\"role\\\"\" | psql -v role='my role'\n>> Enter new password:\n>> Enter it again:\n>> \n>> That said, this is kind of off the topic of this thread.\n> \n> It is on-topic because the reason we can't just tell people to replace\n> createuser $foo\n> with\n> psql -c \"create user $foo\"\n> is because $foo might need escaping.\n> \n> IMHO if we find an way to do that which is acceptable for sh scripts,\n> the createuser/... commands could go.\n\nI think these commands *were* once (at least some of them) shell scripts\nand we went to executable C in order to make them work on Windows, IIRC.\n\n>> I like Tom's last suggestion of:\n>> \n>> pg_util <command> <options>\n>> \n>> Of course that does not lend itself to symlinking for backward\n>> compatibility, does it? If there is a way I am not familiar with it.\n> \n> We could symlink createuser -> pg_util. It is pretty common for\n> commands to act differently based on the name the were invoked as.\n\nYeah, I forgot about that. Does that also go for Windows?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Fri, 29 Mar 2019 16:05:29 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Re: Joe Conway 2019-03-29 <f579fde8-8266-f2d6-4ba1-91c6046bc2f6@joeconway.com>\n> >> echo \"\\password :\\\"role\\\"\" | psql -v role='my role'\n> > \n> > It is on-topic because the reason we can't just tell people to replace\n> > createuser $foo\n> > with\n> > psql -c \"create user $foo\"\n> > is because $foo might need escaping.\n> > \n> > IMHO if we find an way to do that which is acceptable for sh scripts,\n> > the createuser/... commands could go.\n> \n> I think these commands *were* once (at least some of them) shell scripts\n> and we went to executable C in order to make them work on Windows, IIRC.\n\nI meant the interface to these programs. It needs to be something\npeople can use in sh scripts without wtf'ing. The :\\\"weirdness\\\" I\ncited above is IMHO not acceptable.\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 21:30:09 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "\nOn 3/29/19 11:41 AM, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n>> What might possibly make sense is to add options to psql to\n>> facilitate common tasks:\n>> psql --createdb foo\n>> psql --createuser bar --superuser\n>> psql --reindex foo\n> That's a thought. Or perhaps better, allow pg_ctl to grow new\n> subcommands for those tasks?\n>\n> \t\t\t\n\n\n\nI think that's a better direction. psql is already pretty cumbersome.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 17:06:35 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "I think the proposal you put forward is great, and would love to see\nit go ahead and get implemented.\n\nOn Fri, Mar 29, 2019 at 5:35 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-29, Tom Lane wrote:\n>\n> > Christoph Berg <myon@debian.org> writes:\n> > > What might possibly make sense is to add options to psql to\n> > > facilitate common tasks:\n> >\n> > > psql --createdb foo\n> > > psql --createuser bar --superuser\n> > > psql --reindex foo\n> >\n> > That's a thought. Or perhaps better, allow pg_ctl to grow new\n> > subcommands for those tasks?\n>\n> +1, as I proposed in 2016:\n> https://www.postgresql.org/message-id/20160826202911.GA320593@alvherre.pgsql\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 30 Mar 2019 11:16:27 +0100",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-03-29 16:41, Tom Lane wrote:\n> Or perhaps better, allow pg_ctl to grow new\n> subcommands for those tasks?\n\npg_ctl is a tool to control the server; the commands being complained\nabout are client-side things.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 30 Mar 2019 12:24:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-03-29 20:32, Joe Conway wrote:\n> pg_util <command> <options>\n\nHow is that better than just renaming to pg_$oldname?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 30 Mar 2019 12:27:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "It looks like this thread is featured on LWN under the article:\nProgram names and \"pollution\".\nhttps://lwn.net/\nhttps://lwn.net/Articles/784508/ (Subscription required)\n\nOn Sat, Mar 30, 2019 at 12:27 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-03-29 20:32, Joe Conway wrote:\n> > pg_util <command> <options>\n>\n> How is that better than just renaming to pg_$oldname?\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Apr 2019 23:24:44 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "It seems we do have a clear path forward on how to accomplish this and\nimplement this change.\n\n1. Rename executables to carry the pg_ prefix.\n2. Create symlinks from the old names to the new names.\n3. Modify the executables to read argv[0] and print a warning if the\nexecutable is called from the old name (symlink).\n\nThis seems technically feasible and easy.\n\nHow can we proceed?\n\nOn Tue, Apr 2, 2019 at 11:24 PM Fred .Flintstone <eldmannen@gmail.com> wrote:\n>\n> It looks like this thread is featured on LWN under the article:\n> Program names and \"pollution\".\n> https://lwn.net/\n> https://lwn.net/Articles/784508/ (Subscription required)\n>\n> On Sat, Mar 30, 2019 at 12:27 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> >\n> > On 2019-03-29 20:32, Joe Conway wrote:\n> > > pg_util <command> <options>\n> >\n> > How is that better than just renaming to pg_$oldname?\n> >\n> > --\n> > Peter Eisentraut http://www.2ndQuadrant.com/\n> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 14:29:59 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
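Steps 2 and 3 of the plan above (legacy-name symlinks plus an argv[0] check) could be sketched roughly like this — a hypothetical shell shim, not an agreed PostgreSQL interface; the function name and warning wording are illustrative only:

```shell
# Hypothetical sketch: inspect the name a tool was invoked under (argv[0])
# and warn when a legacy, unprefixed alias is used. A binary installed as
# pg_createdb with a "createdb" symlink would run this check on "$0" first.
warn_if_legacy() {
    progname=$(basename "$1")
    case "$progname" in
        pg_*)
            ;;  # invoked via the new, prefixed name: stay quiet
        *)
            echo "WARNING: '$progname' is deprecated; use 'pg_$progname' instead" >&2
            ;;
    esac
    echo "$progname"
}

# Example invocation, as a legacy symlink would see it:
warn_if_legacy /usr/local/pgsql/bin/createdb >/dev/null
```

The warning goes to stderr so that scripts capturing the tool's stdout keep working unchanged.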
{
"msg_contents": "Re: Fred .Flintstone 2019-04-10 <CAJgfmqVQWFM7F-JogWEo6MWGA8Oa8BtB4dYAo=y7X5q=SBd35A@mail.gmail.com>\n> It seems we do have a clear path forward on how to accomplish this and\n> implement this change.\n> \n> 1. Rename executables to carry the pg_ prefix.\n> 2. Create symlinks from the old names to the new names.\n> 3. Modify the executables to read argv[0] and print a warning if the\n> executable is called from the old name (symlink).\n> \n> This seems technically feasible and easy.\n> \n> How can we proceed?\n\nYou can send a patch.\n\nBut I don't think there has been a \"clear\" agreement that this is a\ngood idea.\n\nChristoph\n\n\n",
"msg_date": "Wed, 10 Apr 2019 14:52:25 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "> On 2019-03-29 20:32, Joe Conway wrote:\n>> pg_util <command> <options>\n> \n> How is that better than just renaming to pg_$oldname?\n\nAs I already said upthread:\n\n> This way, we would be free from the command name conflict problem\n> and plus, we could do:\n>\n> pgsql --help\n>\n> which will print subcommand names when a user is not sure what the\n> subcommand name is.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 10 Apr 2019 22:01:12 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
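The single-entry-point idea being debated here ("pg_util <command>", or a "pgsql --help" that lists subcommands) could look roughly like the following — a hypothetical wrapper sketch, assuming the existing tools keep their current names and stay on PATH; the command list and help text are illustrative, not a settled interface:

```shell
# Hypothetical "pg_util" front end for the discussion above.
pg_util() {
    cmd=$1
    case "$cmd" in
        createdb|dropdb|createuser|dropuser|clusterdb|reindexdb|vacuumdb)
            shift
            command "$cmd" "$@"   # delegate to the existing binary
            ;;
        ""|-h|--help)
            echo "usage: pg_util <command> [options]"
            echo "commands: createdb dropdb createuser dropuser clusterdb reindexdb vacuumdb"
            ;;
        *)
            echo "pg_util: unknown command '$cmd'" >&2
            return 1
            ;;
    esac
}
```

With this shape, `pg_util --help` gives the subcommand summary described above, while the delegated tools would be free to move out of the default PATH later without changing the wrapper's interface.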
{
"msg_contents": "Does anyone oppose the proposal?\nHow can we determine consensus?\nIs there any voting process?\n\nIs there any developer more versed in C than I am who\ncould write this patch?\n\nOn Wed, Apr 10, 2019 at 2:52 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Fred .Flintstone 2019-04-10 <CAJgfmqVQWFM7F-JogWEo6MWGA8Oa8BtB4dYAo=y7X5q=SBd35A@mail.gmail.com>\n> > It seems we do have a clear path forward on how to accomplish this and\n> > implement this change.\n> >\n> > 1. Rename executables to carry the pg_ prefix.\n> > 2. Create symlinks from the old names to the new names.\n> > 3. Modify the executables to read argv[0] and print a warning if the\n> > executable is called from the old name (symlink).\n> >\n> > This seems technically feasible and easy.\n> >\n> > How can we proceed?\n>\n> You can send a patch.\n>\n> But I don't think there has been a \"clear\" agreement that this is a\n> good idea.\n>\n> Christoph\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:06:49 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Re: Fred .Flintstone 2019-04-10 <CAJgfmqXJA6f_JEiBP81yVxgOhCZd-SOYL0pO22nftug1W0b-Bw@mail.gmail.com>\n> Does anyone oppose the proposal?\n\nI don't think part #3 has been discussed, and I'd oppose printing\nthese warnings.\n\nChristoph\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:10:11 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "The warnings would only be printed if the programs were executed with\nthe old file names.\nThis is in order to inform people relying on the old names that they are\ndeprecated and that they should move to the new names with the pg_ prefix.\n\nOn Wed, Apr 10, 2019 at 3:10 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Fred .Flintstone 2019-04-10 <CAJgfmqXJA6f_JEiBP81yVxgOhCZd-SOYL0pO22nftug1W0b-Bw@mail.gmail.com>\n> > Does anyone oppose the proposal?\n>\n> I don't think part #3 has been discussed, and I'd oppose printing\n> these warnings.\n>\n> Christoph\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:15:13 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-04-10 15:15, Fred .Flintstone wrote:\n> The warnings would only be printed if the programs were executed with\n> the old file names.\n> This in order to inform people relying on the old names that they are\n> deprecated and they should move to the new names with the pg_ prefix.\n\nYeah, that would be annoying. Let's not do that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 21:44:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-04-10 15:01, Tatsuo Ishii wrote:\n>> On 2019-03-29 20:32, Joe Conway wrote:\n>>> pg_util <command> <options>\n>>\n>> How is that better than just renaming to pg_$oldname?\n> \n> As I already said in up thread:\n> \n>> This way, we would be free from the command name conflict problem\n\nWell, whatever we do -- if anything -- we would certainly need to keep\nthe old names around for a while somehow. So this doesn't really make\nthat issue go away.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 21:45:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "I just want to be on record that I don't think there is a problem here that\nneeds to be solved. The choice to put Postgres-related binaries in /usr/bin\nor wherever is a distribution/packaging decision. As has been pointed out,\nif I download, build, and install Postgres, the binaries by default go\nin /usr/local/pgsql/bin.\n\nIt is a long-standing Unix tradition to have short-named commands from many\nsources in /usr/bin and /bin, not to mention other files, often with short\nnames, in various directories all over the system. For example, on one of\nthe Ubuntu machines at my work, take a look at all the 2-character commands\nin those directories, and how many different packages they come from, in\nthe list at the bottom of this message.\n\nAt this point I think Postgres absolutely owns the name \"psql\" as a Unix\nbinary and I would oppose any suggestion that this should be renamed. Just\nmy own effort to teach my fingers to type something different would\nprobably outweigh any benefit from renaming.\n\nHaving said this, if people are enthusiastic and can actually agree, there\nare a few changes that might make sense:\n\n- move clusterdb, createdb, etc. (*db, but not initdb because that is a\nserver, not client, program) into pg_db_util [subcommand] (or some such)\n- move createuser, dropuser into pg_role_util [subcommand] (or some such)\n- pgbench -> pg_bench (why no '_' anyway?)\n- ecpg -> pg_ec (usually invoked by makefiles anyway, I'm guessing)\n\nBut I consider this worth doing only if people consider that it's an\nimprovement for reasons other than just getting stuff out of /bin or\n/usr/bin.\n\nList of 2-character commands and their source packages on one of our\nsystems (the \"no path found\" ones are mostly symlinks into the Ubuntu\n\"alternatives\" system):\n\n16:52 ijmorlan@ubuntu1604-102$ dpkg -S /usr/bin/?? /bin/?? 
| sort\ndpkg-query: no path found matching pattern /usr/bin/cc\ndpkg-query: no path found matching pattern /usr/bin/ex\ndpkg-query: no path found matching pattern /usr/bin/fp\ndpkg-query: no path found matching pattern /usr/bin/js\ndpkg-query: no path found matching pattern /usr/bin/pc\ndpkg-query: no path found matching pattern /usr/bin/rn\ndpkg-query: no path found matching pattern /usr/bin/rt\ndpkg-query: no path found matching pattern /usr/bin/vi\ndpkg-query: no path found matching pattern /bin/mt\ndpkg-query: no path found matching pattern /bin/nc\nacct: /usr/bin/ac\napache2-utils: /usr/bin/ab\naspectj: /usr/bin/aj\nat: /usr/bin/at\nbc: /usr/bin/bc\nbf: /usr/bin/bf\nbinutils: /usr/bin/ar\nbinutils: /usr/bin/as\nbinutils: /usr/bin/ld\nbinutils: /usr/bin/nm\nbsdmainutils: /usr/bin/hd\nbsdmainutils: /usr/bin/ul\nbyobu: /usr/bin/NF\ncoreutils: /bin/cp\ncoreutils: /bin/dd\ncoreutils: /bin/df\ncoreutils: /bin/ln\ncoreutils: /bin/ls\ncoreutils: /bin/mv\ncoreutils: /bin/rm\ncoreutils: /usr/bin/du\ncoreutils: /usr/bin/id\ncoreutils: /usr/bin/nl\ncoreutils: /usr/bin/od\ncoreutils: /usr/bin/pr\ncoreutils: /usr/bin/tr\ncoreutils: /usr/bin/wc\ncups-client: /usr/bin/lp\ndash: /bin/sh\ndc: /usr/bin/dc\ndebhelper: /usr/bin/dh\ndiversion by dash from: /bin/sh\ndiversion by dash to: /bin/sh.distrib\ned: /bin/ed\nghostscript: /usr/bin/gs\ngraphviz: /usr/bin/gc\ngv: /usr/bin/gv\ni3-wm: /usr/bin/i3\nii: /usr/bin/ii\niproute2: /bin/ip\niproute2: /bin/ss\nispell: /usr/bin/sq\nlogin: /bin/su\nlogin: /usr/bin/sg\nm4: /usr/bin/m4\nmc: /usr/bin/mc\nmercurial: /usr/bin/hg\nmono-devel: /usr/bin/al\nmono-devel: /usr/bin/lc\nmono-devel: /usr/bin/sn\nmtools: /usr/bin/lz\nmtools: /usr/bin/uz\np7zip-full: /usr/bin/7z\nprocps: /bin/ps\nrcs: /usr/bin/ci\nrcs: /usr/bin/co\nrs: /usr/bin/rs\nruby: /usr/bin/ri\nsc: /usr/bin/sc\nspeech-tools: /usr/bin/dp\ntex4ht: /usr/bin/ht\ntexlive-binaries: /usr/bin/mf\nutil-linux: /usr/bin/pg\nxz-utils: /usr/bin/xz",
"msg_date": "Wed, 10 Apr 2019 17:08:16 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Em sex, 29 de mar de 2019 às 13:25, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> Maybe if we want to merge these things into one executable,\n> it should be a new one. \"pg_util createrole bob\" ?\n>\n+1 as I proposed in\nhttps://www.postgresql.org/message-id/bdd1adb1-c26d-ad1f-2f15-cc52056065d4%40timbira.com.br\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Wed, 10 Apr 2019 20:59:19 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": ">>> On 2019-03-29 20:32, Joe Conway wrote:\n>>>> pg_util <command> <options>\n>>>\n>>> How is that better than just renaming to pg_$oldname?\n>> \n>> As I already said in up thread:\n>> \n>>> This way, we would be free from the command name conflict problem\n> \n> Well, whatever we do -- if anything -- we would certainly need to keep\n> the old names around for a while somehow. So this doesn't really make\n> that issue go away.\n\nAnother complaint was that it is hard for novice users to remember the\ntool names. I think this approach would solve that problem.\n\nI agree that the command name conflict problem will not be solved by\nthe idea. However, I do not believe there is a name conflict problem in\nthe first place, so I am happy to keep the old names as they are.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:09:15 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "It would make the old commands more easily discoverable. Just type pg_\nand press the tab key for auto-completion.\n\nOn Wed, Apr 10, 2019 at 9:46 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-04-10 15:01, Tatsuo Ishii wrote:\n> >> On 2019-03-29 20:32, Joe Conway wrote:\n> >>> pg_util <command> <options>\n> >>\n> >> How is that better than just renaming to pg_$oldname?\n> >\n> > As I already said in up thread:\n> >\n> >> This way, we would be free from the command name conflict problem\n>\n> Well, whatever we do -- if anything -- we would certainly need to keep\n> the old names around for a while somehow. So this doesn't really make\n> that issue go away.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 14:25:30 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Friday, April 12, 2019 2:25 PM, Fred .Flintstone <eldmannen@gmail.com> wrote:\n\n> It would make the old commands more easily discoverable. Just type pg_\n> and press the tab key for auto-completion.\n\nThere are many good reasons for the changes proposed in this thread, but I'm\nnot sure if discoverability is one. Relying on autocompleting a filename to\nfigure out existing tooling for database maintenance and DBA type operations\nseems like a fragile usecase.\n\nIf commandline discoverability is of importance, providing a summary of the\ntools in \"man postgresql\" seems like a better option.\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 12 Apr 2019 12:56:26 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "I would disagree.\n\nDiscoverability is important, and so is having a user space that is\nintuitive and predictable.\nWith the discoverability exposed by pg_<tab>, you immediately find\nout what is available.\n\nOne shouldn't have to delve down into manuals and books, then forget\nwhat that darn command was next time it's needed, just to have to\nreturn to the documentation again.\n\nPreferably a wrapper around the tools could provide a summary for all\nthe tools, just like git --help.\n\nOn Fri, Apr 12, 2019 at 2:56 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> On Friday, April 12, 2019 2:25 PM, Fred .Flintstone <eldmannen@gmail.com> wrote:\n>\n> > It would make the old commands more easily discoverable. Just type pg_\n> > and press the tab key for auto-completion.\n>\n> There are many good reasons for the changes proposed in this thread, but I'm\n> not sure if discoverability is one. Relying on autocompleting a filename to\n> figure out existing tooling for database maintenance and DBA type operations\n> seems like a fragile usecase.\n>\n> If commandline discoverability is of importance, providing a summary of the\n> tools in \"man postgresql\" seems like a better option.\n>\n> cheers ./daniel\n\n\n",
"msg_date": "Fri, 12 Apr 2019 15:04:34 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 2019-Apr-12, Daniel Gustafsson wrote:\n\n> There are many good reasons for the changes proposed in this thread, but I'm\n> not sure if discoverability is one. Relying on autocompleting a filename to\n> figure out existing tooling for database maintenance and DBA type operations\n> seems like a fragile usecase.\n> \n> If commandline discoverability is of importance, providing a summary of the\n> tools in \"man postgresql\" seems like a better option.\n\nThe first comment in the LWN article:\n \"It's broken and obviously a bad idea but we've been doing it for so long we\n shouldn't attempt to fix it\"\n\nIMO the future is longer than the past, and has more users, so let's do\nit right instead of perpetuating the mistakes.\n\n\n... unless you think PostgreSQL is going to become irrelevant before\n2050.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 12 Apr 2019 09:20:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Friday, April 12, 2019 3:20 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Apr-12, Daniel Gustafsson wrote:\n>\n> > There are many good reasons for the changes proposed in this thread, but I'm\n> > not sure if discoverability is one. Relying on autocompleting a filename to\n> > figure out existing tooling for database maintenance and DBA type operations\n> > seems like a fragile usecase.\n> > If commandline discoverability is of importance, providing a summary of the\n> > tools in \"man postgresql\" seems like a better option.\n>\n> The first comment in the LWN article:\n> \"It's broken and obviously a bad idea but we've been doing it for so long we\n> shouldn't attempt to fix it\"\n>\n> IMO the future is longer than the past, and has more users, so let's do\n> it right instead of perpetuating the mistakes.\n>\n> ... unless you think PostgreSQL is going to become irrelevant before\n> 2050.\n\nNot at all, and as I said there are many good reasons for doing this. I just\ndon't think \"discoverability\" is the driver, since I consider that a different\nthing from ease of use and avoid confusion with system tools etc (my reading of\nthat word is \"finding something new\", not \"how did I spell that tool again\").\n\ncheers ./daniel\n\n\n",
"msg_date": "Fri, 12 Apr 2019 13:36:59 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "I think of discoverability as how easy it is to discover and\nrediscover things.\nLike rediscovering commands you forgot the name of. Like \"what was the\ncommand to create a database?\" - just type pg_ and press tab and see\nwhat's there.\n\nThe LWN article is now unlocked to all readers, not just paying\nsubscribers. It has many comments which might bring value to this\ndiscussion.\nhttps://lwn.net/Articles/784508/\n\nOn Fri, Apr 12, 2019 at 3:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> On Friday, April 12, 2019 3:20 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> > On 2019-Apr-12, Daniel Gustafsson wrote:\n> >\n> > > There are many good reasons for the changes proposed in this thread, but I'm\n> > > not sure if discoverability is one. Relying on autocompleting a filename to\n> > > figure out existing tooling for database maintenance and DBA type operations\n> > > seems like a fragile usecase.\n> > > If commandline discoverability is of importance, providing a summary of the\n> > > tools in \"man postgresql\" seems like a better option.\n> >\n> > The first comment in the LWN article:\n> > \"It's broken and obviously a bad idea but we've been doing it for so long we\n> > shouldn't attempt to fix it\"\n> >\n> > IMO the future is longer than the past, and has more users, so let's do\n> > it right instead of perpetuating the mistakes.\n> >\n> > ... unless you think PostgreSQL is going to become irrelevant before\n> > 2050.\n>\n> Not at all, and as I said there are many good reasons for doing this. I just\n> don't think \"discoverability\" is the driver, since I consider that a different\n> thing from ease of use and avoid confusion with system tools etc (my reading of\n> that word is \"finding something new\", not \"how did I spell that tool again\").\n>\n> cheers ./daniel\n\n\n",
"msg_date": "Fri, 12 Apr 2019 16:56:58 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 3:20 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Apr-12, Daniel Gustafsson wrote:\n>\n> > There are many good reasons for the changes proposed in this thread, but\n> I'm\n> > not sure if discoverability is one. Relying on autocompleting a\n> filename to\n> > figure out existing tooling for database maintenance and DBA type\n> operations\n> > seems like a fragile usecase.\n> >\n> > If commandline discoverability is of importance, providing a summary of\n> the\n> > tools in \"man postgresql\" seems like a better option.\n>\n> The first comment in the LWN article:\n> \"It's broken and obviously a bad idea but we've been doing it for so long\n> we\n> shouldn't attempt to fix it\"\n>\n> IMO the future is longer than the past, and has more users, so let's do\n> it right instead of perpetuating the mistakes.\n>\n\nI agree we should look at fixing these. However I have two concerns.\n\n1. naming things is surprisingly hard. How sure are we that we can do this\nright? Can we come up with a correct name for initdb? Maybe\npg_createcluster?\n2. How long would our deprecation cycle be? 5 years? 10 years? Given\nthat people may need to support multiple versions I would propose no\nwarnings until both formats are supported, then warnings for 2 years, then\ndrop the old ones.\n\n>\n>\n> ... unless you think PostgreSQL is going to become irrelevant before\n> 2050.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Fri, 12 Apr 2019 17:14:51 +0200",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On 4/12/19 5:14 PM, Chris Travers wrote:\n> 1. naming things is surprisingly hard. How sure are we that we can do \n> this right? Can we come up with a correct name for initdb? Maybe \n> pg_createcluster?\n\nThe Debian packagers already use pg_createcluster for their script which \nwraps initdb, and while pg_initdb is a bit misleading (it creates a \ncluster rather than a database) it is not that bad.\n\nAndreas\n\n\n",
"msg_date": "Fri, 12 Apr 2019 17:19:20 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Could I please ask a couple of questions?\n\nWhy does the first answer to everything seem to be \"we need to destroy\nsomething to make it better for others\"? Why does createdb need to be\nremoved? Why do we use the \"newbie that can't understand whether or not\ncreatedb is for PostgreSQL or MySQL or ....\" and then ignore the fact that\nthis would be the one person least able to handle a breakage of a 5 year\nold internal script that simply does its job for them each day?\n\nWhat if someone has a nice little script that is really well written and\nfails on warnings because the company policy is that \"warnings are to be\nrespected\"? How does that person properly do their job if they need to\nbreak open that script one morning because we've dropped a \"warning bomb\"\non them without any option but to fix the entire script in one shot with no\noption to continue otherwise? What if there is a semi-strict QA policy at\nsaid company and they are placed in a bind due to the volume and nature of\nthe changes required here because of steps taken that are impossible to\nreasonably work around (possibly even outside of the script itself)?\n\nSo I would like to offer the beginning of a \"framework\" of steps that could\naccomplish the desired task with the bare minimum of breakage and with\nsimple steps that can be offered to help people affected by these changes.\n\n1) Any new name is a symlink to the old name. We do not break existing\ntooling for any non-obvious reason. Any notion of symlinking the old names\nand then discussing \"packagers could add a PostgreSQL-Legacy-Symlinks\npackage\" is not ok. We cannot have users breaking because of a missing\npackage and then have them running around with their head cut off trying to\nfigure out where that package is for their particular system. We make\nacross the board changes that are easily explainable.\n\n2) We can certainly add a warning to the old names that warn of future\nremoval. 
However we need to offer a simple option to remove these warnings\nin a future friendly fashion. Remember the person that is not ok running\naround deep inside a 1000 line script.\n\n3) Long term (or even fairly short term) we move the old names back to a\nmore appropriate location - let's say /opt/pgsql/bin - if someone ignored\nthe warnings then they are broken - there is nothing that can be done with\nthat - but we've now accomplished the stated goal - hide names like\n\"createdb\" from standard paths.\n\nHowever how do we deal with the VERY bad side of #2/#3? That's what I feel\nhas been missing here. So let's walk through something.\n\nIf someone has a script that breaks on warnings - or they are generally not\nsomeone that is comfortable making many changes to a script - we need a\nsingle line option for them.\n\nWARNING - createdb is no longer the preferred method - please either change\nto pg_createdb OR add the following line near the top of your\nscript/environment\n\nsource pg_legacy_environment\n\n(Wording is not my strong suit - bear with me please)\n\nWhat is pg_legacy_environment? Well it's a provided file that starts its\nlife as simple as this\n\nexport PG_LEGACY_ENVIRONMENT=1\n\nAnd the warnings that check for usage of the old command names check for\nPG_LEGACY_ENVIRONMENT and understand that if that variable exists - the\nuser has chosen to make the minimal change to their script and should be\nrespected. We will fix their environment for them as needed to allow them\nto continue using old names.\n\nThat solves #2 and allows for someone to very quickly remove warnings\nwithout any major changes. A single line change is as simple as one can\nimagine or do. 
If someone cannot accomplish this change - what possibly can\nwe do for them?\n\nWhen #3 hits and the old names are removed from the path -\npg_legacy_environment could change to something along these lines\n\nexport PATH=$PATH:/opt/pgsql/bin\n\nAnd now we have a removal of the old names, that does not break anyone that\nhas followed the warning until this point - and allows for a simple, one\nline fix, to anyone that walks in the door and screams FIRE because they\nignored the warning and now have a problem.\n\nI feel the above respects the people that are supposed to be the people we\nhave empathy for - they are also steps that can be done even fairly quickly\nbecause the fix is handled via modification to the script environment as\nopposed to the core workings of a script itself. In fact - one could add\nthe pg_legacy_environment line to their shell environment and not even\nmodify a single script at all.\n\nJohn",
"msg_date": "Fri, 12 Apr 2019 10:15:47 -0700",
"msg_from": "John W Higgins <wishdev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "So there is no regression potential.\n\nWhen and who can send the patch to rename the programs to carry the\npg_ prefixes, and create symlinks from the old names?\n\nOn Fri, Apr 12, 2019 at 5:19 PM Andreas Karlsson <andreas@proxel.se> wrote:\n>\n> On 4/12/19 5:14 PM, Chris Travers wrote:\n> > 1. naming things is surprisingly hard. How sure are we that we can do\n> > this right? Can we come up with a correct name for initdb? Maybe\n> > pg_createcluster?\n>\n> The Debian packagers already use pg_createcluster for their script which\n> wraps initdb, and while pg_initdb is a bit misleading (it creates a\n> cluster rather than a database) it is not that bad.\n>\n> Andreas\n\n\n",
"msg_date": "Fri, 12 Apr 2019 20:56:35 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "Please don't top post. It makes it unnecessarily difficult to follow the\ndiscussion. See https://wiki.postgresql.org/wiki/Mailing_Lists\n\nOn Fri, Apr 12, 2019 at 08:56:35PM +0200, Fred .Flintstone wrote:\n>So there is no regression potential.\n>\n\nI fail to understand how you came to this conclusion? Andreas pointed\nout Debian already uses pg_createcluster, so there clearly is potential\nfor conflict and a regression.\n\n>When and who can send the patch to rename the programs to carry the\n>pg_ prefixes, and create symlinks from the old names?\n>\n\nWell, presumably that would be you, sometime in the future?\n\nTBH I don't quite understand what are we trying to achieve in this\nthread. It started with the presumption that PostgreSQL \"pollutes\" the\nfilesystem with scripts/binaries - which may or may not be true, but for\nthe sake of the argument let's assume that it is. How does keeping the\noriginal stuff and adding symblinks improve the situation?\n\n>On Fri, Apr 12, 2019 at 5:19 PM Andreas Karlsson <andreas@proxel.se> wrote:\n>>\n>> On 4/12/19 5:14 PM, Chris Travers wrote:\n>> > 1. naming things is surprisingly hard. How sure are we that we can do\n>> > this right? Can we come up with a correct name for initdb? Maybe\n>> > pg_createcluster?\n>>\n>> The Debian packagers already use pg_createcluster for their script which\n>> wraps initdb, and while pg_initdb is a bit misleading (it creates a\n>> cluster rather than a database) it is not that bad.\n>>\n>> Andreas\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Apr 2019 15:36:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "On Sat, Apr 13, 2019 at 3:36 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Fri, Apr 12, 2019 at 08:56:35PM +0200, Fred .Flintstone wrote:\n> >So there is no regression potential.\n> >\n>\n> I fail to understand how you came to this conclusion? Andreas pointed\n> out Debian already uses pg_createcluster, so there clearly is potential\n> for conflict and a regression.\nBut there is no \"createcluster\" in PostgreSQL so that is not a problem.\nI don't know if there is any other third-party programs that carry the\npg_ prefix though.\n\n> >When and who can send the patch to rename the programs to carry the\n> >pg_ prefixes, and create symlinks from the old names?\n> >\n>\n> Well, presumably that would be you, sometime in the future?\nIt would be better if someone with more experienced than me did it.\n\n> TBH I don't quite understand what are we trying to achieve in this\n> thread. It started with the presumption that PostgreSQL \"pollutes\" the\n> filesystem with scripts/binaries - which may or may not be true, but for\n> the sake of the argument let's assume that it is. How does keeping the\n> original stuff and adding symblinks improve the situation?\nIt would ease in discoverability and make the user space more coherent\nand predictable which would make it easier to use.\nIt would also allow to move the symlinks into an optional package or\nremove them in the future.\n\n\n",
"msg_date": "Sat, 13 Apr 2019 15:43:42 +0200",
"msg_from": "\"Fred .Flintstone\" <eldmannen@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL pollutes the file system"
},
{
"msg_contents": "\tAndreas Karlsson wrote:\n\n> The Debian packagers already use pg_createcluster for their script which \n> wraps initdb, and while pg_initdb is a bit misleading (it creates a \n> cluster rather than a database) it is not that bad.\n\nBut that renaming wouldn't achieve anything in relation to the stated goal,\nsince initdb is not in the $PATH in Debian/Ubuntu systems.\nIt's part of the version-specific binaries located in\n/usr/lib/postgresql/$VERSION/bin, which are not meant to be called\ndirectly, but by the pg_*cluster* scripts that you mention, or pg_wrapper.\n\nMoreover, aside from package-specific issues, initdb can already be\ninvoked through \"pg_ctl initdb\" since 2010 and version 9.0, as\nmentioned in:\n https://www.postgresql.org/docs/9.0/app-initdb.html\n\nThis evolution was the result of discussions pretty much like\nthe present thread.\n9 years later, who bothers to use or recommend the new form?\nAFAICS nobody cares.\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sat, 13 Apr 2019 17:57:44 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL pollutes the file system"
}
] |
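The opt-in mechanism proposed in the thread above (old names keep working but warn unless `PG_LEGACY_ENVIRONMENT` is set) can be illustrated with a few lines of POSIX shell. This is purely a sketch of the proposal, not anything PostgreSQL ships: the `PG_LEGACY_ENVIRONMENT` variable, the `pg_legacy_environment` file, and the `pg_createdb` name are all hypothetical names taken from the email.

```shell
#!/bin/sh
# Sketch of the proposed legacy-name wrapper. A wrapper installed under
# an old name (say "createdb") would call legacy_warn before delegating
# to the new pg_-prefixed binary. All names here are hypothetical.
legacy_warn() {
  name="$1"
  if [ -z "${PG_LEGACY_ENVIRONMENT:-}" ]; then
    # User has not opted in: emit the deprecation warning.
    echo "WARNING: $name is deprecated; use pg_$name, or add" >&2
    echo "  'source pg_legacy_environment' near the top of your script" >&2
    return 1  # signals that a warning was printed
  fi
  return 0    # user opted in via pg_legacy_environment; stay quiet
}

# A real wrapper for "createdb" would then end with:
#   legacy_warn createdb
#   exec pg_createdb "$@"
```

With this shape, silencing the warning really is the one-line change the proposal asks for: `export PG_LEGACY_ENVIRONMENT=1` (or sourcing the provided file) before the first legacy call.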
[
{
"msg_contents": "Hi all!\n\nWe are also facing some issues when using extensions. We are using\nthem quite intensively as a tool for maintaining our custom \"DB\napplications\" with versioning, all tables, data, regression tests...\nWe find extensions great! We do not need other tool like flyway. My\ncolleague already posted some report to bug mailing list [1] but with\nno response.\n\nOur observations correspond well with your outline:\n\n#1: Dependencies\n\n* It is not possible to specify the version of extension we are\ndependent on in .control file.\n\n#2: Data in Extensions\n\nI am not sure about the name \"Configuration\" Tables. From my point of\nview extensions can hold two sorts of data:\n1) \"static\" data: delivered with extension, inserted by update\nscripts; the same \"static\" data are present across multiple\ninstallation of extension in the same version. This data are not\nsupposed to be dumped.\n2) \"dynamic\" data: inserted by users, have to be included in dumps,\nare marked with pg_extension_config_dump and are called\n\"configuration\" tables/data ... but why \"configuration\"?\n\n#3 pg_dump and Extensions\n\n* We have described some behavior of pg_dump, which we believe are in\nfact bugs: see [1] \"1) pg_dump with --schema parameter\" and \"2)\nHanging OID in extconfig\".\n* Maybe it would be good to introduce new switch pg_dump --extension\nextA dumping all \"dynamic\" data from extension tables regardless on\nschema\n\n#4: Extension owned\n\n* It is not possible to alter extension owner\n\nThanks, Jiří & Ivo.\n\n[1] https://www.postgresql.org/message-id/15616-260dc9cb3bec7e7e@postgresql.org\n\n",
"msg_date": "Tue, 19 Mar 2019 11:47:13 +0100",
"msg_from": "Jiří Fejfar <jurafejfar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: extensions are hitting the ceiling"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI was going through the regression testing framework used in postgres. I\nwas trying to understand the custom functions written in perl for postgres.\n\nI could not find the perldoc for TestLib module in src/test/perl and found\nsome difficultly in understanding what each function does while other\nmodules in the src/test folder had perldoc and it was easy to understand\nthe functions.\n\nI would like to contribute for the perldoc for TestLib. I am looking for\nyour suggestions if this contribution is worth doing.\n\nRegards,\nPrajwal",
"msg_date": "Tue, 19 Mar 2019 16:21:16 +0530",
"msg_from": "Prajwal A V <prajwal450@gmail.com>",
"msg_from_op": true,
"msg_subject": "Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On 2019-Mar-19, Prajwal A V wrote:\n\n> I could not find the perldoc for TestLib module in src/test/perl and found\n> some difficultly in understanding what each function does while other\n> modules in the src/test folder had perldoc and it was easy to understand\n> the functions.\n> \n> I would like to contribute for the perldoc for TestLib. I am looking for\n> your suggestions if this contribution is worth doing.\n\nYes, it is, please do.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Tue, 19 Mar 2019 09:05:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On Tue, Mar 19, 2019 at 09:05:29AM -0300, Alvaro Herrera wrote:\n> Yes, it is, please do.\n\n+1.\n--\nMichael",
"msg_date": "Tue, 19 Mar 2019 21:16:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "Hi,\nCan I take this up?\n\nRegards,\nRam",
"msg_date": "Thu, 21 Mar 2019 19:10:50 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "Sure, please go ahead.\n\nRegards,\nPrajwal.\n\nOn Thu, 21 Mar 2019, 19:11 Ramanarayana, <raam.soft@gmail.com> wrote:\n\n> Hi,\n> Can I take this up?\n>\n> Regards,\n> Ram\n>",
"msg_date": "Thu, 21 Mar 2019 19:32:30 +0530",
"msg_from": "Prajwal A V <prajwal450@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "Hi,\n\nPlease find the first version of the patch for review. I was not sure what\nsome of the functions are used for and marked them with TODO.\n\nCheers\nRam 4.0",
"msg_date": "Fri, 22 Mar 2019 16:59:58 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 04:59:58PM +0530, Ramanarayana wrote:\n> Please find the first version of the patch for review. I was not sure what\n> some of the functions are used for and marked them with TODO.\n\nThis is only adding some documentation to an internal perl module we\nship, so it is far from being a critical part and we could still get\nthat into v12, still there are many patches waiting for integration\ninto v12 and this has showed up very late. Could you please register\nthis patch to the commit fest once you have a patch you think is fit\nfor merging? Here is the next commit fest link:\nhttps://commitfest.postgresql.org/23/\n--\nMichael",
"msg_date": "Sat, 23 Mar 2019 10:41:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "Hi,\nPlease find the updated patch. Added to the commitfest as well\nRegards,\nRam.",
"msg_date": "Sun, 7 Apr 2019 23:34:59 +0530",
"msg_from": "Ramanarayana <raam.soft@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "Hi Ram,\r\n\r\nI think this documentation helps people who want to understand functions.\r\n>Please find the updated patch. Added to the commitfest as well\r\nI have some comments.\r\n\r\nI think some users who would like to know custom function check src/test/perl/README at first.\r\nHow about add comments to the README file, such as \"See Testlib.pm if you want to know more details\"?\r\n\r\nIn the above document, why not write a description after the function name?\r\nI think it is better to write the function name first and then the description.\r\nIn your code;\r\n #Checks if all the tests passed or not\r\n all_tests_passing()\r\n\r\nIn my suggestion;\r\n all_tests_passing()\r\n Checks if all the tests passed or not\r\n\r\nAnd some functions return value. How about adding return information to the above doc?\r\n\r\nI find ^M character in your new code. Please use LF line ending not CRLF or get rid of it in next patch.\r\n\r\nRegards,\r\nAya Iwata\r\n",
"msg_date": "Thu, 11 Apr 2019 02:10:24 +0000",
"msg_from": "\"Iwata, Aya\" <iwata.aya@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "> On 7 Apr 2019, at 20:04, Ramanarayana <raam.soft@gmail.com> wrote:\n\n> Please find the updated patch. Added to the commitfest as well\n\nThe v2 patch is somewhat confused as it has Windows carriage returns rather\nthan newlines, so it replaces the entire file making the diff hard to read. It\nalso includes a copy of TestLib and the v1 patch and has a lot of whitespace\nnoise.\n\nPlease redo the patch on a clean tree to get a more easily digestable patch.\n\ncheers ./daniel\n\n\n\n",
"msg_date": "Tue, 9 Jul 2019 15:16:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On Tue, Jul 09, 2019 at 03:16:01PM +0200, Daniel Gustafsson wrote:\n> The v2 patch is somewhat confused as it has Windows carriage returns rather\n> than newlines, so it replaces the entire file making the diff hard to read. It\n> also includes a copy of TestLib and the v1 patch and has a lot of whitespace\n> noise.\n\nNobody can provide a clear review if the files are just fully\nrewritten even based on a read of the patch. Perhaps you are working\non Windows and forgot to configure core.autocrlf with \"git config\".\nThat could make your life easier.\n\nI have switched the patch as \"waiting on author\" for now.\n--\nMichael",
"msg_date": "Wed, 10 Jul 2019 16:33:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On 2019-Apr-11, Iwata, Aya wrote:\n\n> In the above document, why not write a description after the function name?\n> I think it is better to write the function name first and then the description.\n> In your code;\n> #Checks if all the tests passed or not\n> all_tests_passing()\n> \n> In my suggestion;\n> all_tests_passing()\n> Checks if all the tests passed or not\n\nYeah, so there are two parts in the submitted patch: first the synopsis\nlist the methods using this format you describe, and later the METHODS\nsection lists then again, using your suggested style. I think we should\ndo away with the long synopsis -- maybe keep it as just the \"use\nTestLib\" line, and then let the METHODS section list and describe the\nmethods.\n\n> And some functions return value. How about adding return information\n> to the above doc?\n\nThat's already in the METHODS section for some of them. For example:\n\n all_tests_passing()\n Returns 1 if all the tests pass. Otherwise returns 0\n\nIt's missing for others, such as \"tempdir\".\n\nIn slurp_file you have this:\n Opens the file provided as an argument to the function in read mode(as\n indicated by <).\nI think the parenthical comment is useless; remove that.\n\nPlease break long source lines (say to 78 chars -- make sure pgperltidy\nagrees), and keep some spaces after sentence-ending periods and other\npunctuation.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jul 2019 09:38:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On 7/10/19 9:38 AM, Alvaro Herrera wrote:\n> On 2019-Apr-11, Iwata, Aya wrote:\n>\n>> In the above document, why not write a description after the function name?\n>> I think it is better to write the function name first and then the description.\n>> In your code;\n>> #Checks if all the tests passed or not\n>> all_tests_passing()\n>>\n>> In my suggestion;\n>> all_tests_passing()\n>> Checks if all the tests passed or not\n> Yeah, so there are two parts in the submitted patch: first the synopsis\n> list the methods using this format you describe, and later the METHODS\n> section lists then again, using your suggested style. I think we should\n> do away with the long synopsis -- maybe keep it as just the \"use\n> TestLib\" line, and then let the METHODS section list and describe the\n> methods.\n>\n>> And some functions return value. How about adding return information\n>> to the above doc?\n> That's already in the METHODS section for some of them. For example:\n>\n> all_tests_passing()\n> Returns 1 if all the tests pass. Otherwise returns 0\n>\n> It's missing for others, such as \"tempdir\".\n>\n> In slurp_file you have this:\n> Opens the file provided as an argument to the function in read mode(as\n> indicated by <).\n> I think the parenthical comment is useless; remove that.\n>\n> Please break long source lines (say to 78 chars -- make sure pgperltidy\n> agrees), and keep some spaces after sentence-ending periods and other\n> punctuation.\n>\n\n\nI've fixed the bitrot and some other infelicities on this patch. It's\nnot commitable yet but I think it's more reviewable.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 26 Jul 2019 09:51:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On Fri, Jul 26, 2019 at 09:51:34AM -0400, Andrew Dunstan wrote:\n> I've fixed the bitrot and some other infelicities on this patch. It's\n> not commitable yet but I think it's more reviewable.\n\nThanks, I had a look at this version.\n\n+ # Returns the real directory for a virtual path directory under msys\n+ real_dir(dir)\nreal_dir() is no more.\n\nperl2host() is missing.\n\n+ #TODO\n+ command_like_safe(cmd, expected_stdout, test_name)\n[...]\n+=pod\n+\n+=item command_like_safe(cmd, expected_stdout, test_name)\n+\n+TODO\n+\n+=cut\nUpdate not to miss.\n\n+Runs the command which is passed as argument to the function. On failure it\n+abandons further tests and exits the program.\n\"On failure the test suite exits immediately.\"\n\n\nI think that the SYNOPSIS could be shaped better. As of now it is a\nsimple succession of the same commands listed without any link to each\nother, which is contrary for example to what we do in PostgresNode.pm,\nwhere it shows up a set of small examples which are useful to\nunderstand how to shape the tests and the possible interactions\nbetween the routines of the module. My take would be to keep it\nsimple and minimal as TestLib.pm is the lowest level of our TAP test\ninfrastructure. So here are some simple suggestions, and we could go\nwith this set to begin with:\n# Test basic output of a command.\nprogram_help_ok('initdb');\nprogram_version_ok('initdb');\nprogram_options_handling_ok('initdb');\n\n# Test option combinations\ncommand_fails(['initdb', '--invalid-option'],\n 'command fails with invalid option');\nmy $tempdir = TestLib::tempdir;\ncommand_ok('initdb', '-D', $tempdir);\n\nAnother thing is that the examples should not overlap with what\nPostgresNode.pm presents, and that it is not necessary to show up all\nthe routines. It also makes little sense to describe in the synopsis\nthe routines in a way which duplicates with the descriptions on top of\neach routine.\n--\nMichael",
"msg_date": "Tue, 30 Jul 2019 13:46:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 4:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Jul 26, 2019 at 09:51:34AM -0400, Andrew Dunstan wrote:\n> > I've fixed the bitrot and some other infelicities on this patch. It's\n> > not commitable yet but I think it's more reviewable.\n>\n> Thanks, I had a look at this version.\n>\n> [review listing things to fix]\n\nHi Ram,\n\nBased on the above, it sounds like we want this patch but it needs a\nbit more work. It's now the end of CF1. I'm moving this one to CF2\n(September). Please post a new patch when ready.\n\nThanks!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 18:49:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On 2019-Jul-30, Michael Paquier wrote:\n\n> I think that the SYNOPSIS could be shaped better. As of now it is a\n> simple succession of the same commands listed without any link to each\n> other, which is contrary for example to what we do in PostgresNode.pm,\n> where it shows up a set of small examples which are useful to\n> understand how to shape the tests and the possible interactions\n> between the routines of the module. My take would be to keep it\n> simple and minimal as TestLib.pm is the lowest level of our TAP test\n> infrastructure.\n\nAgreed ... that's pretty much the same thing I tried to say upthread. I\nwrote my own synopsis, partly using your suggestions. I reworded the\ndescription for the routines (I don't think there's any I didn't touch),\nadded a mention of $windows_os, added a =head1 to split out the ad-hoc\nroutines from the Test::More wrappers.\n\nAnd pushed.\n\nPlease give it another look. It might need more polish.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 13:48:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
},
{
"msg_contents": "On Mon, Sep 02, 2019 at 01:48:14PM -0400, Alvaro Herrera wrote:\n> Agreed ... that's pretty much the same thing I tried to say upthread. I\n> wrote my own synopsis, partly using your suggestions. I reworded the\n> description for the routines (I don't think there's any I didn't touch),\n> added a mention of $windows_os, added a =head1 to split out the ad-hoc\n> routines from the Test::More wrappers.\n> \n> And pushed.\n> \n> Please give it another look. It might need more polish.\n\nThanks for committing. I have read through the commit and I am not\nnoticing any issue sorting out. One thing may be to give a short\ndescription for some of the routine's arguments like\ncheck_mode_recursive, but I think that we could live without that\neither.\n--\nMichael",
"msg_date": "Tue, 3 Sep 2019 15:30:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Contribution to Perldoc for TestLib module in Postgres"
}
] |
[
{
"msg_contents": "I want to build automation to recover a database to a specific LSN\n*inclusive*, even if that LSN is from a subtransaction. The problem I am\nfacing is that I know what specific LSN wrote a row on a remote system, but\nif I create a recovery.conf file with:\n\nrecovery_target_lsn = '95F/BBA36DF8'\n\nand 95F/BBA36DF8 is actually a subtransaction, then even if I use default\nbehavior of recovery_target_inclusive = true, that transaction will NOT be\nincluded in the restore point, because it is prior to the actual COMMIT LSN\nof which this lsn/subxact is a part.\n\nMy hack for now is to simply manually scan down until I find the COMMIT,\nwhich is the only way so far I can figure to find it out. I don't want to\nhack some kind of search script based on this if there is already a better\nway to get this information... anyone know of a way?\n\nThank you,\nJeremy",
"msg_date": "Tue, 19 Mar 2019 12:16:34 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Automated way to find actual COMMIT LSN of subxact LSN"
},
{
"msg_contents": "At Tue, 19 Mar 2019 12:16:34 -0500, Jeremy Finzel <finzelj@gmail.com> wrote in <CAMa1XUjZyq9sf1COSL-VPe9khpdu52WUoeWECUQDthGwtmb3vQ@mail.gmail.com>\n> I want to build automation to recover a database to a specific LSN\n> *inclusive*, even if that LSN is from a subtransaction. The problem I am\n> facing is that I know what specific LSN wrote a row on a remote system, but\n> if I create a recovery.conf file with:\n> \n> recovery_target_lsn = '95F/BBA36DF8'\n> \n> and 95F/BBA36DF8 is actually a subtransaction, then even if I use default\n> behavior of recovery_target_inclusive = true, that transaction will NOT be\n> included in the restore point, because it is prior to the actual COMMIT LSN\n> of which this lsn/subxact is a part.\n> \n> My hack for now is to simply manually scan down until I find the COMMIT,\n> which is the only way so far I can figure to find it out. I don't want to\n> hack some kind of search script based on this if there is already a better\n> way to get this information... anyone know of a way?\n\nFWIW it seems to be the only way starting from an LSN. If you can\nidentify the XID or end timestamp of the transaction, it would be\nusable instead.\n\nIf recovery_target_inclusive were able to take the third value\n\"xact\", is it exactly what you want?\n\nAnd is it acceptable?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Mar 2019 11:56:01 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Automated way to find actual COMMIT LSN of subxact LSN"
},
{
"msg_contents": ">\n> If recovery_target_inclusive were able to take the third value\n> \"xact\", is it exactly what you want?\n>\n> And is it acceptable?\n>\n\nYes, that would be exactly what I would want. It would work to have a 3rd\nvalue for recovery_target_inclusive, although perhaps it's debatable that\ninstead, it should actually be the default behavior to roll any LSN with\nrecovery_target_inclusive = true until it is actually visible? If I say I\nwant to \"include\" an LSN in my recovery target, it doesn't make much sense\nif that just won't be visible unless it's an actual commit LSN, so in fact\nthe recovery point does not include the LSN.\n\nA related problem kind of demonstrates the same odd behavior. If you put\nin recovery_target_xid to a subtransaction_id, it just skips it and\ncontinues recovering, which really seems to be undesirable behavior. It\nwould be nice if that also could roll up to the next valid actual commit\ntransaction.\n\nThanks!\nJeremy\n\nIf recovery_target_inclusive were able to take the third value\n\"xact\", is it exactly what you want?\n\nAnd is it acceptable?Yes, that would be exactly what I would want. It would work to have a 3rd value for recovery_target_inclusive, although perhaps it's debatable that instead, it should actually be the default behavior to roll any LSN with recovery_target_inclusive = true until it is actually visible? If I say I want to \"include\" an LSN in my recovery target, it doesn't make much sense if that just won't be visible unless it's an actual commit LSN, so in fact the recovery point does not include the LSN.A related problem kind of demonstrates the same odd behavior. If you put in recovery_target_xid to a subtransaction_id, it just skips it and continues recovering, which really seems to be undesirable behavior. It would be nice if that also could roll up to the next valid actual commit transaction.Thanks!Jeremy",
"msg_date": "Wed, 20 Mar 2019 10:27:19 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Automated way to find actual COMMIT LSN of subxact LSN"
},
{
"msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n> A related problem kind of demonstrates the same odd behavior. If you put\n> in recovery_target_xid to a subtransaction_id, it just skips it and\n> continues recovering, which really seems to be undesirable behavior. It\n> would be nice if that also could roll up to the next valid actual commit\n> transaction.\n\nIt would seem like what you're asking for is to continue until the commit\nof the parent transaction, not just the next commit after the subcommit.\nOtherwise (if that's an unrelated xact) the subxact would still not be\ncommitted, so that you might as well have stopped short of it.\n\nI'd be in favor of that for recovery_target_xid, but I'm not at all\nconvinced about changing the behavior for a target LSN. The fact that\nthe target is a subcommit seems irrelevant when you specify by LSN.\n\nI don't recall this for sure, but doesn't a parent xact's commit record\ninclude all subxact XIDs? If so, the implementation would just require\nsearching the subxacts as well as the main XID for a match to\nrecovery_target_xid.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 12:43:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Automated way to find actual COMMIT LSN of subxact LSN"
},
{
"msg_contents": ">\n> It would seem like what you're asking for is to continue until the commit\n> of the parent transaction, not just the next commit after the subcommit.\n> Otherwise (if that's an unrelated xact) the subxact would still not be\n> committed, so that you might as well have stopped short of it.\n>\n\nRight, the parent transaction is what I meant.\n\n\n> I'd be in favor of that for recovery_target_xid, but I'm not at all\n> convinced about changing the behavior for a target LSN.  The fact that\n> the target is a subcommit seems irrelevant when you specify by LSN.\n>\n\nPerhaps some context will help.  There have been 2 cases in which I have\ntried to do this, both of them based on logical decoding, and finding\neither a transaction id or an LSN to recover to.  Actually, the only reason\nI have ever used transaction id instead of LSN is on <= 9.6 because the\nlatter isn't supported until pg10.\n\nFor this use case, my goal is simply to be able to recover the the point\nimmediately after a particular decoded log line is visible, without\nnecessarily having to find out the final parent transaction id.\n\nGiven this, I am open to different implementations but I would like to\neither be able to specify an LSN or transaction ID, and have a feature that\nallows the recovery target to roll forward just until it is visible, even\nif the LSN or transaction ID is not the actual commit of the parent\ntransaction.\n\n\n> I don't recall this for sure, but doesn't a parent xact's commit record\n> include all subxact XIDs?  If so, the implementation would just require\n> searching the subxacts as well as the main XID for a match to\n> recovery_target_xid.\n>\n\nYes, I believe so.\n\nThanks,\nJeremy\n\nIt would seem like what you're asking for is to continue until the commit\nof the parent transaction, not just the next commit after the subcommit.\nOtherwise (if that's an unrelated xact) the subxact would still not be\ncommitted, so that you might as well have stopped short of it.Right, the parent transaction is what I meant. \nI'd be in favor of that for recovery_target_xid, but I'm not at all\nconvinced about changing the behavior for a target LSN.  The fact that\nthe target is a subcommit seems irrelevant when you specify by LSN.Perhaps some context will help.  There have been 2 cases in which I have tried to do this, both of them based on logical decoding, and finding either a transaction id or an LSN to recover to.  Actually, the only reason I have ever used transaction id instead of LSN is on <= 9.6 because the latter isn't supported until pg10.For this use case, my goal is simply to be able to recover the the point immediately after a particular decoded log line is visible, without necessarily having to find out the final parent transaction id.Given this, I am open to different implementations but I would like to either be able to specify an LSN or transaction ID, and have a feature that allows the recovery target to roll forward just until it is visible, even if the LSN or transaction ID is not the actual commit of the parent transaction. \nI don't recall this for sure, but doesn't a parent xact's commit record\ninclude all subxact XIDs?  If so, the implementation would just require\nsearching the subxacts as well as the main XID for a match to\nrecovery_target_xid.Yes, I believe so.Thanks,Jeremy",
"msg_date": "Thu, 21 Mar 2019 08:27:20 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Automated way to find actual COMMIT LSN of subxact LSN"
},
{
"msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n>> I'd be in favor of that for recovery_target_xid, but I'm not at all\n>> convinced about changing the behavior for a target LSN. The fact that\n>> the target is a subcommit seems irrelevant when you specify by LSN.\n\n> For this use case, my goal is simply to be able to recover the the point\n> immediately after a particular decoded log line is visible, without\n> necessarily having to find out the final parent transaction id.\n\n> Given this, I am open to different implementations but I would like to\n> either be able to specify an LSN or transaction ID, and have a feature that\n> allows the recovery target to roll forward just until it is visible, even\n> if the LSN or transaction ID is not the actual commit of the parent\n> transaction.\n\nI find this to be unactionably vague. What does it mean to claim \"an\nLSN is visible\"? An LSN might not even point to a WAL record, or it\nmight point to one that has nontransactional effects. Moreover, any\nbehavior of this sort would destroy what I regard as a bedrock property\nof recover-to-LSN, which is that there's a well defined, guaranteed-finite\nstopping point. (A property that recover-to-XID lacks, since the\nspecified XID might've crashed without recording either commit or abort.)\n\nI think what you ought to be doing is digging the xmin out of the inserted\ntuple you're concerned with and then doing recover-to-XID. There's\ndefinitely room for us to make it easier if the XID is a subxact rather\nthan a main xact. But I think identifying the target XID is your job\nnot the job of the recovery-stop-point mechanism.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Thu, 21 Mar 2019 10:26:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Automated way to find actual COMMIT LSN of subxact LSN"
},
{
"msg_contents": ">\n> I find this to be unactionably vague.  What does it mean to claim \"an\n> LSN is visible\"?  An LSN might not even point to a WAL record, or it\n> might point to one that has nontransactional effects.  Moreover, any\n> behavior of this sort would destroy what I regard as a bedrock property\n> of recover-to-LSN, which is that there's a well defined, guaranteed-finite\n> stopping point.  (A property that recover-to-XID lacks, since the\n> specified XID might've crashed without recording either commit or abort.)\n>\n\nI mentioned that my specific use case is that I am picking out an LSN or\nXID within the context of logical decoding.  So I don't think that's vague,\nbut let me clarify.  When using the peek_changes or get_changes functions,\nthey only show a particular update to a particular row, with a\ncorresponding LSN and transaction ID.  That's what I see using both\ntest_decoding and pglogical_output, both of which I have used in this way.\nIn that context at least, all LSNs and XIDs point to committed WAL data\nonly.\n\n\n> I think what you ought to be doing is digging the xmin out of the inserted\n> tuple you're concerned with and then doing recover-to-XID.  There's\n> definitely room for us to make it easier if the XID is a subxact rather\n> than a main xact.  But I think identifying the target XID is your job\n> not the job of the recovery-stop-point mechanism.\n>\n\nI'm open to that, but how will it help if I can't guarantee that the tuple\nwasn't updated again after the original insert/update of interest?\n\nThank you,\nJeremy\n\nI find this to be unactionably vague.  What does it mean to claim \"an\nLSN is visible\"?  An LSN might not even point to a WAL record, or it\nmight point to one that has nontransactional effects.  Moreover, any\nbehavior of this sort would destroy what I regard as a bedrock property\nof recover-to-LSN, which is that there's a well defined, guaranteed-finite\nstopping point.  (A property that recover-to-XID lacks, since the\nspecified XID might've crashed without recording either commit or abort.)I mentioned that my specific use case is that I am picking out an LSN or XID within the context of logical decoding. So I don't think that's vague, but let me clarify. When using the peek_changes or get_changes functions, they only show a particular update to a particular row, with a corresponding LSN and transaction ID. That's what I see using both test_decoding and pglogical_output, both of which I have used in this way. In that context at least, all LSNs and XIDs point to committed WAL data only. I think what you ought to be doing is digging the xmin out of the inserted\ntuple you're concerned with and then doing recover-to-XID.  There's\ndefinitely room for us to make it easier if the XID is a subxact rather\nthan a main xact.  But I think identifying the target XID is your job\nnot the job of the recovery-stop-point mechanism.I'm open to that, but how will it help if I can't guarantee that the tuple wasn't updated again after the original insert/update of interest?Thank you,Jeremy",
"msg_date": "Thu, 21 Mar 2019 14:06:49 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Automated way to find actual COMMIT LSN of subxact LSN"
}
] |
[
{
"msg_contents": "Hello,\n\nThere are many projects that use alternate QueryId \ndistinct from the famous pg_stat_statements jumbling algorithm.\n\nhttps://github.com/postgrespro/aqo\nquery_hash\n\nhttps://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Optimize.ViewPlans.html\nsql_hash\n\nhttps://github.com/ossc-db/pg_hint_plan\nqueryid\n\nEven pg_stat_statement has a normalize function, \nthat would answer the current question ...\n\nHere are some *needs* :\n\nneeds.1: stable accross different databases,\nneeds.2: doesn't change after database or object rebuild,\nneeds.3: search_path / schema independant,\nneeds.4: pg version independant (as long as possible),\n...\n\nand some *normalization rules*:\n\nnorm.1: case insensitive\nnorm.2: blank reduction \nnorm.3: hash algoritm ?\nnorm.4: CURRENT_DATE, CURRENT_TIME, LOCALTIME, LOCALTIMESTAMP not normalized\nnorm.5: NULL, IS NULL not normalized ?\nnorm.6: booleans t, f, true, false not normalized\nnorm.7: order by 1,2 or group by 1,2 should not be normalized\nnorm.8: pl/pgsql anonymous blocks not normalized\nnorm.9: comments aware\n\nDo not hesitate to add your thougths\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Tue, 19 Mar 2019 14:00:15 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "From: legrand legrand [mailto:legrand_legrand@hotmail.com]\n> There are many projects that use alternate QueryId\n> distinct from the famous pg_stat_statements jumbling algorithm.\n\nI'd like to welcome the standard QueryID that DBAs and extension developers can depend on. Are you surveying the needs for you to develop the QueryID that can meet as many needs as possible?\n\n\n> needs.1: stable accross different databases,\n\nDoes this mean different database clusters, not different databases in a single database cluster?\n\n\nneeds.5: minimal overhead to calculate\nneeds.6: doesn't change across database server restarts\nneeds.7: same value on both the primary and standby?\n\n\n> norm.9: comments aware\n\nIs this to distinguish queries that have different comments for optimizer hints? If yes, I agree.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 20 Mar 2019 00:23:30 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "At Wed, 20 Mar 2019 00:23:30 +0000, \"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com> wrote in <0A3221C70F24FB45833433255569204D1FBE20A4@G01JPEXMBYT05>\n> From: legrand legrand [mailto:legrand_legrand@hotmail.com]\n> > There are many projects that use alternate QueryId\n> > distinct from the famous pg_stat_statements jumbling algorithm.\n> \n> I'd like to welcome the standard QueryID that DBAs and extension developers can depend on.  Are you surveying the needs for you to develop the QueryID that can meet as many needs as possible?\n \n+1 to the necessity.\n\nThere's a similar thread about adding queryid in pg_stat_activity.\n\nhttps://www.postgresql.org/message-id/CA%2B8PKvQnMfOE-c3YLRwxOsCYXQDyP8VXs6CDtMZp1V4%3DD4LuFA%40mail.gmail.com\n\n> > needs.1: stable accross different databases,\n> \n> Does this mean different database clusters, not different databases in a single database cluster?\n\nDoes this mean you want different QueryID for the same-looking\nquery for another database in the same cluster?\n\n\n> needs.5: minimal overhead to calculate\n> needs.6: doesn't change across database server restarts\n> needs.7: same value on both the primary and standby?\n> \n> \n> > norm.9: comments aware\n> \n> Is this to distinguish queries that have different comments for optimizer hints?  If yes, I agree.\n\nOr, any means to give an explict query id?  I saw many instances\nof query that follows a comment describing a query id.\n\n> needs.2: doesn't change after database or object rebuild,\n> needs.3: search_path / schema independant,\n\npg_stat_statements even ignores table/object/column names.\n\n> needs.4: pg version independant (as long as possible),\n\nI don't think this cannot be guaranteed.\n\n> norm.1: case insensitive\n> norm.2: blank reduction \n> norm.3: hash algoritm ?\n> norm.4: CURRENT_DATE, CURRENT_TIME, LOCALTIME, LOCALTIMESTAMP not normalized\n> norm.5: NULL, IS NULL not normalized ?\n> norm.6: booleans t, f, true, false not normalized\n> norm.7: order by 1,2 or group by 1,2 should not be normalized\n> norm.8: pl/pgsql anonymous blocks not normalized\n\npg_stat_statements can be the base of the discussion on them.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 20 Mar 2019 10:44:46 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "From: Kyotaro HORIGUCHI [mailto:horiguchi.kyotaro@lab.ntt.co.jp]\n> > > needs.1: stable accross different databases,\n> >\n> > Does this mean different database clusters, not different databases in\n> a single database cluster?\n> \n> Does this mean you want different QueryID for the same-looking\n> query for another database in the same cluster?\n\n(I'm afraid this question may be targeted at legland legland, not me...)\nI think the same query text can have either same or different QueryID in different databases in the database cluster. Even if the QueryID value is the same, we can use DatabaseID to choose desired information.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n\n",
"msg_date": "Wed, 20 Mar 2019 03:15:04 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "> From: \"Tsunakawa, Takayuki\"\n>> From: legrand legrand [mailto:legrand_legrand@]\n>> There are many projects that use alternate QueryId\n>> distinct from the famous pg_stat_statements jumbling algorithm.\n\n>I'd like to welcome the standard QueryID that DBAs and extension developers\ncan depend on. \n>Are you surveying the needs for you to develop the QueryID that can meet as\nmany needs as possible?\n\nYes, I would like first to understand what are the main needs, \nthen identify how it would be possible to implement it \nin core, in a new extension or simply with a modified pg_stat_statements.\n(I'm just a DBA not a C developer, so it will only be restricted to very\nsimple enhancements)\n\n\n>> needs.1: stable accross different databases,\n\n>Does this mean different database clusters, not different databases in a\nsingle database cluster?\n\nSame looking query should give same QueryId on any database (in the same\ncluster or in distinct clusters). It can be differentiated with dbid.\n\n\n>needs.5: minimal overhead to calculate\n\nOK will add it\n\n\n>needs.6: doesn't change across database server restarts\n\nReally ? does this already occurs ?\n\n\n>needs.7: same value on both the primary and standby?\n\nI would say yes (I don't use standby) isn't this included into needs.1 ?\n\n\n>> norm.9: comments aware\n\n>Is this to distinguish queries that have different comments for optimizer\nhints? If yes, I agree.\n\nYes and others like playing with : \nset ...\nselect /* test 1*/ ...\n\nset ... \nselect /* test 2*/ ...\n \n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 12:39:25 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "> From \"Kyotaro HORIGUCHI-2\"\n>>At Wed, 20 Mar 2019 00:23:30 +0000, \"Tsunakawa, Takayuki\" \n>>> From: legrand legrand [mailto:legrand_legrand@]\n>>> norm.9: comments aware\n>> Is this to distinguish queries that have different comments for optimizer\n>> hints? If yes, I agree.\n\n> Or, any means to give an explict query id? I saw many instances\n> of query that follows a comment describing a query id.\n\nYes, in fact didn't thought about different kink of comments, but all of\nthem\n\n\n>> needs.3: search_path / schema independant,\n\n>pg_stat_statements even ignores table/object/column names.\n\nthere is a very interesting thread about that in \"pg_stat_statements and non\ndefault search_path\"\nhttps://www.postgresql.org/message-id/8f54c609-17c6-945b-fe13-8b07c0866420@dalibo.com\n\nexpecting distinct QueryIds when using distinct schemas ...\nmaybe that It should be switched to \"Schema dependant\"\n\n\n\n>> needs.4: pg version independant (as long as possible),\n\n>I don't think this cannot be guaranteed.\n\nmaybe using a QueryId versioning GUC \n \n\n>> norm.1: case insensitive\n>> norm.2: blank reduction \n>> norm.3: hash algoritm ?\n>> norm.4: CURRENT_DATE, CURRENT_TIME, LOCALTIME, LOCALTIMESTAMP not\n>> normalized\n>> norm.5: NULL, IS NULL not normalized ?\n>> norm.6: booleans t, f, true, false not normalized\n>> norm.7: order by 1,2 or group by 1,2 should not be normalized\n>> norm.8: pl/pgsql anonymous blocks not normalized\n\n> pg_stat_statements can be the base of the discussion on them.\n\nOK\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 13:05:06 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 8:39 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Yes, I would like first to understand what are the main needs,\n\nI don't really see one implementation that suits every need, as\nprobably not everyone will agree on using relation name vs fully\nqualified relation name for starter. The idea to take into account or\nnormalise comments will also probably require a lot of argumentation\nto reach a consensus.\n\nAlso, most of what's listed here would require catcache lookup for\nevery objects impacted in a query, at every execution. That would be\n*super* expensive (at least for OLTP workload). As far as the need is\nto gather statistics like pg_stat_statements and similar extensions\nare doing, current queryid semantics and underlying limitations is not\nenough of a problem to justify paying that price IMHO. Or do you have\nother needs and/or problems that really can't be solved with current\nimplementation?\n\nIn other words, my first need would be to be able to deactivate\neverything that would make queryid computation significantly more\nexpensive than it's today, and/or to be able to replace it with\nthird-party implementation.\n\n> >> needs.1: stable accross different databases,\n> [...]\n>\n> >needs.7: same value on both the primary and standby?\n>\n> I would say yes (I don't use standby) isn't this included into needs.1 ?\n\nPhysical replication servers have identical oids, so identical\nqueryid. That's obviously not true for logical replication.\n\n",
"msg_date": "Wed, 20 Mar 2019 21:30:01 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "Julien Rouhaud wrote\n> On Wed, Mar 20, 2019 at 8:39 PM legrand legrand\n> <\n\n> legrand_legrand@\n\n> > wrote:\n>>\n>> Yes, I would like first to understand what are the main needs,\n> \n> I don't really see one implementation that suits every need, as\n> probably not everyone will agree on using relation name vs fully\n> qualified relation name for starter.  The idea to take into account or\n> normalise comments will also probably require a lot of argumentation\n> to reach a consensus.\n> \n> Also, most of what's listed here would require catcache lookup for\n> every objects impacted in a query, at every execution.  That would be\n> *super* expensive (at least for OLTP workload).  As far as the need is\n> to gather statistics like pg_stat_statements and similar extensions\n> are doing, current queryid semantics and underlying limitations is not\n> enough of a problem to justify paying that price IMHO.  Or do you have\n> other needs and/or problems that really can't be solved with current\n> implementation?\n> \n> In other words, my first need would be to be able to deactivate\n> everything that would make queryid computation significantly more\n> expensive than it's today, and/or to be able to replace it with\n> third-party implementation.\n> \n>> >> needs.1: stable accross different databases,\n>> [...]\n>>\n>> >needs.7: same value on both the primary and standby?\n>>\n>> I would say yes (I don't use standby) isn't this included into needs.1 ?\n> \n> Physical replication servers have identical oids, so identical\n> queryid.  That's obviously not true for logical replication.\n\nOn my personal point of view, I need to get the same Queryid between (OLAP)\nenvironments\nto be able to compare Production, Pre-production, Qualif performances \n(and I don't need Fully qualified relation names). Today to do that,\nI'm using a custom extension computing the QueryId based on the normalized\nQuery \ntext. \n\nThis way to do, seems very popular and maybe including it in core (as a\ndedicated extension) \nor proposing an alternate queryid (based on relation name) in PGSS (Guc\nenabled) \nwould fullfill 95% of the needs ...\n\nI agree with you on the last point: being able to replace actual QueryId\nwith third-party \nimplementation IS the first need.\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 14:17:59 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "maybe this patch (with a GUC)\nhttps://www.postgresql.org/message-id/55E51C48.1060102@uptime.jp\nwould be enough for thoses actually using a text normalization function.\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 14:30:33 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 10:18 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> On my personal point of view, I need to get the same Queryid between (OLAP)\n> environments\n> to be able to compare Production, Pre-production, Qualif performances\n> (and I don't need Fully qualified relation names). Today to do that,\n> I'm using a custom extension computing the QueryId based on the normalized\n> Query\n> text.\n\nIIUC, your need is to compare pgss (maybe other extensions) counters\nfrom different primary servers, for queries generated by the same\napplication(s). A naive workaround like exporting each environment\ncounters (COPY SELECT 'production', * FROM pg_stat_statements TO\n'...'), importing all of them on a server and then comparing\neverything using the query text (which should be the same if the\napplication is the same) instead of queryid wouldn't work? Or even\nusing foreign tables if exporting data is too troublesome. That's\nclearly not ideal, but that's an easy workaround which doesn't add any\nperformance hit at runtime.\n\n",
"msg_date": "Wed, 20 Mar 2019 22:44:30 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 10:30 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> maybe this patch (with a GUC)\n> https://www.postgresql.org/message-id/55E51C48.1060102@uptime.jp\n> would be enough for thoses actually using a text normalization function.\n\nThe rest of thread raise quite a lot of concerns about the semantics,\nthe cost and the correctness of this patch. After 5 minutes checking,\nit wouldn't suits your need if you use custom functions, custom types,\ncustom operators (say using intarray extension) or if your tables\ndon't have columns in the same order in every environment. And there\nare probably other caveats that I didn't see;\n\n",
"msg_date": "Wed, 20 Mar 2019 23:04:22 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "Julien Rouhaud wrote\n> On Wed, Mar 20, 2019 at 10:18 PM legrand legrand\n> <\n\n> legrand_legrand@\n\n> > wrote:\n>>\n>> On my personal point of view, I need to get the same Queryid between\n>> (OLAP)\n>> environments\n>> to be able to compare Production, Pre-production, Qualif performances\n>> (and I don't need Fully qualified relation names). Today to do that,\n>> I'm using a custom extension computing the QueryId based on the\n>> normalized\n>> Query\n>> text.\n> \n> IIUC, your need is to compare pgss (maybe other extensions) counters\n> from different primary servers, for queries generated by the same\n> application(s). A naive workaround like exporting each environment\n> counters (COPY SELECT 'production', * FROM pg_stat_statements TO\n> '...'), importing all of them on a server and then comparing\n> everything using the query text (which should be the same if the\n> application is the same) instead of queryid wouldn't work? Or even\n> using foreign tables if exporting data is too troublesome. That's\n> clearly not ideal, but that's an easy workaround which doesn't add any\n> performance hit at runtime.\n\nThank you Julien for the workaround,\nIt is not easy to build \"cross tables\" in excel to join metrics per query\ntext ...\nand I'm not ready to build a MD5(query) as many query could lead to the same\nQueryId\nI've been using SQL_IDs for ten years, and I have some (who say old) habits\n:^)\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 15:10:22 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "Julien Rouhaud wrote\n> On Wed, Mar 20, 2019 at 10:30 PM legrand legrand\n> <\n\n> legrand_legrand@\n\n> > wrote:\n>>\n>> maybe this patch (with a GUC)\n>> https://www.postgresql.org/message-id/\n\n> 55E51C48.1060102@\n\n>> would be enough for thoses actually using a text normalization function.\n> \n> The rest of thread raise quite a lot of concerns about the semantics,\n> the cost and the correctness of this patch. After 5 minutes checking,\n> it wouldn't suits your need if you use custom functions, custom types,\n> custom operators (say using intarray extension) or if your tables\n> don't have columns in the same order in every environment. And there\n> are probably other caveats that I didn't see;\n\nYes I know,\nIt would have to be extended at less at functions, types, operators ...\nnames\nand a guc pg_stat_statements.queryid_based= 'names' (default being 'oids')\n\nand with a second guc ('fullyqualifed' ?)\nsould include tables, functions, types, operators ... namespaces\n\nlet \"users\" specify their needs, we will see ;o)\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n",
"msg_date": "Wed, 20 Mar 2019 15:19:58 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 11:10 PM legrand legrand\n<legrand_legrand@hotmail.com> wrote:\n>\n> Thank you Julien for the workaround,\n> It is not easy to build \"cross tables\" in excel to join metrics per query\n> text ...\n\nthen keep only one queryid over all environments, that's easy enough in SQL:\n\nSELECT min(queryid) OVER (partition by query ORDER BY environment),\n... FROM all_pg_stat_statements\n\nif you have your environment named like 0_production,\n1_preproduction... you'll get the queryid from production. Once\nagain, that's not ideal but it's easy to deal with it when consuming\nthe data.\n\n> and I'm not ready to build a MD5(query) as many query could lead to the same\n> QueryId\n\nI'd be really surprised if you see a single collision in your whole\nlife, whatever pg_stat_statements.max you're using. I'm also pretty\nsure that the collision risk is technically higher with an 8B queryId\nfield rather than a 16B md5, but maybe I'm wrong.\n\n",
"msg_date": "Wed, 20 Mar 2019 23:56:07 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 03:19:58PM -0700, legrand legrand wrote:\n> > The rest of thread raise quite a lot of concerns about the semantics,\n> > the cost and the correctness of this patch. After 5 minutes checking,\n> > it wouldn't suits your need if you use custom functions, custom types,\n> > custom operators (say using intarray extension) or if your tables\n> > don't have columns in the same order in every environment. And there\n> > are probably other caveats that I didn't see;\n> \n> Yes I know,\n> It would have to be extended at less at functions, types, operators ...\n> names\n> and a guc pg_stat_statements.queryid_based= 'names' (default being 'oids')\n> \n> and with a second guc ('fullyqualifed' ?)\n> sould include tables, functions, types, operators ... namespaces\n> \n> let \"users\" specify their needs, we will see ;o)\n\nWhy can't we just explose the hash computation as an SQL function and\nlet people call it with pg_stat_activity.query or wherever they want the\nvalue? We can install multiple functions if needed.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 9 Apr 2019 17:26:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 11:26 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Mar 20, 2019 at 03:19:58PM -0700, legrand legrand wrote:\n> > > The rest of thread raise quite a lot of concerns about the semantics,\n> > > the cost and the correctness of this patch. After 5 minutes checking,\n> > > it wouldn't suits your need if you use custom functions, custom types,\n> > > custom operators (say using intarray extension) or if your tables\n> > > don't have columns in the same order in every environment. And there\n> > > are probably other caveats that I didn't see;\n> >\n> > Yes I know,\n> > It would have to be extended at less at functions, types, operators ...\n> > names\n> > and a guc pg_stat_statements.queryid_based= 'names' (default being 'oids')\n> >\n> > and with a second guc ('fullyqualifed' ?)\n> > sould include tables, functions, types, operators ... namespaces\n> >\n> > let \"users\" specify their needs, we will see ;o)\n>\n> Why can't we just explose the hash computation as an SQL function and\n> let people call it with pg_stat_activity.query or wherever they want the\n> value? We can install multiple functions if needed.\n\nIt'd be very nice to exposing the queryid computation at SQL level.\nHowever it would allow to compute only the top-level queryid from\npg_stat_activity. For monitoring and performance purpose, people\nwould probably want to see the queryid of the underlying query\nactually running I think.\n\n\n",
"msg_date": "Wed, 10 Apr 2019 08:45:54 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "I missed this thread. I'd be happy to post the code for what we use as the\nstable query identifier, but we could definitely come up with a more\nefficient algorithm if we're willing to assume that the sql statements are\nthe same if and only if the parse tree structure is the same.\n\nCurrently what we do for the sql hash is to simply replace all the literals\nand then hash the resulting SQL string, because for our use case we wanted\nto be insensitive to the even the structure of the parse tree from one\nrelease to the next. That may be too conservative for other use cases. If\nit's ok to assume that the structure of the Query tree doesn't change, then\nyou could define a stable identifier for each node type, ignore literal\nconstants, and hash fully-qualified object names instead of OIDs. That\nshould be pretty fast.\n\nWe also compute a plan hash that converts Plan tree node id's into stable\nidentifiers, and computes a cheap hash function over all nodes in the plan. \nThis is fast and efficient. It's also pretty straightforward to convert\nnode id's to stable identifiers.\n\nA complication that we recently had to deal with was hashing and normalizing\nthe text of queries inside pl/pgsql functions, where variables are converted\nto parameter markers. In that case the sql text is transformed to contain\nboth parameter markers and literal replacement markers before computing the\nsql hash.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 9 Aug 2019 18:27:04 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Sat, Aug 10, 2019 at 3:27 AM Jim Finnerty <jfinnert@amazon.com> wrote:\n>\n> I missed this thread. I'd be happy to post the code for what we use as the\n> stable query identifier, but we could definitely come up with a more\n> efficient algorithm if we're willing to assume that the sql statements are\n> the same if and only if the parse tree structure is the same.\n>\n> Currently what we do for the sql hash is to simply replace all the literals\n> and then hash the resulting SQL string\n\nIsn't that what pg_store_plan is already doing? Except that it\nremoves extraneous whitespaces and put identifiers in uppercase so\nthat you get a reasonable query identifier.\n\n> you could define a stable identifier for each node type, ignore literal\n> constants, and hash fully-qualified object names instead of OIDs. That\n> should be pretty fast.\n\nThis has been discussed already, and resolving all object names and\nqualifier names will add a dramatic overhead for many workloads.\n\n\n",
"msg_date": "Sat, 10 Aug 2019 10:00:25 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "Hi Jim,\n\nIts never too later, as nothing has been concluded about that survey ;o)\n\nFor information, I thought It would be possible to get a more stable\nQueryId,\nby hashing relation name or fully qualified names.\n\nWith the support of Julien Rouhaud, I tested with this kind of code:\n\n \tcase RTE_RELATION:\n\t\t\tif (pgss_queryid_oid)\n\t\t\t\t{\n\t\t\t\t\tAPP_JUMB(rte->relid);\n\t\t\t\t}\n\t\t\t\telse\n\t\t\t\t{\n\t\t\t\t\trel = RelationIdGetRelation(rte->relid);\n\t\t\t\t\tAPP_JUMB_STRING(RelationGetRelationName(rel));\n\t\t\t\t\tAPP_JUMB_STRING(get_namespace_name(get_rel_namespace(rte->relid)));\n\t\t\t\t\tRelationClose(rel);\n\t\t\t\t{\n\t\t\t\t\nthinking that 3 hash options would be interesting in pgss:\n1- actual OID\n2- relation names only (for databases WITHOUT distinct schemas contaning\nsame tables)\n3- fully qualified names schema.relname (for databases WITH distinct schemas\ncontaning same tables)\n\nbut performances where quite bad (it was a few month ago, but I remenber\nabout a 1-5% decrease).\nI also remenber that's this was not portable between distinct pg versions\n11/12\nand also not sure it was stable between windows / linux ports ...\n\nSo I stopped here ... Maybe its time to test deeper this alternative \n(to get fully qualified names hashes in One call) knowing that such\ntransformations \nwill have to be done for all objects types (not only relations) ?\n\nI'm ready to continue testing as it seems the less impacting solution to\nkeep actual pgss ...\n\nIf this doesn't work, then trying with a normalized query text (associated\nwith search_path) would be the \nother alternative, but impacts on actual pgss would be higher ... \n\nRegards\nPAscal\n\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 10 Aug 2019 13:34:38 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "If hashing names instead of using OIDs is too expensive for some workload,\nthen that workload would need to be able to turn statement hashing off. So\nit needs to be optional, just like queryId is optionally computed today. \nFor many cases the extra overhead of hashing object names is small compared\nto optimization time plus execution time.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 12 Aug 2019 05:40:24 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "Hi!\nWhat about adding new column in pg_stat_statements e.g. sql_id it's hash from normalized query. Аnd add function which get that hash (using raw_parser, raw_expression_tree_walker) for any query\n`\npostgres=# select get_queryid('select 1');\n get_queryid \n-------------\n 680388963\n(1 row)\n`\nthat function can be used on pg_stat_activity(query) for join pg_stat_statements if it need.\n\n12.08.2019, 14:51, \"legrand legrand\" <legrand_legrand@hotmail.com>:\n> Hi Jim,\n>\n> Its never too later, as nothing has been concluded about that survey ;o)\n>\n> For information, I thought It would be possible to get a more stable\n> QueryId,\n> by hashing relation name or fully qualified names.\n>\n> With the support of Julien Rouhaud, I tested with this kind of code:\n>\n> case RTE_RELATION:\n> if (pgss_queryid_oid)\n> {\n> APP_JUMB(rte->relid);\n> }\n> else\n> {\n> rel = RelationIdGetRelation(rte->relid);\n> APP_JUMB_STRING(RelationGetRelationName(rel));\n> APP_JUMB_STRING(get_namespace_name(get_rel_namespace(rte->relid)));\n> RelationClose(rel);\n> {\n>\n> thinking that 3 hash options would be interesting in pgss:\n> 1- actual OID\n> 2- relation names only (for databases WITHOUT distinct schemas contaning\n> same tables)\n> 3- fully qualified names schema.relname (for databases WITH distinct schemas\n> contaning same tables)\n>\n> but performances where quite bad (it was a few month ago, but I remenber\n> about a 1-5% decrease).\n> I also remenber that's this was not portable between distinct pg versions\n> 11/12\n> and also not sure it was stable between windows / linux ports ...\n>\n> So I stopped here ... 
Maybe its time to test deeper this alternative\n> (to get fully qualified names hashes in One call) knowing that such\n> transformations\n> will have to be done for all objects types (not only relations) ?\n>\n> I'm ready to continue testing as it seems the less impacting solution to\n> keep actual pgss ...\n>\n> If this doesn't work, then trying with a normalized query text (associated\n> with search_path) would be the\n> other alternative, but impacts on actual pgss would be higher ...\n>\n> Regards\n> PAscal\n>\n> --\n> Sent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n-------- \nEfimkin Evgeny\n\n\n\n\n",
"msg_date": "Mon, 12 Aug 2019 15:52:19 +0300",
"msg_from": "Evgeniy Efimkin <efimkin@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 2:40 PM Jim Finnerty <jfinnert@amazon.com> wrote:\n>\n> If hashing names instead of using OIDs is too expensive for some workload,\n> then that workload would need to be able to turn statement hashing off. So\n> it needs to be optional, just like queryId is optionally computed today.\n> For many cases the extra overhead of hashing object names is small compared\n> to optimization time plus execution time.\n\nAre you suggesting a fallback to oid based queryid or to entirely\ndisable queryid generation?\n\nHow would that work with pg_stat_statements or similar extension?\n\n\n",
"msg_date": "Mon, 12 Aug 2019 14:55:23 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 2:52 PM Evgeniy Efimkin <efimkin@yandex-team.ru> wrote:\n>\n> Hi!\n> What about adding new column in pg_stat_statements e.g. sql_id it's hash from normalized query. Аnd add function which get that hash (using raw_parser, raw_expression_tree_walker) for any query\n> `\n> postgres=# select get_queryid('select 1');\n> get_queryid\n> -------------\n> 680388963\n> (1 row)\n\nOne problem with pg_stat_statement's normalized query is that it's not\nstable, it's storing the normalized version of the first query string\npassed when an entry is created. So you could have different strings\ndepending on whether the query was fully qualified or relying on\nsearch path for instance.\n\nExposing the queryid computation at SQL level has already been\nproposed, and FWIW I'm all for it.\n\n\n",
"msg_date": "Mon, 12 Aug 2019 15:02:00 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "\n\n> One problem with pg_stat_statement's normalized query is that it's not\n> stable, it's storing the normalized version of the first query string\n> passed when an entry is created. So you could have different strings\n> depending on whether the query was fully qualified or relying on\n> search path for instance.\nI think normalized query stored in pg_stat_statement it's not very important.\nit might look something like that\n`\n query | calls | queryid | sql_id\n-----------------------+-------+------------+------------\n Select * from t | 4 | 762359559 | 680388963\n select * from t | 7 | 3438533065 | 680388963\n select * from test2.t | 1 | 230362373 | 680388963\n`\nwe can cut schema name in sql normalization \nalgorithm \n-------- \nEfimkin Evgeny\n\n\n\n",
"msg_date": "Mon, 12 Aug 2019 17:01:46 +0300",
"msg_from": "Evgeniy Efimkin <efimkin@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "On Mon, Aug 12, 2019 at 4:01 PM Evgeniy Efimkin <efimkin@yandex-team.ru> wrote:\n>\n> > One problem with pg_stat_statement's normalized query is that it's not\n> > stable, it's storing the normalized version of the first query string\n> > passed when an entry is created. So you could have different strings\n> > depending on whether the query was fully qualified or relying on\n> > search path for instance.\n> I think normalized query stored in pg_stat_statement it's not very important.\n> it might look something like that\n> `\n> query | calls | queryid | sql_id\n> -----------------------+-------+------------+------------\n> Select * from t | 4 | 762359559 | 680388963\n> select * from t | 7 | 3438533065 | 680388963\n> select * from test2.t | 1 | 230362373 | 680388963\n> `\n> we can cut schema name in sql normalization\n> algorithm\n\nNot only schema name but all kind of qualification and indeed extra\nwhitespaces. Things get harder for other difference that aren't\nmeaningful (LIKE vs ~~, IN vs = ANY...). That would also imply that\neveryone wants to ignore schemas in query normalization, I'm not sure\nhow realistic that is.\n\n\n",
"msg_date": "Mon, 12 Aug 2019 16:15:31 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
},
{
"msg_contents": "my understanding is\n\n* pg_stat_statements.track = 'none' or 'top' (default) or 'all' \n to make queryId optionally computed\n\n* a new GUC: pg_stat_statements.queryid_based = 'oids' (default) or 'names'\nor 'fullnames'\n to choose the queryid computation algorithm\n\nam I rigth ?\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 12 Aug 2019 10:01:30 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [survey] New \"Stable\" QueryId based on normalized query text"
}
]
[
{
"msg_contents": "Hi,\n\nWhile looking at a partition pruning bug [1], I noticed something that\nstarted to feel like a regression:\n\nSetup:\n\ncreate table p (a int) partition by list (a);\ncreate table p1 partition of p for values in (1);\n\nIn PG 10:\n\nset constraint_exclusion to on;\nexplain select * from p1 where a = 2;\n QUERY PLAN\n──────────────────────────────────────────\n Result (cost=0.00..0.00 rows=0 width=4)\n One-Time Filter: false\n(2 rows)\n\nIn PG 11 (and HEAD):\n\nset constraint_exclusion to on;\nexplain select * from p1 where a = 2;\n QUERY PLAN\n────────────────────────────────────────────────────\n Seq Scan on p1 (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 2)\n(2 rows)\n\nThat's because get_relation_constraints() no longer (as of PG 11) includes\nthe partition constraint for SELECT queries. But that's based on an\nassumption that partitions are always accessed via parent, so partition\npruning would make loading the partition constraint unnecessary. That's\nnot always true, as shown in the above example.\n\nShould we fix that? I'm attaching a patch here.\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/00e601d4ca86$932b8bc0$b982a340$@lab.ntt.co.jp",
"msg_date": "Wed, 20 Mar 2019 13:37:13 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "selecting from partitions and constraint exclusion"
},
{
"msg_contents": "On Wed, 20 Mar 2019 at 17:37, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> That's because get_relation_constraints() no longer (as of PG 11) includes\n> the partition constraint for SELECT queries. But that's based on an\n> assumption that partitions are always accessed via parent, so partition\n> pruning would make loading the partition constraint unnecessary. That's\n> not always true, as shown in the above example.\n>\n> Should we fix that? I'm attaching a patch here.\n\nPerhaps we should. The constraint_exclusion documents [1] just mention:\n\n> Controls the query planner's use of table constraints to optimize queries.\n\nand I'm pretty sure you could class the partition constraint as a\ntable constraint.\n\nAs for the patch:\n\n+ if ((root->parse->commandType == CMD_SELECT && !IS_OTHER_REL(rel)) ||\n\nShouldn't this really be checking rel->reloptkind == RELOPT_BASEREL\ninstead of !IS_OTHER_REL(rel) ?\n\nFor the comments:\n\n+ * For selects, we only need those if the partition is directly mentioned\n+ * in the query, that is not via parent. In case of the latter, partition\n+ * pruning, which uses the parent table's partition bound descriptor,\n+ * ensures that we only consider partitions whose partition constraint\n+ * satisfy the query quals (or, the two don't contradict each other), so\n+ * loading them is pointless.\n+ *\n+ * For updates and deletes, we always need those for performing partition\n+ * pruning using constraint exclusion, but, only if pruning is enabled.\n\nYou mention \"the latter\", normally you'd only do that if there was a\nformer, but in this case there's not.\n\nHow about just making it:\n\n/*\n * Append partition predicates, if any.\n *\n * For selects, partition pruning uses the parent table's partition bound\n * descriptor, so there's no need to include the partition constraint for\n * this case. 
However, if the partition is referenced directly in the query\n * then no partition pruning will occur, so we'll include it in that case.\n */\nif ((root->parse->commandType != CMD_SELECT && enable_partition_pruning) ||\n (root->parse->commandType == CMD_SELECT && rel->reloptkind ==\nRELOPT_BASEREL))\n\nFor the tests, it seems excessive to create some new tables for this.\nWon't the tables in the previous test work just fine?\n\n[1] https://www.postgresql.org/docs/devel/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 20 Mar 2019 23:41:53 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
},
{
"msg_contents": "Hi David,\n\nThanks for checking.\n\nOn 2019/03/20 19:41, David Rowley wrote:\n> On Wed, 20 Mar 2019 at 17:37, Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> That's because get_relation_constraints() no longer (as of PG 11) includes\n>> the partition constraint for SELECT queries. But that's based on an\n>> assumption that partitions are always accessed via parent, so partition\n>> pruning would make loading the partition constraint unnecessary. That's\n>> not always true, as shown in the above example.\n>>\n>> Should we fix that? I'm attaching a patch here.\n> \n> Perhaps we should. The constraint_exclusion documents [1] just mention:\n> \n>> Controls the query planner's use of table constraints to optimize queries.\n> \n> and I'm pretty sure you could class the partition constraint as a\n> table constraint.\n\nYes.\n\n> As for the patch:\n> \n> + if ((root->parse->commandType == CMD_SELECT && !IS_OTHER_REL(rel)) ||\n> \n> Shouldn't this really be checking rel->reloptkind == RELOPT_BASEREL\n> instead of !IS_OTHER_REL(rel) ?\n\nHmm, thought I'd use the macro if we have one, but I'll change as you\nsuggest if that's what makes the code easier to follow. As you might\nknow, we can only get \"simple\" relations here.\n\n> For the comments:\n> \n> + * For selects, we only need those if the partition is directly mentioned\n> + * in the query, that is not via parent. 
In case of the latter, partition\n> + * pruning, which uses the parent table's partition bound descriptor,\n> + * ensures that we only consider partitions whose partition constraint\n> + * satisfy the query quals (or, the two don't contradict each other), so\n> + * loading them is pointless.\n> + *\n> + * For updates and deletes, we always need those for performing partition\n> + * pruning using constraint exclusion, but, only if pruning is enabled.\n> \n> You mention \"the latter\", normally you'd only do that if there was a\n> former, but in this case there's not.\n\nI was trying to go for \"accessing partition directly\" as the former and\n\"accessing it via the parent\" as the latter, but maybe the sentence as\nwritten cannot be read that way.\n\n> How about just making it:\n> \n> /*\n> * Append partition predicates, if any.\n> *\n> * For selects, partition pruning uses the parent table's partition bound\n> * descriptor, so there's no need to include the partition constraint for\n> * this case. However, if the partition is referenced directly in the query\n> * then no partition pruning will occur, so we'll include it in the case.\n> */\n> if ((root->parse->commandType != CMD_SELECT && enable_partition_pruning) ||\n> (root->parse->commandType == CMD_SELECT && rel->reloptkind ==\n> RELOPT_BASEREL))\n\nOK, I will use this text.\n\n> For the tests, it seems excessive to create some new tables for this.\n> Won't the tables in the previous test work just fine?\n\nOK, I have revised the tests to use existing tables.\n\nI'll add this to July fest to avoid forgetting about this.\n\nThanks,\nAmit",
"msg_date": "Fri, 22 Mar 2019 17:17:25 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
},
{
"msg_contents": "On 2019/03/22 17:17, Amit Langote wrote:\n> I'll add this to July fest to avoid forgetting about this.\n\nI'd forgotten to do this, but done today. :)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 25 Mar 2019 09:31:41 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
},
{
"msg_contents": "On Wed, Mar 20, 2019 at 12:37 AM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> That's because get_relation_constraints() no longer (as of PG 11) includes\n> the partition constraint for SELECT queries.\n\nWhat commit made that change?\n\nThis sounds to me like maybe it should be an open item.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 11:21:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
},
{
"msg_contents": "On 2019/03/26 0:21, Robert Haas wrote:\n> On Wed, Mar 20, 2019 at 12:37 AM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> That's because get_relation_constraints() no longer (as of PG 11) includes\n>> the partition constraint for SELECT queries.\n> \n> What commit made that change?\n\nThat would be 9fdb675fc5d2 (faster partition pruning) that got into PG 11.\n\n> This sounds to me like maybe it should be an open item.\n\nI've added this under Older Bugs.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 26 Mar 2019 09:36:39 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
},
{
"msg_contents": "\nLe 25/03/2019 à 01:31, Amit Langote a écrit :\n> On 2019/03/22 17:17, Amit Langote wrote:\n>> I'll add this to July fest to avoid forgetting about this.\n> I'd forgotten to do this, but done today. :)\n>\n> Thanks,\n> Amit\n\nHello Amit,\n\nJust a quick information that your last patch does not apply on head:\n\n$ git apply\n~/Téléchargements/v2-0001-Fix-planner-to-load-partition-constraint-in-some-.patch\nerror: patch failed: src/test/regress/expected/partition_prune.out:3637\nerror: src/test/regress/expected/partition_prune.out: patch does not apply\n\nManually applying it on top of Hosoya's last 2 patches, It corrects the\ndifferent cases we found so far.\nI will keep on testing next week.\n\nCordialement,\n\nThibaut\n\n\n\n\n",
"msg_date": "Fri, 5 Apr 2019 18:12:12 +0200",
"msg_from": "Thibaut <thibaut.madelaine@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
},
{
"msg_contents": "Hi Thibaut,\n\nOn 2019/04/06 1:12, Thibaut wrote:\n> Le 25/03/2019 à 01:31, Amit Langote a écrit :\n>> On 2019/03/22 17:17, Amit Langote wrote:\n>>> I'll add this to July fest to avoid forgetting about this.\n>> I'd forgotten to do this, but done today. :)\n>>\n>> Thanks,\n>> Amit\n> \n> Hello Amit,\n> \n> Just a quick information that your last patch does not apply on head:\n> \n> $ git apply\n> ~/Téléchargements/v2-0001-Fix-planner-to-load-partition-constraint-in-some-.patch\n> error: patch failed: src/test/regress/expected/partition_prune.out:3637\n> error: src/test/regress/expected/partition_prune.out: patch does not apply\n> \n> Manually applying it on top of Hosoya's last 2 patches, It corrects the\n> different cases we found so far.\n> I will keep on testing next week.\n\nThanks for the heads up.\n\nWe are discussing this and another related matter on a different thread\n(titled \"speeding up planning with partitions\" [1]). Maybe, the problem\noriginally reported here will get resolved there once we reach consensus\nfirst on what to do in the HEAD branch and what's back-patchable as a\nbug-fix to the PG 11 branch.\n\n[1]\nhttps://www.postgresql.org/message-id/50415da6-0258-d135-2ba4-197041b57c5b%40lab.ntt.co.jp\n\n\n\n",
"msg_date": "Mon, 8 Apr 2019 13:45:35 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: selecting from partitions and constraint exclusion"
}
]
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15708\nLogged by: Daurnimator\nEmail address: quae@daurnimator.com\nPostgreSQL version: 11.2\nOperating system: linux\nDescription: \n\n(from https://gist.github.com/daurnimator/b1d2c16359e346a466b3093ae2757acf\n)\r\n\r\nThis fails, seemingly because the RLS on 'bar' is being checked by alice,\ninstead of the view owner bob:\r\n```sql\r\ncreate role alice;\r\n\r\ncreate table bar(a integer);\r\nalter table bar enable row level security;\r\ncreate table qux(b integer);\r\n\r\ncreate role bob;\r\ncreate policy blahblah on bar to bob\r\n\tusing(exists(select 1 from qux));\r\ngrant select on table bar to bob;\r\ngrant select on table qux to bob;\r\n\r\ncreate view foo as select * from bar;\r\nalter view foo owner to bob;\r\ngrant select on table foo to alice;\r\n-- grant select on table qux to alice; -- shouldn't be required\r\n\r\nset role alice;\r\nselect * from foo;\r\n```\r\n\r\n```\r\n$ psql -f rls_trouble.sql \r\nCREATE ROLE\r\nCREATE TABLE\r\nALTER TABLE\r\nCREATE TABLE\r\nCREATE ROLE\r\nCREATE POLICY\r\nGRANT\r\nGRANT\r\nCREATE VIEW\r\nALTER VIEW\r\nGRANT\r\nSET\r\npsql:rls_trouble.sql:18: ERROR: permission denied for table qux\r\n```\r\n\r\nIf we add an indirection via another view, then I get the result I\nexpected...\r\n```sql\r\ncreate role alice;\r\n\r\ncreate table bar(a integer);\r\nalter table bar enable row level security;\r\ncreate table qux(b integer);\r\n\r\n-- if we add a layer of indirection it works.... wat?\r\ncreate view indirection as select * from bar;\r\n\r\ncreate role bob;\r\ncreate policy blahblah on bar to bob\r\n\tusing(exists(select 1 from qux));\r\ngrant select on table bar to bob;\r\ngrant select on table indirection to bob;\r\ngrant select on table qux to bob;\r\n\r\ncreate view foo as select * from indirection;\r\nalter view foo owner to bob;\r\ngrant select on table foo to alice;\r\n\r\nset role alice;\r\nselect * from foo;\r\n```",
"msg_date": "Wed, 20 Mar 2019 23:53:56 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15708: RLS 'using' running as wrong user when called from a view"
},
{
"msg_contents": "On Thu, 21 Mar 2019 at 00:39, PG Bug reporting form\n<noreply@postgresql.org> wrote:\n>\n> This fails, seemingly because the RLS on 'bar' is being checked by alice,\n> instead of the view owner bob:\n>\n\nYes I agree, that appears to be a bug. The subquery in the RLS policy\nshould be checked as the view owner -- i.e., we need to propagate the\ncheckAsUser for the RTE with RLS to any subqueries in its RLS\npolicies.\n\nIt looks like the best place to fix it is in\nget_policies_for_relation(), since that's where all the policies to be\napplied for a given RTE are pulled together. Patch attached.\n\nRegards,\nDean",
"msg_date": "Sun, 24 Mar 2019 11:19:52 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15708: RLS 'using' running as wrong user when called from a\n view"
},
{
"msg_contents": "Greetings,\n\n* Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> On Thu, 21 Mar 2019 at 00:39, PG Bug reporting form\n> <noreply@postgresql.org> wrote:\n> >\n> > This fails, seemingly because the RLS on 'bar' is being checked by alice,\n> > instead of the view owner bob:\n> \n> Yes I agree, that appears to be a bug. The subquery in the RLS policy\n> should be checked as the view owner -- i.e., we need to propagate the\n> checkAsUser for the RTE with RLS to any subqueries in its RLS\n> policies.\n\nAgreed.\n\n> It looks like the best place to fix it is in\n> get_policies_for_relation(), since that's where all the policies to be\n> applied for a given RTE are pulled together. Patch attached.\n\nYes, on a quick review, that looks like a good solution to me as well.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 25 Mar 2019 16:27:23 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15708: RLS 'using' running as wrong user when called from a\n view"
},
{
"msg_contents": "On Mon, 25 Mar 2019 at 20:27, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n>\n> > It looks like the best place to fix it is in\n> > get_policies_for_relation(), since that's where all the policies to be\n> > applied for a given RTE are pulled together. Patch attached.\n>\n> Yes, on a quick review, that looks like a good solution to me as well.\n>\n\nOn second thoughts, it actually needs to be in\nget_row_security_policies(), after making copies of the quals from the\npolicies, otherwise it would be scribbling on the copies from the\nrelcache. Actually that makes the code change a bit simpler too.\n\nRegards,\nDean",
"msg_date": "Wed, 27 Mar 2019 12:46:29 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15708: RLS 'using' running as wrong user when called from a\n view"
},
{
"msg_contents": "On Wed, 27 Mar 2019 at 23:46, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> On second thoughts, it actually needs to be in\n> get_row_security_policies(), after making copies of the quals from the\n> policies, otherwise it would be scribbling on the copies from the\n> relcache. Actually that makes the code change a bit simpler too.\n\nThanks for writing the patch!\n\nI'm sad this missed the last commit fest; I think this bug might be\ncausing security issues in a few deployments.\nCould you submit the patch for the next commit fest?\n\n\n",
"msg_date": "Mon, 29 Apr 2019 13:56:02 +1000",
"msg_from": "Daurnimator <quae@daurnimator.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15708: RLS 'using' running as wrong user when called from a\n view"
},
{
"msg_contents": "On Mon, 29 Apr 2019 at 04:56, Daurnimator <quae@daurnimator.com> wrote:\n>\n> On Wed, 27 Mar 2019 at 23:46, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > On second thoughts, it actually needs to be in\n> > get_row_security_policies(), after making copies of the quals from the\n> > policies, otherwise it would be scribbling on the copies from the\n> > relcache. Actually that makes the code change a bit simpler too.\n>\n> Thanks for writing the patch!\n>\n> I'm sad this missed the last commit fest; I think this bug might be\n> causing security issues in a few deployments.\n> Could you submit the patch for the next commit fest?\n\nActually I pushed the fix for this a while ago [1] (sorry I forgot to\nreply back to this thread), so it will be available in the next set of\nminor version updates later this week.\n\nRegards,\nDean\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e2d28c0f404713f564dc2250646551c75172f17b\n\n\n",
"msg_date": "Mon, 29 Apr 2019 08:49:32 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15708: RLS 'using' running as wrong user when called from a\n view"
}
] |
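The RLS behaviour discussed in the thread above can be modelled outside PostgreSQL. The following is a minimal, illustrative Python sketch — none of these function or variable names come from the PostgreSQL source, and the real fix lives in C in `get_row_security_policies()` — showing why propagating `checkAsUser` (the view owner) to tables referenced by a policy's qual subqueries makes the reported example work without granting `qux` to `alice`:

```python
# Toy model of the bug (illustrative only, not PostgreSQL source).
# A table reached through a view is permission-checked as the view's
# owner ("checkAsUser"); the fix extends that to tables referenced by
# the table's RLS policy subqueries.

def can_select(user, table, acl):
    """True if 'user' has SELECT on 'table' in the toy ACL."""
    return table in acl.get(user, set())

def select_via_view(session_user, view_owner, base_table, acl,
                    policy_refs, fixed=True):
    """Simulate 'SELECT * FROM view' where the view wraps base_table.

    policy_refs: tables referenced by base_table's RLS policy quals.
    fixed=False reproduces the bug: policy subqueries were checked
    as the session user instead of the view owner.
    """
    # The base table itself is checked as the view owner (this part
    # always worked).
    if not can_select(view_owner, base_table, acl):
        raise PermissionError(f"permission denied for table {base_table}")
    # Tables pulled in by the RLS policy's subqueries:
    for ref in policy_refs:
        checker = view_owner if fixed else session_user
        if not can_select(checker, ref, acl):
            raise PermissionError(f"permission denied for table {ref}")
    return "ok"

# Mirrors the report: bob owns the view over bar; bar's policy reads qux;
# alice only has SELECT on the view.
acl = {"bob": {"bar", "qux"}, "alice": set()}
```

With `fixed=True` the query succeeds because bob (the view owner) can read `qux`; with `fixed=False` it fails with "permission denied for table qux", matching the `psql` output in the bug report.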
[
{
"msg_contents": "Hi Hackers,\n\nHere I attached a patch that supports building of PostgreSQL with VS 2019.\nVS 2019 is going to release on Apr 2nd 2019, it will be good if version 12\nsupports compiling. The attached for is for review, it may needs some\nupdates\nonce the final version is released.\n\nCommit d9dd406fe281d22d5238d3c26a7182543c711e74 has reduced the\nminimum visual studio support to 2013 to support C99 standards, because of\nthis\nreason, the current attached patch cannot be backpatched as it is.\n\nI can provide a separate back branches patch later once this patch comes to\na stage of commit. Currently all the supported branches are possible to\ncompile with VS 2017.\n\ncomments?\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 21 Mar 2019 11:36:42 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 11:36:42AM +1100, Haribabu Kommi wrote:\n> I can provide a separate back branches patch later once this patch comes to\n> a stage of commit. Currently all the supported branches are possible to\n> compile with VS 2017.\n\nWhen it comes to support newer versions of MSVC, we have come up\nlately to backpatch that down to two stable versions but not further\ndown (see f2ab389 for v10 and v9.6), so it looks sensible to target\nv11 and v10 as well if we were to do it today, and v11/v12 if we do it\nin six months per the latest trends.\n--\nMichael",
"msg_date": "Thu, 21 Mar 2019 09:47:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 09:47:02AM +0900, Michael Paquier wrote:\n> When it comes to support newer versions of MSVC, we have come up\n> lately to backpatch that down to two stable versions but not further\n> down (see f2ab389 for v10 and v9.6), so it looks sensible to target\n> v11 and v10 as well if we were to do it today, and v11/v12 if we do it\n> in six months per the latest trends.\n\nBy the way, you mentioned upthread that all the branches can compile\nwith MSVC 2017, but that's not actually the case for 9.5 and 9.4 if\nyou don't back-patch f2ab389 further down. Or did you mean that the\ncode as-is is compilable if the scripts are patched manually?\n--\nMichael",
"msg_date": "Thu, 21 Mar 2019 10:31:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 12:31 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Thu, Mar 21, 2019 at 09:47:02AM +0900, Michael Paquier wrote:\n> > When it comes to support newer versions of MSVC, we have come up\n> > lately to backpatch that down to two stable versions but not further\n> > down (see f2ab389 for v10 and v9.6), so it looks sensible to target\n> > v11 and v10 as well if we were to do it today, and v11/v12 if we do it\n> > in six months per the latest trends.\n>\n> By the way, you mentioned upthread that all the branches can compile\n> with MSVC 2017, but that's not actually the case for 9.5 and 9.4 if\n> you don't back-patch f2ab389 further down.\n>\n\nThe commit f2ab389 is later back-patch to version till 9.3 in commit\n19acfd65.\nI guess that building the windows installer for all the versions using the\nsame\nvisual studio is may be the reason behind that back-patch.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia\n\nOn Thu, Mar 21, 2019 at 12:31 PM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Mar 21, 2019 at 09:47:02AM +0900, Michael Paquier wrote:\n> When it comes to support newer versions of MSVC, we have come up\n> lately to backpatch that down to two stable versions but not further\n> down (see f2ab389 for v10 and v9.6), so it looks sensible to target\n> v11 and v10 as well if we were to do it today, and v11/v12 if we do it\n> in six months per the latest trends.\n\nBy the way, you mentioned upthread that all the branches can compile\nwith MSVC 2017, but that's not actually the case for 9.5 and 9.4 if\nyou don't back-patch f2ab389 further down.The commit f2ab389 is later back-patch to version till 9.3 in commit 19acfd65.I guess that building the windows installer for all the versions using the samevisual studio is may be the reason behind that back-patch.Regards,Haribabu KommiFujitsu Australia",
"msg_date": "Thu, 21 Mar 2019 12:45:57 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 12:45:57PM +1100, Haribabu Kommi wrote:\n> The commit f2ab389 is later back-patch to version till 9.3 in commit\n> 19acfd65. I guess that building the windows installer for all the\n> versions using the same visual studio is may be the reason behind\n> that back-patch.\n\nI did not remember this one, thanks for pointing it out. So my\nmemories on that were incorrect. If it is possible to get the code to\nbuild with MSVC 2019 on all the supported branches, we could do so.\n--\nMichael",
"msg_date": "Thu, 21 Mar 2019 16:13:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "\nOn 3/21/19 3:13 AM, Michael Paquier wrote:\n> On Thu, Mar 21, 2019 at 12:45:57PM +1100, Haribabu Kommi wrote:\n>> The commit f2ab389 is later back-patch to version till 9.3 in commit\n>> 19acfd65. I guess that building the windows installer for all the\n>> versions using the same visual studio is may be the reason behind\n>> that back-patch.\n> I did not remember this one, thanks for pointing it out. So my\n> memories on that were incorrect. If it is possible to get the code to\n> build with MSVC 2019 on all the supported branches, we could do so.\n\n\n\n\nVS2019 is currently in preview. I think we'd probably be better off\nwaiting until the full release. I don't know of any pressing urgency for\nus to support it.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 25 Mar 2019 15:44:45 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "I\n\nOn 3/25/19 3:44 PM, Andrew Dunstan wrote:\n> On 3/21/19 3:13 AM, Michael Paquier wrote:\n>> On Thu, Mar 21, 2019 at 12:45:57PM +1100, Haribabu Kommi wrote:\n>>> The commit f2ab389 is later back-patch to version till 9.3 in commit\n>>> 19acfd65. I guess that building the windows installer for all the\n>>> versions using the same visual studio is may be the reason behind\n>>> that back-patch.\n>> I did not remember this one, thanks for pointing it out. So my\n>> memories on that were incorrect. If it is possible to get the code to\n>> build with MSVC 2019 on all the supported branches, we could do so.\n>\n>\n>\n> VS2019 is currently in preview. I think we'd probably be better off\n> waiting until the full release. I don't know of any pressing urgency for\n> us to support it.\n>\n>\n\nI see Haribabu mentioned that in his original email.\n\n\nI'll take a look at verifying the patch.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 25 Mar 2019 17:32:42 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "\nOn 3/20/19 8:36 PM, Haribabu Kommi wrote:\n> Hi Hackers,\n>\n> Here I attached a patch that supports building of PostgreSQL with VS 2019.\n> VS 2019 is going to release on Apr 2nd 2019, it will be good if version 12\n> supports compiling. The attached for is for review, it may needs some\n> updates\n> once the final version is released.\n>\n> Commit d9dd406fe281d22d5238d3c26a7182543c711e74 has reduced the\n> minimum visual studio support to 2013 to support C99 standards,\n> because of this\n> reason, the current attached patch cannot be backpatched as it is.\n>\n> I can provide a separate back branches patch later once this patch\n> comes to a stage of commit. Currently all the supported branches are\n> possible to compile with VS 2017.\n>\n> comments?\n>\n>\n\n\nI have verified that this works with VS2019.\n\n\nThere are a few typos in the comments that need cleaning up.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Mar 2019 12:03:05 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 3:03 AM Andrew Dunstan <\nandrew.dunstan@2ndquadrant.com> wrote:\n\n>\n> On 3/20/19 8:36 PM, Haribabu Kommi wrote:\n> > Hi Hackers,\n> >\n> > Here I attached a patch that supports building of PostgreSQL with VS\n> 2019.\n> > VS 2019 is going to release on Apr 2nd 2019, it will be good if version\n> 12\n> > supports compiling. The attached for is for review, it may needs some\n> > updates\n> > once the final version is released.\n> >\n> > Commit d9dd406fe281d22d5238d3c26a7182543c711e74 has reduced the\n> > minimum visual studio support to 2013 to support C99 standards,\n> > because of this\n> > reason, the current attached patch cannot be backpatched as it is.\n> >\n> > I can provide a separate back branches patch later once this patch\n> > comes to a stage of commit. Currently all the supported branches are\n> > possible to compile with VS 2017.\n> >\n> > comments?\n> >\n> >\n>\n>\n> I have verified that this works with VS2019.\n>\n>\n> There are a few typos in the comments that need cleaning up.\n>\n\nThanks for the review.\n\nI corrected the typos in the comments, hopefully I covered everything.\nAttached is the updated patch. Once the final VS 2019, I will check the\npatch if it needs any more updates.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 27 Mar 2019 11:42:02 +1100",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 11:42 AM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> On Wed, Mar 27, 2019 at 3:03 AM Andrew Dunstan <\n> andrew.dunstan@2ndquadrant.com> wrote:\n>\n>>\n>> On 3/20/19 8:36 PM, Haribabu Kommi wrote:\n>> > Hi Hackers,\n>> >\n>> > Here I attached a patch that supports building of PostgreSQL with VS\n>> 2019.\n>> > VS 2019 is going to release on Apr 2nd 2019, it will be good if version\n>> 12\n>> > supports compiling. The attached for is for review, it may needs some\n>> > updates\n>> > once the final version is released.\n>> >\n>> > Commit d9dd406fe281d22d5238d3c26a7182543c711e74 has reduced the\n>> > minimum visual studio support to 2013 to support C99 standards,\n>> > because of this\n>> > reason, the current attached patch cannot be backpatched as it is.\n>> >\n>> > I can provide a separate back branches patch later once this patch\n>> > comes to a stage of commit. Currently all the supported branches are\n>> > possible to compile with VS 2017.\n>> >\n>> > comments?\n>> >\n>> >\n>>\n>>\n>> I have verified that this works with VS2019.\n>>\n>>\n>> There are a few typos in the comments that need cleaning up.\n>>\n>\n> Thanks for the review.\n>\n> I corrected the typos in the comments, hopefully I covered everything.\n> Attached is the updated patch. Once the final VS 2019, I will check the\n> patch if it needs any more updates.\n>\n\nVisual Studio 2019 is officially released. There is no major change in the\npatch, except some small comments update.\n\nAlso attached patches for the back branches also.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Tue, 9 Apr 2019 17:46:56 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "I have gone through path '0001-Support-building-with-visual-studio-2019.patch' only, but I am sure some comments will also apply to back branches.\r\n\r\n1. The VisualStudioVersion value looks odd:\r\n\r\n+\t$self->{VisualStudioVersion} = '16.0.32.32432';\r\n\r\nAre you using a pre-release version [1]?\r\n\r\n2. There is a typo: s/stuido/studio/:\r\n\r\n+\t# The major visual stuido that is suppored has nmake version >= 14.20 and < 15.\r\n\r\nThere is something in the current code that I think should be also updated. The code for _GetVisualStudioVersion contains:\r\n\r\n if ($major > 14)\r\n \t{\r\n \tcarp\r\n \t \"The determined version of Visual Studio is newer than the latest supported version. Returning the latest supported version instead.\";\r\n \treturn '14.00';\r\n \t}\r\n\r\nShouldn't the returned value be '14.20' for Visual Studio 2019?\r\n\r\nRegards,\r\n\r\nJuan José Santamaría Flecha\r\n\r\n[1] https://docs.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers",
"msg_date": "Tue, 21 May 2019 21:35:24 +0000",
"msg_from": "Juanjo Santamaria Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, May 22, 2019 at 7:36 AM Juanjo Santamaria Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> I have gone through path\n> '0001-Support-building-with-visual-studio-2019.patch' only, but I am sure\n> some comments will also apply to back branches.\n>\n\nThanks for the review.\n\n\n\n> 1. The VisualStudioVersion value looks odd:\n>\n> + $self->{VisualStudioVersion} = '16.0.32.32432';\n>\n> Are you using a pre-release version [1]?\n>\n\nI first developed this patch on the preview version.\nI updated it to version 16.0.28729.10.\n\n\n> 2. There is a typo: s/stuido/studio/:\n>\n> + # The major visual stuido that is suppored has nmake version >=\n> 14.20 and < 15.\n>\n> There is something in the current code that I think should be also\n> updated. The code for _GetVisualStudioVersion contains:\n>\n> if ($major > 14)\n> {\n> carp\n> \"The determined version of Visual Studio is newer than the latest\n> supported version. Returning the latest supported version instead.\";\n> return '14.00';\n> }\n>\n> Shouldn't the returned value be '14.20' for Visual Studio 2019?\n>\n\nYes, that will be good to return Visual Studio 2019, updated.\n\nUpdated patches are attached for all branches.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 23 May 2019 11:44:34 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Thu, May 23, 2019 at 3:44 AM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> Updated patches are attached for all branches.\n>\n>\nI have gone through all patches and there are a couple of typos:\n\n 1. s/prodcutname/productname/\n\n 1.1 In file: 0001-support-building-with-visual-studio-2019_v9.4.patch\n@@ -97,8 +97,9 @@\n <productname>Visual Studio 2013</productname>. Building with\n <productname>Visual Studio 2015</productname> is supported down to\n <productname>Windows Vista</> and <productname>Windows Server 2008</>.\n- Building with <productname>Visual Studio 2017</productname> is supported\n- down to <productname>Windows 7 SP1</> and <productname>Windows Server\n2008 R2 SP1</>.\n+ Building with <productname>Visual Studio 2017</productname> and\n+ <prodcutname>Visual Studio 2019</prodcutname> are supported down to\n\n 1.2 In file:\n0001-support-building-with-visual-studio-2019_v10_to_v9.5.patch\n@@ -97,8 +97,9 @@\n <productname>Visual Studio 2013</productname>. Building with\n <productname>Visual Studio 2015</productname> is supported down to\n <productname>Windows Vista</> and <productname>Windows Server 2008</>.\n- Building with <productname>Visual Studio 2017</productname> is supported\n- down to <productname>Windows 7 SP1</> and <productname>Windows Server\n2008 R2 SP1</>.\n+ Building with <productname>Visual Studio 2017</productname> and\n+ <prodcutname>Visual Studio 2019</prodcutname> are supported down to\n\n 2. 
s/stuido/studio/\n\n 2.1 In file: 0001-support-building-with-visual-studio-2019_v9.4.patch\n @@ -143,12 +173,12 @@ sub DetermineVisualStudioVersion\n sub _GetVisualStudioVersion\n {\n my ($major, $minor) = @_;\n- # visual 2017 hasn't changed the nmake version to 15, so still using the\nolder version for comparison.\n+ # The major visual stuido that is suppored has nmake version >= 14.20 and\n< 15.\n\n 2.2 In file:\n0001-support-building-with-visual-studio-2019_v10_to_v9.5.patch\n@@ -132,12 +162,12 @@ sub DetermineVisualStudioVersion\n sub _GetVisualStudioVersion\n {\n my ($major, $minor) = @_;\n- # visual 2017 hasn't changed the nmake version to 15, so still using the\nolder version for comparison.\n+ # The major visual stuido that is suppored has nmake version >= 14.20 and\n< 15.\n\n 2.3 In file: 0001-support-building-with-visual-studio-2019_v11.patch\n@@ -139,12 +165,12 @@ sub _GetVisualStudioVersion\n {\n my ($major, $minor) = @_;\n\n- # visual 2017 hasn't changed the nmake version to 15, so still using the\nolder version for comparison.\n+ # The major visual stuido that is suppored has nmake version >= 14.20 and\n< 15.\n\n 2.4 In file: 0001-Support-building-with-visual-studio-2019_HEAD.patch\n@@ -106,17 +132,17 @@ sub _GetVisualStudioVersion\n {\n my ($major, $minor) = @_;\n\n- # visual 2017 hasn't changed the nmake version to 15, so still using the\nolder version for comparison.\n+ # The major visual stuido that is suppored has nmake version >= 14.20 and\n< 15.\n\n\nOther than that, since this is affects comments and docs, I have already\ntested that the patches build and pass tests on all the intended versions.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, May 23, 2019 at 3:44 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:Updated patches are attached for all branches.I have gone through all patches and there are a couple of typos: 1. 
s/prodcutname/productname/ 1.1 In file: 0001-support-building-with-visual-studio-2019_v9.4.patch@@ -97,8 +97,9 @@ <productname>Visual Studio 2013</productname>. Building with <productname>Visual Studio 2015</productname> is supported down to <productname>Windows Vista</> and <productname>Windows Server 2008</>.- Building with <productname>Visual Studio 2017</productname> is supported- down to <productname>Windows 7 SP1</> and <productname>Windows Server 2008 R2 SP1</>.+ Building with <productname>Visual Studio 2017</productname> and+ <prodcutname>Visual Studio 2019</prodcutname> are supported down to 1.2 In file: 0001-support-building-with-visual-studio-2019_v10_to_v9.5.patch@@ -97,8 +97,9 @@ <productname>Visual Studio 2013</productname>. Building with <productname>Visual Studio 2015</productname> is supported down to <productname>Windows Vista</> and <productname>Windows Server 2008</>.- Building with <productname>Visual Studio 2017</productname> is supported- down to <productname>Windows 7 SP1</> and <productname>Windows Server 2008 R2 SP1</>.+ Building with <productname>Visual Studio 2017</productname> and+ <prodcutname>Visual Studio 2019</prodcutname> are supported down to 2. s/stuido/studio/ 2.1 In file: 0001-support-building-with-visual-studio-2019_v9.4.patch @@ -143,12 +173,12 @@ sub DetermineVisualStudioVersion sub _GetVisualStudioVersion { \tmy ($major, $minor) = @_;-\t# visual 2017 hasn't changed the nmake version to 15, so still using the older version for comparison.+\t# The major visual stuido that is suppored has nmake version >= 14.20 and < 15. 2.2 In file: 0001-support-building-with-visual-studio-2019_v10_to_v9.5.patch@@ -132,12 +162,12 @@ sub DetermineVisualStudioVersion sub _GetVisualStudioVersion { \tmy ($major, $minor) = @_;-\t# visual 2017 hasn't changed the nmake version to 15, so still using the older version for comparison.+\t# The major visual stuido that is suppored has nmake version >= 14.20 and < 15. 
2.3 In file: 0001-support-building-with-visual-studio-2019_v11.patch@@ -139,12 +165,12 @@ sub _GetVisualStudioVersion { \tmy ($major, $minor) = @_; -\t# visual 2017 hasn't changed the nmake version to 15, so still using the older version for comparison.+\t# The major visual stuido that is suppored has nmake version >= 14.20 and < 15. 2.4 In file: 0001-Support-building-with-visual-studio-2019_HEAD.patch@@ -106,17 +132,17 @@ sub _GetVisualStudioVersion { \tmy ($major, $minor) = @_; -\t# visual 2017 hasn't changed the nmake version to 15, so still using the older version for comparison.+\t# The major visual stuido that is suppored has nmake version >= 14.20 and < 15.Other than that, since this is affects comments and docs, I have already tested that the patches build and pass tests on all the intended versions.Regards,Juan José Santamaría Flecha",
"msg_date": "Mon, 27 May 2019 12:14:02 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Mon, May 27, 2019 at 8:14 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> On Thu, May 23, 2019 at 3:44 AM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n>\n>>\n>> Updated patches are attached for all branches.\n>>\n>>\n> I have gone through all patches and there are a couple of typos:\n>\n\nThanks for the review.\n\n\n> 1. s/prodcutname/productname/\n>\n> 1.1 In file: 0001-support-building-with-visual-studio-2019_v9.4.patch\n> @@ -97,8 +97,9 @@\n> <productname>Visual Studio 2013</productname>. Building with\n> <productname>Visual Studio 2015</productname> is supported down to\n> <productname>Windows Vista</> and <productname>Windows Server 2008</>.\n> - Building with <productname>Visual Studio 2017</productname> is\n> supported\n> - down to <productname>Windows 7 SP1</> and <productname>Windows Server\n> 2008 R2 SP1</>.\n> + Building with <productname>Visual Studio 2017</productname> and\n> + <prodcutname>Visual Studio 2019</prodcutname> are supported down to\n>\n> 1.2 In file:\n> 0001-support-building-with-visual-studio-2019_v10_to_v9.5.patch\n> @@ -97,8 +97,9 @@\n> <productname>Visual Studio 2013</productname>. Building with\n> <productname>Visual Studio 2015</productname> is supported down to\n> <productname>Windows Vista</> and <productname>Windows Server 2008</>.\n> - Building with <productname>Visual Studio 2017</productname> is\n> supported\n> - down to <productname>Windows 7 SP1</> and <productname>Windows Server\n> 2008 R2 SP1</>.\n> + Building with <productname>Visual Studio 2017</productname> and\n> + <prodcutname>Visual Studio 2019</prodcutname> are supported down to\n>\n\nCorrected.\n\n\n> 2. 
s/stuido/studio/\n>\n> 2.1 In file: 0001-support-building-with-visual-studio-2019_v9.4.patch\n> @@ -143,12 +173,12 @@ sub DetermineVisualStudioVersion\n> sub _GetVisualStudioVersion\n> {\n> my ($major, $minor) = @_;\n> - # visual 2017 hasn't changed the nmake version to 15, so still using\n> the older version for comparison.\n> + # The major visual stuido that is suppored has nmake version >= 14.20\n> and < 15.\n>\n> 2.2 In file:\n> 0001-support-building-with-visual-studio-2019_v10_to_v9.5.patch\n> @@ -132,12 +162,12 @@ sub DetermineVisualStudioVersion\n> sub _GetVisualStudioVersion\n> {\n> my ($major, $minor) = @_;\n> - # visual 2017 hasn't changed the nmake version to 15, so still using\n> the older version for comparison.\n> + # The major visual stuido that is suppored has nmake version >= 14.20\n> and < 15.\n>\n> 2.3 In file: 0001-support-building-with-visual-studio-2019_v11.patch\n> @@ -139,12 +165,12 @@ sub _GetVisualStudioVersion\n> {\n> my ($major, $minor) = @_;\n>\n> - # visual 2017 hasn't changed the nmake version to 15, so still using\n> the older version for comparison.\n> + # The major visual stuido that is suppored has nmake version >= 14.20\n> and < 15.\n>\n> 2.4 In file: 0001-Support-building-with-visual-studio-2019_HEAD.patch\n> @@ -106,17 +132,17 @@ sub _GetVisualStudioVersion\n> {\n> my ($major, $minor) = @_;\n>\n> - # visual 2017 hasn't changed the nmake version to 15, so still using\n> the older version for comparison.\n> + # The major visual stuido that is suppored has nmake version >= 14.20\n> and < 15.\n>\n\nCorrected. And also 'supported' spelling is also wrong.\n\nUpdated patches are attached.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 29 May 2019 18:30:20 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, May 29, 2019 at 10:30 AM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> Updated patches are attached.\n>\n>\nAll patches apply, build and pass tests. The patch\n'0001-support-building-with-visual-studio-2019_v10_to_v9.6_v3.patch'\napplies on version 9.5.\n\nNot sure if more review is needed before moving to 'ready for commite'r,\nbut I have no more comments to make on current patches.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, May 29, 2019 at 10:30 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:Updated patches are attached.All patches apply, build and pass tests. The patch '0001-support-building-with-visual-studio-2019_v10_to_v9.6_v3.patch' applies on version 9.5.Not sure if more review is needed before moving to 'ready for commite'r, but I have no more comments to make on current patches.Regards,Juan José Santamaría Flecha",
"msg_date": "Wed, 5 Jun 2019 09:22:09 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, 5 Jun 2019 at 17:22, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> On Wed, May 29, 2019 at 10:30 AM Haribabu Kommi <kommi.haribabu@gmail.com>\n> wrote:\n>\n>>\n>> Updated patches are attached.\n>>\n>>\n> All patches apply, build and pass tests. The patch\n> '0001-support-building-with-visual-studio-2019_v10_to_v9.6_v3.patch'\n> applies on version 9.5.\n>\n> Not sure if more review is needed before moving to 'ready for commite'r,\n> but I have no more comments to make on current patches.\n>\n\nThanks for the review. Yes, that patch applies till 9.5, it is my mistake\nin naming the patch.\n\nRegards,\nHaribabu Kommi\n\nOn Wed, 5 Jun 2019 at 17:22, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:On Wed, May 29, 2019 at 10:30 AM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:Updated patches are attached.All patches apply, build and pass tests. The patch '0001-support-building-with-visual-studio-2019_v10_to_v9.6_v3.patch' applies on version 9.5.Not sure if more review is needed before moving to 'ready for commite'r, but I have no more comments to make on current patches.Thanks for the review. Yes, that patch applies till 9.5, it is my mistake in naming the patch.Regards,Haribabu Kommi",
"msg_date": "Wed, 26 Jun 2019 22:29:05 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, Jun 26, 2019 at 10:29:05PM +1000, Haribabu Kommi wrote:\n> Thanks for the review. Yes, that patch applies till 9.5, it is my mistake\n> in naming the patch.\n\nI have been able to finally set up an environment with VS 2019 (as\nusual this stuff needs time, anyway..), and I can confirm that the\npatch is able to compile properly.\n\n- <productname>Visual Studio 2017</productname> (including Express editions),\n- as well as standalone Windows SDK releases 6.0 to 8.1.\n+ <productname>Visual Studio 2019</productname> (including Express editions),\n+ as well as standalone Windows SDK releases 8.1a to 10.\nI would like to understand why this range of requirements is updated.\nIs there any reason to do so. If we change these docs, what does it\nmean in terms of versions of Visual Studio supported?\n\n-or a VS2015Solution or a VS2017Solution, all in Solution.pm, depending on\n-the user's build environment) and adding objects implementing the corresponding\n-Project interface (VC2013Project or VC2015Project or VC2017Project from\n-MSBuildProject.pm) to it.\n+or a VS2015Solution or a VS2017Solution or a VS2019Solution, all in Solution.pm,\n+depending on the user's build environment) and adding objects implementing\n+the corresponding Project interface (VC2013Project or VC2015Project or VC2017Project\n+or VC2019Project from MSBuildProject.pm) to it.\nThis formulation is weird the more we accumulate new objects, let's\nput that in a proper list of elements separated with commas except\nfor the two last ones which should use \"or\".\n\ns/greather/greater/.\n\nThe patch still has typos, and the format is not satisfying yet, so I\nhave done a set of fixes as per the attached.\n\n- elsif ($major < 6)\n+ elsif ($major < 12)\n {\n croak\n- \"Unable to determine Visual Studio version:\n Visual Studio versions before 6.0 aren't supported.\";\n+ \"Unable to determine Visual Studio version:\n\t Visual Studio versions before 12.0 aren't supported.\";\nWell, this 
is a separate bug fix, still I don't mind fixing that in\nthe same patch as we meddle with those code paths now. Good catch.\n\n- croak $visualStudioVersion;\n+ carp $visualStudioVersion;\nSame here. Just wouldn't it be better to print the version found in\nthe same message?\n\n+ # The major visual studio that is supported has nmake version >=\n 14.20 and < 15.\n if ($major > 14)\nComment line is too long. It seems to me that the condition here\nshould be ($major >= 14 && $minor >= 30). That's not completely\ncorrect either as we have a version higher than 14.20 for VS 2019 but\nthat's better than just using 14.29 or a fake number I guess.\n\nSo for now I have the attached which applies to HEAD. The patch is\nnot indented yet because the conditions in CreateProject() and\nCreateSolution() get messed up, but I'll figure out something.\n\nAny comments? I am wondering the update related to the version range\nof the standalone SDKs though. No need for backpatched versions yet,\nfirst let's agree about the shape of what we want on HEAD.\n--\nMichael",
"msg_date": "Thu, 27 Jun 2019 16:27:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Thu, 27 Jun 2019 at 17:28, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Jun 26, 2019 at 10:29:05PM +1000, Haribabu Kommi wrote:\n> > Thanks for the review. Yes, that patch applies till 9.5, it is my mistake\n> > in naming the patch.\n>\n> I have been able to finally set up an environment with VS 2019 (as\n> usual this stuff needs time, anyway..), and I can confirm that the\n> patch is able to compile properly.\n>\n\nThanks for the review.\n\n\n> - <productname>Visual Studio 2017</productname> (including Express\n> editions),\n> - as well as standalone Windows SDK releases 6.0 to 8.1.\n> + <productname>Visual Studio 2019</productname> (including Express\n> editions),\n> + as well as standalone Windows SDK releases 8.1a to 10.\n> I would like to understand why this range of requirements is updated.\n> Is there any reason to do so. If we change these docs, what does it\n> mean in terms of versions of Visual Studio supported?\n>\n\nWe stopped the support of building with all the visual studio versions less\nthan 2013.\nI updated the SDK versions accordingly.\n\n\n\n> -or a VS2015Solution or a VS2017Solution, all in Solution.pm, depending on\n> -the user's build environment) and adding objects implementing the\n> corresponding\n> -Project interface (VC2013Project or VC2015Project or VC2017Project from\n> -MSBuildProject.pm) to it.\n> +or a VS2015Solution or a VS2017Solution or a VS2019Solution, all in\n> Solution.pm,\n> +depending on the user's build environment) and adding objects implementing\n> +the corresponding Project interface (VC2013Project or VC2015Project or\n> VC2017Project\n> +or VC2019Project from MSBuildProject.pm) to it.\n> This formulation is weird the more we accumulate new objects, let's\n> put that in a proper list of elements separated with commas except\n> for the two last ones which should use \"or\".\n>\n> s/greather/greater/.\n>\n> The patch still has typos, and the format is not satisfying yet, so I\n> have done 
a set of fixes as per the attached.\n>\n\nThe change in the patch is good.\n\n\n>\n> - elsif ($major < 6)\n> + elsif ($major < 12)\n> {\n> croak\n> - \"Unable to determine Visual Studio version:\n> Visual Studio versions before 6.0 aren't supported.\";\n> + \"Unable to determine Visual Studio version:\n> Visual Studio versions before 12.0 aren't supported.\";\n> Well, this is a separate bug fix, still I don't mind fixing that in\n> the same patch as we meddle with those code paths now. Good catch.\n>\n> - croak $visualStudioVersion;\n> + carp $visualStudioVersion;\n> Same here. Just wouldn't it be better to print the version found in\n> the same message?\n>\n\nYes, that is a good change, I thought of doing the same, but I don't know\nhow to do it.\n\nThe similar change is required for the CreateProject also.\n\n\n\n> + # The major visual studio that is supported has nmake version >=\n> 14.20 and < 15.\n> if ($major > 14)\n> Comment line is too long. It seems to me that the condition here\n> should be ($major >= 14 && $minor >= 30). That's not completely\n> correct either as we have a version higher than 14.20 for VS 2019 but\n> that's better than just using 14.29 or a fake number I guess.\n>\n\nThe change is good, but the comment is wrong.\n\n+ # The major visual studio that is supported has nmake\n+ # version >= 14.30, so stick with it as the latest version\n\nThe major visual studio version that is supported has nmake version <=14.30\n\nExcept for the above two changes, overall the patch is in good shape.\n\nRegards,\nHaribabu Kommi\n\nOn Thu, 27 Jun 2019 at 17:28, Michael Paquier <michael@paquier.xyz> wrote:On Wed, Jun 26, 2019 at 10:29:05PM +1000, Haribabu Kommi wrote:\n> Thanks for the review. Yes, that patch applies till 9.5, it is my mistake\n> in naming the patch.\n\nI have been able to finally set up an environment with VS 2019 (as\nusual this stuff needs time, anyway..), and I can confirm that the\npatch is able to compile properly.Thanks for the review. 
\n- <productname>Visual Studio 2017</productname> (including Express editions),\n- as well as standalone Windows SDK releases 6.0 to 8.1.\n+ <productname>Visual Studio 2019</productname> (including Express editions),\n+ as well as standalone Windows SDK releases 8.1a to 10.\nI would like to understand why this range of requirements is updated.\nIs there any reason to do so. If we change these docs, what does it\nmean in terms of versions of Visual Studio supported?We stopped the support of building with all the visual studio versions less than 2013.I updated the SDK versions accordingly. \n-or a VS2015Solution or a VS2017Solution, all in Solution.pm, depending on\n-the user's build environment) and adding objects implementing the corresponding\n-Project interface (VC2013Project or VC2015Project or VC2017Project from\n-MSBuildProject.pm) to it.\n+or a VS2015Solution or a VS2017Solution or a VS2019Solution, all in Solution.pm,\n+depending on the user's build environment) and adding objects implementing\n+the corresponding Project interface (VC2013Project or VC2015Project or VC2017Project\n+or VC2019Project from MSBuildProject.pm) to it.\nThis formulation is weird the more we accumulate new objects, let's\nput that in a proper list of elements separated with commas except\nfor the two last ones which should use \"or\".\n\ns/greather/greater/.\n\nThe patch still has typos, and the format is not satisfying yet, so I\nhave done a set of fixes as per the attached.The change in the patch is good. \n\n- elsif ($major < 6)\n+ elsif ($major < 12)\n {\n croak\n- \"Unable to determine Visual Studio version:\n Visual Studio versions before 6.0 aren't supported.\";\n+ \"Unable to determine Visual Studio version:\n Visual Studio versions before 12.0 aren't supported.\";\nWell, this is a separate bug fix, still I don't mind fixing that in\nthe same patch as we meddle with those code paths now. Good catch.\n\n- croak $visualStudioVersion;\n+ carp $visualStudioVersion;\nSame here. 
Just wouldn't it be better to print the version found in\nthe same message?Yes, that is a good change, I thought of doing the same, but I don't knowhow to do it.The similar change is required for the CreateProject also. \n+ # The major visual studio that is supported has nmake version >=\n 14.20 and < 15.\n if ($major > 14)\nComment line is too long. It seems to me that the condition here\nshould be ($major >= 14 && $minor >= 30). That's not completely\ncorrect either as we have a version higher than 14.20 for VS 2019 but\nthat's better than just using 14.29 or a fake number I guess.The change is good, but the comment is wrong.+\t# The major visual studio that is supported has nmake+\t# version >= 14.30, so stick with it as the latest versionThe major visual studio version that is supported has nmake version <=14.30 Except for the above two changes, overall the patch is in good shape. Regards,Haribabu Kommi",
"msg_date": "Mon, 1 Jul 2019 19:56:29 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Mon, Jul 01, 2019 at 07:56:29PM +1000, Haribabu Kommi wrote:\n> We stopped the support of building with all the visual studio versions less\n> than 2013. I updated the SDK versions accordingly.\n\nI have spent some time looking around, and wikipedia-sensei has proved\nto be helpful to grasp the release references:\nhttps://en.wikipedia.org/wiki/Microsoft_Windows_SDK\n\nSo the suggestions from the patch are fine. This one was actually\nforgotten:\nsrc/tools/msvc/README:from www.microsoft.com (v6.0 or greater).\n\n> The similar change is required for the CreateProject also.\n\nI have changed both messages so as the version of VS attempted to be\nused is reported in the error message directly.\n\n> + # The major visual studio that is supported has nmake\n> + # version >= 14.30, so stick with it as the latest version\n> \n> The major visual studio version that is supported has nmake version\n> <=14.30\n\nDamn. Thanks for pointing out that.\n\n> Except for the above two changes, overall the patch is in good shape.\n\nOK, committed to HEAD for now after perltidy'ing the patch. Let's see\nwhat the buildfarm has to say about it first. Once we are sure that\nthe thing is stable, I'll try to backpatch it. This works on my own\ndev machines with VS 2015 and 2019, but who knows what hides in the\nshadows... \n--\nMichael",
"msg_date": "Tue, 2 Jul 2019 14:10:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Tue, Jul 02, 2019 at 02:10:11PM +0900, Michael Paquier wrote:\n> OK, committed to HEAD for now after perltidy'ing the patch. Let's see\n> what the buildfarm has to say about it first. Once we are sure that\n> the thing is stable, I'll try to backpatch it. This works on my own\n> dev machines with VS 2015 and 2019, but who knows what hides in the\n> shadows... \n\nThe buildfarm did not have much to say, so backpatched down to 9.4,\nadjusting things on the way.\n--\nMichael",
"msg_date": "Wed, 3 Jul 2019 09:01:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
},
{
"msg_contents": "On Wed, 3 Jul 2019 at 10:01, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jul 02, 2019 at 02:10:11PM +0900, Michael Paquier wrote:\n> > OK, committed to HEAD for now after perltidy'ing the patch. Let's see\n> > what the buildfarm has to say about it first. Once we are sure that\n> > the thing is stable, I'll try to backpatch it. This works on my own\n> > dev machines with VS 2015 and 2019, but who knows what hides in the\n> > shadows...\n>\n> The buildfarm did not have much to say, so backpatched down to 9.4,\n> adjusting things on the way.\n\n\nThanks Michael.\n\nRegards,\nHaribabu Kommi\n\nOn Wed, 3 Jul 2019 at 10:01, Michael Paquier <michael@paquier.xyz> wrote:On Tue, Jul 02, 2019 at 02:10:11PM +0900, Michael Paquier wrote:\n> OK, committed to HEAD for now after perltidy'ing the patch. Let's see\n> what the buildfarm has to say about it first. Once we are sure that\n> the thing is stable, I'll try to backpatch it. This works on my own\n> dev machines with VS 2015 and 2019, but who knows what hides in the\n> shadows... \n\nThe buildfarm did not have much to say, so backpatched down to 9.4,\nadjusting things on the way.Thanks Michael.Regards,Haribabu Kommi",
"msg_date": "Wed, 3 Jul 2019 18:51:59 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nPeter E added some nice tests for LDAP and Kerberos, but they assume\nyou have Homebrew when testing on a Mac. Here's a patch to make them\nwork with MacPorts too (a competing open source port/package\ndistribution that happens to be the one that I use). The third\n\"extra\" test is ssl, but that was already working.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 21 Mar 2019 13:55:00 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "MacPorts support for \"extra\" tests"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Peter E added some nice tests for LDAP and Kerberos, but they assume\n> you have Homebrew when testing on a Mac. Here's a patch to make them\n> work with MacPorts too (a competing open source port/package\n> distribution that happens to be the one that I use). The third\n> \"extra\" test is ssl, but that was already working.\n\n+1, but could we comment that a bit? I'm thinking of something like\n\n # typical library location for Homebrew\n\nin each of the if-branches.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 20 Mar 2019 23:39:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MacPorts support for \"extra\" tests"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 4:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Peter E added some nice tests for LDAP and Kerberos, but they assume\n> > you have Homebrew when testing on a Mac. Here's a patch to make them\n> > work with MacPorts too (a competing open source port/package\n> > distribution that happens to be the one that I use). The third\n> > \"extra\" test is ssl, but that was already working.\n>\n> +1, but could we comment that a bit? I'm thinking of something like\n>\n> # typical library location for Homebrew\n>\n> in each of the if-branches.\n\nPushed.\n\nI tried half-heartedly to understand why Apple's /usr/libexec/slapd\ndoesn't work for our tests. I noticed that is does actually start up\nand run in the foreground if you use -d 255 (= debug level), and\nprints similar debug output to upstream slapd, but it complains about\nTLS stuff that AFAIK it should be happy with. Perhaps it wants to do\nTLS stuff via the Apple keychain technology? Without the debug switch\nit doesn't launch at all (making -d a bit of a heisendebug level if\nyou ask me), while upstream slapd double-forks a daemon, and that's\nthe first thing stopping our test from working (though of course the\nTLS stuff would be the next problem). One magical thing about it is\nthat it's one of those signed executables that won't let you dtruss it\nunless you disable SIP. I wonder if that's relevant. Anyway, I frown\nin the general direction of California, and hereby give up.\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n",
"msg_date": "Tue, 26 Mar 2019 11:45:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MacPorts support for \"extra\" tests"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nA very common question among new users is how wal_level works and it\nlevels. I heard about some situations like that, a user create a new\npublication in its master database and he/she simply does not change\nwal_level to logical, sometimes, this person lost maintenance window, or a\nchance to restart postgres service, usually a production database, and it\nwill discover that wal_level is not right just in subscription creation.\nAttempting to iterate between new (and even experienced) users with logical\nreplication, I am sending a patch that when an PUBLICATION is created and\nthe wal_level is different from logical prints a WARNING in console/log:\n\n-> WARNING: `PUBLICATION` created but wal_level `is` not set to logical,\nyou need to change it before creating any SUBSCRIPTION\n\nInitiatives like this can make a good user experience with PostgreSQL and\nits own logical replication.\n\nThanks\n\n--\n\n*Lucas Viecelli*\n\n<http://www.leosoft.com.br/coopcred>",
"msg_date": "Thu, 21 Mar 2019 19:45:59 -0300",
"msg_from": "Lucas Viecelli <lviecelli199@gmail.com>",
"msg_from_op": true,
"msg_subject": "warning to publication created and wal_level is not set to logical"
},
{
"msg_contents": "On Thu, Mar 21, 2019 at 07:45:59PM -0300, Lucas Viecelli wrote:\n> Hi everyone,\n> \n> A very common question among new users is how wal_level works and it\n> levels. I heard about some situations like that, a user create a new\n> publication in its master database and he/she simply does not change\n> wal_level to logical, sometimes, this person lost maintenance\n> window, or a chance to restart postgres service, usually a\n> production database, and it will discover that wal_level is not\n> right just in subscription creation. Attempting to iterate between\n> new (and even experienced) users with logical replication, I am\n> sending a patch that when an PUBLICATION is created and the\n> wal_level is different from logical prints a WARNING in console/log:\n\nIs a WARNING sufficient? Maybe I'm misunderstanding something\nimportant, but I think the attempt should fail with a HINT to set the\nwal_level ahead of time.\n\nPossibly in a separate patch, setting the wal_level to anything lower\nthan logical when publications exist should also fail.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sun, 24 Mar 2019 18:54:48 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Thu, Mar 21, 2019 at 07:45:59PM -0300, Lucas Viecelli wrote:\n>> I am sending a patch that when an PUBLICATION is created and the\n>> wal_level is different from logical prints a WARNING in console/log:\n\n> Is a WARNING sufficient? Maybe I'm misunderstanding something\n> important, but I think the attempt should fail with a HINT to set the\n> wal_level ahead of time.\n\nThat would be a booby-trap for dump/restore and pg_upgrade, so I don't\nthink making CREATE PUBLICATION fail outright would be wise.\n\n> Possibly in a separate patch, setting the wal_level to anything lower\n> than logical when publications exist should also fail.\n\nI do not believe this is practical either. GUC manipulation cannot\nlook at the catalogs.\n\nI agree that it'd be nice to be noisier about the problem, but I'm\nnot sure we can do more than bleat in the postmaster log from time\nto time if a publication is active and wal_level is too low.\n(And we'd better be careful about the log-spam aspect of that...)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Sun, 24 Mar 2019 14:06:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": ">\n> > Is a WARNING sufficient? Maybe I'm misunderstanding something\n>\n> important, but I think the attempt should fail with a HINT to set the\n> > wal_level ahead of time.\n>\n\nI thought about this possibility, but I was afraid to change the current\nbehavior a lot, but it's worth discussing.\n\n\n>\n>\nI agree that it'd be nice to be noisier about the problem, but I'm\n> not sure we can do more than bleat in the postmaster log from time\n> to time if a publication is active and wal_level is too low.\n> (And we'd better be careful about the log-spam aspect of that...)\n>\n\nI agree on being noisier, but I think the main thing is to let the user\naware of the situation and in that the\npatch resolves, stating that he needs to adjust wal_level. Initially\nWARNING will appear only at the time\nthe publication is created, precisely not to put spam in the log.\n\nIs it better to warn from time to time that wal_level needs to change\nbecause it has some publication that will not work?\n-- \n\n*Lucas Viecelli*\n\n\n<http://www.leosoft.com.br/coopcred>\n\n> Is a WARNING sufficient? Maybe I'm misunderstanding something\n> important, but I think the attempt should fail with a HINT to set the\n> wal_level ahead of time.I thought about this possibility, but I was afraid to change the current behavior a lot, but it's worth discussing. \nI agree that it'd be nice to be noisier about the problem, but I'm\nnot sure we can do more than bleat in the postmaster log from time\nto time if a publication is active and wal_level is too low.\n(And we'd better be careful about the log-spam aspect of that...)I agree on being noisier, but I think the main thing is to let the user aware of the situation and in that the patch resolves, stating that he needs to adjust wal_level. 
Initially WARNING will appear only at the time the publication is created, precisely not to put spam in the log.Is it better to warn from time to time that wal_level needs to change because it has some publication that will not work?-- Lucas Viecelli",
"msg_date": "Mon, 25 Mar 2019 10:20:54 -0300",
"msg_from": "Lucas Viecelli <lviecelli199@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 02:06:59PM -0400, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > On Thu, Mar 21, 2019 at 07:45:59PM -0300, Lucas Viecelli wrote:\n> >> I am sending a patch that when an PUBLICATION is created and the\n> >> wal_level is different from logical prints a WARNING in console/log:\n> \n> > Is a WARNING sufficient? Maybe I'm misunderstanding something\n> > important, but I think the attempt should fail with a HINT to set the\n> > wal_level ahead of time.\n> \n> That would be a booby-trap for dump/restore and pg_upgrade, so I don't\n> think making CREATE PUBLICATION fail outright would be wise.\n\nI haven't yet come up with a situation where it would be appropriate\nboth for wal_level to be below logical and for a PUBLICATION to exist,\neven as some intermediate state during pg_restore.\n\n> > Possibly in a separate patch, setting the wal_level to anything lower\n> > than logical when publications exist should also fail.\n> \n> I do not believe this is practical either. GUC manipulation cannot\n> look at the catalogs.\n\nIn this case, it really has to do something. Is setting GUCs a path so\ncritical it can't take one branch?\n\n> I agree that it'd be nice to be noisier about the problem, but I'm\n> not sure we can do more than bleat in the postmaster log from time\n> to time if a publication is active and wal_level is too low.\n> (And we'd better be careful about the log-spam aspect of that...)\n\nWith utmost respect, we have a lot more responsibility to the users of\nthis feature than this might imply. 
If there are circumstances where\nthere should be both a PUBLICATION and a wal_level less than logical,\nby all means, let's document them very clearly in all the relevant\nplaces.\n\nIf, as I strongly suspect, no such circumstance exists, it should not\nbe possible for someone to have both of those at once, however\ninconvenient it is for us to arrange it.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 25 Mar 2019 15:15:33 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 10:15 AM David Fetter <david@fetter.org> wrote:\n> > I do not believe this is practical either. GUC manipulation cannot\n> > look at the catalogs.\n>\n> In this case, it really has to do something. Is setting GUCs a path so\n> critical it can't take one branch?\n\nNo, but that has about zero to do with the actual problem that Tom is\ndescribing.\n\n> If, as I strongly suspect, no such circumstance exists, it should not\n> be possible for someone to have both of those at once, however\n> inconvenient it is for us to arrange it.\n\nUh, Tom already told you how it can happen. You just take a pg_dump\nof an existing database, run initdb to create a new cluster, and then\ntry to restore the dump on the new cluster. That shouldn't fail just\nbecause wal_level = 'logical' isn't configured yet. If it did, that\nwould be creating a huge booby-trap for users that doesn't exist\ntoday. You can't just dismiss that as nothing. I think users have\nevery right to expect that a dump and restore is going to work without\npreconfiguring things like wal_level -- it's bad enough that you\nalready have to struggle with things like encoding to get dumps to\nrestore properly. Adding more ways for dump restoration to fail is a\nreally bad idea.\n\nBesides that, it is obviously impractical to stop somebody from\nshutting down the server, changing wal_level, and then restarting the\nserver. Nor can you make all publications magically go away if\nsomeone does that. Nor would it be a good idea if we could do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 13:39:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Mar 25, 2019 at 10:15 AM David Fetter <david@fetter.org> wrote:\n>>> I do not believe this is practical either. GUC manipulation cannot\n>>> look at the catalogs.\n\n>> In this case, it really has to do something. Is setting GUCs a path so\n>> critical it can't take one branch?\n\n> No, but that has about zero to do with the actual problem that Tom is\n> describing.\n\nTo clarify, the problems with that are\n\n(1) Initial GUC settings are absorbed by the postmaster, which cannot\nexamine catalogs *at all*. It is neither connected to any database\nnor allowed to participate in transactions. These are not things that\nwill change.\n\n(2) wal_level is a global setting, but the catalogs we'd have to look\nat to discover the existence of a publication are per-database. Thus\nfor example there is no reliable way for \"ALTER SYSTEM SET wal_level\"\nto detect whether publications exist in other databases of the cluster.\n(To say nothing of race conditions against concurrent publication\ncreation commands.)\n\nAdding the dump/restore issue on top of that, it seems clear to me that\nwe can't usefully prevent a conflicting setting of wal_level from being\nestablished. The best we can do is whine about it later.\n\nOne idea that might be useful is to have walsenders refuse to transmit\nany logical-replication data if they see wal_level is too low. That\nwould get users' attention pretty quickly.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Mar 2019 13:53:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-25 13:53:32 -0400, Tom Lane wrote:\n> One idea that might be useful is to have walsenders refuse to transmit\n> any logical-replication data if they see wal_level is too low. That\n> would get users' attention pretty quickly.\n\nThey do:\n\n\n/*\n * Load previously initiated logical slot and prepare for sending data (via\n * WalSndLoop).\n */\nstatic void\nStartLogicalReplication(StartReplicationCmd *cmd)\n{\n\tStringInfoData buf;\n\n\t/* make sure that our requirements are still fulfilled */\n\tCheckLogicalDecodingRequirements();\n\nand CheckLogicalDecodingReqs contains:\n\n\tif (wal_level < WAL_LEVEL_LOGICAL)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n\t\t\t\t errmsg(\"logical decoding requires wal_level >= logical\")));\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 11:06:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-25 13:53:32 -0400, Tom Lane wrote:\n>> One idea that might be useful is to have walsenders refuse to transmit\n>> any logical-replication data if they see wal_level is too low. That\n>> would get users' attention pretty quickly.\n\n> They do:\n\nOh, OK, then this seems like it's basically covered already. I think\nthe original suggestion to add a WARNING during CREATE PUBLICATION\nisn't unreasonable. But we don't need to do more than that (and it\nshouldn't be higher than WARNING).\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Mar 2019 14:19:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": ">> One idea that might be useful is to have walsenders refuse to transmit\n> >> any logical-replication data if they see wal_level is too low. That\n> >> would get users' attention pretty quickly.\n>\n> > They do:\n>\n\nI checked this before creating the patch\n\n\n>\n> Oh, OK, then this seems like it's basically covered already. I think\n> the original suggestion to add a WARNING during CREATE PUBLICATION\n> isn't unreasonable. But we don't need to do more than that (and it\n> shouldn't be higher than WARNING).\n>\n\nOkay, I think it will improve understanding of new users.\n\nSince everything is fine, thank you all for the comments\n-- \n\nAtenciosamente.\n\n*Lucas Viecelli*\n\n<http://www.leosoft.com.br/coopcred>\n\n>> One idea that might be useful is to have walsenders refuse to transmit\n>> any logical-replication data if they see wal_level is too low. That\n>> would get users' attention pretty quickly.\n\n> They do:I checked this before creating the patch \n\nOh, OK, then this seems like it's basically covered already. I think\nthe original suggestion to add a WARNING during CREATE PUBLICATION\nisn't unreasonable. But we don't need to do more than that (and it\nshouldn't be higher than WARNING).\nOkay, I think it will improve understanding of new users.Since everything is fine, thank you all for the comments-- Atenciosamente.Lucas Viecelli",
"msg_date": "Tue, 26 Mar 2019 09:36:10 -0300",
"msg_from": "Lucas Viecelli <lviecelli199@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 1:36 AM Lucas Viecelli <lviecelli199@gmail.com> wrote:\n>> Oh, OK, then this seems like it's basically covered already. I think\n>> the original suggestion to add a WARNING during CREATE PUBLICATION\n>> isn't unreasonable. But we don't need to do more than that (and it\n>> shouldn't be higher than WARNING).\n>\n> Okay, I think it will improve understanding of new users.\n>\n> Since everything is fine, thank you all for the comments\n\nHi Lucas,\n\nThe July Commitfest has started. This patch is in \"Needs review\"\nstatus, but it doesn't apply. If I read the above discussion\ncorrectly, it seems there is agreement that a warning here is a good\nidea to commit this patch. Could you please post a rebased patch?\n\nA note on the message:\n\nWARNING: `PUBLICATION` created but wal_level `is` not set to logical,\nyou need to change it before creating any SUBSCRIPTION\n\nI wonder if it would be more typical project style to put the clue on\nwhat to do into an \"errhint\" message, something like this:\n\nWARNING: insufficient wal_level to publish logical changes\nHINT: Set wal_level to logical before creating subscriptions.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:04:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Hi Thomas.\n\nAttached is the rebased\n\n\n> The July Commitfest has started. This patch is in \"Needs review\"\n> status, but it doesn't apply. If I read the above discussion\n> correctly, it seems there is agreement that a warning here is a good\n> idea to commit this patch. Could you please post a rebased patch?\n>\n>\nI followed your suggestion and changed the message and added HINT. I hope\neverything is agreed now.\n\n\n> I wonder if it would be more typical project style to put the clue on\n> what to do into an \"errhint\" message, something like this:\n>\n> WARNING: insufficient wal_level to publish logical changes\n> HINT: Set wal_level to logical before creating subscriptions.\n>\n\n-- \n\n*Lucas Viecelli*\n\n\n<http://www.leosoft.com.br/coopcred>",
"msg_date": "Tue, 9 Jul 2019 00:56:51 -0300",
"msg_from": "Lucas Viecelli <lviecelli199@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Follow the correct file, I added the wrong patch in the previous email\n\n\n> Attached is the rebased\n>\n\n\nthanks a lot\n\n*Lucas Viecelli*",
"msg_date": "Tue, 9 Jul 2019 02:40:42 -0300",
"msg_from": "Lucas Viecelli <lviecelli199@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 5:40 PM Lucas Viecelli <lviecelli199@gmail.com> wrote:\n> Follow the correct file, I added the wrong patch in the previous email\n\nNew status: Ready for Committer. If nobody wants to bikeshed the\nwording or other details, I will commit this tomorrow.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jul 2019 11:43:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> New status: Ready for Committer. If nobody wants to bikeshed the\n> wording or other details, I will commit this tomorrow.\n\nHm, so:\n\n1.\n\n+\t\t\terrmsg(\"insufficient wal_level to publish logical changes\"),\n\nMight read better as \"wal_level is insufficient to publish logical changes\"?\n\n2.\n\n+\t\t\terrhint(\"Set wal_level to logical before creating subscriptions\")));\n\nThis definitely is not per style guidelines, needs a trailing period.\n\n3. AFAICS, the proposed test case changes will cause the core regression\ntests to fail if wal_level is not replica. This is not true today ---\nthey pass regardless of wal_level --- and I object in the strongest terms\nto making it otherwise.\n\nI'm not really convinced that we need regression tests for this change at\nall, but if we do, put them in one of the TAP replication test suites,\nwhich already depend on wal_level being set to something in particular.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Jul 2019 20:47:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 12:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 1.\n>\n> + errmsg(\"insufficient wal_level to publish logical changes\"),\n>\n> Might read better as \"wal_level is insufficient to publish logical changes\"?\n>\n> 2.\n>\n> + errhint(\"Set wal_level to logical before creating subscriptions\")));\n>\n> This definitely is not per style guidelines, needs a trailing period.\n\nAgreed, fixed. Also run through pgindent.\n\n> 3. AFAICS, the proposed test case changes will cause the core regression\n> tests to fail if wal_level is not replica. This is not true today ---\n> they pass regardless of wal_level --- and I object in the strongest terms\n> to making it otherwise.\n>\n> I'm not really convinced that we need regression tests for this change at\n> all, but if we do, put them in one of the TAP replication test suites,\n> which already depend on wal_level being set to something in particular.\n\nI agree that it's not really worth having tests for this, and I take\nyour point about the dependency on wal_level that we don't currently\nhave. The problem is that the core tests include publications\nalready, and it doesn't seem like a great idea to move the whole lot\nto a TAP test. Creating alternative expected files seems like a bad\nidea too (annoying to maintain, wouldn't compose well with the next\nthing like this). So... how about we just suppress WARNINGs for\nCREATE PUBLICATION commands that are expected to succeed? Like in the\nattached. This version passes installcheck with any wal_level.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 11 Jul 2019 17:14:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "> Agreed, fixed. Also run through pgindent\n>\n\nThank you for the adjustments.\n\n\n> I agree that it's not really worth having tests for this, and I take\n> your point about the dependency on wal_level that we don't currently\n> have. The problem is that the core tests include publications\n> already, and it doesn't seem like a great idea to move the whole lot\n> to a TAP test. Creating alternative expected files seems like a bad\n> idea too (annoying to maintain, wouldn't compose well with the next\n> thing like this). So... how about we just suppress WARNINGs for\n> CREATE PUBLICATION commands that are expected to succeed? Like in the\n> attached. This version passes installcheck with any wal_level.\n>\nAll right, for me. If wal_level can not interfere with the testes result,\nit seems to a better approach\n\n*Lucas Viecelli*\n\n\n<http://www.leosoft.com.br/coopcred>\n\nAgreed, fixed. Also run through pgindentThank you for the adjustments. I agree that it's not really worth having tests for this, and I take\nyour point about the dependency on wal_level that we don't currently\nhave. The problem is that the core tests include publications\nalready, and it doesn't seem like a great idea to move the whole lot\nto a TAP test. Creating alternative expected files seems like a bad\nidea too (annoying to maintain, wouldn't compose well with the next\nthing like this). So... how about we just suppress WARNINGs for\nCREATE PUBLICATION commands that are expected to succeed? Like in the\nattached. This version passes installcheck with any wal_level.All right, for me. If wal_level can not interfere with the testes result, it seems to a better approachLucas Viecelli",
"msg_date": "Fri, 12 Jul 2019 12:21:33 -0300",
"msg_from": "Lucas Viecelli <lviecelli199@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 10, 2019 at 12:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 3. AFAICS, the proposed test case changes will cause the core regression\n>> tests to fail if wal_level is not replica. This is not true today ---\n>> they pass regardless of wal_level --- and I object in the strongest terms\n>> to making it otherwise.\n\n> ... how about we just suppress WARNINGs for\n> CREATE PUBLICATION commands that are expected to succeed? Like in the\n> attached. This version passes installcheck with any wal_level.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jul 2019 11:33:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
},
{
"msg_contents": "On Sat, Jul 13, 2019 at 3:21 AM Lucas Viecelli <lviecelli199@gmail.com> wrote:\n>> Agreed, fixed. Also run through pgindent\n>\n> Thank you for the adjustments.\n\n> All right, for me. If wal_level can not interfere with the testes result, it seems to a better approach\n\nPushed. Thanks for the patch!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Jul 2019 13:10:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: warning to publication created and wal_level is not set to\n logical"
}
] |
[
{
"msg_contents": "Lately, PostgreSQL has moved many defaults from \"bare minimum\" more to\nthe \"user friendly by default\" side, e.g. hot_standby & replication in\nthe default configuration, parallelism, and generally higher defaults\nfor resource knobs like *_mem, autovacuum_* and so on.\n\nI think, the next step in that direction would be to enable data\nchecksums by default. They make sense in most setups, and people who\nplan to run very performance-critical systems where checksums might be\ntoo much need to tune many knobs anyway, and can as well choose to\ndisable them manually, instead of having everyone else have to enable\nthem manually. Also, disabling is much easier than enabling.\n\nOne argument against checksums used to be that we lack tools to fix\nproblems with them. But ignore_checksum_failure and the pg_checksums\ntool fix that.\n\nThe attached patch flips the default in initdb. It also adds a new\noption -k --no-data-checksums that wasn't present previously. Docs are\nupdated to say what the new default is, and the testsuite exercises\nthe -K option.\n\nChristoph",
"msg_date": "Fri, 22 Mar 2019 16:16:54 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Enable data checksums by default"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I think, the next step in that direction would be to enable data\n> checksums by default. They make sense in most setups,\n\nWell, that is exactly the point that needs some proof, not just\nan unfounded assertion.\n\nIMO, the main value of checksums is that they allow the Postgres\nproject to deflect blame. That's nice for us but I'm not sure\nthat it's a benefit for users. I've seen little if any data to\nsuggest that checksums actually catch enough problems to justify\nthe extra CPU costs and the risk of false positives.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Mar 2019 12:07:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-22 12:07:22 -0400, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n> > I think, the next step in that direction would be to enable data\n> > checksums by default. They make sense in most setups,\n> \n> Well, that is exactly the point that needs some proof, not just\n> an unfounded assertion.\n> \n> IMO, the main value of checksums is that they allow the Postgres\n> project to deflect blame. That's nice for us but I'm not sure\n> that it's a benefit for users. I've seen little if any data to\n> suggest that checksums actually catch enough problems to justify\n> the extra CPU costs and the risk of false positives.\n\nIDK, being able to verify in some form that backups aren't corrupted on\nan IO level is mighty nice. That often does allow to detect the issue\nwhile one still has older backups around.\n\nMy problem is more that I'm not confident the checks are mature\nenough. The basebackup checks are atm not able to detect random data,\nand neither basebackup nor backend checks detect zeroed out files/file\nranges.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Mar 2019 09:10:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "\n\nOn 3/22/19 5:10 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-03-22 12:07:22 -0400, Tom Lane wrote:\n>> Christoph Berg <myon@debian.org> writes:\n>>> I think, the next step in that direction would be to enable data\n>>> checksums by default. They make sense in most setups,\n>>\n>> Well, that is exactly the point that needs some proof, not just\n>> an unfounded assertion.\n>>\n>> IMO, the main value of checksums is that they allow the Postgres\n>> project to deflect blame. That's nice for us but I'm not sure\n>> that it's a benefit for users. I've seen little if any data to\n>> suggest that checksums actually catch enough problems to justify\n>> the extra CPU costs and the risk of false positives.\n> \n\nI'm not sure about checksums being an effective tool to deflect blame.\nConsidering the recent fsync retry issues - due to the assumption that\nwe can just retry fsync we might have lost some of the writes, resulting\nin torn pages and checksum failures. I'm sure we could argue about how\nmuch sense the fsync behavior makes, but I doubt checksum failures are\nenough to deflect blame here.\n\n> IDK, being able to verify in some form that backups aren't corrupted on\n> an IO level is mighty nice. That often does allow to detect the issue\n> while one still has older backups around.\n> \n\nYeah, I agree that's a valuable capability. I think the question is how\neffective it actually is considering how much the storage changed over\nthe past few years (which necessarily affects the type of failures\npeople have to deal with).\n\n> My problem is more that I'm not confident the checks are mature\n> enough. 
The basebackup checks are atm not able to detect random data,\n> and neither basebackup nor backend checks detect zeroed out files/file\n> ranges.\n> \n\nYep :-( The pg_basebackup vulnerability to random garbage in a page\nheader is unfortunate, we better improve that.\n\nIt's not clear to me what can checksums do about zeroed pages (and/or\ntruncated files) though.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Mar 2019 17:32:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-22 17:32:10 +0100, Tomas Vondra wrote:\n> On 3/22/19 5:10 PM, Andres Freund wrote:\n> > IDK, being able to verify in some form that backups aren't corrupted on\n> > an IO level is mighty nice. That often does allow to detect the issue\n> > while one still has older backups around.\n> > \n> \n> Yeah, I agree that's a valuable capability. I think the question is how\n> effective it actually is considering how much the storage changed over\n> the past few years (which necessarily affects the type of failures\n> people have to deal with).\n\nI'm not sure I understand? How do the changes around storage\nmeaningfully affect the need to have some trust in backups and\nbenefiting from earlier detection?\n\n\n> It's not clear to me what can checksums do about zeroed pages (and/or\n> truncated files) though.\n\nWell, there's nothing fundamental about needing added pages be\nzeroes. We could expand them to be initialized with actual valid\nchecksums instead of\n\t\t/* new buffers are zero-filled */\n\t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n\t\t/* don't set checksum for all-zero page */\n\t\tsmgrextend(smgr, forkNum, blockNum, (char *) bufBlock, false);\n\nthe problem is that it's hard to do so safely without adding a lot of\nadditional WAL logging. A lot of filesystems will journal metadata\nchanges (like the size of the file), but not contents. So after a crash\nthe tail end might appear zeroed out, even if we never wrote\nzeroes. That's obviously solvable by WAL logging, but that's not cheap.\n\nIt might still be a good idea to just write a page with an initialized\nheader / checksum at that point, as that ought to still detect a number\nof problems we can't detect right now.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Mar 2019 09:41:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On 3/22/19 5:41 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-03-22 17:32:10 +0100, Tomas Vondra wrote:\n>> On 3/22/19 5:10 PM, Andres Freund wrote:\n>>> IDK, being able to verify in some form that backups aren't corrupted on\n>>> an IO level is mighty nice. That often does allow to detect the issue\n>>> while one still has older backups around.\n>>>\n>>\n>> Yeah, I agree that's a valuable capability. I think the question is how\n>> effective it actually is considering how much the storage changed over\n>> the past few years (which necessarily affects the type of failures\n>> people have to deal with).\n> \n> I'm not sure I understand? How do the changes around storage\n> meaningfully affect the need to have some trust in backups and\n> benefiting from earlier detection?\n> \n\nHaving trusted in backups is still desirable - nothing changes that,\nobviously. The question I was posing was rather \"Are checksums still\neffective on current storage systems?\"\n\nI'm wondering if the storage systems people use nowadays may be failing\nin ways that are not reliably detectable by checksums. I don't have any\ndata to either support or reject that hypothesis, though.\n\n> \n>> It's not clear to me what can checksums do about zeroed pages (and/or\n>> truncated files) though.\n> \n> Well, there's nothing fundamental about needing added pages be\n> zeroes. We could expand them to be initialized with actual valid\n> checksums instead of\n> \t\t/* new buffers are zero-filled */\n> \t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n> \t\t/* don't set checksum for all-zero page */\n> \t\tsmgrextend(smgr, forkNum, blockNum, (char *) bufBlock, false);\n> \n> the problem is that it's hard to do so safely without adding a lot of\n> additional WAL logging. A lot of filesystems will journal metadata\n> changes (like the size of the file), but not contents. So after a crash\n> the tail end might appear zeroed out, even if we never wrote\n> zeroes. 
That's obviously solvable by WAL logging, but that's not cheap.\n> \n\nHmmm. I'd say a filesystem that does not guarantee having all the data\nafter an fsync is outright broken, but maybe that's what checksums are\nmeant to protect against.\n\n> It might still be a good idea to just write a page with an initialized\n> header / checksum at that point, as that ought to still detect a number\n> of problems we can't detect right now.\n> \n\nSounds reasonable.\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Fri, 22 Mar 2019 18:01:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On 2019-03-22 18:01:32 +0100, Tomas Vondra wrote:\n> On 3/22/19 5:41 PM, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2019-03-22 17:32:10 +0100, Tomas Vondra wrote:\n> >> On 3/22/19 5:10 PM, Andres Freund wrote:\n> >>> IDK, being able to verify in some form that backups aren't corrupted on\n> >>> an IO level is mighty nice. That often does allow to detect the issue\n> >>> while one still has older backups around.\n> >>>\n> >>\n> >> Yeah, I agree that's a valuable capability. I think the question is how\n> >> effective it actually is considering how much the storage changed over\n> >> the past few years (which necessarily affects the type of failures\n> >> people have to deal with).\n> > \n> > I'm not sure I understand? How do the changes around storage\n> > meaningfully affect the need to have some trust in backups and\n> > benefiting from earlier detection?\n> > \n> \n> Having trusted in backups is still desirable - nothing changes that,\n> obviously. The question I was posing was rather \"Are checksums still\n> effective on current storage systems?\"\n> \n> I'm wondering if the storage systems people use nowadays may be failing\n> in ways that are not reliably detectable by checksums. I don't have any\n> data to either support or reject that hypothesis, though.\n\nI don't think it's useful to paint unsubstantiated doom-and-gloom\npictures.\n\n\n> >> It's not clear to me what can checksums do about zeroed pages (and/or\n> >> truncated files) though.\n> > \n> > Well, there's nothing fundamental about needing added pages be\n> > zeroes. We could expand them to be initialized with actual valid\n> > checksums instead of\n> > \t\t/* new buffers are zero-filled */\n> > \t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n> > \t\t/* don't set checksum for all-zero page */\n> > \t\tsmgrextend(smgr, forkNum, blockNum, (char *) bufBlock, false);\n> > \n> > the problem is that it's hard to do so safely without adding a lot of\n> > additional WAL logging. 
A lot of filesystems will journal metadata\n> > changes (like the size of the file), but not contents. So after a crash\n> > the tail end might appear zeroed out, even if we never wrote\n> > zeroes. That's obviously solvable by WAL logging, but that's not cheap.\n> > \n> \n> Hmmm. I'd say a filesystem that does not guarantee having all the data\n> after an fsync is outright broken, but maybe that's what checksums are\n> meant to protect against.\n\nThere's no fsync here. smgrextend(with-valid-checksum);crash; - the OS\nwill probably have journalled the file size change, but not the\ncontents. After a crash it's thus likely that the data page will appear\nzeroed. Which prevents us from erroring out when encountering a zeroed\npage, even though that'd be very good for error detection capabilities,\nbecause storage systems will show corrupted data as zeroes in a number\nof cases.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Mar 2019 10:07:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Re: Tom Lane 2019-03-22 <4368.1553270842@sss.pgh.pa.us>\n> Christoph Berg <myon@debian.org> writes:\n> > I think, the next step in that direction would be to enable data\n> > checksums by default. They make sense in most setups,\n> \n> Well, that is exactly the point that needs some proof, not just\n> an unfounded assertion.\n\nI run a benchmark with checksums disabled/enabled. shared_buffers is\n512kB to make sure almost any read will fetch the page from the OS\ncache; scale factor is 50 (~750MB) to make sure the whole cluster fits\ninto RAM.\n\nmodel name: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz (8 threads)\nalter system set shared_buffers = '512kB';\npgbench -s 50 -i\npgbench -P 5 -M prepared -c 8 -j 8 -T 60 --select-only\n\nwithout checksums:\ntps = 96893.627255 (including connections establishing)\ntps = 97570.587793 (including connections establishing)\ntps = 97455.484419 (including connections establishing)\ntps = 97533.668801 (including connections establishing)\naverage: 97363\n\nwith checksums:\ntps = 91942.502487 (including connections establishing)\ntps = 92390.556925 (including connections establishing)\ntps = 92956.923271 (including connections establishing)\ntps = 92914.205047 (including connections establishing)\naverage: 92551\n\nselect 92551.0/97363;\n0.9506\n\nSo the cost is 5% in this very contrived case. In almost any other\nsetting, the cost would be lower, I'd think.\n\nChristoph\n\n",
"msg_date": "Tue, 26 Mar 2019 16:14:46 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 9:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> IMO, the main value of checksums is that they allow the Postgres\n> project to deflect blame. That's nice for us but I'm not sure\n> that it's a benefit for users. I've seen little if any data to\n> suggest that checksums actually catch enough problems to justify\n> the extra CPU costs and the risk of false positives.\n\nI share your concern.\n\nSome users have a peculiar kind of cognitive dissonance around\ncorruption, at least in my experience. It's very difficult for them to\nmake a choice on whether or not to fail hard. Perhaps that needs to be\ntaken into account, without being indulged.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 26 Mar 2019 12:17:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Re: To Tom Lane 2019-03-26 <20190326151446.GG3829@msg.df7cb.de>\n> I run a benchmark with checksums disabled/enabled. shared_buffers is\n> 512kB to make sure almost any read will fetch the page from the OS\n> cache; scale factor is 50 (~750MB) to make sure the whole cluster fits\n> into RAM.\n[...]\n> So the cost is 5% in this very contrived case. In almost any other\n> setting, the cost would be lower, I'd think.\n\n(That was on 12devel, btw.)\n\nThat was about the most extreme OLTP read-only workload. After\nthinking about it some more, I realized that exercising large seqscans\nmight be an even better way to test it because of less per-query\noverhead.\n\nSame setup again, shared_buffers = 16 (128kB), jit = off,\nmax_parallel_workers_per_gather = 0:\n\nselect count(bid) from pgbench_accounts;\n\nno checksums: ~456ms\nwith checksums: ~489ms\n\n456.0/489 = 0.9325\n\nThe cost of checksums is about 6.75% here.\n\nChristoph\n\n\n",
"msg_date": "Wed, 27 Mar 2019 14:56:58 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Wed, Mar 27, 2019, 15:57 Christoph Berg <myon@debian.org> wrote:\n\n> Re: To Tom Lane 2019-03-26 <20190326151446.GG3829@msg.df7cb.de>\n> > I run a benchmark with checksums disabled/enabled. shared_buffers is\n> > 512kB to make sure almost any read will fetch the page from the OS\n> > cache; scale factor is 50 (~750MB) to make sure the whole cluster fits\n> > into RAM.\n> [...]\n> > So the cost is 5% in this very contrived case. In almost any other\n> > setting, the cost would be lower, I'd think.\n>\n> (That was on 12devel, btw.)\n>\n> That was about the most extreme OLTP read-only workload. After\n> thinking about it some more, I realized that exercising large seqscans\n> might be an even better way to test it because of less per-query\n> overhead.\n>\n> Same setup again, shared_buffers = 16 (128kB), jit = off,\n> max_parallel_workers_per_gather = 0:\n>\n> select count(bid) from pgbench_accounts;\n>\n> no checksums: ~456ms\n> with checksums: ~489ms\n>\n> 456.0/489 = 0.9325\n>\n> The cost of checksums is about 6.75% here.\n>\n\nCan you try with postgres compiled with CFLAGS=\"-O2 -march=native\"? There's\na bit of low hanging fruit there to use a runtime CPU check to pick a\nbetter optimized checksum function.\n\nRegards,\nAnts Aasma\n\n>\n\nOn Wed, Mar 27, 2019, 15:57 Christoph Berg <myon@debian.org> wrote:Re: To Tom Lane 2019-03-26 <20190326151446.GG3829@msg.df7cb.de>\n> I run a benchmark with checksums disabled/enabled. shared_buffers is\n> 512kB to make sure almost any read will fetch the page from the OS\n> cache; scale factor is 50 (~750MB) to make sure the whole cluster fits\n> into RAM.\n[...]\n> So the cost is 5% in this very contrived case. In almost any other\n> setting, the cost would be lower, I'd think.\n\n(That was on 12devel, btw.)\n\nThat was about the most extreme OLTP read-only workload. 
After\nthinking about it some more, I realized that exercising large seqscans\nmight be an even better way to test it because of less per-query\noverhead.\n\nSame setup again, shared_buffers = 16 (128kB), jit = off,\nmax_parallel_workers_per_gather = 0:\n\nselect count(bid) from pgbench_accounts;\n\nno checksums: ~456ms\nwith checksums: ~489ms\n\n456.0/489 = 0.9325\n\nThe cost of checksums is about 6.75% here.Can you try with postgres compiled with CFLAGS=\"-O2 -march=native\"? There's a bit of low hanging fruit there to use a runtime CPU check to pick a better optimized checksum function.Regards,Ants Aasma",
"msg_date": "Wed, 27 Mar 2019 22:51:16 +0200",
"msg_from": "Ants Aasma <ants.aasma@eesti.ee>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Re: Ants Aasma 2019-03-27 <CA+CSw_twXdRzDN2XsSZBxEj63DeZ+f6_hs3Qf7hmXfenxSq+jg@mail.gmail.com>\n> Can you try with postgres compiled with CFLAGS=\"-O2 -march=native\"? There's\n> a bit of low hanging fruit there to use a runtime CPU check to pick a\n> better optimized checksum function.\n\nFrankly, no. This is with the apt.pg.o packages which are supposed to\nbe usable by everyone. If there is a better per-CPU checksum function,\nPG should pick it at runtime. Special compiler flags are a no-go here.\n\nCPPFLAGS = -Wdate-time -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/mit-krb5\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -fno-omit-frame-pointer\n\nChristoph\n\n\n",
"msg_date": "Thu, 28 Mar 2019 09:38:16 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On 2019-03-22 16:16, Christoph Berg wrote:\n> I think, the next step in that direction would be to enable data\n> checksums by default. They make sense in most setups, and people who\n> plan to run very performance-critical systems where checksums might be\n> too much need to tune many knobs anyway, and can as well choose to\n> disable them manually, instead of having everyone else have to enable\n> them manually. Also, disabling is much easier than enabling.\n\nIt would also enable pg_rewind to work by default.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 13:03:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Am Dienstag, den 26.03.2019, 16:14 +0100 schrieb Christoph Berg:\n> select 92551.0/97363;\n> 0.9506\n> \n> So the cost is 5% in this very contrived case. In almost any other\n> setting, the cost would be lower, I'd think.\n\nWell, my machine (Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz, 32 GByte\nRAM) tells me this:\n\npgbench -s 50 -i pgbench\npg_ctl -o \"--shared-buffers=128kB\" restart\npgbench -r -P4 -Mprepared -T60 -c $clients -j $clients -n -S\n\n...prewarm...\n\nClients\tchecksums\n1\t20110\n2\t35338\n4\t67207\n8\t96627\n16\t110091\n\nClients no checksums\n1\t21716\n2\t38543\n4\t72118\n8\t117545\n16\t121415\n\nClients\tImpact\n1\t0,926045312212194\n2\t0,916846119918014\n4\t0,931903269641421\n8\t0,822042621974563\n16\t0,906733105464728\n\nSo between ~7% to 18% impact with checksums in this specific case here.\n\n\tBernd\n\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:16:11 +0100",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 10:38 AM Christoph Berg <myon@debian.org> wrote:\n\n> Re: Ants Aasma 2019-03-27 <\n> CA+CSw_twXdRzDN2XsSZBxEj63DeZ+f6_hs3Qf7hmXfenxSq+jg@mail.gmail.com>\n> > Can you try with postgres compiled with CFLAGS=\"-O2 -march=native\"?\n> There's\n> > a bit of low hanging fruit there to use a runtime CPU check to pick a\n> > better optimized checksum function.\n>\n> Frankly, no. This is with the apt.pg.o packages which are supposed to\n> be usable by everyone. If there is a better per-CPU checksum function,\n> PG should pick it at runtime. Special compiler flags are a no-go here.\n>\n\nI went ahead and tested it on the count(*) test, same settings as upthread.\nMedian of 5 runs of 20txs on Intel i5-2500k @ 4GHz.\n\nNo checksum: 344ms\nChecksums: 384ms (+12%)\nNo checksum march=native: 344ms\nChecksums march=native: 369ms (+7%)\n\nThe checksum code was written to be easily auto-vectorized by the compiler.\nSo if we just compile the same function with different compiler flags and\npick between them at runtime the overhead can be approximately halved. Not\nsaying that this needs to be done before enabling checksums by default,\njust that when considering overhead, we can foresee it being much lower in\nfuture versions.\n\nRegards,\nAnts Aasma",
"msg_date": "Fri, 29 Mar 2019 13:18:14 +0200",
"msg_from": "Ants Aasma <ants.aasma@eesti.ee>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 11:16:11AM +0100, Bernd Helmle wrote:\n> So between ~7% to 18% impact with checksums in this specific case here.\n\nI can't really believe that many people set up shared_buffers at 128kB\nwhich would cause such a large number of page evictions, but I can\nbelieve that many users have shared_buffers set to its default value\nand that we are going to get complains about \"performance drop after\nupgrade to v12\" if we switch data checksums to on by default.\n--\nMichael",
"msg_date": "Fri, 29 Mar 2019 23:10:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Am Freitag, den 29.03.2019, 23:10 +0900 schrieb Michael Paquier:\n> \n> I can't really believe that many people set up shared_buffers at\n> 128kB\n> which would cause such a large number of page evictions, but I can\n> believe that many users have shared_buffers set to its default value\n> and that we are going to get complains about \"performance drop after\n> upgrade to v12\" if we switch data checksums to on by default.\n\nYeah, i think Christoph's benchmark is based on this thinking. I assume\nthis very unrealistic scenery should emulate the worst case (many\nbuffer_reads, high checksum calculation load). \n\n\tBernd\n\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 20:25:41 +0100",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Re: Bernd Helmle 2019-03-29 <3586bb9345a59bfc8d13a50a7c729be1ee6759fd.camel@oopsware.de>\n> Am Freitag, den 29.03.2019, 23:10 +0900 schrieb Michael Paquier:\n> > \n> > I can't really believe that many people set up shared_buffers at\n> > 128kB\n> > which would cause such a large number of page evictions, but I can\n> > believe that many users have shared_buffers set to its default value\n> > and that we are going to get complains about \"performance drop after\n> > upgrade to v12\" if we switch data checksums to on by default.\n> \n> Yeah, i think Christoph's benchmark is based on this thinking. I assume\n> this very unrealistic scenery should emulate the worst case (many\n> buffer_reads, high checksum calculation load). \n\nIt's not unrealistic to have large seqscans that are all buffer\nmisses, the table just has to be big enough. The idea in my benchmark\nwas that if I make shared buffers really small, and the table still\nfits in to RAM, I should be seeing only buffer misses, but without any\ndelay for actually reading from disk.\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 20:35:26 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 08:35:26PM +0100, Christoph Berg wrote:\n>Re: Bernd Helmle 2019-03-29 <3586bb9345a59bfc8d13a50a7c729be1ee6759fd.camel@oopsware.de>\n>> Am Freitag, den 29.03.2019, 23:10 +0900 schrieb Michael Paquier:\n>> >\n>> > I can't really believe that many people set up shared_buffers at\n>> > 128kB\n>> > which would cause such a large number of page evictions, but I can\n>> > believe that many users have shared_buffers set to its default value\n>> > and that we are going to get complains about \"performance drop after\n>> > upgrade to v12\" if we switch data checksums to on by default.\n>>\n>> Yeah, i think Christoph's benchmark is based on this thinking. I assume\n>> this very unrealistic scenery should emulate the worst case (many\n>> buffer_reads, high checksum calculation load).\n>\n>It's not unrealistic to have large seqscans that are all buffer\n>misses, the table just has to be big enough. The idea in my benchmark\n>was that if I make shared buffers really small, and the table still\n>fits in to RAM, I should be seeing only buffer misses, but without any\n>delay for actually reading from disk.\n>\n>Christoph\n>\n\nFWIW I think it's a mistake to focus solely on CPU utilization, which\nall the benchmarks performed on this thread do because they look at tps\nof in-memory read-only workloads. Checksums have other costs too, not\njust the additional CPU time. Most importanly they require wal_log_hints\nto be set (which people may or may not want anyway).\n\nI've done a simple benchmark, that does read-only (-S) and read-write\n(-N) pgbench runs with different scales, but also measures duration of\nthe pgbench init and amount of WAL produced during the tests.\n\nOn a small machine (i5, 8GB RAM, SSD RAID) the results are these:\n\n   scale    config    |     init      tps      wal\n  =========================|==================================\n  ro 10       no-hints  |        2   117038      130\n              hints     |        2   116378      146\n              checksums |        2   115619      147\n     -------------------|----------------------------------\n     200      no-hints  |       32    88340     2407\n              hints     |       37    86154     2628\n              checksums |       36    83336     2624\n     -------------------|----------------------------------\n     2000     no-hints  |      365    38680     1967\n              hints     |      423    38670     2123\n              checksums |      504    37510     2046\n  -------------------------|----------------------------------\n  rw 10       no-hints  |        2    19691      437\n              hints     |        2    19712      437\n              checksums |        2    19654      437\n     -------------------|----------------------------------\n     200      no-hints  |       32    15839     2745\n              hints     |       37    15735     2783\n              checksums |       36    15646     2775\n     -------------------|----------------------------------\n     2000     no-hints  |      365     5371     3721\n              hints     |      423     5270     3671\n              checksums |      504     5094     3574\n\nThe no-hints config is default (wal_log_hints=off, data_checksums=off),\nhints sets wal_log_hints=on and checksums enables data checksums. All\nthe configs were somewhat tuned (1GB shared buffers, max_wal_size high\nenough not to hit checkpoints very often, etc.).\n\nI've also done the tests on the a larger machine (2x E5-2620v4, 32GB of\nRAM, NVMe SSD), and the general pattern is about the same - while the\ntps and amount of WAL (not covering the init) does not change, the time\nfor initialization increases significantly (by 20-40%).\n\nThis effect is even clearer when using slower storage (SATA-based RAID).\nThe results then look like this:\n\n   scale    config    |     init      tps      wal\n  =========================|==================================\n  ro 100      no-hints  |       49   229459      122\n              hints     |      101   167983      190\n              checksums |      103   156307      190\n     -------------------|----------------------------------\n     1000     no-hints  |      580   152167      109\n              hints     |     1047   122814      142\n              checksums |     1080   118586      141\n     -------------------|----------------------------------\n     6000     no-hints  |     4035      508        1\n              hints     |    11193      502        1\n              checksums |    11376      506        1\n  -------------------------|----------------------------------\n  rw 100      no-hints  |       49      279      192\n              hints     |      101      275      190\n              checksums |      103      275      190\n     -------------------|----------------------------------\n     1000     no-hints  |      580      237      210\n              hints     |     1047      225      201\n              checksums |     1080      224      200\n     -------------------|----------------------------------\n     6000     no-hints  |     4035      135      123\n              hints     |    11193      133      122\n              checksums |    11376      132      121\n\nand when expressed as relative to no-hints:\n\n      scale    config    |     init      tps      wal\n  ============================|===============================\n  ro    100      hints     |     206%      73%     155%\n                 checksums |     210%      68%     155%\n        -------------------|--------------------------------\n        1000     hints     |     181%      81%     131%\n                 checksums |     186%      78%     129%\n        -------------------|--------------------------------\n        6000     hints     |     277%      99%     100%\n                 checksums |     282%     100%     104%\n  ----------------------------|--------------------------------\n  rw    100      hints     |     206%      99%      99%\n                 checksums |     210%      99%      99%\n        -------------------|--------------------------------\n        1000     hints     |     181%      95%      96%\n                 checksums |     186%      95%      95%\n        -------------------|--------------------------------\n        6000     hints     |     277%      99%      99%\n                 checksums |     282%      98%      98%\n\nI have not investigated the exact reasons, but my hypothesis it's about\nthe amount of WAL generated during the initial CREATE INDEX (because it\nprobably ends up setting the hint bits), which puts additional pressure\non the storage.\n\nUnfortunately, this additional cost is unlikely to go away :-(\n\nNow, maybe we want to enable checksums by default anyway, but we should\nnot pretent the only cost related to checksums is CPU usage.\n\n\nregards\n\n-- \nTomas Vondra                  http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 30 Mar 2019 20:25:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "\n\nOn March 30, 2019 3:25:43 PM EDT, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>On Fri, Mar 29, 2019 at 08:35:26PM +0100, Christoph Berg wrote:\n>>Re: Bernd Helmle 2019-03-29\n><3586bb9345a59bfc8d13a50a7c729be1ee6759fd.camel@oopsware.de>\n>>> Am Freitag, den 29.03.2019, 23:10 +0900 schrieb Michael Paquier:\n>>> >\n>>> > I can't really believe that many people set up shared_buffers at\n>>> > 128kB\n>>> > which would cause such a large number of page evictions, but I can\n>>> > believe that many users have shared_buffers set to its default\n>value\n>>> > and that we are going to get complains about \"performance drop\n>after\n>>> > upgrade to v12\" if we switch data checksums to on by default.\n>>>\n>>> Yeah, i think Christoph's benchmark is based on this thinking. I\n>assume\n>>> this very unrealistic scenery should emulate the worst case (many\n>>> buffer_reads, high checksum calculation load).\n>>\n>>It's not unrealistic to have large seqscans that are all buffer\n>>misses, the table just has to be big enough. The idea in my benchmark\n>>was that if I make shared buffers really small, and the table still\n>>fits in to RAM, I should be seeing only buffer misses, but without any\n>>delay for actually reading from disk.\n>>\n>>Christoph\n>>\n>\n>FWIW I think it's a mistake to focus solely on CPU utilization, which\n>all the benchmarks performed on this thread do because they look at tps\n>of in-memory read-only workloads. Checksums have other costs too, not\n>just the additional CPU time. Most importanly they require\n>wal_log_hints\n>to be set (which people may or may not want anyway).\n>\n>I've done a simple benchmark, that does read-only (-S) and read-write\n>(-N) pgbench runs with different scales, but also measures duration of\n>the pgbench init and amount of WAL produced during the tests.\n>\n>On a small machine (i5, 8GB RAM, SSD RAID) the results are these:\n>\n>   scale    config    |     init      tps      wal\n>  =========================|==================================\n>  ro 10       no-hints  |        2   117038      130\n>              hints     |        2   116378      146\n>              checksums |        2   115619      147\n>     -------------------|----------------------------------\n>     200      no-hints  |       32    88340     2407\n>              hints     |       37    86154     2628\n>              checksums |       36    83336     2624\n>     -------------------|----------------------------------\n>     2000     no-hints  |      365    38680     1967\n>              hints     |      423    38670     2123\n>              checksums |      504    37510     2046\n>  -------------------------|----------------------------------\n>  rw 10       no-hints  |        2    19691      437\n>              hints     |        2    19712      437\n>              checksums |        2    19654      437\n>     -------------------|----------------------------------\n>     200      no-hints  |       32    15839     2745\n>              hints     |       37    15735     2783\n>              checksums |       36    15646     2775\n>     -------------------|----------------------------------\n>     2000     no-hints  |      365     5371     3721\n>              hints     |      423     5270     3671\n>              checksums |      504     5094     3574\n>\n>The no-hints config is default (wal_log_hints=off, data_checksums=off),\n>hints sets wal_log_hints=on and checksums enables data checksums. All\n>the configs were somewhat tuned (1GB shared buffers, max_wal_size high\n>enough not to hit checkpoints very often, etc.).\n>\n>I've also done the tests on the a larger machine (2x E5-2620v4, 32GB of\n>RAM, NVMe SSD), and the general pattern is about the same - while the\n>tps and amount of WAL (not covering the init) does not change, the time\n>for initialization increases significantly (by 20-40%).\n>\n>This effect is even clearer when using slower storage (SATA-based\n>RAID).\n>The results then look like this:\n>\n>   scale    config    |     init      tps      wal\n>  =========================|==================================\n>  ro 100      no-hints  |       49   229459      122\n>              hints     |      101   167983      190\n>              checksums |      103   156307      190\n>     -------------------|----------------------------------\n>     1000     no-hints  |      580   152167      109\n>              hints     |     1047   122814      142\n>              checksums |     1080   118586      141\n>     -------------------|----------------------------------\n>     6000     no-hints  |     4035      508        1\n>              hints     |    11193      502        1\n>              checksums |    11376      506        1\n>  -------------------------|----------------------------------\n>  rw 100      no-hints  |       49      279      192\n>              hints     |      101      275      190\n>              checksums |      103      275      190\n>     -------------------|----------------------------------\n>     1000     no-hints  |      580      237      210\n>              hints     |     1047      225      201\n>              checksums |     1080      224      200\n>     -------------------|----------------------------------\n>     6000     no-hints  |     4035      135      123\n>              hints     |    11193      133      122\n>              checksums |    11376      132      121\n>\n>and when expressed as relative to no-hints:\n>\n>      scale    config    |     init      tps      wal\n>  ============================|===============================\n>  ro    100      hints     |     206%      73%     155%\n>                 checksums |     210%      68%     155%\n>        -------------------|--------------------------------\n>        1000     hints     |     181%      81%     131%\n>                 checksums |     186%      78%     129%\n>        -------------------|--------------------------------\n>        6000     hints     |     277%      99%     100%\n>                 checksums |     282%     100%     104%\n>  ----------------------------|--------------------------------\n>  rw    100      hints     |     206%      99%      99%\n>                 checksums |     210%      99%      99%\n>        -------------------|--------------------------------\n>        1000     hints     |     181%      95%      96%\n>                 checksums |     186%      95%      95%\n>        -------------------|--------------------------------\n>        6000     hints     |     277%      99%      99%\n>                 checksums |     282%      98%      98%\n>\n>I have not investigated the exact reasons, but my hypothesis it's about\n>the amount of WAL generated during the initial CREATE INDEX (because it\n>probably ends up setting the hint bits), which puts additional pressure\n>on the storage.\n>\n>Unfortunately, this additional cost is unlikely to go away :-(\n>\n>Now, maybe we want to enable checksums by default anyway, but we should\n>not pretent the only cost related to checksums is CPU usage.\n\nThanks for running these, very helpful.\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 30 Mar 2019 16:17:20 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Re: Tomas Vondra 2019-03-30 <20190330192543.GH4719@development>\n> I have not investigated the exact reasons, but my hypothesis it's about\n> the amount of WAL generated during the initial CREATE INDEX (because it\n> probably ends up setting the hint bits), which puts additional pressure\n> on the storage.\n> \n> Unfortunately, this additional cost is unlikely to go away :-(\n\nIf WAL volume is a problem, would wal_compression help?\n\n> Now, maybe we want to enable checksums by default anyway, but we should\n> not pretent the only cost related to checksums is CPU usage.\n\nThanks for doing these tests. The point I'm trying to make is, why do\nwe run without data checksums by default? For example, we do checksum\nthe WAL all the time, and there's not even an option to disable it,\neven if that might make things faster. Why don't we enable data\nchecksums by default as well?\n\nChristoph\n\n\n",
"msg_date": "Mon, 1 Apr 2019 10:16:47 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Mon, Apr 1, 2019 at 10:17 AM Christoph Berg <myon@debian.org> wrote:\n\n> Re: Tomas Vondra 2019-03-30 <20190330192543.GH4719@development>\n> > I have not investigated the exact reasons, but my hypothesis it's about\n> > the amount of WAL generated during the initial CREATE INDEX (because it\n> > probably ends up setting the hint bits), which puts additional pressure\n> > on the storage.\n> >\n> > Unfortunately, this additional cost is unlikely to go away :-(\n>\n> If WAL volume is a problem, would wal_compression help?\n>\n> > Now, maybe we want to enable checksums by default anyway, but we should\n> > not pretent the only cost related to checksums is CPU usage.\n>\n> Thanks for doing these tests. The point I'm trying to make is, why do\n> we run without data checksums by default? For example, we do checksum\n> the WAL all the time, and there's not even an option to disable it,\n> even if that might make things faster. Why don't we enable data\n> checksums by default as well?\n>\n\nI think one of the often overlooked original reasons was that we need to\nlog hint bits, same as when wal_log_hints is set.\n\nOf course, if we consider it today, you have to do that in order to use\npg_rewind as well, so a lot of people who want to run any form of HA setup\nwill be having that turned on anyway. I think that has turned out to be a\nmuch weaker reason than it originally was thought to be.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Mon, 1 Apr 2019 10:25:57 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Mon, Apr 1, 2019 at 10:16:47AM +0200, Christoph Berg wrote:\n> Re: Tomas Vondra 2019-03-30 <20190330192543.GH4719@development>\n> > I have not investigated the exact reasons, but my hypothesis it's about\n> > the amount of WAL generated during the initial CREATE INDEX (because it\n> > probably ends up setting the hint bits), which puts additional pressure\n> > on the storage.\n> > \n> > Unfortunately, this additional cost is unlikely to go away :-(\n> \n> If WAL volume is a problem, would wal_compression help?\n> \n> > Now, maybe we want to enable checksums by default anyway, but we should\n> > not pretent the only cost related to checksums is CPU usage.\n> \n> Thanks for doing these tests. The point I'm trying to make is, why do\n> we run without data checksums by default? For example, we do checksum\n> the WAL all the time, and there's not even an option to disable it,\n> even if that might make things faster. Why don't we enable data\n> checksums by default as well?\n\nWe checksum wal because we know partial WAL writes are likely to happen\nduring power failure during a write. Data pages have pre-images (GUC\nfull_page_writes) stored in WAL so they are protected from partial\nwrites, hence are less likely to need checksum protection to detect\ncorruption.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 9 Apr 2019 23:09:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 12:07:22PM -0400, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n> > I think, the next step in that direction would be to enable data\n> > checksums by default. They make sense in most setups,\n> \n> Well, that is exactly the point that needs some proof, not just\n> an unfounded assertion.\n> \n> IMO, the main value of checksums is that they allow the Postgres\n> project to deflect blame. That's nice for us but I'm not sure\n> that it's a benefit for users. I've seen little if any data to\n> suggest that checksums actually catch enough problems to justify\n> the extra CPU costs and the risk of false positives.\n\nEnabling checksums by default will require anyone using pg_upgrade to\nrun initdb to disable checksums before running pg_upgrade, for one\nrelease. We could add checksums for non-link pg_upgrade runs, but we\ndon't have code to do that yet, and most people use link anyway.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 9 Apr 2019 23:11:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-09 23:11:03 -0400, Bruce Momjian wrote:\n> Enabling checksums by default will require anyone using pg_upgrade to\n> run initdb to disable checksums before running pg_upgrade, for one\n> release. We could add checksums for non-link pg_upgrade runs, but we\n> don't have code to do that yet, and most people use link anyway.\n\nHm. We could just have pg_ugprade run pg_checksums --enable/disable,\nbased on the old cluster, and print a warning on mismatches. Not sure if\nthat's worth it, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:58:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Thursday, April 11, 2019 6:58 PM, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-04-09 23:11:03 -0400, Bruce Momjian wrote:\n>\n> > Enabling checksums by default will require anyone using pg_upgrade to\n> > run initdb to disable checksums before running pg_upgrade, for one\n> > release. We could add checksums for non-link pg_upgrade runs, but we\n> > don't have code to do that yet, and most people use link anyway.\n>\n> Hm. We could just have pg_ugprade run pg_checksums --enable/disable,\n> based on the old cluster, and print a warning on mismatches. Not sure if\n> that's worth it, but ...\n\nThat would be for link mode, for copy-mode you'd have to initdb with checksums\nturned off and run pg_checksums on the new cluster, else the non-destructive\nnature of copy mode would be lost.\n\nAnother option would be to teach pg_upgrade to checksum the cluster during the\nupgrade on the fly. That would however be a big conceptual change for\npg_upgrade as it's currently not modifying the cluster data. In Greenplum we\nhave done this, but it was an easier choice there as we are rewriting all the\npages anyways. It would also create yet another utility which can checksum an\noffline cluster, but wanted to bring the idea to the table.\n\ncheers ./daniel\n\n\n",
"msg_date": "Thu, 11 Apr 2019 18:15:41 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-11 18:15:41 +0000, Daniel Gustafsson wrote:\n> On Thursday, April 11, 2019 6:58 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> > On 2019-04-09 23:11:03 -0400, Bruce Momjian wrote:\n> >\n> > > Enabling checksums by default will require anyone using pg_upgrade to\n> > > run initdb to disable checksums before running pg_upgrade, for one\n> > > release. We could add checksums for non-link pg_upgrade runs, but we\n> > > don't have code to do that yet, and most people use link anyway.\n> >\n> > Hm. We could just have pg_ugprade run pg_checksums --enable/disable,\n> > based on the old cluster, and print a warning on mismatches. Not sure if\n> > that's worth it, but ...\n> \n> That would be for link mode, for copy-mode you'd have to initdb with checksums\n> turned off and run pg_checksums on the new cluster, else the non-destructive\n> nature of copy mode would be lost.\n\nI don't think so? But I think we might just have misunderstood each\nother. What I was suggesting is that we could take the burden of having\nto match the old cluster's checksum enabled/disabled setting when\ninitdb'ing the new cluster, by changing the new cluster instead of\nerroring out with:\n\tif (oldctrl->data_checksum_version == 0 &&\n\t\tnewctrl->data_checksum_version != 0)\n\t\tpg_fatal(\"old cluster does not use data checksums but the new one does\\n\");\n\telse if (oldctrl->data_checksum_version != 0 &&\n\t\t\t newctrl->data_checksum_version == 0)\n\t\tpg_fatal(\"old cluster uses data checksums but the new one does not\\n\");\n\telse if (oldctrl->data_checksum_version != newctrl->data_checksum_version)\n\t\tpg_fatal(\"old and new cluster pg_controldata checksum versions do not match\\n\");\n\n\nAs the new cluster at that time isn't yet related to the old cluster, I\ndon't see why that'd influence the non-destructive nature?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:56:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
},
{
"msg_contents": "On Thursday, April 11, 2019 8:56 PM, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-04-11 18:15:41 +0000, Daniel Gustafsson wrote:\n>\n> > On Thursday, April 11, 2019 6:58 PM, Andres Freund andres@anarazel.de wrote:\n> >\n> > > On 2019-04-09 23:11:03 -0400, Bruce Momjian wrote:\n> > >\n> > > > Enabling checksums by default will require anyone using pg_upgrade to\n> > > > run initdb to disable checksums before running pg_upgrade, for one\n> > > > release. We could add checksums for non-link pg_upgrade runs, but we\n> > > > don't have code to do that yet, and most people use link anyway.\n> > >\n> > > Hm. We could just have pg_ugprade run pg_checksums --enable/disable,\n> > > based on the old cluster, and print a warning on mismatches. Not sure if\n> > > that's worth it, but ...\n> >\n> > That would be for link mode, for copy-mode you'd have to initdb with checksums\n> > turned off and run pg_checksums on the new cluster, else the non-destructive\n> > nature of copy mode would be lost.\n>\n> I don't think so? But I think we might just have misunderstood each\n> other. What I was suggesting is that we could take the burden of having\n> to match the old cluster's checksum enabled/disabled setting when\n> initdb'ing the new cluster, by changing the new cluster instead of\n> erroring out with:\n> if (oldctrl->data_checksum_version == 0 &&\n>\n> \tnewctrl->data_checksum_version != 0)\n>\n> \tpg_fatal(\"old cluster does not use data checksums but the new one does\\\\n\");\n> else if (oldctrl->data_checksum_version != 0 &&\n>\n> \t\t newctrl->data_checksum_version == 0)\n>\n> \tpg_fatal(\"old cluster uses data checksums but the new one does not\\\\n\");\n> else if (oldctrl->data_checksum_version != newctrl->data_checksum_version)\n>\n> \tpg_fatal(\"old and new cluster pg_controldata checksum versions do not match\\\\n\");\n>\n>\n> As the new cluster at that time isn't yet related to the old cluster, I\n> don't see why that'd influence the non-destructive nature?\n\nRight, now I see what you mean, and I indeed misunderstood you. Thanks for\nclarifying.\n\ncheers ./daniel\n\n\n",
"msg_date": "Thu, 11 Apr 2019 21:08:07 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Enable data checksums by default"
}
] |
[
{
"msg_contents": "As noted by a PostgreSQL user to me, error messages for NOT NULL\nconstraints are inconsistent - they do not mention the relation name in the\nmessage, as all other variants of this message do. e.g.\n\npostgres=# create table nn (id integer not null);\nCREATE TABLE\npostgres=# insert into nn values (NULL);\nERROR: null value in column \"id\" violates not-null constraint\nDETAIL: Failing row contains (null).\n\npostgres=# create table nn2 (id integer check (id is not null));\nCREATE TABLE\npostgres=# insert into nn2 values (NULL);\nERROR: new row for relation \"nn2\" violates check constraint \"nn2_id_check\"\nDETAIL: Failing row contains (null).\n\nI propose the attached patch as a fix, changing the wording (of the first\ncase) to\nERROR: null value in column \"id\" for relation \"nn\" violates not-null\nconstraint\n\nIt causes breakage in multiple tests, which is easy to fix once/if we agree\nto change.\n\nThanks\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 22 Mar 2019 13:25:31 -0400",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Error message inconsistency"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 2:25 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> As noted by a PostgreSQL user to me, error messages for NOT NULL\nconstraints are inconsistent - they do not mention the relation name in the\nmessage, as all other variants of this message do. e.g.\n>\n> postgres=# create table nn (id integer not null);\n> CREATE TABLE\n> postgres=# insert into nn values (NULL);\n> ERROR: null value in column \"id\" violates not-null constraint\n> DETAIL: Failing row contains (null).\n>\n> postgres=# create table nn2 (id integer check (id is not null));\n> CREATE TABLE\n> postgres=# insert into nn2 values (NULL);\n> ERROR: new row for relation \"nn2\" violates check constraint \"nn2_id_check\"\n> DETAIL: Failing row contains (null).\n>\n> I propose the attached patch as a fix, changing the wording (of the first\ncase) to\n> ERROR: null value in column \"id\" for relation \"nn\" violates not-null\nconstraint\n>\n> It causes breakage in multiple tests, which is easy to fix once/if we\nagree to change.\n>\n\nI totally agree with that change because I already get some negative\nfeedback from users about this message too.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\nOn Fri, Mar 22, 2019 at 2:25 PM Simon Riggs <simon@2ndquadrant.com> wrote:>> As noted by a PostgreSQL user to me, error messages for NOT NULL constraints are inconsistent - they do not mention the relation name in the message, as all other variants of this message do. 
e.g.>> postgres=# create table nn (id integer not null);> CREATE TABLE> postgres=# insert into nn values (NULL);> ERROR: null value in column \"id\" violates not-null constraint> DETAIL: Failing row contains (null).>> postgres=# create table nn2 (id integer check (id is not null));> CREATE TABLE> postgres=# insert into nn2 values (NULL);> ERROR: new row for relation \"nn2\" violates check constraint \"nn2_id_check\"> DETAIL: Failing row contains (null).>> I propose the attached patch as a fix, changing the wording (of the first case) to> ERROR: null value in column \"id\" for relation \"nn\" violates not-null constraint>> It causes breakage in multiple tests, which is easy to fix once/if we agree to change.>I totally agree with that change because I already get some negative feedback from users about this message too.Regards,-- Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/ PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Fri, 22 Mar 2019 20:03:20 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 4:33 AM Fabrízio de Royes Mello\n<fabriziomello@gmail.com> wrote:\n>\n> On Fri, Mar 22, 2019 at 2:25 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> >\n> > As noted by a PostgreSQL user to me, error messages for NOT NULL constraints are inconsistent - they do not mention the relation name in the message, as all other variants of this message do. e.g.\n> >\n> > postgres=# create table nn (id integer not null);\n> > CREATE TABLE\n> > postgres=# insert into nn values (NULL);\n> > ERROR: null value in column \"id\" violates not-null constraint\n> > DETAIL: Failing row contains (null).\n> >\n> > postgres=# create table nn2 (id integer check (id is not null));\n> > CREATE TABLE\n> > postgres=# insert into nn2 values (NULL);\n> > ERROR: new row for relation \"nn2\" violates check constraint \"nn2_id_check\"\n> > DETAIL: Failing row contains (null).\n> >\n> > I propose the attached patch as a fix, changing the wording (of the first case) to\n> > ERROR: null value in column \"id\" for relation \"nn\" violates not-null constraint\n> >\n\nI think we are inconsistent for a similar message at a few other\nplaces as well. See, below two messages:\n\ncolumn \\\"%s\\\" contains null values\ncolumn \\\"%s\\\" of table \\\"%s\\\" contains null values\n\nIf we decide to change this case, then why not change another place\nwhich has a similar symptom?\n\n> > It causes breakage in multiple tests, which is easy to fix once/if we agree to change.\n> >\n>\n> I totally agree with that change because I already get some negative feedback from users about this message too.\n>\n\nWhat kind of negative feedback did you get from users? 
If I see in\nthe log file, the message is displayed as :\n\n2019-03-24 18:12:49.331 IST [6348] ERROR: null value in column \"id\"\nviolates not-null constraint\n2019-03-24 18:12:49.331 IST [6348] DETAIL: Failing row contains (null).\n2019-03-24 18:12:49.331 IST [6348] STATEMENT: insert into nn values (NULL);\n\nSo, it is not difficult to identify the relation.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Sun, 24 Mar 2019 18:32:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "To me the error message that includes more detail is superior. Even though\nyou can get the detail from the logs, it seems like it would much more\nconvenient for it to be reported out via the error to allow\nusers/applications to identify the problem relation without fetching logs.\nI understand if that's not worth breaking numerous tests, though.\nPersonally, I think consistency here is important enough to warrant it.\n\nOn Sun, Mar 24, 2019, 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sat, Mar 23, 2019 at 4:33 AM Fabrízio de Royes Mello\n> <fabriziomello@gmail.com> wrote:\n> >\n> > On Fri, Mar 22, 2019 at 2:25 PM Simon Riggs <simon@2ndquadrant.com>\n> wrote:\n> > >\n> > > As noted by a PostgreSQL user to me, error messages for NOT NULL\n> constraints are inconsistent - they do not mention the relation name in the\n> message, as all other variants of this message do. e.g.\n> > >\n> > > postgres=# create table nn (id integer not null);\n> > > CREATE TABLE\n> > > postgres=# insert into nn values (NULL);\n> > > ERROR: null value in column \"id\" violates not-null constraint\n> > > DETAIL: Failing row contains (null).\n> > >\n> > > postgres=# create table nn2 (id integer check (id is not null));\n> > > CREATE TABLE\n> > > postgres=# insert into nn2 values (NULL);\n> > > ERROR: new row for relation \"nn2\" violates check constraint\n> \"nn2_id_check\"\n> > > DETAIL: Failing row contains (null).\n> > >\n> > > I propose the attached patch as a fix, changing the wording (of the\n> first case) to\n> > > ERROR: null value in column \"id\" for relation \"nn\" violates not-null\n> constraint\n> > >\n>\n> I think we are inconsistent for a similar message at a few other\n> places as well. 
See, below two messages:\n>\n> column \\\"%s\\\" contains null values\n> column \\\"%s\\\" of table \\\"%s\\\" contains null values\n>\n> If we decide to change this case, then why not change another place\n> which has a similar symptom?\n>\n> > > It causes breakage in multiple tests, which is easy to fix once/if we\n> agree to change.\n> > >\n> >\n> > I totally agree with that change because I already get some negative\n> feedback from users about this message too.\n> >\n>\n> What kind of negative feedback did you get from users? If I see in\n> the log file, the message is displayed as :\n>\n> 2019-03-24 18:12:49.331 IST [6348] ERROR: null value in column \"id\"\n> violates not-null constraint\n> 2019-03-24 18:12:49.331 IST [6348] DETAIL: Failing row contains (null).\n> 2019-03-24 18:12:49.331 IST [6348] STATEMENT: insert into nn values\n> (NULL);\n>\n> So, it is not difficult to identify the relation.\n>\n> --\n> With Regards,\n> Amit Kapila.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n\n",
"msg_date": "Sun, 24 Mar 2019 09:41:01 -0400",
"msg_from": "Greg Steiner <greg.steiner89@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sun, 24 Mar 2019 at 13:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sat, Mar 23, 2019 at 4:33 AM Fabrízio de Royes Mello\n> <fabriziomello@gmail.com> wrote:\n> >\n> > On Fri, Mar 22, 2019 at 2:25 PM Simon Riggs <simon@2ndquadrant.com>\n> wrote:\n> > >\n> > > As noted by a PostgreSQL user to me, error messages for NOT NULL\n> constraints are inconsistent - they do not mention the relation name in the\n> message, as all other variants of this message do. e.g.\n> > >\n> > > postgres=# create table nn (id integer not null);\n> > > CREATE TABLE\n> > > postgres=# insert into nn values (NULL);\n> > > ERROR: null value in column \"id\" violates not-null constraint\n> > > DETAIL: Failing row contains (null).\n> > >\n> > > postgres=# create table nn2 (id integer check (id is not null));\n> > > CREATE TABLE\n> > > postgres=# insert into nn2 values (NULL);\n> > > ERROR: new row for relation \"nn2\" violates check constraint\n> \"nn2_id_check\"\n> > > DETAIL: Failing row contains (null).\n> > >\n> > > I propose the attached patch as a fix, changing the wording (of the\n> first case) to\n> > > ERROR: null value in column \"id\" for relation \"nn\" violates not-null\n> constraint\n> > >\n>\n> I think we are inconsistent for a similar message at a few other\n> places as well. 
See, below two messages:\n>\n> column \\\"%s\\\" contains null values\n> column \\\"%s\\\" of table \\\"%s\\\" contains null values\n>\n> If we decide to change this case, then why not change another place\n> which has a similar symptom?\n>\n\nYes, lets do that.\n\nI'm passing on feedback, so if it applies in other cases, I'm happy to\nchange other common cases also for the benefit of users.\n\nDo you have a list of cases you'd like to see changed?\n\n\n> > > It causes breakage in multiple tests, which is easy to fix once/if we\n> agree to change.\n> > >\n> >\n> > I totally agree with that change because I already get some negative\n> feedback from users about this message too.\n> >\n>\n> What kind of negative feedback did you get from users? If I see in\n> the log file, the message is displayed as :\n>\n> 2019-03-24 18:12:49.331 IST [6348] ERROR: null value in column \"id\"\n> violates not-null constraint\n> 2019-03-24 18:12:49.331 IST [6348] DETAIL: Failing row contains (null).\n> 2019-03-24 18:12:49.331 IST [6348] STATEMENT: insert into nn values\n> (NULL);\n>\n> So, it is not difficult to identify the relation.\n>\n\nThe user is not shown the failing statement, and if they are, it might have\nbeen generated for them.\n\nYour example assumes the user has access to the log, that\nlog_min_error_statement is set appropriately and that the user can locate\ntheir log entries to identify the table name. The log contains timed\nentries but the user may not be aware of the time of the error accurately\nenough to locate the correct statement amongst many others.\n\nIf the statement is modified by triggers or rules, then you have no chance.\n\ne.g. 
add this to the above example:\n\ncreate or replace rule rr as on insert to nn2 do instead insert into nn\nvalues (new.*);\n\n\nand its clear that the LOG of the statement, even if it is visible, is\nmisleading since the SQL refers to table nn, but the error is generated by\nthe insert into table nn2.\n\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Sun, 24 Mar 2019 18:23:17 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 11:53 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n>\n> On Sun, 24 Mar 2019 at 13:02, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> I think we are inconsistent for a similar message at a few other\n>> places as well. See, below two messages:\n>>\n>> column \\\"%s\\\" contains null values\n>> column \\\"%s\\\" of table \\\"%s\\\" contains null values\n>>\n>> If we decide to change this case, then why not change another place\n>> which has a similar symptom?\n>\n>\n> Yes, lets do that.\n>\n> I'm passing on feedback, so if it applies in other cases, I'm happy to change other common cases also for the benefit of users.\n>\n> Do you have a list of cases you'd like to see changed?\n>\n\nI think we can once scrutinize all the error messages with error codes\nERRCODE_NOT_NULL_VIOLATION and ERRCODE_CHECK_VIOLATION to see if\nanything else need change.\n\n>>\n>> > > It causes breakage in multiple tests, which is easy to fix once/if we agree to change.\n>> > >\n>> >\n>> > I totally agree with that change because I already get some negative feedback from users about this message too.\n>> >\n>>\n>> What kind of negative feedback did you get from users? 
If I see in\n>> the log file, the message is displayed as :\n>>\n>> 2019-03-24 18:12:49.331 IST [6348] ERROR: null value in column \"id\"\n>> violates not-null constraint\n>> 2019-03-24 18:12:49.331 IST [6348] DETAIL: Failing row contains (null).\n>> 2019-03-24 18:12:49.331 IST [6348] STATEMENT: insert into nn values (NULL);\n>>\n>> So, it is not difficult to identify the relation.\n>\n>\n> The user is not shown the failing statement, and if they are, it might have been generated for them.\n>\n\nI can imagine that in some cases where queries/statements are\ngenerated for some application, they might be presented just with\nerrors that occurred while execution and now it will be difficult to\nidentify the relation for which that problem has occurred.\n\n> Your example assumes the user has access to the log, that log_min_error_statement is set appropriately and that the user can locate their log entries to identify the table name. The log contains timed entries but the user may not be aware of the time of the error accurately enough to locate the correct statement amongst many others.\n>\n> If the statement is modified by triggers or rules, then you have no chance.\n>\n> e.g. add this to the above example:\n>\n> create or replace rule rr as on insert to nn2 do instead insert into nn values (new.*);\n>\n>\n> and its clear that the LOG of the statement, even if it is visible, is misleading since the SQL refers to table nn, but the error is generated by the insert into table nn2.\n>\n\nThis example also indicates that it will be helpful for users to see\nthe relation name in the error message.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 25 Mar 2019 08:45:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 7:11 PM Greg Steiner <greg.steiner89@gmail.com> wrote:\n>\n> To me the error message that includes more detail is superior. Even though you can get the detail from the logs, it seems like it would much more convenient for it to be reported out via the error to allow users/applications to identify the problem relation without fetching logs. I understand if that's not worth breaking numerous tests, though.\n>\n\nYeah, I think that is the main point. There will be a quite some\nchurn in the regression test output, but OTOH, if it is for good of\nusers, then it might be worth.\n\n> Personally, I think consistency here is important enough to warrant it.\n>\n\nFair point. Can such an error message change break any application?\nI see some cases where users have check based on Error Code, but not\nsure if there are people who have check based on error messages.\n\nAnyone else having an opinion on this matter? Basically, I would like\nto hear if anybody thinks that this change can cause any sort of\nproblem.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 25 Mar 2019 08:53:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 11:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Fair point. Can such an error message change break any application?\n> I see some cases where users have check based on Error Code, but not\n> sure if there are people who have check based on error messages.\n\nI'm sure there are -- in fact, I've written code that does that. But\nI also don't think that's a reason not to improve the error messages.\nIf we start worrying about stuff like this, we'll be unable to ever\nimprove anything.\n\n> Anyone else having an opinion on this matter? Basically, I would like\n> to hear if anybody thinks that this change can cause any sort of\n> problem.\n\nI don't think it's going to cause a problem for users, provided the\npatch is correct. I wondered whether it was always going to pick up\nthe relation name, e.g. if partitioning is involved, but I didn't\ncheck into it at all, so it may be fine.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 12:11:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "Do we have an actual patch here?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Jul 2019 11:40:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Mon, Jul 1, 2019 at 10:05 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Do we have an actual patch here?\n>\n\nWe have a patch, but it needs some more work like finding similar\nplaces and change all of them at the same time and then change the\ntests to adapt the same.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 6 Jul 2019 09:52:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sat, 6 Jul 2019 at 09:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 1, 2019 at 10:05 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n> >\n> > Do we have an actual patch here?\n> >\n>\n> We have a patch, but it needs some more work like finding similar\n> places and change all of them at the same time and then change the\n> tests to adapt the same.\n>\n\nHi all,\nBased on above discussion, I tried to find out all the places where we need\nto change error for \"not null constraint\". As Amit Kapila pointed out 1\nplace, I changed the error and adding modified patch.\n\n\n*What does this patch? *\nBefore this patch, to display error of \"not-null constraint\", we were not\ndisplaying relation name in some cases so attached patch is adding relation\nname with the \"not-null constraint\" error in 2 places. I didn't changed out\nfiles of test suite as we haven't finalized error messages.\n\nI verified Robert's point of for partition tables also. With the error, we\nare adding relation name of \"child table\" and i think, it is correct.\n\nPlease review attached patch and let me know feedback.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 6 Jan 2020 18:30:58 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "Hi Mahendra,\n\nThanks for the patch.\nI am not sure but maybe the relation name should also be added to the\nfollowing test case?\n\ncreate table t4 (id int);\ninsert into t4 values (1);\nALTER TABLE t4 ADD CONSTRAINT c1 CHECK (id > 10) NOT VALID; -- succeeds\nALTER TABLE t4 VALIDATE CONSTRAINT c1;\n*ERROR: check constraint \"c1\" is violated by some row*\n\nOn Mon, 6 Jan 2020 at 18:31, Mahendra Singh Thalor <mahi6run@gmail.com>\nwrote:\n\n> On Sat, 6 Jul 2019 at 09:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jul 1, 2019 at 10:05 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> > >\n> > > Do we have an actual patch here?\n> > >\n> >\n> > We have a patch, but it needs some more work like finding similar\n> > places and change all of them at the same time and then change the\n> > tests to adapt the same.\n> >\n>\n> Hi all,\n> Based on above discussion, I tried to find out all the places where we\n> need to change error for \"not null constraint\". As Amit Kapila pointed out\n> 1 place, I changed the error and adding modified patch.\n>\n>\n> *What does this patch? *\n> Before this patch, to display error of \"not-null constraint\", we were not\n> displaying relation name in some cases so attached patch is adding relation\n> name with the \"not-null constraint\" error in 2 places. I didn't changed out\n> files of test suite as we haven't finalized error messages.\n>\n> I verified Robert's point of for partition tables also. 
With the error, we\n> are adding relation name of \"child table\" and i think, it is correct.\n>\n> Please review attached patch and let me know feedback.\n>\n> Thanks and Regards\n> Mahendra Singh Thalor\n> EnterpriseDB: http://www.enterprisedb.com\n>\n\n\n-- \n--\nM Beena Emerson\n\n",
"msg_date": "Thu, 9 Jan 2020 17:42:19 +0530",
"msg_from": "MBeena Emerson <mbeena.emerson@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 5:42 PM MBeena Emerson <mbeena.emerson@gmail.com> wrote:\n>\n\nHi Beena,\n\nIt is better to reply inline.\n\n> Hi Mahendra,\n>\n> Thanks for the patch.\n> I am not sure but maybe the relation name should also be added to the following test case?\n>\n> create table t4 (id int);\n> insert into t4 values (1);\n> ALTER TABLE t4 ADD CONSTRAINT c1 CHECK (id > 10) NOT VALID; -- succeeds\n> ALTER TABLE t4 VALIDATE CONSTRAINT c1;\n> ERROR: check constraint \"c1\" is violated by some row\n>\n\nI see that in this case, we are using errtableconstraint which should\nset table/schema name, but then that doesn't seem to be used. Can we\nexplore it a bit from that angle?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jan 2020 10:49:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 6:31 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n>\n> On Sat, 6 Jul 2019 at 09:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jul 1, 2019 at 10:05 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > >\n> > > Do we have an actual patch here?\n> > >\n> >\n> > We have a patch, but it needs some more work like finding similar\n> > places and change all of them at the same time and then change the\n> > tests to adapt the same.\n> >\n>\n> Hi all,\n> Based on above discussion, I tried to find out all the places where we need to change error for \"not null constraint\". As Amit Kapila pointed out 1 place, I changed the error and adding modified patch.\n>\n\nIt seems you have not used the two error codes\n(ERRCODE_NOT_NULL_VIOLATION and ERRCODE_CHECK_VIOLATION) pointed by me\nabove.\n\n> What does this patch?\n> Before this patch, to display error of \"not-null constraint\", we were not displaying relation name in some cases so attached patch is adding relation name with the \"not-null constraint\" error in 2 places. I didn't changed out files of test suite as we haven't finalized error messages.\n>\n> I verified Robert's point of for partition tables also. With the error, we are adding relation name of \"child table\" and i think, it is correct.\n>\n\nCan you show the same with the help of an example?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Jan 2020 10:51:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "Hi Amit,\n\nOn Tue, 21 Jan 2020 at 10:49, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Jan 9, 2020 at 5:42 PM MBeena Emerson <mbeena.emerson@gmail.com>\n> wrote:\n> >\n>\n> Hi Beena,\n>\n> It is better to reply inline.\n>\n> > Hi Mahendra,\n> >\n> > Thanks for the patch.\n> > I am not sure but maybe the relation name should also be added to the\n> following test case?\n> >\n> > create table t4 (id int);\n> > insert into t4 values (1);\n> > ALTER TABLE t4 ADD CONSTRAINT c1 CHECK (id > 10) NOT VALID; -- succeeds\n> > ALTER TABLE t4 VALIDATE CONSTRAINT c1;\n> > ERROR: check constraint \"c1\" is violated by some row\n> >\n>\n> I see that in this case, we are using errtableconstraint which should\n> set table/schema name, but then that doesn't seem to be used. Can we\n> explore it a bit from that angle?\n>\n\nThe usage of the function errtableconstraint seems only to set the\nschema_name table_name constraint_name internally and not for display\npurposes. As seen in the following two cases where the relation name is\ndisplayed using RelationGetRelationName and errtableconstraint is called as\npart of errcode parameter not errmsg.\n\n ereport(ERROR,\n (errcode(ERRCODE_CHECK_VIOLATION),\n errmsg(\"new row for relation \\\"%s\\\" violates check\nconstraint \\\"%s\\\"\",\n RelationGetRelationName(orig_rel), failed),\n val_desc ? errdetail(\"Failing row contains %s.\",\nval_desc) : 0,\n errtableconstraint(orig_rel, failed)));\n\n ereport(ERROR,\n (errcode(ERRCODE_UNIQUE_VIOLATION),\n errmsg(\"duplicate key value violates\nunique constraint \\\"%s\\\"\",\n RelationGetRelationName(rel)),\n key_desc ? 
errdetail(\"Key %s already\nexists.\",\n              key_desc) : 0,\n       errtableconstraint(heapRel,\n\nRelationGetRelationName(rel))));\n\n\n--\nM Beena Emerson\n\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 21 Jan 2020 11:07:50 +0530",
"msg_from": "MBeena Emerson <mbeena.emerson@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On 2020-Jan-21, MBeena Emerson wrote:\n\n> The usage of the function errtableconstraint seems only to set the\n> schema_name table_name constraint_name internally and not for display\n> purposes. As seen in the following two cases where the relation name is\n> displayed using RelationGetRelationName and errtableconstraint is called as\n> part of errcode parameter not errmsg.\n\nYou can see those fields by raising the log verbosity; it's a\nclient-side thing. For example, in psql you can use\n\\set VERBOSITY verbose\n\nIn psql you can also use \\errverbose after an error to print those\nfields.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 21 Jan 2020 11:39:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 20:09, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jan-21, MBeena Emerson wrote:\n>\n> > The usage of the function errtableconstraint seems only to set the\n> > schema_name table_name constraint_name internally and not for display\n> > purposes. As seen in the following two cases where the relation name is\n> > displayed using RelationGetRelationName and errtableconstraint is called as\n> > part of errcode parameter not errmsg.\n>\n> You can see those fields by raising the log verbosity; it's a\n> client-side thing. For example, in psql you can use\n> \\set VERBOSITY verbose\n>\n> In psql you can also use \\errverbose after an error to print those\n> fields.\n\nThanks for the explanation.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jan 2020 13:09:58 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 10:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 6, 2020 at 6:31 PM Mahendra Singh Thalor <mahi6run@gmail.com>\nwrote:\n> >\n> > On Sat, 6 Jul 2019 at 09:53, Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> > >\n> > > On Mon, Jul 1, 2019 at 10:05 PM Alvaro Herrera <\nalvherre@2ndquadrant.com> wrote:\n> > > >\n> > > > Do we have an actual patch here?\n> > > >\n> > >\n> > > We have a patch, but it needs some more work like finding similar\n> > > places and change all of them at the same time and then change the\n> > > tests to adapt the same.\n> > >\n> >\n> > Hi all,\n> > Based on above discussion, I tried to find out all the places where we\nneed to change error for \"not null constraint\". As Amit Kapila pointed out\n1 place, I changed the error and adding modified patch.\n> >\n>\n> It seems you have not used the two error codes\n> (ERRCODE_NOT_NULL_VIOLATION and ERRCODE_CHECK_VIOLATION) pointed by me\n> above.\n\nThanks Amit and Beena for reviewing the patch.\n\nYes, you are correct. I searched using error messages, not error codes. That\nwas my mistake. Now, I grepped using the above error codes and found that\nthese error codes are used in 19 places. Below are the code parts of the 19\nplaces.\n\n1. src/backend/utils/adt/domains.c\n\n - 146 if (isnull)\n - 147 ereport(ERROR,\n - 148 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n - 149 errmsg(\"domain %s does not allow null\n values\",\n - 150\n format_type_be(my_extra->domain_type)),\n - 151 errdatatype(my_extra->domain_type)));\n - 152 break;\n\nI think, above error is for domain, so there is no need to add anything in\nerror message.\n-----------------------------------------------------------------------------------------------------\n2. 
src/backend/utils/adt/domains.c\n\n - 181 if (!ExecCheck(con->check_exprstate,\n econtext))\n - 182 ereport(ERROR,\n - 183 (errcode(ERRCODE_CHECK_VIOLATION),\n - 184 errmsg(\"value for domain %s\n violates check constraint \\\"%s\\\"\",\n - 185\n format_type_be(my_extra->domain_type),\n - 186 con->name),\n - 187\n errdomainconstraint(my_extra->domain_type,\n - 188 con->name)));\n\nI think, above error is for domain, so there is no need to add anything in\nerror message.\n-----------------------------------------------------------------------------------------------------\n3. src/backend/partitioning/partbounds.c\n\n - 1330 if (part_rel->rd_rel->relkind ==\n RELKIND_FOREIGN_TABLE)\n - 1331 ereport(WARNING,\n - 1332 (errcode(ERRCODE_CHECK_VIOLATION),\n - 1333 errmsg(\"skipped scanning foreign table\n \\\"%s\\\" which is a partition of default partition \\\"%s\\\"\",\n - 1334 RelationGetRelationName(part_rel),\n - 1335\n RelationGetRelationName(default_rel))));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n4. src/backend/partitioning/partbounds.c\n\n - 1363 if (!ExecCheck(partqualstate, econtext))\n - 1364 ereport(ERROR,\n - 1365 (errcode(ERRCODE_CHECK_VIOLATION),\n - 1366 errmsg(\"updated partition constraint for\n default partition \\\"%s\\\" would be violated by some row\",\n - 1367\n RelationGetRelationName(default_rel))));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n5. 
src/backend/executor/execPartition.c\n\n - 342 ereport(ERROR,\n - 343 (errcode(ERRCODE_CHECK_VIOLATION),\n - 344 errmsg(\"no partition of relation \\\"%s\\\"\n found for row\",\n - 345 RelationGetRelationName(rel)),\n - 346 val_desc ?\n - 347 errdetail(\"Partition key of the failing row\n contains %s.\",\n - 348 val_desc) : 0));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n6. src/backend/executor/execMain.c\n\n - 1877 ereport(ERROR,\n - 1878 (errcode(ERRCODE_CHECK_VIOLATION),\n - 1879 errmsg(\"new row for relation \\\"%s\\\" violates\n partition constraint\",\n - 1880\n RelationGetRelationName(resultRelInfo->ri_RelationDesc)),\n - 1881 val_desc ? errdetail(\"Failing row contains %s.\",\n val_desc) : 0));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n7. src/backend/executor/execMain.c\n\n - 1958 ereport(ERROR,\n - 1959 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n - 1960 errmsg(\"null value in column \\\"%s\\\"\n violates not-null constraint\",\n - 1961 NameStr(att->attname)),\n - 1962 val_desc ? errdetail(\"Failing row\n contains %s.\", val_desc) : 0,\n - 1963 errtablecol(orig_rel, attrChk)));\n\nAdded relation name for this error. This can be verified by below example:\n*Ex:*\nCREATE TABLE test (a int PRIMARY KEY, b int GENERATED ALWAYS AS (nullif(a,\n0)) STORED NOT NULL);\nINSERT INTO test (a) VALUES (1);\nINSERT INTO test (a) VALUES (0);\n\n*Without patch:*\nERROR: null value in column \"b\" violates not-null constraint\nDETAIL: Failing row contains (0, null).\n*With patch:*\nERROR: null value in column \"b\" of relation \"test\" violates not-null\nconstraint\nDETAIL: Failing row contains (0, null).\n-----------------------------------------------------------------------------------------------------\n8. 
src/backend/executor/execMain.c\n\n - 2006 ereport(ERROR,\n - 2007 (errcode(ERRCODE_CHECK_VIOLATION),\n - 2008 errmsg(\"new row for relation \\\"%s\\\" violates\n check constraint \\\"%s\\\"\",\n - 2009 RelationGetRelationName(orig_rel),\n failed),\n - 2010 val_desc ? errdetail(\"Failing row contains\n %s.\", val_desc) : 0,\n - 2011 errtableconstraint(orig_rel, failed)));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n9. src/backend/executor/execExprInterp.c\n\n - 3600 ereport(ERROR,\n - 3601 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n - 3602 errmsg(\"domain %s does not allow null values\",\n - 3603\n format_type_be(op->d.domaincheck.resulttype)),\n - 3604 errdatatype(op->d.domaincheck.resulttype)));\n\nI think, above error is for domain, so there is no need to add anything in\nerror message.\n-----------------------------------------------------------------------------------------------------\n10. src/backend/executor/execExprInterp.c\n\n - 3615 ereport(ERROR,\n - 3616 (errcode(ERRCODE_CHECK_VIOLATION),\n - 3617 errmsg(\"value for domain %s violates check\n constraint \\\"%s\\\"\",\n - 3618\n format_type_be(op->d.domaincheck.resulttype),\n - 3619 op->d.domaincheck.constraintname),\n - 3620 errdomainconstraint(op->d.domaincheck.resulttype,\n - 3621\n op->d.domaincheck.constraintname)));\n\nI think, above error is for domain, so there is no need to add anything in\nerror message.\n-----------------------------------------------------------------------------------------------------\n11. src/backend/commands/tablecmds.c\n\n - 5273 ereport(ERROR,\n - 5274 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n - 5275 errmsg(\"column \\\"%s\\\" contains null\n values\",\n - 5276 NameStr(attr->attname)),\n - 5277 errtablecol(oldrel, attn + 1)));\n\nAdded relation name for this error. 
This can be verified by below example:\n*Ex:*\nCREATE TABLE test (a int);\nINSERT INTO test VALUES (0), (1);\nALTER TABLE test ADD COLUMN b int NOT NULL, ALTER COLUMN b ADD GENERATED\nALWAYS AS IDENTITY;\n\n*Without patch:*\nERROR: column \"b\" contains null values\n*With patch*:\nERROR: column \"b\" of relation \"test\" contains null values\n-----------------------------------------------------------------------------------------------------\n12. src/backend/commands/tablecmds.c\n\n - 5288 if (!ExecCheck(con->qualstate,\n econtext))\n - 5289 ereport(ERROR,\n - 5290\n (errcode(ERRCODE_CHECK_VIOLATION),\n - 5291 errmsg(\"check constraint\n \\\"%s\\\" is violated by some row\",\n - 5292 con->name),\n - 5293 errtableconstraint(oldrel,\n con->name)));\n\nAdded relation name for this error. This can be verified by below example:\n*Ex:*\nCREATE TABLE test (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2)\nSTORED);\nINSERT INTO test (a) VALUES (10), (30);\nALTER TABLE test ADD CHECK (b < 50);\n\n*Without patch:*\nERROR: check constraint \"test_b_check\" is violated by some row\n*With patch:*\nERROR: check constraint \"test_b_check\" of relation \"test\" is violated by\nsome row\n-----------------------------------------------------------------------------------------------------\n13. src/backend/commands/tablecmds.c\n\n - 5306 if (tab->validate_default)\n - 5307 ereport(ERROR,\n - 5308 (errcode(ERRCODE_CHECK_VIOLATION),\n - 5309 errmsg(\"updated partition\n constraint for default partition would be violated by some row\")));\n\nAdded relation name for this error. 
This can be verified by below example:\n*Ex:*\nCREATE TABLE list_parted ( a int, b char ) PARTITION BY LIST (a);\nCREATE TABLE list_parted_def PARTITION OF list_parted DEFAULT;\nINSERT INTO list_parted_def VALUES (11, 'z');\nCREATE TABLE part_1 (LIKE list_parted);\nALTER TABLE list_parted ATTACH PARTITION part_1 FOR VALUES IN (11);\n\n*Without patch:*\nERROR: updated partition constraint for default partition would be\nviolated by some row\n*With patch:*\nERROR: updated partition constraint for default partition\n\"list_parted_def\" would be violated by some row\n-----------------------------------------------------------------------------------------------------\n14. src/backend/commands/tablecmds.c\n\n - 5310 else\n - 5311 ereport(ERROR,\n - 5312 (errcode(ERRCODE_CHECK_VIOLATION),\n - 5313 errmsg(\"partition constraint is\n violated by some row\")));\n\nAdded relation name for this error. This can be verified by below example:\n*Ex:*\nCREATE TABLE list_parted (a int,b char)PARTITION BY LIST (a);\nCREATE TABLE part_1 (LIKE list_parted);\nINSERT INTO part_1 VALUES (3, 'a');\nALTER TABLE list_parted ATTACH PARTITION part_1 FOR VALUES IN (2);\n\n*Without patch:*\nERROR: partition constraint is violated by some row\n*With patch:*\nERROR: partition constraint \"part_1\" is violated by some row\n---------------------------------------------------------------------------\n15. src/backend/commands/tablecmds.c\n\n - 10141 ereport(ERROR,\n - 10142 (errcode(ERRCODE_CHECK_VIOLATION),\n - 10143 errmsg(\"check constraint \\\"%s\\\" is violated\n by some row\",\n - 10144 NameStr(constrForm->conname)),\n - 10145 errtableconstraint(rel,\n NameStr(constrForm->conname))));\n\nAdded relation name for this error. 
This can be verified by below example:\n*Ex:*\nCREATE TABLE test (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2)\nSTORED);\nINSERT INTO test (a) VALUES (10), (30);\nALTER TABLE test ADD CONSTRAINT chk CHECK (b < 50) NOT VALID;\nALTER TABLE test VALIDATE CONSTRAINT chk;\n\n*Without patch:*\nERROR: check constraint \"chk\" is violated by some row\n*With patch:*\nERROR: check constraint \"chk\" of relation \"test\" is violated by some row\n-----------------------------------------------------------------------------------------------------\n16. src/backend/commands/typecmds.c\n\n - 2396 ereport(ERROR,\n - 2397\n (errcode(ERRCODE_NOT_NULL_VIOLATION),\n - 2398 errmsg(\"column \\\"%s\\\" of table\n \\\"%s\\\" contains null values\",\n - 2399 NameStr(attr->attname),\n - 2400\n RelationGetRelationName(testrel)),\n - 2401 errtablecol(testrel, attnum)));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n17. src/backend/commands/typecmds.c\n\n - 2824 ereport(ERROR,\n - 2825 (errcode(ERRCODE_CHECK_VIOLATION),\n - 2826 errmsg(\"column \\\"%s\\\" of table\n \\\"%s\\\" contains values that violate the new constraint\",\n - 2827 NameStr(attr->attname),\n - 2828\n RelationGetRelationName(testrel)),\n - 2829 errtablecol(testrel, attnum)));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n18. src/backend/commands/typecmds.c\n\n - 2396 ereport(ERROR,\n - 2397\n (errcode(ERRCODE_NOT_NULL_VIOLATION),\n - 2398 errmsg(\"column \\\"%s\\\" of table\n \\\"%s\\\" contains null values\",\n - 2399 NameStr(attr->attname),\n - 2400\n RelationGetRelationName(testrel)),\n - 2401 errtablecol(testrel, attnum)));\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n19. 
src/backend/commands/typecmds.c\n\n - 2824 ereport(ERROR,\n - 2825 (errcode(ERRCODE_CHECK_VIOLATION),\n - 2826 errmsg(\"column \\\"%s\\\" of table\n \\\"%s\\\" contains values that violate the new constraint\",\n - 2827 NameStr(attr->attname),\n - 2828\n RelationGetRelationName(testrel)),\n - 2829 errtablecol(testrel, attnum)))\n\nRelation name is already appended in error message.\n-----------------------------------------------------------------------------------------------------\n>\n> > What does this patch?\n> > Before this patch, to display error of \"not-null constraint\", we were\nnot displaying relation name in some cases so attached patch is adding\nrelation name with the \"not-null constraint\" error in 2 places. I didn't\nchanged out files of test suite as we haven't finalized error messages.\n> >\n> > I verified Robert's point of for partition tables also. With the error,\nwe are adding relation name of \"child table\" and i think, it is correct.\n> >\n>\n> Can you show the same with the help of an example?\n\nOkay. Below is the example:\ncreate table parent (a int, b int not null) partition by range (a);\ncreate table ch1 partition of parent for values from ( 10 ) to (20);\npostgres=# insert into parent values (9);\nERROR: no partition of relation \"parent\" found for row\nDETAIL: Partition key of the failing row contains (a) = (9).\npostgres=# insert into parent values (11);\n*ERROR: null value in column \"b\" of relation \"ch1\" violates not-null\nconstraint*\nDETAIL: Failing row contains (11, null).\n\nAttaching a patch for review. In this patch, I added the relation name\nin error messages at a total of 6 places and verified the same with the\nabove-mentioned examples.\n\nPlease review the attached patch and let me know your feedback. I haven't\nmodified the .out files because we haven't finalized the patch.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 22 Jan 2020 13:25:55 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "Hi Mahendra,\n\nThanks for working on this.\n\nOn Wed, 22 Jan 2020 at 13:26, Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n>\n> On Tue, 21 Jan 2020 at 10:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jan 6, 2020 at 6:31 PM Mahendra Singh Thalor <mahi6run@gmail.com> wrote:\n> > >\n> > > On Sat, 6 Jul 2019 at 09:53, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jul 1, 2019 at 10:05 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > > > >\n> > > > > Do we have an actual patch here?\n> > > > >\n> > > >\n> > > > We have a patch, but it needs some more work like finding similar\n> > > > places and change all of them at the same time and then change the\n> > > > tests to adapt the same.\n> > > >\n> > >\n> > > Hi all,\n> > > Based on above discussion, I tried to find out all the places where we need to change error for \"not null constraint\". As Amit Kapila pointed out 1 place, I changed the error and adding modified patch.\n> > >\n> >\n> > It seems you have not used the two error codes\n> > (ERRCODE_NOT_NULL_VIOLATION and ERRCODE_CHECK_VIOLATION) pointed by me\n> > above.\n>\n> Thanks Amit and Beena for reviewing patch.\n>\n> Yes, you are correct. I searched using error messages not error code. That was my mistake. Now, I grepped using above error codes and found that these error codes are used in 19 places. Below is the code parts of 19 places.\n>\n> 1. src/backend/utils/adt/domains.c\n>\n> 146 if (isnull)\n> 147 ereport(ERROR,\n> 148 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> 149 errmsg(\"domain %s does not allow null values\",\n> 150 format_type_be(my_extra->domain_type)),\n> 151 errdatatype(my_extra->domain_type)));\n> 152 break;\n>\n> I think, above error is for domain, so there is no need to add anything in error message.\n> -----------------------------------------------------------------------------------------------------\n> 2. 
src/backend/utils/adt/domains.c\n>\n> 181 if (!ExecCheck(con->check_exprstate, econtext))\n> 182 ereport(ERROR,\n> 183 (errcode(ERRCODE_CHECK_VIOLATION),\n> 184 errmsg(\"value for domain %s violates check constraint \\\"%s\\\"\",\n> 185 format_type_be(my_extra->domain_type),\n> 186 con->name),\n> 187 errdomainconstraint(my_extra->domain_type,\n> 188 con->name)));\n>\n> I think, above error is for domain, so there is no need to add anything in error message.\n> -----------------------------------------------------------------------------------------------------\n> 3. src/backend/partitioning/partbounds.c\n>\n> 1330 if (part_rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n> 1331 ereport(WARNING,\n> 1332 (errcode(ERRCODE_CHECK_VIOLATION),\n> 1333 errmsg(\"skipped scanning foreign table \\\"%s\\\" which is a partition of default partition \\\"%s\\\"\",\n> 1334 RelationGetRelationName(part_rel),\n> 1335 RelationGetRelationName(default_rel))));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 4. src/backend/partitioning/partbounds.c\n>\n> 1363 if (!ExecCheck(partqualstate, econtext))\n> 1364 ereport(ERROR,\n> 1365 (errcode(ERRCODE_CHECK_VIOLATION),\n> 1366 errmsg(\"updated partition constraint for default partition \\\"%s\\\" would be violated by some row\",\n> 1367 RelationGetRelationName(default_rel))));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 5. 
src/backend/executor/execPartition.c\n>\n> 342 ereport(ERROR,\n> 343 (errcode(ERRCODE_CHECK_VIOLATION),\n> 344 errmsg(\"no partition of relation \\\"%s\\\" found for row\",\n> 345 RelationGetRelationName(rel)),\n> 346 val_desc ?\n> 347 errdetail(\"Partition key of the failing row contains %s.\",\n> 348 val_desc) : 0));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 6. src/backend/executor/execMain.c\n>\n> 1877 ereport(ERROR,\n> 1878 (errcode(ERRCODE_CHECK_VIOLATION),\n> 1879 errmsg(\"new row for relation \\\"%s\\\" violates partition constraint\",\n> 1880 RelationGetRelationName(resultRelInfo->ri_RelationDesc)),\n> 1881 val_desc ? errdetail(\"Failing row contains %s.\", val_desc) : 0));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 7. src/backend/executor/execMain.c\n>\n> 1958 ereport(ERROR,\n> 1959 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> 1960 errmsg(\"null value in column \\\"%s\\\" violates not-null constraint\",\n> 1961 NameStr(att->attname)),\n> 1962 val_desc ? errdetail(\"Failing row contains %s.\", val_desc) : 0,\n> 1963 errtablecol(orig_rel, attrChk)));\n>\n> Added relation name for this error. This can be verified by below example:\n> Ex:\n> CREATE TABLE test (a int PRIMARY KEY, b int GENERATED ALWAYS AS (nullif(a, 0)) STORED NOT NULL);\n> INSERT INTO test (a) VALUES (1);\n> INSERT INTO test (a) VALUES (0);\n>\n> Without patch:\n> ERROR: null value in column \"b\" violates not-null constraint\n> DETAIL: Failing row contains (0, null).\n> With patch:\n> ERROR: null value in column \"b\" of relation \"test\" violates not-null constraint\n> DETAIL: Failing row contains (0, null).\n> -----------------------------------------------------------------------------------------------------\n> 8. 
src/backend/executor/execMain.c\n>\n> 2006 ereport(ERROR,\n> 2007 (errcode(ERRCODE_CHECK_VIOLATION),\n> 2008 errmsg(\"new row for relation \\\"%s\\\" violates check constraint \\\"%s\\\"\",\n> 2009 RelationGetRelationName(orig_rel), failed),\n> 2010 val_desc ? errdetail(\"Failing row contains %s.\", val_desc) : 0,\n> 2011 errtableconstraint(orig_rel, failed)));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 9. src/backend/executor/execExprInterp.c\n>\n> 3600 ereport(ERROR,\n> 3601 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> 3602 errmsg(\"domain %s does not allow null values\",\n> 3603 format_type_be(op->d.domaincheck.resulttype)),\n> 3604 errdatatype(op->d.domaincheck.resulttype)));\n>\n> I think, above error is for domain, so there is no need to add anything in error message.\n> -----------------------------------------------------------------------------------------------------\n> 10. src/backend/executor/execExprInterp.c\n>\n> 3615 ereport(ERROR,\n> 3616 (errcode(ERRCODE_CHECK_VIOLATION),\n> 3617 errmsg(\"value for domain %s violates check constraint \\\"%s\\\"\",\n> 3618 format_type_be(op->d.domaincheck.resulttype),\n> 3619 op->d.domaincheck.constraintname),\n> 3620 errdomainconstraint(op->d.domaincheck.resulttype,\n> 3621 op->d.domaincheck.constraintname)));\n>\n> I think, above error is for domain, so there is no need to add anything in error message.\n> -----------------------------------------------------------------------------------------------------\n> 11. src/backend/commands/tablecmds.c\n>\n> 5273 ereport(ERROR,\n> 5274 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> 5275 errmsg(\"column \\\"%s\\\" contains null values\",\n> 5276 NameStr(attr->attname)),\n> 5277 errtablecol(oldrel, attn + 1)));\n>\n> Added relation name for this error. 
This can be verified by below example:\n> Ex:\n> CREATE TABLE test (a int);\n> INSERT INTO test VALUES (0), (1);\n> ALTER TABLE test ADD COLUMN b int NOT NULL, ALTER COLUMN b ADD GENERATED ALWAYS AS IDENTITY;\n>\n> Without patch:\n> ERROR: column \"b\" contains null values\n> With patch:\n> ERROR: column \"b\" of relation \"test\" contains null values\n> -----------------------------------------------------------------------------------------------------\n> 12. src/backend/commands/tablecmds.c\n>\n> 5288 if (!ExecCheck(con->qualstate, econtext))\n> 5289 ereport(ERROR,\n> 5290 (errcode(ERRCODE_CHECK_VIOLATION),\n> 5291 errmsg(\"check constraint \\\"%s\\\" is violated by some row\",\n> 5292 con->name),\n> 5293 errtableconstraint(oldrel, con->name)));\n>\n> Added relation name for this error. This can be verified by below example:\n> Ex:\n> CREATE TABLE test (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED);\n> INSERT INTO test (a) VALUES (10), (30);\n> ALTER TABLE test ADD CHECK (b < 50);\n>\n> Without patch:\n> ERROR: check constraint \"test_b_check\" is violated by some row\n> With patch:\n> ERROR: check constraint \"test_b_check\" of relation \"test\" is violated by some row\n> -----------------------------------------------------------------------------------------------------\n> 13. src/backend/commands/tablecmds.c\n>\n> 5306 if (tab->validate_default)\n> 5307 ereport(ERROR,\n> 5308 (errcode(ERRCODE_CHECK_VIOLATION),\n> 5309 errmsg(\"updated partition constraint for default partition would be violated by some row\")));\n>\n> Added relation name for this error. 
This can be verified by below example:\n> Ex:\n> CREATE TABLE list_parted ( a int, b char ) PARTITION BY LIST (a);\n> CREATE TABLE list_parted_def PARTITION OF list_parted DEFAULT;\n> INSERT INTO list_parted_def VALUES (11, 'z');\n> CREATE TABLE part_1 (LIKE list_parted);\n> ALTER TABLE list_parted ATTACH PARTITION part_1 FOR VALUES IN (11);\n>\n> Without patch:\n> ERROR: updated partition constraint for default partition would be violated by some row\n> With patch:\n> ERROR: updated partition constraint for default partition \"list_parted_def\" would be violated by some row\n> -----------------------------------------------------------------------------------------------------\n> 14. src/backend/commands/tablecmds.c\n>\n> 5310 else\n> 5311 ereport(ERROR,\n> 5312 (errcode(ERRCODE_CHECK_VIOLATION),\n> 5313 errmsg(\"partition constraint is violated by some row\")));\n>\n> Added relation name for this error. This can be verified by below example:\n> Ex:\n> CREATE TABLE list_parted (a int,b char)PARTITION BY LIST (a);\n> CREATE TABLE part_1 (LIKE list_parted);\n> INSERT INTO part_1 VALUES (3, 'a');\n> ALTER TABLE list_parted ATTACH PARTITION part_1 FOR VALUES IN (2);\n>\n> Without patch:\n> ERROR: partition constraint is violated by some row\n> With patch:\n> ERROR: partition constraint \"part_1\" is violated by some row\n\nHere it seems as if \"part_1\" is the constraint name. It would be\nbetter to change it to:\n\npartition constraint is violated by some row in relation \"part_1\" OR\npartition constraint of relation \"part_1\" is violated by some row\n\n\n> ---------------------------------------------------------------------------\n> 15. src/backend/commands/tablecmds.c\n>\n> 10141 ereport(ERROR,\n> 10142 (errcode(ERRCODE_CHECK_VIOLATION),\n> 10143 errmsg(\"check constraint \\\"%s\\\" is violated\n by some row\",\n> 10144 NameStr(constrForm->conname)),\n> 10145 errtableconstraint(rel, NameStr(constrForm->conname))));\n>\n> Added relation name for this error. 
This can be verified by below example:\n> Ex:\n> CREATE TABLE test (a int PRIMARY KEY, b int GENERATED ALWAYS AS (a * 2) STORED);\n> INSERT INTO test (a) VALUES (10), (30);\n> ALTER TABLE test ADD CONSTRAINT chk CHECK (b < 50) NOT VALID;\n> ALTER TABLE test VALIDATE CONSTRAINT chk;\n>\n> Without patch:\n> ERROR: check constraint \"chk\" is violated by some row\n> With patch:\n> ERROR: check constraint \"chk\" of relation \"test\" is violated by some row\n> -----------------------------------------------------------------------------------------------------\n> 16. src/backend/commands/typecmds.c\n>\n> 2396 ereport(ERROR,\n> 2397 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> 2398 errmsg(\"column \\\"%s\\\" of table \\\"%s\\\" contains null values\",\n> 2399 NameStr(attr->attname),\n> 2400 RelationGetRelationName(testrel)),\n> 2401 errtablecol(testrel, attnum)));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 17. src/backend/commands/typecmds.c\n>\n> 2824 ereport(ERROR,\n> 2825 (errcode(ERRCODE_CHECK_VIOLATION),\n> 2826 errmsg(\"column \\\"%s\\\" of table \\\"%s\\\" contains values that violate the new constraint\",\n> 2827 NameStr(attr->attname),\n> 2828 RelationGetRelationName(testrel)),\n> 2829 errtablecol(testrel, attnum)));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 18. src/backend/commands/typecmds.c\n>\n> 2396 ereport(ERROR,\n> 2397 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n> 2398 errmsg(\"column \\\"%s\\\" of table \\\"%s\\\" contains null values\",\n> 2399 NameStr(attr->attname),\n> 2400 RelationGetRelationName(testrel)),\n> 2401 errtablecol(testrel, attnum)));\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> 19. 
src/backend/commands/typecmds.c\n>\n> 2824 ereport(ERROR,\n> 2825 (errcode(ERRCODE_CHECK_VIOLATION),\n> 2826 errmsg(\"column \\\"%s\\\" of table \\\"%s\\\" contains values that violate the new constraint\",\n> 2827 NameStr(attr->attname),\n> 2828 RelationGetRelationName(testrel)),\n> 2829 errtablecol(testrel, attnum)))\n>\n> Relation name is already appended in error messgae.\n> -----------------------------------------------------------------------------------------------------\n> >\n> > > What does this patch?\n> > > Before this patch, to display error of \"not-null constraint\", we were not displaying relation name in some cases so attached patch is adding relation name with the \"not-null constraint\" error in 2 places. I didn't changed out files of test suite as we haven't finalized error messages.\n> > >\n> > > I verified Robert's point of for partition tables also. With the error, we are adding relation name of \"child table\" and i think, it is correct.\n> > >\n> >\n> > Can you show the same with the help of an example?\n>\n> Okay. Below is the example:\n> create table parent (a int, b int not null) partition by range (a);\n> create table ch1 partition of parent for values from ( 10 ) to (20);\n> postgres=# insert into parent values (9);\n> ERROR: no partition of relation \"parent\" found for row\n> DETAIL: Partition key of the failing row contains (a) = (9).\n> postgres=# insert into parent values (11);\n> ERROR: null value in column \"b\" of relation \"ch1\" violates not-null constraint\n> DETAIL: Failing row contains (11, null).\n>\n> Attaching a patch for review. In this patch, total 6 places I added relation name in error message and verifyed same with above mentioned examples.\n>\n> Please review attahced patch and let me know your feedback. I haven't modifed .out files because we haven't finalied patch.\n>\n\n\n\n\n-- \n\nM Beena Emerson\n\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jan 2020 15:23:15 +0530",
"msg_from": "MBeena Emerson <mbeena.emerson@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "I wonder if we shouldn't be using errtablecol() here instead of (in\naddition to?) patching the errmsg() to include the table name.\n\nDiscussion: If we really like having the table names in errtable(), then\nwe should have psql display it by default, and other tools will follow\nsuit; in that case we should remove the table name from error messages,\nor at least not add it to even more messages.\n\nIf we instead think that errtable() is there just for programmatically\nknowing the affected table, then we should add the table name to all\nerrmsg() where relevant, as in the patch under discussion.\n\nWhat do others think?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:32:49 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I wonder if we shouldn't be using errtablecol() here instead of (in\n> addition to?) patching the errmsg() to include the table name.\n\n> Discussion: If we really like having the table names in errtable(), then\n> we should have psql display it by default, and other tools will follow\n> suit; in that case we should remove the table name from error messages,\n> or at least not add it to even more messages.\n\n> If we instead think that errtable() is there just for programmatically\n> knowing the affected table, then we should add the table name to all\n> errmsg() where relevant, as in the patch under discussion.\n\n> What do others think?\n\nI believe that the intended use of errtable() and friends is so that\napplications don't have to parse those names out of a human-friendly\nmessage. We should add calls to them in cases where (a) an application\ncan tell from the SQLSTATE that some particular table is involved\nand (b) it's likely that the app would wish to know which table that is.\nI don't feel a need to sprinkle every single ereport() in the backend\nwith errtable(), just ones where there's a plausible use-case for the\nextra cycles that will be spent.\n\nOn the other side of the coin, whether we use errtable() is not directly\na factor in deciding what the human-friendly messages should say.\nI do find it hard to envision a case where we'd want to use errtable()\nand *not* put the table name in the error message, just because if\napplications need to know something then humans probably do too. But\nsaying that we can make the messages omit info because it's available\nfrom these program-friendly fields seems 100% wrong to me, even if one\nturns a blind eye to the fact that existing client code likely won't\nshow those fields to users.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jan 2020 10:18:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 3:23 PM MBeena Emerson <mbeena.emerson@gmail.com> wrote:\n>\n> > -----------------------------------------------------------------------------------------------------\n> > 14. src/backend/commands/tablecmds.c\n> >\n> > 5310 else\n> > 5311 ereport(ERROR,\n> > 5312 (errcode(ERRCODE_CHECK_VIOLATION),\n> > 5313 errmsg(\"partition constraint is violated by some row\")));\n> >\n> > Added relation name for this error. This can be verified by below example:\n> > Ex:\n> > CREATE TABLE list_parted (a int,b char)PARTITION BY LIST (a);\n> > CREATE TABLE part_1 (LIKE list_parted);\n> > INSERT INTO part_1 VALUES (3, 'a');\n> > ALTER TABLE list_parted ATTACH PARTITION part_1 FOR VALUES IN (2);\n> >\n> > Without patch:\n> > ERROR: partition constraint is violated by some row\n> > With patch:\n> > ERROR: partition constraint \"part_1\" is violated by some row\n>\n> Here it seems as if \"part_1\" is the constraint name.\n>\n\nI agree.\n\n> It would be\n> better to change it to:\n>\n> partition constraint is violated by some row in relation \"part_1\" OR\n> partition constraint of relation \"part_1\" is violated by some row\n>\n\n+1 for the second option suggested by Beena.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Jan 2020 10:01:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 8:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I wonder if we shouldn't be using errtablecol() here instead of (in\n> > addition to?) patching the errmsg() to include the table name.\n>\n> > Discussion: If we really like having the table names in errtable(), then\n> > we should have psql display it by default, and other tools will follow\n> > suit; in that case we should remove the table name from error messages,\n> > or at least not add it to even more messages.\n>\n> > If we instead think that errtable() is there just for programmatically\n> > knowing the affected table, then we should add the table name to all\n> > errmsg() where relevant, as in the patch under discussion.\n>\n> > What do others think?\n>\n> I believe that the intended use of errtable() and friends is so that\n> applications don't have to parse those names out of a human-friendly\n> message. We should add calls to them in cases where (a) an application\n> can tell from the SQLSTATE that some particular table is involved\n> and (b) it's likely that the app would wish to know which table that is.\n> I don't feel a need to sprinkle every single ereport() in the backend\n> with errtable(), just ones where there's a plausible use-case for the\n> extra cycles that will be spent.\n>\n> On the other side of the coin, whether we use errtable() is not directly\n> a factor in deciding what the human-friendly messages should say.\n> I do find it hard to envision a case where we'd want to use errtable()\n> and *not* put the table name in the error message, just because if\n> applications need to know something then humans probably do too.\n>\n\nmakes sense.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Jan 2020 10:20:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Thu, 23 Jan 2020 at 10:20, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 22, 2020 at 8:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > I wonder if we shouldn't be using errtablecol() here instead of (in\n> > > addition to?) patching the errmsg() to include the table name.\n> >\n> > > Discussion: If we really like having the table names in errtable(), then\n> > > we should have psql display it by default, and other tools will follow\n> > > suit; in that case we should remove the table name from error messages,\n> > > or at least not add it to even more messages.\n> >\n> > > If we instead think that errtable() is there just for programmatically\n> > > knowing the affected table, then we should add the table name to all\n> > > errmsg() where relevant, as in the patch under discussion.\n> >\n> > > What do others think?\n> >\n> > I believe that the intended use of errtable() and friends is so that\n> > applications don't have to parse those names out of a human-friendly\n> > message. We should add calls to them in cases where (a) an application\n> > can tell from the SQLSTATE that some particular table is involved\n> > and (b) it's likely that the app would wish to know which table that is.\n> > I don't feel a need to sprinkle every single ereport() in the backend\n> > with errtable(), just ones where there's a plausible use-case for the\n> > extra cycles that will be spent.\n> >\n> > On the other side of the coin, whether we use errtable() is not directly\n> > a factor in deciding what the human-friendly messages should say.\n> > I do find it hard to envision a case where we'd want to use errtable()\n> > and *not* put the table name in the error message, just because if\n> > applications need to know something then humans probably do too.\n> >\n>\n> makes sense.\n>\n\nThanks all for reviewing and giving comments.\n\n> > > Added relation name for this error. 
This can be verified by below example:\n> > > Ex:\n> > > CREATE TABLE list_parted (a int,b char)PARTITION BY LIST (a);\n> > > CREATE TABLE part_1 (LIKE list_parted);\n> > > INSERT INTO part_1 VALUES (3, 'a');\n> > > ALTER TABLE list_parted ATTACH PARTITION part_1 FOR VALUES IN (2);\n> > >\n> > > Without patch:\n> > > ERROR: partition constraint is violated by some row\n> > > With patch:\n> > > ERROR: partition constraint \"part_1\" is violated by some row\n> >\n> > Here it seems as if \"part_1\" is the constraint name.\n> >\n>\n> I agree.\n>\n> > It would be\n> > better to change it to:\n> >\n> > partition constraint is violated by some row in relation \"part_1\" OR\n> > partition constraint of relation \"part_1\" is violated by some row\n> >\n>\n> +1 for the second option suggested by Beena.\n\nI fixed the above comment and updated expected .out files. Attaching\nupdated patches.\n\nTo make review simple, I made 3 patches as:\n\nv4_0001_rationalize_constraint_error_messages.patch:\nThis patch has .c file changes. Added relation name in 6 error\nmessages for check constraint.\n\nv4_0002_updated-regress-expected-.out-files.patch:\nThis patch has changes of expected .out files for regress test suite.\n\nv4_0003_updated-constraints.source-file.patch:\nThis patch has changes of .source file for constraints.sql regress test.\n\nPlease review the attached patches and let me know your comments.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Jan 2020 17:51:00 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 5:51 PM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n>\n> I fixed above comment and updated expected .out files. Attaching\n> updated patches.\n>\n\nLGTM. I have combined them into the single patch. What do we think\nabout backpatching this? As there are quite a few changes in the\nregression tests, so it might be a good idea to keep the back branches\ncode in sync, but, OTOH, this is more of a change related to providing\nmore information, so we don't have any pressing need to backpatch\nthis. What do others think?\n\nOne thing to note is that there are places in code where we use\n'table' instead of 'relation' for the same thing in the error messages\nas seen in the below places (the first one uses 'relation', the second\none uses 'table') and the patch is using 'relation' which I think is\nfine.\n\n1. src/backend/executor/execPartition.c\n342 ereport(ERROR,\n 343 (errcode(ERRCODE_CHECK_VIOLATION),\n 344 errmsg(\"no partition of relation \\\"%s\\\"\nfound for row\",\n 345 RelationGetRelationName(rel)),\n 346 val_desc ?\n 347 errdetail(\"Partition key of the failing row\ncontains %s.\",\n 348 val_desc) : 0));\n\n\n2. src/backend/commands/typecmds.c\n2396 ereport(ERROR,\n2397 (errcode(ERRCODE_NOT_NULL_VIOLATION),\n2398 errmsg(\"column \\\"%s\\\" of table\n\\\"%s\\\" contains null values\",\n2399 NameStr(attr->attname),\n2400 RelationGetRelationName(testrel)),\n2401 errtablecol(testrel, attnum)));\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 24 Jan 2020 10:51:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> LGTM. I have combined them into the single patch. What do we think\n> about backpatching this?\n\nNo objection to the patch for HEAD, but it does not seem like\nback-patch material: it is not fixing a bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Jan 2020 11:07:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > LGTM. I have combined them into the single patch. What do we think\n> > about backpatching this?\n>\n> No objection to the patch for HEAD, but it does not seem like\n> back-patch material: it is not fixing a bug.\n>\n\nOkay, I will commit this early next week (by Tuesday).\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 Jan 2020 10:16:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 12:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> One thing to note is that there are places in code where we use\n> 'table' instead of 'relation' for the same thing in the error messages\n> as seen in the below places (the first one uses 'relation', the second\n> one uses 'table') and the patch is using 'relation' which I think is\n> fine.\n\nWe often use \"relation\" as a sort of a weasel-word when we don't know\nthe relkind; i.e. when we're complaining about something that might be\na view or index or foreign table or whatever. If we say \"table,\" we\nneed to know that it is, precisely, a table.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Jan 2020 10:54:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Sat, Jan 25, 2020 at 10:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 24, 2020 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > LGTM. I have combined them into the single patch. What do we think\n> > > about backpatching this?\n> >\n> > No objection to the patch for HEAD, but it does not seem like\n> > back-patch material: it is not fixing a bug.\n> >\n>\n> Okay, I will commit this early next week (by Tuesday).\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Jan 2020 18:13:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
},
{
"msg_contents": "On Tue, 28 Jan 2020 at 18:13, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jan 25, 2020 at 10:16 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 24, 2020 at 9:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > LGTM. I have combined them into the single patch. What do we think\n> > > > about backpatching this?\n> > >\n> > > No objection to the patch for HEAD, but it does not seem like\n> > > back-patch material: it is not fixing a bug.\n> > >\n> >\n> > Okay, I will commit this early next week (by Tuesday).\n> >\n>\n> Pushed.\n>\n\nThank you for committing it!\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:29:23 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Error message inconsistency"
}
] |
[
{
"msg_contents": "Hi all,\n\nBefore we introduced the pg_lsn datatype, the LSN was expressed as a TEXT type,\nso a simple query using MIN/MAX functions worked as expected. Queries like:\n\nSELECT min(restart_lsn) FROM pg_replication_slots;\nSELECT min(sent_lsn) FROM pg_stat_replication;\n\nSo the attached patch aims to introduce MIN/MAX aggregate functions for the pg_lsn\ndatatype.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Fri, 22 Mar 2019 16:49:57 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 04:49:57PM -0300, Fabrízio de Royes Mello wrote:\n> So attached patch aims to introduce MIN/MAX aggregate functions to pg_lsn\n\nFine by me. This looks helpful for monitoring.\n\nPlease make sure to register it to the next commit fest:\nhttps://commitfest.postgresql.org/23/\nIt is too late for Postgres 12 unfortunately.\n--\nMichael",
"msg_date": "Sat, 23 Mar 2019 10:27:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 10:27 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n>\n> On Fri, Mar 22, 2019 at 04:49:57PM -0300, Fabrízio de Royes Mello wrote:\n> > So attached patch aims to introduce MIN/MAX aggregate functions to\npg_lsn\n>\n> Fine by me. This looks helpful for monitoring.\n>\n> Please make sure to register it to the next commit fest:\n> https://commitfest.postgresql.org/23/\n> It is too late for Postgres 12 unfortunately.\n\nSure, added:\nhttps://commitfest.postgresql.org/23/2070/\n\nRegards,\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Sat, 23 Mar 2019 08:39:19 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "Hi,\nHere are some review comments\n- <entry>any numeric, string, date/time, network, or enum type,\n+ <entry>any numeric, string, date/time, network, lsn, or enum type,\n or arrays of these types</entry>\n <entry>same as argument type</entry>\nIn the documentation it is referred to as the pg_lsn type rather than lsn alone\n+Datum\n+pg_lsn_larger(PG_FUNCTION_ARGS)\n+{\n+ XLogRecPtr lsn1 = PG_GETARG_LSN(0);\n+ XLogRecPtr lsn2 = PG_GETARG_LSN(1);\n+ XLogRecPtr result;\n+\n+ result = ((lsn1 > lsn2) ? lsn1 : lsn2);\n+\n+ PG_RETURN_LSN(result);\n+}\n\nRather than using an additional variable, it is more readable and effective to\nreturn the argument\nitself, as we do in the date data type and other places\nregards\nSurafel",
"msg_date": "Tue, 2 Jul 2019 13:22:32 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 7:22 AM Surafel Temesgen <surafel3000@gmail.com>\nwrote:\n>\n> Hi,\n> Here are same review comment\n\nThanks for your review.\n\n> - <entry>any numeric, string, date/time, network, or enum type,\n> + <entry>any numeric, string, date/time, network, lsn, or enum type,\n> or arrays of these types</entry>\n> <entry>same as argument type</entry>\n> In the documentation it refereed as pg_lsn type rather than lsn alone\n\nFixed.\n\n> +Datum\n> +pg_lsn_larger(PG_FUNCTION_ARGS)\n> +{\n> + XLogRecPtr lsn1 = PG_GETARG_LSN(0);\n> + XLogRecPtr lsn2 = PG_GETARG_LSN(1);\n> + XLogRecPtr result;\n> +\n> + result = ((lsn1 > lsn2) ? lsn1 : lsn2);\n> +\n> + PG_RETURN_LSN(result);\n> +}\n>\n> rather than using additional variable its more readable and effective to\nreturn the argument\n> itself like we do in date data type and other place\n>\n\nFixed.\n\nNew version attached.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Tue, 2 Jul 2019 11:31:49 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Tue, Jul 02, 2019 at 11:31:49AM -0300, Fabrízio de Royes Mello wrote:\n> New version attached.\n\nThis looks in pretty good shape to me, and no objections from me to\nget those functions as the min() flavor is useful for monitoring WAL\nretention for complex deployments.\n\nDo you have a particular use-case in mind for max() one? I can think\nof at least one case: monitoring the flush LSNs of a set of standbys\nto find out how much has been replayed at most.\n--\nMichael",
"msg_date": "Thu, 4 Jul 2019 17:17:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 4:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Do you have a particular use-case in mind for max() one? I can think\n> of at least one case: monitoring the flush LSNs of a set of standbys\n> to find out how much has been replayed at most.\n\nIt would be pretty silly to have one and not the other, regardless of\nwhether we can think of an immediate use case.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Jul 2019 09:57:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 5:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 02, 2019 at 11:31:49AM -0300, Fabrízio de Royes Mello wrote:\n> > New version attached.\n>\n> This looks in pretty good shape to me, and no objections from me to\n> get those functions as the min() flavor is useful for monitoring WAL\n> retention for complex deployments.\n>\n> Do you have a particular use-case in mind for max() one? I can think\n> of at least one case: monitoring the flush LSNs of a set of standbys\n> to find out how much has been replayed at most.\n>\n\nI use min/max to measure the amount of generated WAL (diff) during some\nperiods based on wal position stored in some monitoring system.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 4 Jul 2019 13:48:06 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 10:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jul 4, 2019 at 4:17 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n> > Do you have a particular use-case in mind for max() one? I can think\n> > of at least one case: monitoring the flush LSNs of a set of standbys\n> > to find out how much has been replayed at most.\n>\n> It would be pretty silly to have one and not the other, regardless of\n> whether we can think of an immediate use case.\n>\n\n+1\n\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 4 Jul 2019 13:48:24 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Thu, Jul 04, 2019 at 01:48:24PM -0300, Fabrízio de Royes Mello wrote:\n> On Thu, Jul 4, 2019 at 10:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> It would be pretty silly to have one and not the other, regardless of\n>> whether we can think of an immediate use case.\n> \n> +1\n\nOK, applied with a catalog version bump. This is cool to have.\n--\nMichael",
"msg_date": "Fri, 5 Jul 2019 12:21:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 12:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 04, 2019 at 01:48:24PM -0300, Fabrízio de Royes Mello wrote:\n> > On Thu, Jul 4, 2019 at 10:57 AM Robert Haas <robertmhaas@gmail.com>\nwrote:\n> >> It would be pretty silly to have one and not the other, regardless of\n> >> whether we can think of an immediate use case.\n> >\n> > +1\n>\n> OK, applied with a catalog version bump. This is cool to have.\n>\n\nAwesome... thanks.\n\nAtt,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Fri, 5 Jul 2019 09:52:15 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Introduce MIN/MAX aggregate functions to pg_lsn"
}
] |
[
{
"msg_contents": "Hi,\n\nFor the umpteenth time I was annoyed by the names of labels in\nheapam.c. It's really not useful to see a 'goto l1;' etc.\n\nHow about renaming l1 to retry_delete_locked, l2 to retry_update_locked,\nl3 to retry_lock_tuple_locked etc? Especially with the subsidiary\nfunctions for updates and locking, it's not always clear from context\nwhere the goto jumps to.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Mar 2019 13:58:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "rename labels in heapam.c?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> For the umpteenth time I was annoyed by the names of labels in\n> heapam.c. It's really not useful to see a 'goto l1;' etc.\n\nYeah, those label names are uninformative as can be.\n\n> How about renaming l1 to retry_delete_locked, l2 to retry_update_locked,\n> l3 to retry_lock_tuple_locked etc? Especially with the subsidiary\n> functions for updates and locking, it's not always clear from context\n> where the goto jumps to.\n\nIs it practical to get rid of the goto's altogether? If not,\nrenaming would be an improvement.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Mar 2019 17:09:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rename labels in heapam.c?"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-22 17:09:23 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > For the umpteenth time I was annoyed by the names of labels in\n> > heapam.c. It's really not useful to see a 'goto l1;' etc.\n> \n> Yeah, those label names are uninformative as can be.\n> \n> > How about renaming l1 to retry_delete_locked, l2 to retry_update_locked,\n> > l3 to retry_lock_tuple_locked etc? Especially with the subsidiary\n> > functions for updates and locking, it's not always clear from context\n> > where the goto jumps to.\n> \n> Is it practical to get rid of the goto's altogether? If not,\n> renaming would be an improvement.\n\nI don't think it'd be easy. We could probably split\nheap_{insert,delete,update} into sub-functions and then have the\ntoplevel function just loop over invocations of those, but that seems\nlike a pretty significant refactoring of the code. Since renaming the\nlabels isn't going to make that harder, I'm inclined to do that, rather\nthan wait for a refactoring that, while a good idea, isn't likely to\nhappen that soon.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Fri, 22 Mar 2019 14:12:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: rename labels in heapam.c?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-22 17:09:23 -0400, Tom Lane wrote:\n>> Is it practical to get rid of the goto's altogether? If not,\n>> renaming would be an improvement.\n\n> I don't think it'd be easy.\n\nFair enough. I just wanted to be sure we considered getting rid of\nthe pig before we put lipstick on it. I concur that it's not worth\nmajor refactoring to do so.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 22 Mar 2019 17:20:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rename labels in heapam.c?"
}
] |
[
{
"msg_contents": "Yesterday while doing some tests, I noticed that the following doesn't work\nproperly:\n\ncreate role test_role with login;\ncreate table ref(a int primary key);\ngrant references on ref to test_role;\nset role test_role;\ncreate table t1(a int, b int) partition by list (a);\nalter table t1 add constraint t1_b_key foreign key (b) references ref(a);\n\nIn postgres 11.2, this results in the following error:\n\nERROR: could not open file \"base/12537/16390\": No such file or directory\n\nand in the master branch it simply crashes.\n\nIt seems that validateForeignKeyConstraint() in tablecmds.c cannot\nuse RI_Initial_Check() to check the foreign key constraint, so it tries to\nopen the relation and scan it and verify each row by a call\nto RI_FKey_check_ins(). Opening and scanning the relation fails, because it\nis a partitioned table and has no storage.\n\nThe attached patch fixes the problem by skipping foreign key constraint\ncheck for relations with no storage. In partitioned table case, it will be\nverified by scanning the partitions, so we are safe to skip the parent\ntable.\n\n-- Hadi",
"msg_date": "Fri, 22 Mar 2019 16:00:42 -0700",
"msg_from": "Hadi Moshayedi <hadi@moshayedi.net>",
"msg_from_op": true,
"msg_subject": "Fix foreign key constraint check for partitioned tables"
},
{
"msg_contents": "On Sat, 23 Mar 2019 at 12:01, Hadi Moshayedi <hadi@moshayedi.net> wrote:\n> Yesterday while doing some tests, I noticed that the following doesn't work properly:\n>\n> create role test_role with login;\n> create table ref(a int primary key);\n> grant references on ref to test_role;\n> set role test_role;\n> create table t1(a int, b int) partition by list (a);\n> alter table t1 add constraint t1_b_key foreign key (b) references ref(a);\n>\n> In postgres 11.2, this results in the following error:\n>\n> ERROR: could not open file \"base/12537/16390\": No such file or directory\n>\n> and in the master branch it simply crashes.\n>\n> It seems that validateForeignKeyConstraint() in tablecmds.c cannot use RI_Initial_Check() to check the foreign key constraint, so it tries to open the relation and scan it and verify each row by a call to RI_FKey_check_ins(). Opening and scanning the relation fails, because it is a partitioned table and has no storage.\n>\n> The attached patch fixes the problem by skipping foreign key constraint check for relations with no storage. In partitioned table case, it will be verified by scanning the partitions, so we are safe to skip the parent table.\n\nHi Hadi,\n\nI reproduced the problem and tested your fix. It looks simple and\ncorrect to me.\n\nI was a bit curious about the need for \"set role\" in the reproduction,\nbut I see that it's because RI_Initial_Check does some checks to see\nif a simple SELECT can be used, and one of the checks is for basic\ntable permissions.\n\nI wonder if the macro RELKIND_HAS_STORAGE should be used instead of\nchecking for each relkind? This would apply to the check on line 4405\ntoo.\n\nEdmund\n\n",
"msg_date": "Sun, 24 Mar 2019 16:26:47 +1300",
"msg_from": "Edmund Horner <ejrh00@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix foreign key constraint check for partitioned tables"
},
{
"msg_contents": "Hello Edmund,\n\nThanks for the review.\n\n\n> I was a bit curious about the need for \"set role\" in the reproduction,\n> but I see that it's because RI_Initial_Check does some checks to see\n> if a simple SELECT can be used, and one of the checks is for basic\n> table permissions.\n>\n\nI think to reproduce this the current user shouldn't be able to SELECT on\nboth tables, so RI_Initial_Check fails. Setting the owner of one of the\ntables isn't always enough as the current user can be a super user.\n\n\n> I wonder if the macro RELKIND_HAS_STORAGE should be used instead of\n> checking for each relkind? This would apply to the check on line 4405\n> too.\n>\n\ndone.\n\nThis patch also changed the output of some of tests, i.e. previously\nforeign key constraint failures errored on the partitioned table itself,\nbut now it shows the child table's name in the error message. I hope it is\nok.\n\nI also added a regression test which would fail without this patch.\n\nThanks,\nHadi",
"msg_date": "Mon, 25 Mar 2019 11:57:48 -0700",
"msg_from": "Hadi Moshayedi <hadi@moshayedi.net>",
"msg_from_op": true,
"msg_subject": "Re: Fix foreign key constraint check for partitioned tables"
},
{
"msg_contents": "Posted this at the commitfest tool:\nhttps://commitfest.postgresql.org/23/2075/\n\n>\n\nPosted this at the commitfest tool: https://commitfest.postgresql.org/23/2075/",
"msg_date": "Tue, 26 Mar 2019 23:22:41 -0700",
"msg_from": "Hadi Moshayedi <hadi@moshayedi.net>",
"msg_from_op": true,
"msg_subject": "Re: Fix foreign key constraint check for partitioned tables"
},
{
"msg_contents": "Hadi Moshayedi <hadi@moshayedi.net> writes:\n> [ fix-foreign-key-check.patch ]\n\nPushed with some adjustments, as discussed over at\nhttps://postgr.es/m/19030.1554574075@sss.pgh.pa.us\n\n> This patch also changed the output of some of tests, i.e. previously\n> foreign key constraint failures errored on the partitioned table itself,\n> but now it shows the child table's name in the error message. I hope it is\n> ok.\n\nYeah, I think that's OK. Interestingly, no such changes appear in\nHEAD's version of the regression test --- probably Alvaro's earlier\nchanges had the same effect.\n\n> I also added a regression test which would fail without this patch.\n\nThis needed a fair amount of work. You shouldn't have summarily\ndropped a table that the test script specifically says it meant\nto leave around, and we have a convention that role names created by\nthe regression test scripts always should begin with \"regress_\",\nand you didn't clean up the role at the end (which would lead to\nfailures in repeated installcheck runs).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Apr 2019 15:27:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix foreign key constraint check for partitioned tables"
},
{
"msg_contents": "On 2019-Apr-06, Tom Lane wrote:\n\n> Hadi Moshayedi <hadi@moshayedi.net> writes:\n\n> > This patch also changed the output of some of tests, i.e. previously\n> > foreign key constraint failures errored on the partitioned table itself,\n> > but now it shows the child table's name in the error message. I hope it is\n> > ok.\n> \n> Yeah, I think that's OK. Interestingly, no such changes appear in\n> HEAD's version of the regression test --- probably Alvaro's earlier\n> changes had the same effect.\n\nYeah, they did.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 7 Apr 2019 08:16:18 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix foreign key constraint check for partitioned tables"
}
] |
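Aside: the logic of the fix discussed above can be sketched abstractly. The real patch makes `validateForeignKeyConstraint()` skip the scan for relations without storage, and Edmund's review suggests the `RELKIND_HAS_STORAGE` macro for the check. The Python below is a hedged simulation of that decision, not PostgreSQL code; the relkind letters mirror PostgreSQL's (`'r'` ordinary table, `'p'` partitioned table), but `relations_to_scan` is an invented helper.

```python
# Hypothetical sketch of the fix's decision: a FK validation pass scans a
# relation only if its relkind has storage.  Partitioned parents ('p') do
# not, so only their partitions are scanned.

RELKINDS_WITH_STORAGE = {"r", "i", "t", "m", "S"}  # table, index, toast, matview, sequence

def relkind_has_storage(relkind):
    # Analogue of the RELKIND_HAS_STORAGE macro suggested in review.
    return relkind in RELKINDS_WITH_STORAGE

def relations_to_scan(parent_relkind, partitions):
    """Return which relations the validation pass would actually scan."""
    scans = []
    if relkind_has_storage(parent_relkind):
        scans.append("parent")
    scans.extend(partitions)  # partitions are validated individually
    return scans
```

For the reported case (`t1` is partitioned, relkind `'p'`), the parent is skipped and no attempt is made to open its nonexistent storage file, which is what previously produced the "could not open file" error.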
[
{
"msg_contents": "When compiling on an AWS 64 bit Arm machine, I get this compiler warning:\n\nimath.c: In function 's_ksqr':\nimath.c:2590:6: warning: variable 'carry' set but not used\n[-Wunused-but-set-variable]\n carry;\n ^~~~~\n\nWith this version():\n\nPostgreSQL 12devel on aarch64-unknown-linux-gnu, compiled by gcc\n(Ubuntu/Linaro 7.3.0-27ubuntu1~18.04) 7.3.0, 64-bit\n\nThe attached patch adds PG_USED_FOR_ASSERTS_ONLY to silence it. Perhaps\nthere is a better way, given that we want to change imath.c as little as\npossible from its upstream?\n\nCheers,\n\nJeff",
"msg_date": "Fri, 22 Mar 2019 20:20:53 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "compiler warning in pgcrypto imath.c"
},
{
"msg_contents": "On Fri, Mar 22, 2019 at 08:20:53PM -0400, Jeff Janes wrote:\n> PostgreSQL 12devel on aarch64-unknown-linux-gnu, compiled by gcc\n> (Ubuntu/Linaro 7.3.0-27ubuntu1~18.04) 7.3.0, 64-bit\n\nAdding Noah in CC as he has done the update of imath lately.\n\n> The attached patch adds PG_USED_FOR_ASSERTS_ONLY to silence it. Perhaps\n> there is a better way, given that we want to change imath.c as little as\n> possible from its upstream?\n\nMaybe others have better ideas, but marking the variable with\nPG_USED_FOR_ASSERTS_ONLY as you propose seems like the least invasive\nmethod of all.\n--\nMichael",
"msg_date": "Sat, 23 Mar 2019 10:20:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: compiler warning in pgcrypto imath.c"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 10:20:16AM +0900, Michael Paquier wrote:\n> On Fri, Mar 22, 2019 at 08:20:53PM -0400, Jeff Janes wrote:\n> > PostgreSQL 12devel on aarch64-unknown-linux-gnu, compiled by gcc\n> > (Ubuntu/Linaro 7.3.0-27ubuntu1~18.04) 7.3.0, 64-bit\n> \n> Adding Noah in CC as he has done the update of imath lately.\n> \n> > The attached patch adds PG_USED_FOR_ASSERTS_ONLY to silence it. Perhaps\n> > there is a better way, given that we want to change imath.c as little as\n> > possible from its upstream?\n> \n> Maybe others have better ideas, but marking the variable with\n> PG_USED_FOR_ASSERTS_ONLY as you propose seems like the least invasive\n> method of all.\n\nThat patch looks good. Thanks. The main alternative would be to pass\n-Wno-unused for this file. Since you're proposing only one instance\nPG_USED_FOR_ASSERTS_ONLY, I favor PG_USED_FOR_ASSERTS_ONLY over -Wno-unused.\n\n",
"msg_date": "Sat, 23 Mar 2019 00:02:36 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: compiler warning in pgcrypto imath.c"
},
{
"msg_contents": "Hi Noah,\n\nOn 2019-03-23 00:02:36 -0700, Noah Misch wrote:\n> On Sat, Mar 23, 2019 at 10:20:16AM +0900, Michael Paquier wrote:\n> > On Fri, Mar 22, 2019 at 08:20:53PM -0400, Jeff Janes wrote:\n> > > PostgreSQL 12devel on aarch64-unknown-linux-gnu, compiled by gcc\n> > > (Ubuntu/Linaro 7.3.0-27ubuntu1~18.04) 7.3.0, 64-bit\n> > \n> > Adding Noah in CC as he has done the update of imath lately.\n> > \n> > > The attached patch adds PG_USED_FOR_ASSERTS_ONLY to silence it. Perhaps\n> > > there is a better way, given that we want to change imath.c as little as\n> > > possible from its upstream?\n> > \n> > Maybe others have better ideas, but marking the variable with\n> > PG_USED_FOR_ASSERTS_ONLY as you propose seems like the least invasive\n> > method of all.\n> \n> That patch looks good. Thanks. The main alternative would be to pass\n> -Wno-unused for this file. Since you're proposing only one instance\n> PG_USED_FOR_ASSERTS_ONLY, I favor PG_USED_FOR_ASSERTS_ONLY over -Wno-unused.\n\nThis is marked as an open item, owned by you. Could you commit the\npatch or otherwise resolve the issue?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 09:18:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: compiler warning in pgcrypto imath.c"
},
{
"msg_contents": "On Wed, May 01, 2019 at 09:18:02AM -0700, Andres Freund wrote:\n> On 2019-03-23 00:02:36 -0700, Noah Misch wrote:\n> > On Sat, Mar 23, 2019 at 10:20:16AM +0900, Michael Paquier wrote:\n> > > On Fri, Mar 22, 2019 at 08:20:53PM -0400, Jeff Janes wrote:\n> > > > PostgreSQL 12devel on aarch64-unknown-linux-gnu, compiled by gcc\n> > > > (Ubuntu/Linaro 7.3.0-27ubuntu1~18.04) 7.3.0, 64-bit\n> > > \n> > > Adding Noah in CC as he has done the update of imath lately.\n> > > \n> > > > The attached patch adds PG_USED_FOR_ASSERTS_ONLY to silence it. Perhaps\n> > > > there is a better way, given that we want to change imath.c as little as\n> > > > possible from its upstream?\n> > > \n> > > Maybe others have better ideas, but marking the variable with\n> > > PG_USED_FOR_ASSERTS_ONLY as you propose seems like the least invasive\n> > > method of all.\n> > \n> > That patch looks good. Thanks. The main alternative would be to pass\n> > -Wno-unused for this file. Since you're proposing only one instance\n> > PG_USED_FOR_ASSERTS_ONLY, I favor PG_USED_FOR_ASSERTS_ONLY over -Wno-unused.\n> \n> This is marked as an open item, owned by you. Could you commit the\n> patch or otherwise resovle the issue?\n\nI pushed Jeff's patch.\n\n\n",
"msg_date": "Sat, 4 May 2019 00:15:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: compiler warning in pgcrypto imath.c"
},
{
"msg_contents": "On Sat, May 04, 2019 at 12:15:19AM -0700, Noah Misch wrote:\n> I pushed Jeff's patch.\n\nUpon resolution, could you move the related open item on the wiki \npage to the list of resolved issues [1]?\n\n[1]: https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items#resolved_before_12beta1\n--\nMichael",
"msg_date": "Sat, 4 May 2019 16:35:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: compiler warning in pgcrypto imath.c"
},
{
"msg_contents": "On Sat, May 4, 2019 at 3:15 AM Noah Misch <noah@leadboat.com> wrote:\n\n>\n> I pushed Jeff's patch.\n>\n\nThank you. I've re-tested it and I get warning-free compilation now.\n\nCheers,\n\nJeff\n\nOn Sat, May 4, 2019 at 3:15 AM Noah Misch <noah@leadboat.com> wrote:\nI pushed Jeff's patch.Thank you. I've re-tested it and I get warning-free compilation now.Cheers,Jeff",
"msg_date": "Sat, 4 May 2019 09:23:47 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: compiler warning in pgcrypto imath.c"
}
] |
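Aside: the fix above marks imath.c's `carry` with `PG_USED_FOR_ASSERTS_ONLY`, PostgreSQL's macro for variables whose only reader is an `Assert`. The "set but not used" situation has a loose Python analogue: a value computed only to feed an `assert` does no work once assertions are stripped (`python -O`, much like `NDEBUG` in C), which is exactly the dead store an optimizing compiler warns about. The function below is an invented illustration, not imath code.

```python
# Loose analogue of the imath.c warning: 'carry' exists only to be
# checked by an assertion; under 'python -O' (like NDEBUG in C) the
# assert vanishes and the variable is computed for nothing.

def add_digits(a, b):
    total = a + b
    carry = total // 10  # read only by the assert below
    assert 0 <= carry <= 1, "single-digit inputs carry at most 1"
    return total % 10
```

In C the choices are the ones Noah names: annotate the one variable (what `PG_USED_FOR_ASSERTS_ONLY` expands to) or suppress the warning file-wide with `-Wno-unused`; the single annotation is the less invasive diff against upstream imath.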
[
{
"msg_contents": "Bonjour Michaël,\n\nOn Sat, 23 Mar 2019, Michael Paquier wrote:\n> On Fri, Mar 22, 2019 at 03:18:26PM +0100, Fabien COELHO wrote:\n>> Attached is a quick patch about \"pg_rewind\", so that the control file \n>> is updated after everything else is committed to disk.\n>\n> Could you start a new thread about that please? This one has already \n> been used for too many things.\n\nHere it is.\n\nThe attached patch reorders the cluster fsyncing and control file changes \nin \"pg_rewind\" so that the latter is done after all data are committed to \ndisk, so as to reflect the actual cluster status, similarly to what is \ndone by \"pg_checksums\", per discussion in the thread about offline \nenabling of checksums:\n\nhttps://www.postgresql.org/message-id/20181221201616.GD4974@nighthawk.caipicrew.dd-dns.de\n\n-- \nFabien.",
"msg_date": "Sat, 23 Mar 2019 06:18:27 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "reorder pg_rewind control file sync"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 06:18:27AM +0100, Fabien COELHO wrote:\n> Here it is.\n\nThanks.\n\n> The attached patch reorders the cluster fsyncing and control file changes in\n> \"pg_rewind\" so that the later is done after all data are committed to disk,\n> so as to reflect the actual cluster status, similarly to what is done by\n> \"pg_checksums\", per discussion in the thread about offline enabling of\n> checksums:\n\nIt would be an interesting property to see that it is possible to\nretry a rewind of a node which has been partially rewound already,\nbut the operation failed in the middle. Because that's the real deal\nhere: as long as we know that its control file is in its previous\nstate, we can rely on it for retrying the operation. Logically, I\nthink that it should work, because we would still try to fetch the\nsame blocks from the source server since WAL has forked by looking at\nthe records of the target up from the last checkpoint before WAL has\nforked up to the last shutdown checkpoint, and the operation is lossy\nby design when it comes to deal with file differences.\n\nHave you tried to see if pg_rewind is able to repeat its operation for\nspecific scenarios? One is for example a database created on the\npromoted standby, used as a source, and a second, different database\ncreated on the primary after the standby has been promoted. You could\nmake the tool exit() before the rewind finishes, just before updating\nthe control file, and see if the operation is repeatable.\nInterrupting the tool would be fine as well, still less controllable.\n\nIt would be good to mention in the patch why the order matters.\n--\nMichael",
"msg_date": "Mon, 25 Mar 2019 16:14:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reorder pg_rewind control file sync"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> The attached patch reorders the cluster fsyncing and control file changes in\n>> \"pg_rewind\" so that the latter is done after all data are committed to disk,\n>> so as to reflect the actual cluster status, similarly to what is done by\n>> \"pg_checksums\", per discussion in the thread about offline enabling of\n>> checksums:\n>\n> It would be an interesting property to see that it is possible to\n> retry a rewind of a node which has been partially rewound already,\n> but the operation failed in the middle.\n\nYes. I understand that the question is whether the Warning in pg_rewind \ndocumentation can be partially lifted. The short answer is that it is not \nobvious.\n\n> Because that's the real deal here: as long as we know that its control \n> file is in its previous state, we can rely on it for retrying the \n> operation. Logically, I think that it should work, because we would \n> still try to fetch the same blocks from the source server since WAL has \n> forked by looking at the records of the target up from the last \n> checkpoint before WAL has forked up to the last shutdown checkpoint, and \n> the operation is lossy by design when it comes to deal with file \n> differences.\n>\n> Have you tried to see if pg_rewind is able to repeat its operation for\n> specific scenarios?\n\nI have run the non regression tests. I'm not sure of what scenarii are \ncovered there, but probably not an interruption in the middle of a fsync, \nspecially if fsync is usually disabled to ease the tests:-)\n\n> One is for example a database created on the promoted standby, used as a \n> source, and a second, different database created on the primary after \n> the standby has been promoted. You could make the tool exit() before \n> the rewind finishes, just before updating the control file, and see if \n> the operation is repeatable. Interrupting the tool would be fine as \n> well, still less controllable.\n>\n> It would be good to mention in the patch why the order matters.\n\nYep. This requires a careful analysis of pg_rewind inner working, that I \ndo not have to do in the short term.\n\n-- \nFabien.",
"msg_date": "Mon, 25 Mar 2019 10:29:46 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: reorder pg_rewind control file sync"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 10:29:46AM +0100, Fabien COELHO wrote:\n> I have run the non regression tests. I'm not sure of what scenarii are\n> covered there, but probably not an interruption in the middle of a fsync,\n> specially if fsync is usually disabled to ease the tests:-)\n\nForce the tool to stop at a specific point requires a booby-trap. And\neven if fsync is not killed, you could just enforce the tool to stop\nonce before updating the control file, and attempt a re-run without\nthe trap, checking if it works at the second attempt, so the problem\nis quite independent from the timing of fsync().\n--\nMichael",
"msg_date": "Tue, 26 Mar 2019 07:37:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reorder pg_rewind control file sync"
}
] |
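Aside: the ordering property argued for in the thread above — all rewritten data fsynced first, control file updated last — can be illustrated with a toy crash model. This is a hedged sketch of the reasoning, not pg_rewind's actual code: if the process dies at any step before the final one, the control file still describes the old state, so (if the retryability Michael asks about holds) the rewind can simply be run again.

```python
# Toy model of the patch's ordering: N data-file syncs, then one control
# file update.  Crashing after k steps never yields a control file that
# claims more than what is durably on disk.

def rewind_progress(steps_completed, n_data_files):
    """Return (control_file, files_synced) after `steps_completed` steps."""
    control, synced = "old", 0
    for step in range(steps_completed):
        if step < n_data_files:
            synced += 1            # fsync one rewritten data file
        else:
            control = "rewound"    # final step: rewrite pg_control
    return control, synced
```

With 3 data files, a crash at or before step 3 leaves the control file reading "old" even though some data is already synced, which is harmless on a retry; only the complete run flips it to "rewound".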
[
{
"msg_contents": "Hi all,\n\nThe attached patch just a very minor adjustment to\nsrc/bin/pg_checksums/pg_checksums.c to add new line between some IF\nstatements.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Sat, 23 Mar 2019 08:54:26 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": true,
"msg_subject": "Lack of new line between IF statements"
},
{
"msg_contents": "On Sat, Mar 23, 2019 at 08:54:26AM -0300, Fabrízio de Royes Mello wrote:\n> The attached patch just a very minor adjustment to\n> src/bin/pg_checksums/pg_checksums.c to add new line between some IF\n> statements.\n\nThanks. This makes the code more consistent with the surroundings, so\ndone. At the same time I have improved the error messages in the\narea as they should not have a period.\n--\nMichael",
"msg_date": "Sat, 23 Mar 2019 22:02:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Lack of new line between IF statements"
}
] |
[
{
"msg_contents": "Hi,\n\nMarc (in Cc) reported me a problematic query using a GIN index hit in\nproduction. The issue is that even if a GIN opclass says that the\nindex can be used for an operator, it's still possible that some\nvalues aren't really compatible and require a full index scan.\n\nOne simple example is with a GIN pg_trgm index (but other opclasses\nhave similar restrictions), doing a LIKE with wildcards on both sides,\nwhere the pattern is shorter than a trigram, e.g. col LIKE '%a%'. So,\na where clause of the form:\n\nWHERE col LIKE '%verylongpattern%' AND col LIKE '%a%'\n\nis much more expensive than\n\nWHERE col LIKE '%verylongpattern%'\n\nWhile there's nothing to do if the unhandled const is the only\npredicate, if there are multiple AND-ed predicates and at least one of\nthem doesn't require a full index scan, we can avoid it.\n\nAttached patch tries to fix the issue by detecting such cases and\ndropping the unhandled quals in the BitmapIndexScan, letting the\nrecheck in BitmapHeapScan do the proper filtering. I'm not happy to\ncall the extractQuery support functions an additional time, but I\ndidn't find a cleaner way. This is of course intended for pg13.",
"msg_date": "Sun, 24 Mar 2019 11:52:52 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sun, Mar 24, 2019 at 11:52 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Marc (in Cc) reported me a problematic query using a GIN index hit in\n> production. The issue is that even if an GIN opclass says that the\n> index can be used for an operator, it's still possible that some\n> values aren't really compatible and requires a full index scan.\n>\n> One simple example is with a GIN pg_trgm index (but other opclasses\n> have similar restrictions) , doing a LIKE with wildcard on both side,\n> where the pattern is shorter than a trigram, e.g. col LIKE '%a%'. So,\n> a where clause of the form:\n>\n> WHERE col LIKE '%verylongpattern%' AND col LIKE '%a%'\n>\n> is much more expensive than\n>\n> WHERE col LKE '%verylongpattern%'\n>\n> While there's nothing to do if the unhandled const is the only\n> predicate, if there are multiple AND-ed predicates and at least one of\n> them doesn't require a full index scan, we can avoid it.\n>\n> Attached patch tries to fix the issue by detecting such cases and\n> dropping the unhandled quals in the BitmapIndexScan, letting the\n> recheck in BitmapHeapScan do the proper filtering. I'm not happy to\n> call the extractQuery support functions an additional time, but i\n> didn't find a cleaner way. This is of course intended for pg13.\n\nPatch doesn't apply anymore (thanks cfbot). Rebased patch attached.",
"msg_date": "Fri, 28 Jun 2019 16:07:47 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Hi,\n\nI've briefly looked at the patch today. I think the idea is worthwhile,\nbut I found a couple of issues with the patch:\n\n\n1) The index_selfuncs.h header is included in the wrong place, it should\nbe included before lsyscache.h (because 'i' < 'l').\n\n\n2) I'm not sure it's a good idea to add dependency on a specific AM type\ninto indxpath.c. At the moment there are only two places, both referring\nto BTREE_AM_OID, do we really hard-code another OID here?\n\nI wonder if this could be generalized to another support proc in the\ninde AM API, with just GIN implementing it.\n\n\n3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,\nas it's only for functions computing selectivity estimates (and funcs\ndirectly related to that). And the new function is not related to that\nat all, so why not to define it in indxpath.c directly?\n\nOf course, if it gets into the index AM API then this would disappear.\n\n\n4) The gin_get_optimizable_quals is quite misleading. Firstly, it's not\nvery obvious what \"optimizable\" means in this context, but that's a\nminor issue. The bigger issue is that it's a lie - when there are no\n\"optimizable\" clauses (so all clauses would require full scan) the\nfunction returns the original list, which is by definition completely\nnon-optimizable.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 28 Jun 2019 18:10:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 6:10 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> I've briefly looked at the patch today. I think the idea is worthwhile,\n\nThanks!\n\n> 2) I'm not sure it's a good idea to add dependency on a specific AM type\n> into indxpath.c. At the moment there are only two places, both referring\n> to BTREE_AM_OID, do we really hard-code another OID here?\n>\n> I wonder if this could be generalized to another support proc in the\n> inde AM API, with just GIN implementing it.\n\nYes, this patch was more a POC than anything, to discuss the approach\nbefore spending too much time on infrastructure. I considered another\nsupport function, but I'm still unclear of how useful it'd be for\ncustom AM (as I don't see any use for that for the vanilla one I\nthink), or whether if this should be opclass specific or not.\n\n> 3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,\n> as it's only for functions computing selectivity estimates (and funcs\n> directly related to that). And the new function is not related to that\n> at all, so why not to define it in indxpath.c directly?\n\nI kept this function in selfuncs.c as it's using some private\nfunctions (gincost_opexpr and gincost_scalararrayopexpr) used by\ngincostestimate. That seemed the simplest approach at this stage.\nBTW there's also an ongoing discussion to move the (am)estimate\nfunctions in AM-specific files [1], so that'll directly impact this\ntoo.\n\n> 4) The gin_get_optimizable_quals is quite misleading. Firstly, it's not\n> very obvious what \"optimizable\" means in this context, but that's a\n> minor issue. The bigger issue is that it's a lie - when there are no\n> \"optimizable\" clauses (so all clauses would require full scan) the\n> function returns the original list, which is by definition completely\n> non-optimizable.\n\nThe comment is hopefully clearer about what this function does, but\ndefinitely this name is terrible. I'll try to come up with a better\none.\n\n[1] https://www.postgresql.org/message-id/4079.1561661677%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 28 Jun 2019 18:43:40 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Fri, Jun 28, 2019 at 6:10 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> 2) I'm not sure it's a good idea to add dependency on a specific AM type\n>> into indxpath.c. At the moment there are only two places, both referring\n>> to BTREE_AM_OID, do we really hard-code another OID here?\n>> \n>> I wonder if this could be generalized to another support proc in the\n>> inde AM API, with just GIN implementing it.\n\n> Yes, this patch was more a POC than anything, to discuss the approach\n> before spending too much time on infrastructure. I considered another\n> support function, but I'm still unclear of how useful it'd be for\n> custom AM (as I don't see any use for that for the vanilla one I\n> think), or whether if this should be opclass specific or not.\n\nI just spent a lot of sweat to get rid of (most of) indxpath.c's knowledge\nabout specific AMs' capabilities; I'd be very sad if we started to put any\nback. Aside from being a modularity violation, it's going to fall foul\nof the principle that if index AM X wants something, some index AM Y is\ngoing to want it too, eventually.\n\nAlso, I'm quite unhappy about including index_selfuncs.h into indxpath.c\nat all, never mind whether you got the alphabetical ordering right.\nI have doubts still about how we ought to refactor the mess that is\n*selfuncs.c, but this isn't going in the right direction.\n\n>> 3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,\n>> as it's only for functions computing selectivity estimates (and funcs\n>> directly related to that). And the new function is not related to that\n>> at all, so why not to define it in indxpath.c directly?\n\nI not only don't want that function in indxpath.c, I don't even want\nit to be known/called from there. If we need the ability for the index\nAM to editorialize on the list of indexable quals (which I'm not very\nconvinced of yet), let's make an AM interface function to do it.\n\nBTW, I have no idea what you think you're doing here by messing with\nouter_relids, but it's almost certainly wrong, and if it isn't wrong\nthen it needs a comment explaining itself.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Jun 2019 15:03:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:\n>Julien Rouhaud <rjuju123@gmail.com> writes:\n>> On Fri, Jun 28, 2019 at 6:10 PM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>>> 2) I'm not sure it's a good idea to add dependency on a specific AM type\n>>> into indxpath.c. At the moment there are only two places, both referring\n>>> to BTREE_AM_OID, do we really hard-code another OID here?\n>>>\n>>> I wonder if this could be generalized to another support proc in the\n>>> inde AM API, with just GIN implementing it.\n>\n>> Yes, this patch was more a POC than anything, to discuss the approach\n>> before spending too much time on infrastructure. I considered another\n>> support function, but I'm still unclear of how useful it'd be for\n>> custom AM (as I don't see any use for that for the vanilla one I\n>> think), or whether if this should be opclass specific or not.\n>\n>I just spent a lot of sweat to get rid of (most of) indxpath.c's knowledge\n>about specific AMs' capabilities; I'd be very sad if we started to put any\n>back. Aside from being a modularity violation, it's going to fall foul\n>of the principle that if index AM X wants something, some index AM Y is\n>going to want it too, eventually.\n>\n>Also, I'm quite unhappy about including index_selfuncs.h into indxpath.c\n>at all, never mind whether you got the alphabetical ordering right.\n>I have doubts still about how we ought to refactor the mess that is\n>*selfuncs.c, but this isn't going in the right direction.\n>\n\nRight.\n\n>>> 3) selfuncs.c is hardly the right place for gin_get_optimizable_quals,\n>>> as it's only for functions computing selectivity estimates (and funcs\n>>> directly related to that). And the new function is not related to that\n>>> at all, so why not to define it in indxpath.c directly?\n>\n>I not only don't want that function in indxpath.c, I don't even want\n>it to be known/called from there. If we need the ability for the index\n>AM to editorialize on the list of indexable quals (which I'm not very\n>convinced of yet), let's make an AM interface function to do it.\n>\n\nWouldn't it be better to have a function that inspects a single qual and\nsays whether it's \"optimizable\" or not? That could be part of the AM\nimplementation, and we'd call it and it'd be us messing with the list.\n\nThat being said, is this really a binary thing - if you have a value\nthat matches 99% of rows, that probably is not much better than a full\nscan. It may be more difficult to decide (compared to the 'short\ntrigram' case), but perhaps we should allow that too? Essentially,\ninstead of 'optimizable' returning true/false, it might return value\nbetween 0.0 and 1.0, as a measure of 'optimizability'.\n\nBut that kinda resembles stuff we already have - selectivity/cost. So\nwhy shouldn't this be considered as part of costing? That is, could\ngincostestimate look at the index quals and decide what will be used for\nscanning the index? Of course, this would make the logic GIN-specific,\nand other index AMs would have to implement their own version ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 28 Jun 2019 21:54:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:\n>> I not only don't want that function in indxpath.c, I don't even want\n>> it to be known/called from there. If we need the ability for the index\n>> AM to editorialize on the list of indexable quals (which I'm not very\n>> convinced of yet), let's make an AM interface function to do it.\n\n> Wouldn't it be better to have a function that inspects a single qual and\n> says whether it's \"optimizable\" or not? That could be part of the AM\n> implementation, and we'd call it and it'd be us messing with the list.\n\nUh ... we already determined that the qual is indexable (ie is a member\nof the index's opclass), or allowed the index AM to derive an indexable\nclause from it, so I'm not sure what you envision would happen\nadditionally there. If I understand what Julien is concerned about\n--- and I may not --- it's that the set of indexable clauses *as a whole*\nmay have or lack properties of interest. So I'm thinking the answer\ninvolves some callback that can do something to the whole list, not\nqual-at-a-time. We've already got facilities for the latter case.\n\n> But that kinda resembles stuff we already have - selectivity/cost. So\n> why shouldn't this be considered as part of costing?\n\nYeah, I'm not entirely convinced that we need anything new here.\nThe cost estimate function can detect such situations, and so can\nthe index AM at scan start --- for example, btree checks for\ncontradictory quals at scan start. There's a certain amount of\nduplicative effort involved there perhaps, but you also have to\nkeep in mind that we don't know the values of run-time-determined\ncomparison values until scan start. So if you want certainty rather\nthan just a cost estimate, you may have to do these sorts of checks\nat scan start.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Jun 2019 16:16:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 10:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:\n> >> I not only don't want that function in indxpath.c, I don't even want\n> >> it to be known/called from there. If we need the ability for the index\n> >> AM to editorialize on the list of indexable quals (which I'm not very\n> >> convinced of yet), let's make an AM interface function to do it.\n>\n> > Wouldn't it be better to have a function that inspects a single qual and\n> > says whether it's \"optimizable\" or not? That could be part of the AM\n> > implementation, and we'd call it and it'd be us messing with the list.\n>\n> Uh ... we already determined that the qual is indexable (ie is a member\n> of the index's opclass), or allowed the index AM to derive an indexable\n> clause from it, so I'm not sure what you envision would happen\n> additionally there. If I understand what Julien is concerned about\n> --- and I may not --- it's that the set of indexable clauses *as a whole*\n> may have or lack properties of interest. So I'm thinking the answer\n> involves some callback that can do something to the whole list, not\n> qual-at-a-time. We've already got facilities for the latter case.\n\nYes, the root issue here is that with gin it's entirely possible that\n\"WHERE sometable.col op value1\" is way more efficient than \"WHERE\nsometable.col op value1 AND sometable.col op value2\", where both quals\nare determined indexable by the opclass. The only way to avoid that\nis indeed to inspect the whole list, as done in this poor POC.\n\nThis is a problem actually hit in production, and as far as I know\nthere's no easy way from the application POV to prevent unexpected\nslowdown. 
Maybe Marc will have more details about the actual problem\nand how expensive such a case was compared to the normal ones.\n\n> > But that kinda resembles stuff we already have - selectivity/cost. So\n> > why shouldn't this be considered as part of costing?\n>\n> Yeah, I'm not entirely convinced that we need anything new here.\n> The cost estimate function can detect such situations, and so can\n> the index AM at scan start --- for example, btree checks for\n> contradictory quals at scan start. There's a certain amount of\n> duplicative effort involved there perhaps, but you also have to\n> keep in mind that we don't know the values of run-time-determined\n> comparison values until scan start. So if you want certainty rather\n> than just a cost estimate, you may have to do these sorts of checks\n> at scan start.\n\nAh, I didn't know about _bt_preprocess_keys(). I'm not familiar with\nthis code, so please bear with me. IIUC the idea would be to add\nadditional logic in gingetbitmap() / ginNewScanKey() to drop some\nquals at runtime. But that would mean that additional logic would\nalso be required in BitmapHeapScan, or that all the returned bitmap\nshould be artificially marked as lossy to enforce a recheck?\n\n\n",
"msg_date": "Sat, 29 Jun 2019 00:23:13 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Hi!\n\nOn 29.06.2019 1:23, Julien Rouhaud wrote:\n>>> But that kinda resembles stuff we already have - selectivity/cost. So\n>>> why shouldn't this be considered as part of costing?\n>> Yeah, I'm not entirely convinced that we need anything new here.\n>> The cost estimate function can detect such situations, and so can\n>> the index AM at scan start --- for example, btree checks for\n>> contradictory quals at scan start. There's a certain amount of\n>> duplicative effort involved there perhaps, but you also have to\n>> keep in mind that we don't know the values of run-time-determined\n>> comparison values until scan start. So if you want certainty rather\n>> than just a cost estimate, you may have to do these sorts of checks\n>> at scan start.\n> Ah, I didn't know about _bt_preprocess_keys(). I'm not familiar with\n> this code, so please bear with me. IIUC the idea would be to add\n> additional logic in gingetbitmap() / ginNewScanKey() to drop some\n> quals at runtime. But that would mean that additional logic would\n> also be required in BitmapHeapScan, or that all the returned bitmap\n> should be artificially marked as lossy to enforce a recheck?\n\n\nWe have a similar solution for this problem. The idea is to avoid full index\nscan inside GIN itself when we have some GIN entries, and forcibly recheck\nall tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no\nGIN entries.\n\nThe attached patch in its current shape contains at least two ugly places:\n\n1. We still need to initialize an empty scan key to call triconsistent(), but\n then we have to remove it from the list of scan keys. Simple refactoring\n of ginFillScanKey() can be helpful here.\n \n2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL\n if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL\n because we need to skip NULLs in GIN_SEARCH_MODE_ALL. 
Simplest example here\n is \"array @> '{}'\": triconsistent() returns GIN_TRUE, recheck is not forced,\n and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked. Maybe\n it would be better to introduce a new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.\n\n\n\nExample:\n\nCREATE TABLE test AS SELECT i::text AS t FROM generate_series(0, 999999) i;\n\nCREATE INDEX ON test USING gin (t gin_trgm_ops);\n\n-- master\nEXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test (cost=11777.99..16421.73 rows=7999 width=32) (actual time=65.431..65.857 rows=300 loops=1)\n Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n Rows Removed by Index Recheck: 2\n Heap Blocks: exact=114\n -> Bitmap Index Scan on test_t_idx (cost=0.00..11775.99 rows=7999 width=0) (actual time=65.380..65.380 rows=302 loops=1)\n Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n Planning Time: 0.151 ms\n Execution Time: 65.900 ms\n(8 rows)\n\n\n-- patched\nEXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)\n Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n Rows Removed by Index Recheck: 2\n Heap Blocks: exact=114\n -> Bitmap Index Scan on test_t_idx (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)\n Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n Planning Time: 0.080 ms\n Execution Time: 0.450 ms\n(8 rows)\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 29 Jun 2019 01:50:18 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 04:16:23PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:\n>>> I not only don't want that function in indxpath.c, I don't even want\n>>> it to be known/called from there. If we need the ability for the index\n>>> AM to editorialize on the list of indexable quals (which I'm not very\n>>> convinced of yet), let's make an AM interface function to do it.\n>\n>> Wouldn't it be better to have a function that inspects a single qual and\n>> says whether it's \"optimizable\" or not? That could be part of the AM\n>> implementation, and we'd call it and it'd be us messing with the list.\n>\n>Uh ... we already determined that the qual is indexable (ie is a member\n>of the index's opclass), or allowed the index AM to derive an indexable\n>clause from it, so I'm not sure what you envision would happen\n>additionally there. If I understand what Julien is concerned about\n>--- and I may not --- it's that the set of indexable clauses *as a whole*\n>may have or lack properties of interest. So I'm thinking the answer\n>involves some callback that can do something to the whole list, not\n>qual-at-a-time. We've already got facilities for the latter case.\n>\n\nI'm not sure I understand the problem either.\n\nI don't think \"indexable\" is the thing we care about here - in Julien's\noriginal example the qual with '%a%' is indexable. And we probably want\nto keep it that way.\n\nThe problem is that evaluating some of the quals may be inefficient with\na given index - but only if there are other quals. In Julien's example\nit makes sense to just drop the '%a%' qual, but only when there are some\nquals that work with the trigram index. But if there are no such 'good'\nquals, it may be better to keep at least the bad ones.\n\nSo I think you're right we need to look at the list as a whole.\n\n>> But that kinda resembles stuff we already have - selectivity/cost. 
So\n>> why shouldn't this be considered as part of costing?\n>\n>Yeah, I'm not entirely convinced that we need anything new here.\n>The cost estimate function can detect such situations, and so can\n>the index AM at scan start --- for example, btree checks for\n>contradictory quals at scan start. There's a certain amount of\n>duplicative effort involved there perhaps, but you also have to\n>keep in mind that we don't know the values of run-time-determined\n>comparison values until scan start. So if you want certainty rather\n>than just a cost estimate, you may have to do these sorts of checks\n>at scan start.\n>\n\nRight, that's why I suggested doing this as part of costing, but you're\nright scan start would be another option. I assume it should affect cost\nestimates in some way, so the cost function would be my first choice.\n\nBut does the cost function really have enough info to make such a decision?\nFor example, ignoring quals is valid only if we recheck them later. For\nGIN that's not an issue thanks to the bitmap index scan.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 29 Jun 2019 00:54:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov\n<n.gluhov@postgrespro.ru> wrote:>\n> On 29.06.2019 1:23, Julien Rouhaud wrote:\n>\n> But that kinda resembles stuff we already have - selectivity/cost. So\n> why shouldn't this be considered as part of costing?\n>\n> Yeah, I'm not entirely convinced that we need anything new here.\n> The cost estimate function can detect such situations, and so can\n> the index AM at scan start --- for example, btree checks for\n> contradictory quals at scan start. There's a certain amount of\n> duplicative effort involved there perhaps, but you also have to\n> keep in mind that we don't know the values of run-time-determined\n> comparison values until scan start. So if you want certainty rather\n> than just a cost estimate, you may have to do these sorts of checks\n> at scan start.\n>\n> Ah, I didn't know about _bt_preprocess_keys(). I'm not familiar with\n> this code, so please bear with me. IIUC the idea would be to add\n> additional logic in gingetbitmap() / ginNewScanKey() to drop some\n> quals at runtime. But that would mean that additional logic would\n> also be required in BitmapHeapScan, or that all the returned bitmap\n> should be artificially marked as lossy to enforce a recheck?\n>\n> We have a similar solution for this problem. The idea is to avoid full index\n> scan inside GIN itself when we have some GIN entries, and forcibly recheck\n> all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no\n> GIN entries.\n\nThanks for looking at it. That's I think a way better approach.\n\n> The attached patch in its current shape contain at least two ugly places:\n>\n> 1. We still need to initialize empty scan key to call triconsistent(), but\n> then we have to remove it from the list of scan keys. Simple refactoring\n> of ginFillScanKey() can be helpful here.\n>\n> 2. 
We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL\n> if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL\n> because we need to skip NULLs in GIN_SEARCH_MODE_ALL. Simplest example here\n> is \"array @> '{}'\": triconsistent() returns GIN_TRUE, recheck is not forced,\n> and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked. Maybe\n> it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.\n\nAlso\n\n+ if (searchMode == GIN_SEARCH_MODE_ALL && nQueryValues <= 0)\n+ {\n+ /*\n+ * Don't emit ALL key with no entries, check only whether\n+ * unconditional recheck is needed.\n+ */\n+ GinScanKey key = &so->keys[--so->nkeys];\n+\n+ hasSearchAllMode = true;\n+ so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;\n+ }\n\nShouldn't you make sure that the forcedRecheck flag can't be reset?\n\n> -- patched\n> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on test (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)\n> Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n> Rows Removed by Index Recheck: 2\n> Heap Blocks: exact=114\n> -> Bitmap Index Scan on test_t_idx (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)\n> Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n> Planning Time: 0.080 ms\n> Execution Time: 0.450 ms\n> (8 rows)\n\nOne thing that's bothering me is that the explain implies that the\nLIKE '%1%' was part of the index scan, while in reality it wasn't. One\nof the reasons why I tried to modify the qual while generating the path\nwas to have the explain be clearer about what is really done.\n\n\n",
"msg_date": "Sat, 29 Jun 2019 11:10:03 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:\n>On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov\n><n.gluhov@postgrespro.ru> wrote:>\n>> On 29.06.2019 1:23, Julien Rouhaud wrote:\n>>\n>> But that kinda resembles stuff we already have - selectivity/cost. So\n>> why shouldn't this be considered as part of costing?\n>>\n>> Yeah, I'm not entirely convinced that we need anything new here.\n>> The cost estimate function can detect such situations, and so can\n>> the index AM at scan start --- for example, btree checks for\n>> contradictory quals at scan start. There's a certain amount of\n>> duplicative effort involved there perhaps, but you also have to\n>> keep in mind that we don't know the values of run-time-determined\n>> comparison values until scan start. So if you want certainty rather\n>> than just a cost estimate, you may have to do these sorts of checks\n>> at scan start.\n>>\n>> Ah, I didn't know about _bt_preprocess_keys(). I'm not familiar with\n>> this code, so please bear with me. IIUC the idea would be to add\n>> additional logic in gingetbitmap() / ginNewScanKey() to drop some\n>> quals at runtime. But that would mean that additional logic would\n>> also be required in BitmapHeapScan, or that all the returned bitmap\n>> should be artificially marked as lossy to enforce a recheck?\n>>\n>> We have a similar solution for this problem. The idea is to avoid full index\n>> scan inside GIN itself when we have some GIN entries, and forcibly recheck\n>> all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no\n>> GIN entries.\n>\n>Thanks for looking at it. That's I think a way better approach.\n>\n>> The attached patch in its current shape contain at least two ugly places:\n>>\n>> 1. We still need to initialize empty scan key to call triconsistent(), but\n>> then we have to remove it from the list of scan keys. Simple refactoring\n>> of ginFillScanKey() can be helpful here.\n>>\n>> 2. 
We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL\n>> if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL\n>> because we need to skip NULLs in GIN_SEARCH_MODE_ALL. Simplest example here\n>> is \"array @> '{}'\": triconsistent() returns GIN_TRUE, recheck is not forced,\n>> and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked. Maybe\n>> it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.\n>\n>Also\n>\n>+ if (searchMode == GIN_SEARCH_MODE_ALL && nQueryValues <= 0)\n>+ {\n>+ /*\n>+ * Don't emit ALL key with no entries, check only whether\n>+ * unconditional recheck is needed.\n>+ */\n>+ GinScanKey key = &so->keys[--so->nkeys];\n>+\n>+ hasSearchAllMode = true;\n>+ so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;\n>+ }\n>\n>Shouldn't you make sure that the forcedRecheck flag can't reset?\n>\n>> -- patched\n>> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------------------------\n>> Bitmap Heap Scan on test (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)\n>> Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n>> Rows Removed by Index Recheck: 2\n>> Heap Blocks: exact=114\n>> -> Bitmap Index Scan on test_t_idx (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)\n>> Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n>> Planning Time: 0.080 ms\n>> Execution Time: 0.450 ms\n>> (8 rows)\n>\n>One thing that's bothering me is that the explain implies that the\n>LIKE '%i% was part of the index scan, while in reality it wasn't. 
One\n>of the reasons why I tried to modify the qual while generating the path\n>was to have the explain be clearer about what is really done.\n\nYeah, I think that's a bit annoying - it'd be nice to make it clear\nwhich quals were actually used to scan the index. In some cases it may\nnot be possible (e.g. in cases when the decision is done at runtime, not\nwhile planning the query), but it'd be nice to show it when possible.\n\nA related issue is that during costing it is too late to modify cardinality\nestimates, so the 'Bitmap Index Scan' will be expected to return fewer\nrows than it actually returns (after ignoring the full-scan quals).\nIgnoring redundant quals (the way btree does it at execution) does not\nhave such a consequence, of course.\n\nWhich may be an issue, because we essentially want to modify the list of\nquals to minimize the cost of\n\n bitmap index scan + recheck during bitmap heap scan\n\nOTOH it's not a huge issue, because it won't affect the rest of the plan\n(because that uses the bitmap heap scan estimates, and those are not\naffected by this).\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 29 Jun 2019 12:25:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:\n> >On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov\n> >> -- patched\n> >> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n> >> QUERY PLAN\n> >> -----------------------------------------------------------------------------------------------------------------------\n> >> Bitmap Heap Scan on test (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)\n> >> Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n> >> Rows Removed by Index Recheck: 2\n> >> Heap Blocks: exact=114\n> >> -> Bitmap Index Scan on test_t_idx (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)\n> >> Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n> >> Planning Time: 0.080 ms\n> >> Execution Time: 0.450 ms\n> >> (8 rows)\n> >\n> >One thing that's bothering me is that the explain implies that the\n> >LIKE '%i% was part of the index scan, while in reality it wasn't. One\n> >of the reason why I tried to modify the qual while generating the path\n> >was to have the explain be clearer about what is really done.\n>\n> Yeah, I think that's a bit annoying - it'd be nice to make it clear\n> which quals were actually used to scan the index. It some cases it may\n> not be possible (e.g. 
in cases when the decision is done at runtime, not\n> while planning the query), but it'd be nice to show it when possible.\n\nMaybe we could somehow add some runtime information about ignored\nquals, similar to the \"never executed\" information for loops?\n\n> A related issue is that during costing is too late to modify cardinality\n> estimates, so the 'Bitmap Index Scan' will be expected to return fewer\n> rows than it actually returns (after ignoring the full-scan quals).\n> Ignoring redundant quals (the way btree does it at execution) does not\n> have such consequence, of course.\n>\n> Which may be an issue, because we essentially want to modify the list of\n> quals to minimize the cost of\n>\n> bitmap index scan + recheck during bitmap heap scan\n>\n> OTOH it's not a huge issue, because it won't affect the rest of the plan\n> (because that uses the bitmap heap scan estimates, and those are not\n> affected by this).\n\nDoesn't this problem already exist, as the quals that we could drop\ncan't actually reduce the node's results?\n\n\n",
"msg_date": "Sat, 29 Jun 2019 14:50:51 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 02:50:51PM +0200, Julien Rouhaud wrote:\n>On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:\n>> >On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov\n>> >> -- patched\n>> >> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n>> >> QUERY PLAN\n>> >> -----------------------------------------------------------------------------------------------------------------------\n>> >> Bitmap Heap Scan on test (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)\n>> >> Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n>> >> Rows Removed by Index Recheck: 2\n>> >> Heap Blocks: exact=114\n>> >> -> Bitmap Index Scan on test_t_idx (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)\n>> >> Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n>> >> Planning Time: 0.080 ms\n>> >> Execution Time: 0.450 ms\n>> >> (8 rows)\n>> >\n>> >One thing that's bothering me is that the explain implies that the\n>> >LIKE '%i% was part of the index scan, while in reality it wasn't. One\n>> >of the reason why I tried to modify the qual while generating the path\n>> >was to have the explain be clearer about what is really done.\n>>\n>> Yeah, I think that's a bit annoying - it'd be nice to make it clear\n>> which quals were actually used to scan the index. It some cases it may\n>> not be possible (e.g. in cases when the decision is done at runtime, not\n>> while planning the query), but it'd be nice to show it when possible.\n>\n>Maybe we could somehow add some runtime information about ignored\n>quals, similar to the \"never executed\" information for loops?\n>\n\nMaybe. 
I suppose it depends on when exactly we make the decision about\nwhich quals to ignore.\n\n>> A related issue is that during costing is too late to modify cardinality\n>> estimates, so the 'Bitmap Index Scan' will be expected to return fewer\n>> rows than it actually returns (after ignoring the full-scan quals).\n>> Ignoring redundant quals (the way btree does it at execution) does not\n>> have such consequence, of course.\n>>\n>> Which may be an issue, because we essentially want to modify the list of\n>> quals to minimize the cost of\n>>\n>> bitmap index scan + recheck during bitmap heap scan\n>>\n>> OTOH it's not a huge issue, because it won't affect the rest of the plan\n>> (because that uses the bitmap heap scan estimates, and those are not\n>> affected by this).\n>\n>Doesn't this problem already exists, as the quals that we could drop\n>can't actually reduce the node's results?\n\nHow could it not reduce the node's results, if you ignore some quals\nthat are not redundant? My understanding is we have a plan like this:\n\n Bitmap Heap Scan\n -> Bitmap Index Scan\n\nand by ignoring some quals at the index scan level, we trade the (high)\ncost of evaluating the qual there for a plain recheck at the bitmap heap\nscan. But it means the index scan may produce more rows, so it's only a\nwin if the \"extra rechecks\" are cheaper than the (removed) full scan.\n\nSo the full scan might actually reduce the number of rows from the index\nscan, but clearly whatever we do the results from the bitmap heap scan\nmust remain the same.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 29 Jun 2019 15:11:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 3:11 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sat, Jun 29, 2019 at 02:50:51PM +0200, Julien Rouhaud wrote:\n> >On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra\n> >> A related issue is that during costing is too late to modify cardinality\n> >> estimates, so the 'Bitmap Index Scan' will be expected to return fewer\n> >> rows than it actually returns (after ignoring the full-scan quals).\n> >> Ignoring redundant quals (the way btree does it at execution) does not\n> >> have such consequence, of course.\n> >>\n> >> Which may be an issue, because we essentially want to modify the list of\n> >> quals to minimize the cost of\n> >>\n> >> bitmap index scan + recheck during bitmap heap scan\n> >>\n> >> OTOH it's not a huge issue, because it won't affect the rest of the plan\n> >> (because that uses the bitmap heap scan estimates, and those are not\n> >> affected by this).\n> >\n> >Doesn't this problem already exists, as the quals that we could drop\n> >can't actually reduce the node's results?\n>\n> How could it not reduce the node's results, if you ignore some quals\n> that are not redundant? My understanding is we have a plan like this:\n>\n> Bitmap Heap Scan\n> -> Bitmap Index Scan\n>\n> and by ignoring some quals at the index scan level, we trade the (high)\n> cost of evaluating the qual there for a plain recheck at the bitmap heap\n> scan. But it means the index scan may produce more rows, so it's only a\n> win if the \"extra rechecks\" are cheaper than the (removed) full scan.\n\nSorry, by node I meant the BitmapIndexScan. AIUI, if you have for\ninstance WHERE val LIKE '%abcde%' AND val LIKE '%z%' and a trgm index,\nthe BitmapIndexScan will have to go through the whole index and discard\nrows based on the only opclass-optimizable qual (LIKE '%abcde%'),\nletting the recheck do the proper filtering for the other qual. 
So\nwhether you have the LIKE '%z%' or not in the index scan, the\nBitmapIndexScan will return the same number of rows, the only\ndifference being that in one case you'll have to scan the whole index,\nwhile in the other case you won't.\n\n> clearly whatever we do the results from the bitmap heap scan\n> must remain the same.\n\nOf course.\n\n\n",
"msg_date": "Sat, 29 Jun 2019 15:28:11 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 03:28:11PM +0200, Julien Rouhaud wrote:\n>On Sat, Jun 29, 2019 at 3:11 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Sat, Jun 29, 2019 at 02:50:51PM +0200, Julien Rouhaud wrote:\n>> >On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra\n>> >> A related issue is that during costing is too late to modify cardinality\n>> >> estimates, so the 'Bitmap Index Scan' will be expected to return fewer\n>> >> rows than it actually returns (after ignoring the full-scan quals).\n>> >> Ignoring redundant quals (the way btree does it at execution) does not\n>> >> have such consequence, of course.\n>> >>\n>> >> Which may be an issue, because we essentially want to modify the list of\n>> >> quals to minimize the cost of\n>> >>\n>> >> bitmap index scan + recheck during bitmap heap scan\n>> >>\n>> >> OTOH it's not a huge issue, because it won't affect the rest of the plan\n>> >> (because that uses the bitmap heap scan estimates, and those are not\n>> >> affected by this).\n>> >\n>> >Doesn't this problem already exists, as the quals that we could drop\n>> >can't actually reduce the node's results?\n>>\n>> How could it not reduce the node's results, if you ignore some quals\n>> that are not redundant? My understanding is we have a plan like this:\n>>\n>> Bitmap Heap Scan\n>> -> Bitmap Index Scan\n>>\n>> and by ignoring some quals at the index scan level, we trade the (high)\n>> cost of evaluating the qual there for a plain recheck at the bitmap heap\n>> scan. But it means the index scan may produce more rows, so it's only a\n>> win if the \"extra rechecks\" are cheaper than the (removed) full scan.\n>\n>Sorry, by node I meant the BitmapIndexScan. 
AIUI, if you have for\n>instance WHERE val LIKE '%abcde%' AND val LIKE '%z%' and a trgm index,\n>the BitmapIndexScan will have to go through the whole index and discard\n>rows based on the only opclass-optimizable qual (LIKE '%abcde%'),\n>letting the recheck do the proper filtering for the other qual. So\n>whether you have the LIKE '%z%' or not in the index scan, the\n>BitmapIndexScan will return the same number of rows, the only\n>difference being that in one case you'll have to scan the whole index,\n>while in the other case you won't.\n>\n\nOh! I thought 'full scan' means we have to scan all the keys in the GIN\nindex, but we can still eliminate some of the keys (for example for the\ntrigrams we might check if the trigram contains the short string). But\nclearly I was mistaken and it does not work like that ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 29 Jun 2019 16:27:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Hi!\n\nOn Sat, Jun 29, 2019 at 1:52 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> We have a similar solution for this problem. The idea is to avoid full index\n> scan inside GIN itself when we have some GIN entries, and forcibly recheck\n> all tuples if triconsistent() returns GIN_MAYBE for the keys that emitted no\n> GIN entries.\n>\n> The attached patch in its current shape contain at least two ugly places:\n>\n> 1. We still need to initialize empty scan key to call triconsistent(), but\n> then we have to remove it from the list of scan keys. Simple refactoring\n> of ginFillScanKey() can be helpful here.\n>\n> 2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL\n> if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL\n> because we need to skip NULLs in GIN_SEARCH_MODE_ALL. Simplest example here\n> is \"array @> '{}'\": triconsistent() returns GIN_TRUE, recheck is not forced,\n> and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked. Maybe\n> it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.\n\nThank you for publishing this!\n\nWhat would happen when a two-column index has a GIN_SEARCH_MODE_DEFAULT\nscan on the first column and GIN_SEARCH_MODE_ALL on the second? I think\neven if triconsistent() for the second column returns GIN_TRUE, we still\nneed to recheck to verify that the second column is not NULL.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 29 Jun 2019 20:06:47 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 1:25 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> A related issue is that during costing is too late to modify cardinality\n> estimates, so the 'Bitmap Index Scan' will be expected to return fewer\n> rows than it actually returns (after ignoring the full-scan quals).\n> Ignoring redundant quals (the way btree does it at execution) does not\n> have such consequence, of course.\n\nAdjusting cardinality estimates should be possible in gincostestimate(),\nbecause we call the extractQuery() method there. However, it seems to be\nquite an independent issue. The number of rows returned by 'Bitmap Index\nScan' doesn't vary much whether we execute GIN_SEARCH_MODE_ALL or not.\nThe only difference is for a multicolumn index: GIN_SEARCH_MODE_ALL\nallows excluding NULLs on one column when a normal scan is performed on\nanother column. And we can take that into account in gincostestimate().\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 29 Jun 2019 20:27:04 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jun 29, 2019 at 3:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Sat, Jun 29, 2019 at 12:25 PM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Sat, Jun 29, 2019 at 11:10:03AM +0200, Julien Rouhaud wrote:\n> > >On Sat, Jun 29, 2019 at 12:51 AM Nikita Glukhov\n> > >> -- patched\n> > >> EXPLAIN ANALYZE SELECT * FROM test WHERE t LIKE '%1234%' AND t LIKE '%1%';\n> > >> QUERY PLAN\n> > >> -----------------------------------------------------------------------------------------------------------------------\n> > >> Bitmap Heap Scan on test (cost=20.43..176.79 rows=42 width=6) (actual time=0.287..0.424 rows=300 loops=1)\n> > >> Recheck Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n> > >> Rows Removed by Index Recheck: 2\n> > >> Heap Blocks: exact=114\n> > >> -> Bitmap Index Scan on test_t_idx (cost=0.00..20.42 rows=42 width=0) (actual time=0.271..0.271 rows=302 loops=1)\n> > >> Index Cond: ((t ~~ '%1234%'::text) AND (t ~~ '%1%'::text))\n> > >> Planning Time: 0.080 ms\n> > >> Execution Time: 0.450 ms\n> > >> (8 rows)\n> > >\n> > >One thing that's bothering me is that the explain implies that the\n> > >LIKE '%i% was part of the index scan, while in reality it wasn't. One\n> > >of the reason why I tried to modify the qual while generating the path\n> > >was to have the explain be clearer about what is really done.\n> >\n> > Yeah, I think that's a bit annoying - it'd be nice to make it clear\n> > which quals were actually used to scan the index. It some cases it may\n> > not be possible (e.g. in cases when the decision is done at runtime, not\n> > while planning the query), but it'd be nice to show it when possible.\n>\n> Maybe we could somehow add some runtime information about ignored\n> quals, similar to the \"never executed\" information for loops?\n\n+1,\nThis sounds reasonable for me.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 29 Jun 2019 20:28:41 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On 29/06/2019 00:23, Julien Rouhaud wrote:\n> On Fri, Jun 28, 2019 at 10:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> On Fri, Jun 28, 2019 at 03:03:19PM -0400, Tom Lane wrote:\n>>>> I not only don't want that function in indxpath.c, I don't even want\n>>>> it to be known/called from there. If we need the ability for the index\n>>>> AM to editorialize on the list of indexable quals (which I'm not very\n>>>> convinced of yet), let's make an AM interface function to do it.\n>>\n>>> Wouldn't it be better to have a function that inspects a single qual and\n>>> says whether it's \"optimizable\" or not? That could be part of the AM\n>>> implementation, and we'd call it and it'd be us messing with the list.\n>>\n>> Uh ... we already determined that the qual is indexable (ie is a member\n>> of the index's opclass), or allowed the index AM to derive an indexable\n>> clause from it, so I'm not sure what you envision would happen\n>> additionally there. If I understand what Julien is concerned about\n>> --- and I may not --- it's that the set of indexable clauses *as a whole*\n>> may have or lack properties of interest. So I'm thinking the answer\n>> involves some callback that can do something to the whole list, not\n>> qual-at-a-time. We've already got facilities for the latter case.\n> \n> Yes, the root issue here is that with gin it's entirely possible that\n> \"WHERE sometable.col op value1\" is way more efficient than \"WHERE\n> sometable.col op value AND sometable.col op value2\", where both qual\n> are determined indexable by the opclass. The only way to avoid that\n> is indeed to inspect the whole list, as done in this poor POC.\n> \n> This is a problem actually hit in production, and as far as I know\n> there's no easy way from the application POV to prevent unexpected\n> slowdown. 
Maybe Marc will have more details about the actual problem\n> and how expensive such a case was compared to the normal ones.\n\nSorry for the delay...\n\nYes, quite easily, here is what we had (it's just a bit simplified, we have other criterions but I think it shows the problem):\n\nrh2=> explain analyze select * from account_employee where typeahead like '%albert%';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on account_employee (cost=53.69..136.27 rows=734 width=666) (actual time=15.562..35.044 rows=8957 loops=1)\n Recheck Cond: (typeahead ~~ '%albert%'::text)\n Rows Removed by Index Recheck: 46\n Heap Blocks: exact=8919\n -> Bitmap Index Scan on account_employee_site_typeahead_gin_idx (cost=0.00..53.51 rows=734 width=0) (actual time=14.135..14.135 rows=9011 loops=1)\n Index Cond: (typeahead ~~ '%albert%'::text)\n Planning time: 0.224 ms\n Execution time: 35.389 ms\n(8 rows)\n\nrh2=> explain analyze select * from account_employee where typeahead like '%albert%' and typeahead like '%lo%';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on account_employee (cost=28358.38..28366.09 rows=67 width=666) (actual time=18210.109..18227.134 rows=1172 loops=1)\n Recheck Cond: ((typeahead ~~ '%albert%'::text) AND (typeahead ~~ '%lo%'::text))\n Rows Removed by Index Recheck: 7831\n Heap Blocks: exact=8919\n -> Bitmap Index Scan on account_employee_site_typeahead_gin_idx (cost=0.00..28358.37 rows=67 width=0) (actual time=18204.756..18204.756 rows=9011 loops=1)\n Index Cond: ((typeahead ~~ '%albert%'::text) AND (typeahead ~~ '%lo%'::text))\n Planning time: 0.288 ms\n Execution time: 18230.182 ms\n(8 rows)\n\n\nWe noticed this because the application timed out for users searching someone whose 
name was 2 characters (it happens :)).\n\nWe reject such filters when it's the only criterion, as we know it's going to be slow, but ignoring it as a supplementary filter would be a bit weird.\n\nOf course there is the possibility of filtering in two stages with a CTE, but that's not as great as having PostgreSQL do it itself.\n\n\nBy the way, while preparing this, I noticed that it seems that during this kind of index scan, the interrupt signal is masked\nfor a very long time. Control-C takes a very long while to cancel the query. But it's an entirely different problem :)\n\nRegards",
"msg_date": "Mon, 1 Jul 2019 17:56:17 +0200",
"msg_from": "Marc Cousin <cousinmarc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Marc Cousin <cousinmarc@gmail.com> writes:\n> By the way, while preparing this, I noticed that it seems that during this kind of index scan, the interrupt signal is masked\n> for a very long time. Control-C takes a very long while to cancel the query. But it's an entirely different problem :)\n\nYeah, that seems like an independent problem/patch, but it's not obvious\nwhere to fix it --- can you provide a self-contained test case?\n\nMeanwhile, I looked at the v3 patch, and it seems like it might not be\ntoo far from committable. I think we should *not* let this get bogged\ndown in questions of whether EXPLAIN can report which index quals were\nused or ignored. That's a problem that's existed for decades in the\nbtree code, with more or less zero user complaints.\n\nI do think v3 needs more attention to comments, for instance this\nhunk is clearly falsifying the adjacent comment:\n\n@@ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,\n \tuint32\t\ti;\n \n \t/* Non-default search modes add one \"hidden\" entry to each key */\n-\tif (searchMode != GIN_SEARCH_MODE_DEFAULT)\n+\tif (searchMode != GIN_SEARCH_MODE_DEFAULT &&\n+\t\t(searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))\n \t\tnQueryValues++;\n \tkey->nentries = nQueryValues;\n \tkey->nuserentries = nUserQueryValues;\n\nAlso, I agree with Julien that this\n\n+\t\t\tso->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;\n\nprobably needs to be\n\n+\t\t\tso->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 13:27:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Meanwhile, I looked at the v3 patch, and it seems like it might not be\n> too far from committable. I think we should *not* let this get bogged\n> down in questions of whether EXPLAIN can report which index quals were\n> used or ignored. That's a problem that's existed for decades in the\n> btree code, with more or less zero user complaints.\n>\n> I do think v3 needs more attention to comments, for instance this\n> hunk is clearly falsifying the adjacent comment:\n>\n> @ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,\n> uint32 i;\n>\n> /* Non-default search modes add one \"hidden\" entry to each key */\n> - if (searchMode != GIN_SEARCH_MODE_DEFAULT)\n> + if (searchMode != GIN_SEARCH_MODE_DEFAULT &&\n> + (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))\n> nQueryValues++;\n> key->nentries = nQueryValues;\n> key->nuserentries = nUserQueryValues;\n>\n> Also, I agree with Julien that this\n>\n> + so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;\n>\n> probably needs to be\n>\n> + so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;\n\nPing, Julien? Based on the above, it looks like if we had a\nlast-minute patch addressing the above this could go directly to Ready\nfor Committer? I will hold off moving this one to CF2 until my\nmorning.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 18:42:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 8:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jul 31, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Meanwhile, I looked at the v3 patch, and it seems like it might not be\n> > too far from committable. I think we should *not* let this get bogged\n> > down in questions of whether EXPLAIN can report which index quals were\n> > used or ignored. That's a problem that's existed for decades in the\n> > btree code, with more or less zero user complaints.\n> >\n> > I do think v3 needs more attention to comments, for instance this\n> > hunk is clearly falsifying the adjacent comment:\n> >\n> > @ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,\n> > uint32 i;\n> >\n> > /* Non-default search modes add one \"hidden\" entry to each key */\n> > - if (searchMode != GIN_SEARCH_MODE_DEFAULT)\n> > + if (searchMode != GIN_SEARCH_MODE_DEFAULT &&\n> > + (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))\n> > nQueryValues++;\n> > key->nentries = nQueryValues;\n> > key->nuserentries = nUserQueryValues;\n> >\n> > Also, I agree with Julien that this\n> >\n> > + so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;\n> >\n> > probably needs to be\n> >\n> > + so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;\n>\n> Ping, Julien? Based on the above, it looks like if we had a\n> last-minute patch addressing the above this could go directly to Ready\n> for Committer? I will hold off moving this one to CF2 until my\n> morning.\n\nAttached v4 that should address all comments.",
"msg_date": "Thu, 1 Aug 2019 12:13:24 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 12:13 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Aug 1, 2019 at 8:43 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Wed, Jul 31, 2019 at 5:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Meanwhile, I looked at the v3 patch, and it seems like it might not be\n> > > too far from committable. I think we should *not* let this get bogged\n> > > down in questions of whether EXPLAIN can report which index quals were\n> > > used or ignored. That's a problem that's existed for decades in the\n> > > btree code, with more or less zero user complaints.\n> > >\n> > > I do think v3 needs more attention to comments, for instance this\n> > > hunk is clearly falsifying the adjacent comment:\n> > >\n> > > @ -141,7 +141,8 @@ ginFillScanKey(GinScanOpaque so, OffsetNumber attnum,\n> > > uint32 i;\n> > >\n> > > /* Non-default search modes add one \"hidden\" entry to each key */\n> > > - if (searchMode != GIN_SEARCH_MODE_DEFAULT)\n> > > + if (searchMode != GIN_SEARCH_MODE_DEFAULT &&\n> > > + (searchMode != GIN_SEARCH_MODE_ALL || nQueryValues))\n> > > nQueryValues++;\n> > > key->nentries = nQueryValues;\n> > > key->nuserentries = nUserQueryValues;\n> > >\n> > > Also, I agree with Julien that this\n> > >\n> > > + so->forcedRecheck = key->triConsistentFn(key) != GIN_TRUE;\n> > >\n> > > probably needs to be\n> > >\n> > > + so->forcedRecheck |= key->triConsistentFn(key) != GIN_TRUE;\n> >\n> > Ping, Julien? Based on the above, it looks like if we had a\n> > last-minute patch addressing the above this could go directly to Ready\n> > for Committer? I will hold off moving this one to CF2 until my\n> > morning.\n>\n> Attached v4 that should address all comments.\n\nAnd of course, thanks a lot! Sorry for the message sent quite\nprecipitately, I'm also dealing with plumbing issues this morning :(\n\n\n",
"msg_date": "Thu, 1 Aug 2019 12:37:44 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> Attached v4 that should address all comments.\n\nEyeing this a bit further ... doesn't scanPendingInsert also need\nto honor so->forcedRecheck? Something along the lines of\n\n-\t\t\ttbm_add_tuples(tbm, &pos.item, 1, recheck);\n+\t\t\ttbm_add_tuples(tbm, &pos.item, 1, recheck | so->forcedRecheck);\n\nat line 1837? (Obviously, there's more than one way you could\nwrite that.)\n\nI'm also not exactly satisfied with the new comments --- they aren't\nconveying much, and the XXX in one of them is confusing; does that\nmean you're unsure that the comment is correct?\n\nThe added test case seems a bit unsatisfying as well, in that it\nfails to retrieve any rows. It's not very clear what it's\ntrying to test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 10:37:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > Attached v4 that should address all comments.\n>\n> Eyeing this a bit further ... doesn't scanPendingInsert also need\n> to honor so->forcedRecheck? Something along the lines of\n>\n> - tbm_add_tuples(tbm, &pos.item, 1, recheck);\n> + tbm_add_tuples(tbm, &pos.item, 1, recheck | so->forcedRecheck);\n>\n> at line 1837? (Obviously, there's more than one way you could\n> write that.)\n\nI think so.\n\n> I'm also not exactly satisfied with the new comments --- they aren't\n> conveying much, and the XXX in one of them is confusing; does that\n> mean you're unsure that the comment is correct?\n\nThat's actually not my code, and I'm not familiar enough with GIN code\nto do much better :(\n\nFor the XXX, IIUC Nikita added this comment as room for future\nimprovement, as stated in his initial mail:\n\n>> 2. We need to replace GIN_SEARCH_MODE_EVERYTHING with GIN_SEARCH_MODE_ALL\n>> if there are no GIN entries and some key requested GIN_SEARCH_MODE_ALL\n>> because we need to skip NULLs in GIN_SEARCH_MODE_ALL. Simplest example here\n>> is \"array @> '{}'\": triconsistent() returns GIN_TRUE, recheck is not forced,\n>> and GIN_SEARCH_MODE_EVERYTHING returns NULLs that are not rechecked. Maybe\n>> it would be better to introduce new GIN_SEARCH_MODE_EVERYTHING_NON_NULL.\n\n> The added test case seems a bit unsatisfying as well, in that it\n> fails to retrieve any rows. It's not very clear what it's\n> trying to test.\n\nYes, I used the same tests as before, but with this approach there's\nno way to distinguish whether a full index scan was performed, so the\nexplain output is quite useless. However, testing both cases should\nstill be valuable to exercise the newly added code path.\n\n\n",
"msg_date": "Thu, 1 Aug 2019 17:19:35 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Aug 1, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Eyeing this a bit further ... doesn't scanPendingInsert also need\n>> to honor so->forcedRecheck? Something along the lines of\n\n> I think so.\n\nYeah, it does --- the updated pg_trgm test attached fails if it doesn't.\n\nAlso, I found that Alexander's concern upthread:\n\n>> What would happen when two-columns index have GIN_SEARCH_MODE_DEFAULT\n>> scan on first column and GIN_SEARCH_MODE_ALL on second? I think even\n>> if triconsistent() for second column returns GIN_TRUE, we still need\n>> to recheck to verify second columns is not NULL.\n\nis entirely on-point. This patch generates the wrong answer in the\ncase I added to gin.sql below. (The expected output was generated\nwith HEAD and seems correct, but with these code changes, we incorrectly\nreport the row with NULL as matching. So I expect the cfbot is going\nto complain about the patch in this state.)\n\nWhile I've not attempted to fix that here, I wonder whether we shouldn't\nfix it by just forcing forcedRecheck to true in any case where we discard\nan ALL qualifier. That would get rid of all the ugliness around\nginFillScanKey, which I'd otherwise really want to refactor to avoid\nthis business of adding and then removing a scan key. It would also\nget rid of the bit about \"XXX Need to use ALL mode instead of EVERYTHING\nto skip NULLs if ALL mode has been seen\", which aside from being ugly\nseems to be dead wrong for multi-column-index cases.\n\nBTW, it's not particularly the fault of this patch, but: what does it\neven mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?\nShould we decide to treat that as an error? 
It doesn't look to me like\nany of the in-tree opclasses will return such a case, and I'm not at\nall convinced what the GIN scan code would actually do with it, except\nthat I doubt it matches the documentation.\n\nSetting this back to Waiting on Author.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 01 Aug 2019 14:59:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 9:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Thu, Aug 1, 2019 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Eyeing this a bit further ... doesn't scanPendingInsert also need\n> >> to honor so->forcedRecheck? Something along the lines of\n>\n> > I think so.\n>\n> Yeah, it does --- the updated pg_trgm test attached fails if it doesn't.\n>\n> Also, I found that Alexander's concern upthread:\n>\n> >> What would happen when two-columns index have GIN_SEARCH_MODE_DEFAULT\n> >> scan on first column and GIN_SEARCH_MODE_ALL on second? I think even\n> >> if triconsistent() for second column returns GIN_TRUE, we still need\n> >> to recheck to verify second columns is not NULL.\n>\n> is entirely on-point. This patch generates the wrong answer in the\n> case I added to gin.sql below. (The expected output was generated\n> with HEAD and seems correct, but with these code changes, we incorrectly\n> report the row with NULL as matching. So I expect the cfbot is going\n> to complain about the patch in this state.)\n>\n> While I've not attempted to fix that here, I wonder whether we shouldn't\n> fix it by just forcing forcedRecheck to true in any case where we discard\n> an ALL qualifier. That would get rid of all the ugliness around\n> ginFillScanKey, which I'd otherwise really want to refactor to avoid\n> this business of adding and then removing a scan key. It would also\n> get rid of the bit about \"XXX Need to use ALL mode instead of EVERYTHING\n> to skip NULLs if ALL mode has been seen\", which aside from being ugly\n> seems to be dead wrong for multi-column-index cases.\n\n+1 for setting forcedRecheck in any case we discard ALL qualifier.\nISTM, real life number of cases we can skip recheck here is\nnegligible. 
And it doesn't justify the complexity.\n\n> BTW, it's not particularly the fault of this patch, but: what does it\n> even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?\n\nIt might mean we would like to see all the results that don't\ncontain the given key.\n\n> Should we decide to treat that as an error? It doesn't look to me like\n> any of the in-tree opclasses will return such a case, and I'm not at\n> all convinced what the GIN scan code would actually do with it, except\n> that I doubt it matches the documentation.\n\nI think tsvector_ops behaves this way. See gin_extract_tsquery().\n\n /*\n * If the query doesn't have any required positive matches (for\n * instance, it's something like '! foo'), we have to do a full index\n * scan.\n */\n if (tsquery_requires_match(item))\n *searchMode = GIN_SEARCH_MODE_DEFAULT;\n else\n *searchMode = GIN_SEARCH_MODE_ALL;\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 1 Aug 2019 22:15:22 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Thu, Aug 1, 2019 at 9:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While I've not attempted to fix that here, I wonder whether we shouldn't\n>> fix it by just forcing forcedRecheck to true in any case where we discard\n>> an ALL qualifier.\n\n> +1 for setting forcedRecheck in any case we discard ALL qualifier.\n> ISTM, real life number of cases we can skip recheck here is\n> negligible. And it doesn't justify complexity.\n\nYeah, that was pretty much what I was thinking --- by the time we got\nit fully right considering nulls and multicolumn indexes, the cases\nwhere not rechecking could actually do something useful would be\npretty narrow. And a bitmap heap scan is always going to have to\nvisit the heap, IIRC, so how much could skipping the recheck really\nsave?\n\n>> BTW, it's not particularly the fault of this patch, but: what does it\n>> even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?\n\n> It might mean we would like to see all the results, which don't\n> contain given key.\n\nAh, right, I forgot that the consistent-fn might look at the match\nresults.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Aug 2019 15:28:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Attached is the 6th version of the patches.\n\nOn 01.08.2019 22:28, Tom Lane wrote:\n\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n>> On Thu, Aug 1, 2019 at 9:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> While I've not attempted to fix that here, I wonder whether we shouldn't\n>>> fix it by just forcing forcedRecheck to true in any case where we discard\n>>> an ALL qualifier.\n>>>\n>> +1 for setting forcedRecheck in any case we discard ALL qualifier.\n>> ISTM, real life number of cases we can skip recheck here is\n>> negligible. And it doesn't justify complexity.\n> Yeah, that was pretty much what I was thinking --- by the time we got\n> it fully right considering nulls and multicolumn indexes, the cases\n> where not rechecking could actually do something useful would be\n> pretty narrow. And a bitmap heap scan is always going to have to\n> visit the heap, IIRC, so how much could skipping the recheck really\n> save?\n\nI have simplified patch #1 by setting forcedRecheck for all discarded ALL quals.\n(This solution is very close to the earliest unpublished version of the patch.)\n\n\nMore accurate recheck-forcing logic was moved into patch #2 (multicolumn\nindexes were fixed). 
This patch also contains ginFillScanKey() refactoring\nand new internal mode GIN_SEARCH_MODE_NOT_NULL that is used only for\nGinScanKey.xxxConsistentFn initialization and transformed into\nGIN_SEARCH_MODE_ALL before GinScanEntry initialization.\n\n\nThe cost estimation seems to be correct for both patch #1 and patch #2 and\nleft untouched since v05.\n\n\n>>> BTW, it's not particularly the fault of this patch, but: what does it\n>>> even mean to specify GIN_SEARCH_MODE_ALL with a nonzero number of keys?\n>>>\n>> It might mean we would like to see all the results, which don't\n>> contain given key.\n> Ah, right, I forgot that the consistent-fn might look at the match\n> results.\n\nAlso I decided to go further and tried to optimize (patch #3) the case for\nGIN_SEARCH_MODE_ALL with a nonzero number of keys.\n\nFull GIN scan can be avoided in queries like this contrib/intarray query:\n\"arr @@ '1' AND arr @@ '!2'\" (search arrays containing 1 and not containing 2).\n\nHere we have two keys:\n - key '1' with GIN_SEARCH_MODE_DEFAULT\n - key '2' with GIN_SEARCH_MODE_ALL\n\nKey '2' requires full scan that can be avoided with the forced recheck.\n\nThis query is equivalent to single-qual query \"a @@ '1 & !2'\" which\nemits only one GIN key '1' with recheck.\n\n\nBelow is example for contrib/intarray operator @@:\n\n=# CREATE EXTENSION intarray;\n=# CREATE TABLE t (a int[]);\n=# INSERT INTO t SELECT ARRAY[i] FROM generate_series(1, 1000000) i;\n=# CREATE INDEX ON t USING gin (a gin__int_ops);\n=# SET enable_seqscan = OFF;\n\n-- master\n=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a @@ '1' AND a @@ '!2';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=16000095.45..16007168.16 rows=5019 width=24) (actual time=66.955..66.956 rows=1 loops=1)\n Recheck Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))\n Heap Blocks: exact=1\n Buffers: shared 
hit=6816\n -> Bitmap Index Scan on t_a_idx (cost=0.00..16000094.19 rows=5019 width=0) (actual time=66.950..66.950 rows=1 loops=1)\n Index Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))\n Buffers: shared hit=6815\n Planning Time: 0.086 ms\n Execution Time: 67.076 ms\n(9 rows)\n\n-- equivalent single-qual query\n=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a @@ '1 & !2';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=78.94..7141.57 rows=5025 width=24) (actual time=0.019..0.019 rows=1 loops=1)\n Recheck Cond: (a @@ '1 & !2'::query_int)\n Heap Blocks: exact=1\n Buffers: shared hit=8\n -> Bitmap Index Scan on t_a_idx (cost=0.00..77.68 rows=5025 width=0) (actual time=0.014..0.014 rows=1 loops=1)\n Index Cond: (a @@ '1 & !2'::query_int)\n Buffers: shared hit=7\n Planning Time: 0.056 ms\n Execution Time: 0.039 ms\n(9 rows)\n\n-- with patch #3\n=# EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a @@ '1' AND a @@ '!2';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=75.45..7148.16 rows=5019 width=24) (actual time=0.019..0.020 rows=1 loops=1)\n Recheck Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))\n Heap Blocks: exact=1\n Buffers: shared hit=6\n -> Bitmap Index Scan on t_a_idx (cost=0.00..74.19 rows=5019 width=0) (actual time=0.011..0.012 rows=1 loops=1)\n Index Cond: ((a @@ '1'::query_int) AND (a @@ '!2'::query_int))\n Buffers: shared hit=5\n Planning Time: 0.059 ms\n Execution Time: 0.040 ms\n(9 rows)\n\n\n\n\nPatch #3 again contains a similar ugly solution -- we have to remove already\ninitialized GinScanKeys with theirs GinScanEntries. 
GinScanEntries can be\nshared, so the reference counting was added.\n\n\nAlso modifications of cost estimation in patch #3 are questionable.\nGinQualCounts are simply not incremented when haveFullScan flag is set,\nbecause the counters anyway will be overwritten by the caller.\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 3 Aug 2019 04:51:16 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n> Attached 6th version of the patches.\n\nI spent a bit of time looking at these. Attached is a proposed revision\nof the 0001 patch, with some minor changes:\n\n* I didn't adopt your move of the \"Non-default modes require the index\nto have placeholders\" test to after the stanza handling zero-key cases.\nI think that move would be correct for 0001 as it stands, but it's far\nless clear that it's still safe after 0002/0003 or whatever variant of\nthose we end up with. We should leave that code where it is for now,\nenforcing the v1-index requirement for all non-default search modes, and\nreconsider after the dust settles. (Or if we never do reconsider, it\nwon't be a big deal --- I doubt many v0 indexes are still out there.)\n\n* Rearranged the selfuncs.c logic to match ginNewScanKey better.\n\n* Cleaned up my own sloppiness in the new gin.sql test cases.\n\nI think this would be committable as it stands, except that replacing\nan ALL scan with an EVERYTHING scan could be a performance regression\nif the index contains many null items. We need to do something about\nthat before committing.\n\nUnfortunately I'm not sold on either 0002 or 0003 as they stand;\nthey seem overly complicated, I'm not convinced they're correct,\nand you haven't really provided examples showing that all this\nextra complexity is worthwhile.\n\nIn particular:\n\n* I don't really like the whole business about detecting a constant-true\nALL condition by applying the consistentFn at this stage. That just\nfeels wrong to me: the consistentFn should be expecting some data about\nthe index contents and we don't have any to give. If it works, it's\naccidental, and probably it's fragile. 
Moreover, the only gain we'd get\nfrom it is maybe not having to set forcedRecheck, and that doesn't look\nto me like it would make all that much difference.\n\n* The code seems to be assuming that a zero-key ALL query is necessarily\nprecisely equivalent to a NOT NULL condition. This seems flat out wrong.\nAt the very least it's possible for such a query to be constant-false,\nrather than constant-true-for-non-null-items. Admittedly, that would\nsuggest rather stupid coding of the opclass query-extract function, as\nit could have reported a constant-false condition in an optimizable way\nrather than an unoptimizable one. But we aren't entitled to assume that\nthe opclass isn't being dumb; the API says it can do this, so it can.\nI think we have to check the scankey regardless, either in the index or\nvia forcedRecheck.\n\n* I really dislike the refcount business in 0003. It's not clear why we\nneed that or whether it's correct, and I think it would be unmaintainable\neven if it were documented (which it isn't).\n\n\nISTM we could get where we need to go in a much simpler way. A couple\nof alternative ideas:\n\n* During ginNewScanKey, separate out ALL-mode queries and don't add them\nto the scankey list immediately. After examining all the keys, if we\nfound any normal (DEFAULT or INCLUDE_EMPTY) queries, then go ahead and\nadd in the ALL-mode queries so that we can check them in the index, but\ndon't cause a full scan. Otherwise, discard all the ALL-mode queries\nand emit a NOT_NULL scan key, setting forcedRecheck so that we apply the\nfiltering that way.\n\n* Or we could just discard ALL-mode queries on sight, setting\nforcedRecheck always, and then emit NOT_NULL if we had any\nof those and no normal queries.\n\nThe thing that seems hard to predict here is whether it is worth tracking\nthe presence of additional keys in the index to avoid a recheck in the\nheap. 
It's fairly easy to imagine that for common keys, avoiding the\nrecheck would be a net loss because it requires more work in the index.\nIf we had statistics about key counts, which of course we don't, you\ncould imagine making this decision dynamically depending on whether an\nALL query is asking about common or uncommon keys.\n\nBTW --- any way you slice this, it seems like we'd end up with a situation\nwhere we never execute an ALL query against the index in the way we do\nnow, meaning that the relevant code in the scanning logic is dead and\ncould be removed. Given that, we don't really need a new NOT_NULL search\nmode; we could just redefine what ALL mode actually does at execution.\nThis would be good IMO, because it's not obvious what the difference\nbetween ALL and NOT_NULL modes is anyway.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 07 Aug 2019 16:32:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On 2019-Aug-07, Tom Lane wrote:\n\n> I think this would be committable as it stands, except that replacing\n> an ALL scan with an EVERYTHING scan could be a performance regression\n> if the index contains many null items. We need to do something about\n> that before committing.\n\nNikita, any word on getting this change done?\n\n> Unfortunately I'm not sold on either 0002 or 0003 as they stand;\n> they seem overly complicated, I'm not convinced they're correct,\n> and you haven't really provided examples showing that all this\n> extra complexity is worthwhile.\n\nI suppose we should call ourselves satisfied if we get 0001 done during\nthis cycle (or at least this commitfest). Further refinement can be had\nin the future, as needed -- even within pg13, if Nikita or anybody else\nwants to tackle Tom's suggested approaches (or something completely new,\nor just contest Tom's points) quickly enough. But I don't think we need\nthat in order to call this CF entry committed.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 2 Sep 2019 18:20:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Attached 8th version of the patches.\n\n\nA brief description of the patches and their improvements/overheads:\n\n1. Avoid full scan in \"empty-ALL AND regular\" case.\n One EVERYTHING entry with unconditional recheck is used instead of multiple\n ALL entries.\n Overhead for rechecking NULLs and \"empty-ALL\" keys is introduced.\n Overhead of merging ALL-lists for multicolumn indexes is eliminated.\n\n2. Fix overhead of rechecking NULLs.\n Returned back overhead of merging NOT_NULL-lists for multicolumn indexes.\n\n3. Fix overhead of unnecessary recheck of \"empty-ALL\" keys using consistentFn().\n Performance of \"empty-ALL [AND empty-ALL ...]\" case now is the same as on\n master.\n\n4. Avoid full scan in \"non-empty-ALL AND regular\" case.\n New variant of list-merging logic added to scanGetItem().\n\nOn 07.08.2019 23:32, Tom Lane wrote:\n> Nikita Glukhov <n.gluhov@postgrespro.ru> writes:\n>> Attached 6th version of the patches.\n> I spent a bit of time looking at these. Attached is a proposed revision\n> of the 0001 patch, with some minor changes:\n>\n> * I didn't adopt your move of the \"Non-default modes require the index\n> to have placeholders\" test to after the stanza handling zero-key cases.\n> I think that move would be correct for 0001 as it stands, but it's far\n> less clear that it's still safe after 0002/0003 or whatever variant of\n> those we end up with. We should leave that code where it is for now,\n> enforcing the v1-index requirement for all non-default search modes, and\n> reconsider after the dust settles. 
(Or if we never do reconsider, it\n> won't be a big deal --- I doubt many v0 indexes are still out there.)\n\nOk.\n\n> * Rearranged the selfuncs.c logic to match ginNewScanKey better.\n>\n> * Cleaned up my own sloppiness in the new gin.sql test cases.\n>\n> I think this would be committable as it stands, except that replacing\n> an ALL scan with an EVERYTHING scan could be a performance regression\n> if the index contains many null items. We need to do something about\n> that before committing.\n\nYes, such performance regression does exist (see test results at the end).\nAnd it exists not only if there are many NULLs -- recheck overhead is\nsignificant even in the simple cases like \"array @> '{}'\". This really\nmakes patch 0001 non-committable.\n\nAnd the only thing I can here offer to fix this is the patches 0002 and 0003.\n\n> Unfortunately I'm not sold on either 0002 or 0003 as they stand;\n> they seem overly complicated, I'm not convinced they're correct,\n> and you haven't really provided examples showing that all this\n> extra complexity is worthwhile.\n\nYes, they are complicated, but I tried to simplify 0002 a bit, and even\ndivided it into two separate patches 0002 and 0003. For the performance\nimprovements, see the test results at the end.\n\n> In particular:\n>\n> * I don't really like the whole business about detecting a constant-true\n> ALL condition by applying the consistentFn at this stage. That just\n> feels wrong to me: the consistentFn should be expecting some data about\n> the index contents and we don't have any to give. If it works, it's\n> accidental, and probably it's fragile.\n\nIf we have no entries, then there is nothing to pass to consistentFn() and it\nshould always return the same value for a given scankey. 
The similar\ntechnique is used in startScanKey() where the fake entryRes[] is passed to it.\n\n> Moreover, the only gain we'd get from it is maybe not having to set\n> forcedRecheck, and that doesn't look to me like it would make all that\n> much difference.\n\nThe forced recheck has a non-zero cost, so this makes real sense.\n\n> * The code seems to be assuming that a zero-key ALL query is necessarily\n> precisely equivalent to a NOT NULL condition. This seems flat out wrong.\n> At the very least it's possible for such a query to be constant-false,\n> rather than constant-true-for-non-null-items. Admittedly, that would\n> suggest rather stupid coding of the opclass query-extract function, as\n> it could have reported a constant-false condition in an optimizable way\n> rather than an unoptimizable one. But we aren't entitled to assume that\n> the opclass isn't being dumb; the API says it can do this, so it can.\n> I think we have to check the scankey regardless, either in the index or\n> via forcedRecheck.\n\nYes, empty ALL queries are equivalent to NOT NULL with or without recheck.\nPatches 0001 and 0002 use unconditional forcedRecheck. Patch 0003 uses\nconditional recheck depending on the result of triConsistentFn() invocation.\nI added missing check for GIN_FALSE to eliminate constant-false empty\nALL queries. So, the empty ALL scankey is always checked in the index,\nbut only once at the initialization stage.\n\n> * I really dislike the refcount business in 0003. It's not clear why we\n> need that or whether it's correct, and I think it would be unmaintainable\n> even if it were documented (which it isn't).\n>\n>\n> ISTM we could get where we need to go in a much simpler way. A couple\n> of alternative ideas:\n>\n> * During ginNewScanKey, separate out ALL-mode queries and don't add them\n> to the scankey list immediately. 
After examining all the keys, if we\n> found any normal (DEFAULT or INCLUDE_EMPTY) queries, then go ahead and\n> add in the ALL-mode queries so that we can check them in the index, but\n> don't cause a full scan. Otherwise, discard all the ALL-mode queries\n> and emit a NOT_NULL scan key, setting forcedRecheck so that we apply the\n> filtering that way.\n>\n> * Or we could just discard ALL-mode queries on sight, setting\n> forcedRecheck always, and then emit NOT_NULL if we had any\n> of those and no normal queries.\n\nI completely rewrote this logic in patch 0004, the reference counting is no\nlonger needed.\n\nNon-empty ALL keys are immediately added to the list, but the initialization\nof hidden ALL entries in them is postponed, and these full scan entries added\nonly if there are no normal keys. But if we have normal keys, then for each\nALL key enabled special list-merging logic in scanGetItem(), so the items\nmatching normal keys are emitted to the result even if they have no entries\nrequired for ALL scankeys.\n\n\nFor example, the following intarray query\n\n arr @@ '1' AND arr @@ '!2'\n\nproduces two 1-entry scankeys:\n\n DEFAULT('1')\n ALL('2') (previously there were 2 entries: '2' and ALL)\n\nWhen the item lists for the entries '1' and '2' are merged, emitted all items\n - having '1' and not having '2', without forced recheck (new logic)\n - having both '1' and '2', if triConsistentFn(ALL('2')) returns not FALSE\n (ordinary logic, each item must have at least one entry of each scankey)\n\n\nThis helps to do as much work as possible in the index, and to avoid a\nunnecessary recheck.\n\n\nI'm not sure that code changes in scanGetItem() are correct (especially due to\nthe presence of lossy items), and the whole patch 0004 was not carefully tested,\nbut if the performance results are interesting, I could work further on this\noptimization.\n\n> The thing that seems hard to predict here is whether it is worth tracking\n> the presence of additional keys in the 
index to avoid a recheck in the\n> heap. It's fairly easy to imagine that for common keys, avoiding the\n> recheck would be a net loss because it requires more work in the index.\n> If we had statistics about key counts, which of course we don't, you\n> could imagine making this decision dynamically depending on whether an\n> ALL query is asking about common or uncommon keys.\n>\n> BTW --- any way you slice this, it seems like we'd end up with a situation\n> where we never execute an ALL query against the index in the way we do\n> now, meaning that the relevant code in the scanning logic is dead and\n> could be removed. Given that, we don't really need a new NOT_NULL search\n> mode; we could just redefine what ALL mode actually does at execution.\n> This would be good IMO, because it's not obvious what the difference\n> between ALL and NOT_NULL modes is anyway.\n\nThe ALL mode is still used now for non-empty ALL queries without normal queries.\n\n\nSimple performance test:\n\ncreate table t (a int[], b int[], c int[]);\n\n-- 1M NULLs\ninsert into t select NULL, NULL, NULL\nfrom generate_series(0, 999999) i;\n\n-- 1M 1-element arrays\ninsert into t select array[i], array[i], array[i]\nfrom generate_series(0, 999999) i;\n\n-- 10k 2-element arrays with common element\ninsert into t select array[-1,i], array[-1,i], array[-1,i]\nfrom generate_series(0, 9999) i;\n\ncreate extension intarray;\ncreate index on t using gin (a gin__int_ops, b gin__int_ops, c gin__int_ops);\n\n\n | Query time, ms\n WHERE condition | master | patches\n | | #1 | #2 | #3 | #4\n---------------------------------------+--------+------+------+------+------\n a @> '{}' | 272 | 473 | 369 | 271 | 261\n a @> '{}' and b @> '{}' | 374 | 548 | 523 | 368 | 353\n a @> '{}' and b @> '{}' and c @> '{}' | 479 | 602 | 665 | 461 | 446\n\n a @> '{}' and a @@ '1' | 52.2 | 0.4 | 0.4 | 0.4 | 0.4\n a @> '{}' and a @@ '-1' | 56.2 | 4.0 | 4.0 | 2.3 | 2.3\n\n a @@ '!-1' and a @@ '1' | 52.8 | 53.0 | 52.7 | 52.9 | 0.3\n a 
@@ '!1' and a @@ '-1' | 54.9 | 55.2 | 55.1 | 55.3 | 2.4\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 23 Nov 2019 02:35:50 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Nov 23, 2019 at 02:35:50AM +0300, Nikita Glukhov wrote:\n> Attached 8th version of the patches.\n\nPlease be careful here. The CF entry was still marked as waiting on\nauthor, but you sent a new patch series which has not been reviewed.\nI have moved this patch to next CF instead.\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 17:08:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Nov 23, 2019 at 02:35:50AM +0300, Nikita Glukhov wrote:\n>> Attached 8th version of the patches.\n\n> Please be careful here. The CF entry was still marked as waiting on\n> author, but you sent a new patch series which has not been reviewed.\n> I have moved this patch to next CF instead.\n\nThat seems a bit premature --- the new patch is only a couple of days\nold. The CF entry should've been moved back to \"Needs Review\",\nsure.\n\n(Having said that, the end of the month isn't that far away,\nso it may well end up in the next CF anyway.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Nov 2019 09:52:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Nov 23, 2019 at 2:39 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> Attached 8th version of the patches.\n\nI've read this thread. I decided to rewrite the patch in a way that\nI find simpler and clearer. Attached is the draft patch written from\nscratch except regression tests, which were copied \"as is\". It is\nbased on the discussion in this thread as well as my own ideas. It\nworks as follows.\n\n1) A new GinScanKey->excludeOnly flag is introduced. This flag means\nthat the scan key might be satisfied even if none of its entries match\nthe row. So, such scan keys are useful only for additional checks of\nresults returned by other keys. That is, an excludeOnly scan key is\ndesigned for exclusion of already obtained results.\n2) Initially no hidden scan entries are appended to\nGIN_SEARCH_MODE_ALL scan keys. They are appended after getting\nstatistics about search modes applied to particular attributes.\n3) We append only one GIN_CAT_EMPTY_QUERY scan entry when all scan\nkeys are GIN_SEARCH_MODE_ALL. If there is at least one normal scan key,\nno GIN_CAT_EMPTY_QUERY is appended.\n4) No hidden entries are appended to a GIN_SEARCH_MODE_ALL scan key if\nthere are normal scan keys for the same column. Otherwise a\nGIN_CAT_NULL_KEY hidden entry is appended.\n5) GIN_SEARCH_MODE_ALL scan keys, which don't have a GIN_CAT_EMPTY_QUERY\nhidden entry, are marked with the excludeOnly flag. So, they are used\nto filter results of other scan keys.\n6) If a GIN_CAT_NULL_KEY hidden entry is found, then the scan key doesn't\nmatch independently of the result of the consistent function call.\n\nTherefore, the attached patch removes unnecessary GIN_CAT_EMPTY_QUERY scan\nentries without removing the positive effect of filtering in\nGIN_SEARCH_MODE_ALL scan keys.\n\nPatch requires further polishing including comments, minor refactoring\netc. I'm going to continue work on this.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 25 Dec 2019 08:25:38 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 8:25 AM Alexander Korotkov <\na.korotkov@postgrespro.ru> wrote:\n\n> Patch requires further polishing including comments, minor refactoring\n> etc. I'm going to continue work on this.\n>\n\nI also run the same performance comparison as Nikita [1] on my laptop.\nThe results are shown below. PostgreSQL was built with -O2 and\nasserts enabled.\n\n | Query time, ms |\n WHERE condition | master | patch |\n---------------------------------------+--------+-------+\n a @> '{}' | 117 | 116 |\n a @> '{}' and b @> '{}' | 150 | 146 |\n a @> '{}' and b @> '{}' and c @> '{}' | 168 | 167 |\n a @> '{}' and a @@ '1' | 126 | 0.6 |\n a @> '{}' and a @@ '-1' | 128 | 3.2 |\n a @@ '!-1' and a @@ '1' | 127 | 0.7 |\n a @@ '!1' and a @@ '-1' | 122 | 4.4 |\n\nPerformance effect looks similar to patch #4 by Nikita. I've tried to\nadd patch #4 to comparison, but I've catch assertion failure.\n\nTRAP: FailedAssertion(\"key->includeNonMatching\", File: \"ginget.c\", Line:\n1340)\n\nI'm going to continue polishing my version of patch.\n\nLinks\n1.\nhttps://www.postgresql.org/message-id/f2889144-db1d-e3b2-db97-cfc8794cda43%40postgrespro.ru\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 26 Dec 2019 04:59:48 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On 26.12.2019 4:59, Alexander Korotkov wrote:\n>\n> I've tried to add patch #4 to comparison, but I've catch assertion \n> failure.\n>\n> TRAP: FailedAssertion(\"key->includeNonMatching\", File: \"ginget.c\", \n> Line: 1340)\nThere simply should be inverted condition in the assertion:\nAssert(!key->includeNonMatching);\n\nI have looked at v9 patch, and here is my review:\n\n1. I agree with NULL-flag handling simplifications in ginNewScanKey(),\nginScanKeyAddHiddenEntry() extraction.\n\n2. I also agree that usage of nrequired/nadditional in keyGetItem() is a more\nnatural solution to implement exclusion keys than my previous attempt of doing\nthat in scanGetKey().\n\nBut there are some questions:\n\nCan we avoid referencing excludeOnly flag keyGetItem() by replacing these\nreferences with !nrequired?\n\nMaybe it would be better to move the whole block of keyGetItem() code\nstarting from the first loop over required keys and ending before the loop over\nadditional keys inside 'if (key->nrequired) { ... }'?\n\nCan we avoid introducing excludeOnly flag by reusing searchMode and/or by\nmoving the initialization of nrequired/nadditional into ginNewScanKey()?\n\n\n3. The following two times repeated NULL-filtering check looks too complicated\nand needs to be refactored somehow:\n\n-\tres = key->triConsistentFn(key);\n+\tif (key->excludeOnly &&\n+\t\tkey->nuserentries < key->nentries &&\n+\t\tkey->scanEntry[key->nuserentries]->queryCategory == GIN_CAT_NULL_KEY &&\n+\t\tkey->entryRes[key->nuserentries] == GIN_TRUE)\n+\t\tres = GIN_FALSE;\n+\telse\n+\t\tres = key->triConsistentFn(key);\n\nFor example, a special consistentFn() can be introduced for such NOT_NULL\nscankeys. Or even a hidden separate one-entry scankey with a trivial\nconsistentFn() can be added instead of adding hidden entry.\n\n\n4. forcedRecheck flag that was previously used for discarded empty ALL scankeys\nis removed now. 
0-entry exclusion keys can appear instead, and their\nconsistentFn() simply returns constant value. Could this lead to tangible\noverhead in some cases (in comparison to forcedRecheck flag)?\n\n\n5. A hidden GIN_CAT_EMPTY_QUERY is added only for the first empty ALL-scankey,\nNULLs in other columns are filtered out with GIN_CAT_NULL_KEY. This looks like\nasymmetric, and it leads to accelerations is some cases and slowdowns in others\n(depending on NULL fractions and their correlations in columns).\n\nThe following test shows a significant performance regression of v9:\n\ninsert into t select array[i], NULL, NULL from generate_series(1, 1000000) i;\n\n | Query time, ms\n WHERE condition | master | v8 | v9\n---------------------------------------+--------+--------+---------\n a @> '{}' | 224 | 213 | 212\n a @> '{}' and b @> '{}' | 52 | 57 | 255\n a @> '{}' and b @> '{}' and c @> '{}' | 51 | 58 | 290\n\n\nIn the older version of the patch I tried to do the similar things (initialize\nonly one NOT_NULL entry for the first column), but refused to do this in v8.\n\nSo, to avoid slowdowns relative to master, I can offer simply to add\nGIN_CAT_EMPTY_QUERY entry for each column with empty ALL-keys if there are\nno normal keys.\n\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 27 Dec 2019 04:36:14 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 04:36:14AM +0300, Nikita Glukhov wrote:\n>On 26.12.2019 4:59, Alexander Korotkov wrote:\n>>\n>> I've tried to add patch #4 to comparison, but I've catch assertion \n>>failure.\n>>\n>>TRAP: FailedAssertion(\"key->includeNonMatching\", File: \"ginget.c\", \n>>Line: 1340)\n>There simply should be inverted condition in the assertion:\n>Assert(!key->includeNonMatching);\n>\n>I have looked at v9 patch, and here is my review:\n>\n>1. I agree with NULL-flag handling simplifications in ginNewScanKey(),\n>ginScanKeyAddHiddenEntry() extraction.\n>\n>2. I also agree that usage of nrequired/nadditional in keyGetItem() is a more\n>natural solution to implement exclusion keys than my previous attempt of doing\n>that in scanGetKey().\n>\n>But there are some questions:\n>\n>Can we avoid referencing excludeOnly flag keyGetItem() by replacing these\n>references with !nrequired?\n>\n>Maybe it would be better to move the whole block of keyGetItem() code\n>starting from the first loop over required keys and ending before the loop over\n>additional keys inside 'if (key->nrequired) { ... }'?\n>\n>Can we avoid introducing excludeOnly flag by reusing searchMode and/or by\n>moving the initialization of nrequired/nadditional into ginNewScanKey()?\n>\n>\n>3. The following two times repeated NULL-filtering check looks too complicated\n>and needs to be refactored somehow:\n>\n>-\tres = key->triConsistentFn(key);\n>+\tif (key->excludeOnly &&\n>+\t\tkey->nuserentries < key->nentries &&\n>+\t\tkey->scanEntry[key->nuserentries]->queryCategory == GIN_CAT_NULL_KEY &&\n>+\t\tkey->entryRes[key->nuserentries] == GIN_TRUE)\n>+\t\tres = GIN_FALSE;\n>+\telse\n>+\t\tres = key->triConsistentFn(key);\n>\n>For example, a special consistentFn() can be introduced for such NOT_NULL\n>scankeys. Or even a hidden separate one-entry scankey with a trivial\n>consistentFn() can be added instead of adding hidden entry.\n>\n>\n>4. 
forcedRecheck flag that was previously used for discarded empty ALL scankeys\n>is removed now. 0-entry exclusion keys can appear instead, and their\n>consistentFn() simply returns constant value. Could this lead to tangible\n>overhead in some cases (in comparison to forcedRecheck flag)?\n>\n>\n>5. A hidden GIN_CAT_EMPTY_QUERY is added only for the first empty ALL-scankey,\n>NULLs in other columns are filtered out with GIN_CAT_NULL_KEY. This looks like\n>asymmetric, and it leads to accelerations is some cases and slowdowns in others\n>(depending on NULL fractions and their correlations in columns).\n>\n>The following test shows a significant performance regression of v9:\n>\n>insert into t select array[i], NULL, NULL from generate_series(1, 1000000) i;\n>\n> | Query time, ms\n> WHERE condition | master | v8 | v9\n>---------------------------------------+--------+--------+---------\n> a @> '{}' | 224 | 213 | 212\n> a @> '{}' and b @> '{}' | 52 | 57 | 255\n> a @> '{}' and b @> '{}' and c @> '{}' | 51 | 58 | 290\n>\n>\n>In the older version of the patch I tried to do the similar things (initialize\n>only one NOT_NULL entry for the first column), but refused to do this in v8.\n>\n>So, to avoid slowdowns relative to master, I can offer simply to add\n>GIN_CAT_EMPTY_QUERY entry for each column with empty ALL-keys if there are\n>no normal keys.\n>\n\nYeah, I can confirm those results, although on my system the timings are\na bit different (I haven't tested v8):\n\n | Query time, ms\n WHERE condition | master | v9\n---------------------------------------+--------+---------\n a @> '{}' | 610 | 589\n a @> '{}' and b @> '{}' | 185 | 665\n a @> '{}' and b @> '{}' and c @> '{}' | 185 | 741\n\nSo that's something we probably need to address, perhaps by using the\nGIN_CAT_EMPTY_QUERY entries as proposed.\n\nI've also tested this on a database storing mailing lists archives with\na trigram index, and in that case the performance with short values gets\nmuch better. 
The \"messages\" table has two text fields with a GIN trigram\nindex - subject and body, and querying them with short/long values works\nlike this:\n\n WHERE | master | v9\n --------------------------------------------------------------\n subject LIKE '%aa%' AND body LIKE '%xx%' | 4943 | 4052\n subject LIKE '%aaa%' AND body LIKE '%xx%' | 10 | 10\n subject LIKE '%aa%' AND body LIKE '%xxx%' | 380 | 13\n subject LIKE '%aaa%' AND BODY LIKE '%xxx%' | 2 | 2\n\nwhich seems fairly nice. I've done tests with individual columns, and\nthat seems to be working fine too.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jan 2020 16:21:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
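The NULL-filtering branch quoted in the review above (forcing `GIN_FALSE` when the hidden NULL-key entry matched, instead of calling `triConsistentFn`) can be modeled as plain ternary logic. The following standalone C sketch is illustrative only: `Tri`, `tri_and`, and `consistent_with_null_filter` are toy stand-ins for PostgreSQL's GinTernaryValue machinery, not the actual ginget.c code.

```c
#include <assert.h>

/* Toy ternary values mirroring the GinTernaryValue idea (illustrative only). */
typedef enum { T_FALSE = 0, T_TRUE = 1, T_MAYBE = 2 } Tri;

/* Ternary AND over per-entry results: FALSE dominates, then MAYBE, else TRUE. */
static Tri tri_and(const Tri *vals, int n)
{
    int saw_maybe = 0;

    for (int i = 0; i < n; i++)
    {
        if (vals[i] == T_FALSE)
            return T_FALSE;      /* one definite non-match excludes the row */
        if (vals[i] == T_MAYBE)
            saw_maybe = 1;
    }
    return saw_maybe ? T_MAYBE : T_TRUE;
}

/*
 * Model of the quoted branch: if the hidden NULL-key entry matched, the row
 * is excluded outright, without consulting the ordinary consistent function.
 */
static Tri consistent_with_null_filter(int null_entry_matched,
                                       const Tri *vals, int n)
{
    if (null_entry_matched)
        return T_FALSE;
    return tri_and(vals, n);
}
```

A special "NOT NULL" consistent function, as the review suggests, would simply bake the `null_entry_matched` test into the function chosen for such scan keys.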
{
"msg_contents": "Hi, Tomas!\n\nThank you for your feedback!\n\nOn Mon, Jan 6, 2020 at 6:22 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> Yeah, I can confirm those results, although on my system the timings are\n> a bit different (I haven't tested v8):\n>\n>                                        | Query time, ms\n>     WHERE condition                    | master |   v9\n> ---------------------------------------+--------+---------\n>  a @> '{}'                             |    610 |    589\n>  a @> '{}' and b @> '{}'               |    185 |    665\n>  a @> '{}' and b @> '{}' and c @> '{}' |    185 |    741\n>\n> So that's something we probably need to address, perhaps by using the\n> GIN_CAT_EMPTY_QUERY entries as proposed.\n>\n\nYeah, handling nulls better without regression in some cases is hard.\nFor now I see at least 3 different ways of handling nulls, assuming there\nis another non-excluding scan key:\n\n1) Collect non-null matches by a full scan of all non-null entries.\n2) Exclude null matches using a scan of the null entry.\n3) Force recheck.\n\nEach method has its own advantages and disadvantages. We probably\nwould need some cost-based decision making algorithm based on statistics.\nI'm not entirely sure it's OK to do this at execution time. However, it\nprobably\ncould be classified as \"adaptive query processing\", which is considered a\ncool trend in DBMS.\n\nThe attached version 10 of the patch doesn't change null handling in comparison\nwith master. It eliminates a full index scan only if there is another scan\non the same column. So, it never adds a null item to the scan key. 
I've rerun tests\nfrom Nikita [1].\n\n | | Query time, ms |\n # | WHERE condition | master | v10 |\n---+----------------------------------------+--------+-------+\n 1 | a @> '{}' | 223 | 218 |\n 2 | a @> '{}' and b @> '{}' | 302 | 308 |\n 3 | a @> '{}' and b @> '{}' and c @> '{}' | 405 | 404 |\n 4 | a @> '{}' and a @@ '1' | 59 | 0.3 |\n 5 | a @> '{}' and a @@ '-1' | 64 | 2.2 |\n 6 | a @@ '!-1' and a @@ '1' | 63 | 0.3 |\n 7 | a @@ '!1' and a @@ '-1' | 62 | 3.0 |\n\nIt appears that absolute numbers for master are higher than they were\nprevious time [2]. I've rechecked multiple times that current numbers are\ncorrect. So, it might be I didn't turn off sequential scan previous time.\n\nWe can see that cases #1, #2, #3, which have quals over multiple attributes\nhave the same execution time as in master. That's expected since scanning\nstrategy is the same. Cases #4, #5, #6, #7 have about the same improvement\nas in v9.\n\nI've also rerun many nulls test from Nikita [3].\n\n | Query time, ms |\n WHERE condition | master | v10 |\n---------------------------------------+--------+-------+\n a @> '{}' | 190 | 192 |\n a @> '{}' and b @> '{}' | 55 | 57 |\n a @> '{}' and b @> '{}' and c @> '{}' | 60 | 58 |\n\nThe results are the same as in master again.\n\nI've also tested this on a database storing mailing lists archives with\n> a trigram index, and in that case the performance with short values gets\n> much better. The \"messages\" table has two text fields with a GIN trigram\n> index - subject and body, and querying them with short/long values works\n> like this:\n>\n> WHERE | master | v9\n> --------------------------------------------------------------\n> subject LIKE '%aa%' AND body LIKE '%xx%' | 4943 | 4052\n> subject LIKE '%aaa%' AND body LIKE '%xx%' | 10 | 10\n> subject LIKE '%aa%' AND body LIKE '%xxx%' | 380 | 13\n> subject LIKE '%aaa%' AND BODY LIKE '%xxx%' | 2 | 2\n>\n> which seems fairly nice. 
I've done tests with individual columns, and\n> that seems to be working fine too.\n>\n\nCool, thanks!\n\nSo, I think v10 is a version of patch, which can be committed after\nsome cleanup. And we can try doing better nulls handling in a separate\npatch.\n\nLinks\n1.\nhttps://www.postgresql.org/message-id/f2889144-db1d-e3b2-db97-cfc8794cda43%40postgrespro.ru\n2.\nhttps://www.postgresql.org/message-id/CAPpHfdvT_t6ShG2pvptEWceDxEnyNRsm2MxmCWWvxBzQ-pbMuw%40mail.gmail.com\n3.\nhttps://www.postgresql.org/message-id/b53614eb-6f9f-8c5c-9df8-f703b0b102b6%40postgrespro.ru\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 8 Jan 2020 19:31:52 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> So, I think v10 is a version of patch, which can be committed after\n> some cleanup. And we can try doing better nulls handling in a separate\n> patch.\n\nThe cfbot reports that this doesn't pass regression testing.\nI haven't looked into why not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 10:31:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > So, I think v10 is a version of patch, which can be committed after\n> > some cleanup. And we can try doing better nulls handling in a separate\n> > patch.\n>\n> The cfbot reports that this doesn't pass regression testing.\n> I haven't looked into why not.\n>\n\nThank you for noticing. I'll take care of it.\n\n ------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 10 Jan 2020 19:36:48 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 7:36 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n>> > So, I think v10 is a version of patch, which can be committed after\n>> > some cleanup. And we can try doing better nulls handling in a separate\n>> > patch.\n>>\n>> The cfbot reports that this doesn't pass regression testing.\n>> I haven't looked into why not.\n>\n>\n> Thank you for noticing. I'll take care of it.\n\n\nIn the v10 I've fixed a bug with nulls handling, but it appears that the\ntest contained a wrong expected result. I've modified this test so that\nit directly compares sequential scan results with bitmap indexscan\nresults.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 11 Jan 2020 03:10:05 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Hi,\n\nOn Sat, Jan 11, 2020 at 1:10 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Fri, Jan 10, 2020 at 7:36 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> >\n> > On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> >> > So, I think v10 is a version of patch, which can be committed after\n> >> > some cleanup. And we can try doing better nulls handling in a separate\n> >> > patch.\n> >>\n> >> The cfbot reports that this doesn't pass regression testing.\n> >> I haven't looked into why not.\n> >\n> >\n> > Thank you for noticing. I'll take care of it.\n>\n>\n> In the v10 I've fixed a bug with nulls handling, but it appears that\n> test contained wrong expected result. I've modified this test so that\n> it directly compares sequential scan results with bitmap indexscan\n> results.\n\nThanks a lot for working on that! I'm not very familiar with gin\ninternals so additional eyes are definitely needed here but I agree\nthat this approach is simpler and cleaner. I didn't find any problem\nwith the modified logic, the patch applies cleanly, compiles without\nwarning and all regression tests pass, so it all seems good to me.\n\nHere are a few comments:\n\n- In keyGetItem(), it seems that some comments would need to be\nupdated wrt. the new excludeOnly flag. I'm thinking of:\n\n * Ok, we now know that there are no matches < minItem.\n *\n * If minItem is lossy, it means that there were no exact items on the\n * page among requiredEntries, because lossy pointers sort after exact\n * items. However, there might be exact items for the same page among\n * additionalEntries, so we mustn't advance past them.\n\nand\n\n /*\n * Normally, none of the items in additionalEntries can have a curItem\n * larger than minItem. 
But if minItem is a lossy page, then there\n * might be exact items on the same page among additionalEntries.\n */ if (ginCompareItemPointers(&entry->curItem, &minItem) < 0)\n {\n Assert(ItemPointerIsLossyPage(&minItem) || key->nrequired == 0);\n minItem = entry->curItem;\n }\n\nWhile at it, IIUC only excludeOnly key can have nrequired == 0 (if\nthat's the case, this could be explicitly said in startScanKey\nrelevant comment), so it'd be more consistent to also use excludeOnly\nrather than nrequired in this assert?\n\n- the pg_trgm regression tests check for the number of rows returned\nwith the new \"excludeOnly\" permutations, but only with an indexscan,\nshould we make sure that the same results are returned with a seq\nscan?\n\n\n",
"msg_date": "Sat, 11 Jan 2020 13:21:15 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Hi!\n\nThank you for feedback!\n\nOn Sat, Jan 11, 2020 at 3:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Sat, Jan 11, 2020 at 1:10 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> >\n> > On Fri, Jan 10, 2020 at 7:36 PM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > >\n> > > On Fri, Jan 10, 2020 at 6:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>\n> > >> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > >> > So, I think v10 is a version of patch, which can be committed after\n> > >> > some cleanup. And we can try doing better nulls handling in a separate\n> > >> > patch.\n> > >>\n> > >> The cfbot reports that this doesn't pass regression testing.\n> > >> I haven't looked into why not.\n> > >\n> > >\n> > > Thank you for noticing. I'll take care of it.\n> >\n> >\n> > In the v10 I've fixed a bug with nulls handling, but it appears that\n> > test contained wrong expected result. I've modified this test so that\n> > it directly compares sequential scan results with bitmap indexscan\n> > results.\n>\n> Thanks a lot for working on that! I'm not very familiar with gin\n> internals so additional eyes are definitely needed here but I agree\n> that this approach is simpler and cleaner. I didn't find any problem\n> with the modified logic, the patch applies cleanly, compiles without\n> warning and all regression tests pass, so it all seems good to me.\n>\n> Here are a few comments:\n>\n> - In keyGetItem(), it seems that some comments would need to be\n> updated wrt. the new excludeOnly flag. I'm thinking of:\n>\n> * Ok, we now know that there are no matches < minItem.\n> *\n> * If minItem is lossy, it means that there were no exact items on the\n> * page among requiredEntries, because lossy pointers sort after exact\n> * items. 
However, there might be exact items for the same page among\n> * additionalEntries, so we mustn't advance past them.\n>\n> and\n>\n> /*\n> * Normally, none of the items in additionalEntries can have a curItem\n> * larger than minItem. But if minItem is a lossy page, then there\n> * might be exact items on the same page among additionalEntries.\n> */ if (ginCompareItemPointers(&entry->curItem, &minItem) < 0)\n> {\n> Assert(ItemPointerIsLossyPage(&minItem) || key->nrequired == 0);\n> minItem = entry->curItem;\n> }\n\nSure, thank you for pointing. I'm working on improving comments.\nI'll provide updated patch soon.\n\n> While at it, IIUC only excludeOnly key can have nrequired == 0 (if\n> that's the case, this could be explicitly said in startScanKey\n> relevant comment), so it'd be more consistent to also use excludeOnly\n> rather than nrequired in this assert?\n\nMake sense. I'll adjust the assert as well as comment.\n\n> - the pg_trgm regression tests check for the number of rows returned\n> with the new \"excludeOnly\" permutations, but only with an indexscan,\n> should we make sure that the same results are returned with a seq\n> scan?\n\nYes, I recently fixed similar issue in gin regression test. I'll\nadjust pg_trgm test as well.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 12 Jan 2020 00:10:56 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Updated patch is attached. It contains more comments as well as commit message.\n\nOn Sun, Jan 12, 2020 at 12:10 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Sat, Jan 11, 2020 at 3:19 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > While at it, IIUC only excludeOnly key can have nrequired == 0 (if\n> > that's the case, this could be explicitly said in startScanKey\n> > relevant comment), so it'd be more consistent to also use excludeOnly\n> > rather than nrequired in this assert?\n>\n> Make sense. I'll adjust the assert as well as comment.\n\nThe changes to this assertion are not actually needed. I just\naccidentally forgot to revert them.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 14 Jan 2020 06:03:41 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> Updated patch is attached. It contains more comments as well as commit message.\n\nI reviewed this a little bit. I agree this seems way more straightforward\nthan the patches we've been considering so far. I wasn't too happy with\nthe state of the comments, so I rewrote them a bit in the attached v13.\n\nOne thing I'm still not happy about is the comment in\ncollectMatchesForHeapRow. v12 failed to touch that at all, so I tried to\nfill it in, but I'm not sure if my explanation is good. Also, if we know\nthat excludeOnly keys are going to be ignored, can we save any work in\nthe main loop of that function?\n\nThe test cases needed some work too. Notably, some of the places where\nyou tried to use EXPLAIN ANALYZE are unportable because they expose \"Heap\nBlocks\" counts that are not stable. (I checked the patch on a 32-bit\nmachine and indeed some of these failed.) While it'd be possible to work\naround that by filtering the EXPLAIN output, that would not be any simpler\nor faster than our traditional style of just doing a plain EXPLAIN and a\nseparate execution.\n\nIt troubles me a bit as well that the test cases don't really expose\nany difference between patched and unpatched code --- I checked, and\nthey \"passed\" without applying any of the code changes. Maybe there's\nnot much to be done about that, since after all this is an optimization\nthat's not supposed to change any query results.\n\nI didn't repeat any of the performance testing --- it seems fairly\nclear that this can't make any cases worse.\n\nOther than the collectMatchesForHeapRow issue, I think this is\ncommittable.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 14 Jan 2020 13:43:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Hi!\n\nOn Tue, Jan 14, 2020 at 9:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > Updated patch is attached. It contains more comments as well as commit message.\n>\n> I reviewed this a little bit. I agree this seems way more straightforward\n> than the patches we've been considering so far. I wasn't too happy with\n> the state of the comments, so I rewrote them a bit in the attached v13.\n\nThank you!\n\n> One thing I'm still not happy about is the comment in\n> collectMatchesForHeapRow. v12 failed to touch that at all, so I tried to\n> fill it in, but I'm not sure if my explanation is good.\n\nI've tried to rephrase this comment making it better from my point of\nview. It's hard for me to be sure about this, since I'm not native\nEnglish speaker. I'd like you to take a look on it.\n\n> Also, if we know\n> that excludeOnly keys are going to be ignored, can we save any work in\n> the main loop of that function?\n\nIt doesn't look so for me. We still need to collect matches for\nconsistent function call afterwards. We may skip calling consistent\nfunction for excludeOnly keys by forcing a recheck. But that's not\ngoing to be a plain win.\n\nI thought about different optimization. We now check for at least one\nmatching entry. Can we check for at least one *required* entry? It\nseems we can save some consistent function calls.\n\n> The test cases needed some work too. Notably, some of the places where\n> you tried to use EXPLAIN ANALYZE are unportable because they expose \"Heap\n> Blocks\" counts that are not stable. (I checked the patch on a 32-bit\n> machine and indeed some of these failed.) 
While it'd be possible to work\n> around that by filtering the EXPLAIN output, that would not be any simpler\n> or faster than our traditional style of just doing a plain EXPLAIN and a\n> separate execution.\n\nThanks!\n\n> It troubles me a bit as well that the test cases don't really expose\n> any difference between patched and unpatched code --- I checked, and\n> they \"passed\" without applying any of the code changes. Maybe there's\n> not much to be done about that, since after all this is an optimization\n> that's not supposed to change any query results.\n\nYep, it seems like we can't do much in this field unless we're going\nto expose too much internals at user level.\n\nI also had concerns about how excludeOnly keys work with lossy pages.\nI didn't find exact error. But I've added code, which skips\nexcludeOnly keys checks for lossy pages. They aren't going to exclude\nany lossy page anyway. So, we can save some resources by skipping\nthis.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 15 Jan 2020 01:47:30 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 1:47 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I also had concerns about how excludeOnly keys work with lossy pages.\n> I didn't find exact error. But I've added code, which skips\n> excludeOnly keys checks for lossy pages. They aren't going to exclude\n> any lossy page anyway. So, we can save some resources by skipping\n> this.\n\nI also found the way we combine lossy pages and exact TIDs pretty\nasymmetric. Imagine one scan key A matches a lossy page, while\nanother key B has a set of matching TIDs on the same page. If key A\ngoes first, we will report a lossy page. But if key B goes first, we\nwill report a set of TIDs with recheck set. It would be nice to\nimprove this. But that is definitely the subject of a separate patch.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 15 Jan 2020 01:56:47 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
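The lossy-page ordering the thread relies on ("lossy pointers sort after exact items on the same page") can be sketched with a toy TID comparator. This is an illustrative model under the assumption that a whole-page (lossy) pointer is encoded with the maximum offset number; `ToyItemPointer`, `lossy_page`, and `toy_compare` are simplified stand-ins, not PostgreSQL's ItemPointerData or ginCompareItemPointers.

```c
#include <stdint.h>

/* Simplified TID: heap block number plus item offset within the page. */
typedef struct { uint32_t block; uint16_t offset; } ToyItemPointer;

/* Assumed encoding: a "whole page" pointer carries the maximum offset, so it
 * compares after every exact item on that page. */
#define LOSSY_OFFSET UINT16_MAX

static ToyItemPointer lossy_page(uint32_t block)
{
    ToyItemPointer t = { block, LOSSY_OFFSET };
    return t;
}

static int is_lossy(ToyItemPointer t)
{
    return t.offset == LOSSY_OFFSET;
}

/* Compare block number first, then offset, like a plain TID comparison. */
static int toy_compare(ToyItemPointer a, ToyItemPointer b)
{
    if (a.block != b.block)
        return a.block < b.block ? -1 : 1;
    if (a.offset != b.offset)
        return a.offset < b.offset ? -1 : 1;
    return 0;
}
```

Under this encoding, an exact item on a page always sorts before the lossy pointer for the same page, which is what lets a scan notice exact matches among additional entries before advancing past a lossy minItem.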
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Tue, Jan 14, 2020 at 9:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> One thing I'm still not happy about is the comment in\n>> collectMatchesForHeapRow. v12 failed to touch that at all, so I tried to\n>> fill it in, but I'm not sure if my explanation is good.\n\n> I've tried to rephrase this comment making it better from my point of\n> view. It's hard for me to be sure about this, since I'm not native\n> English speaker. I'd like you to take a look on it.\n\nYeah, that's not great as-is. Maybe like\n\n+\t * All scan keys except excludeOnly require at least one entry to match.\n+\t * excludeOnly keys are an exception, because their implied\n+\t * GIN_CAT_EMPTY_QUERY scanEntry always matches. So return \"true\"\n+\t * if all non-excludeOnly scan keys have at least one match.\n\n>> Also, if we know\n>> that excludeOnly keys are going to be ignored, can we save any work in\n>> the main loop of that function?\n\n> It doesn't look so for me. We still need to collect matches for\n> consistent function call afterwards.\n\nAh, right.\n\n> I also had concerns about how excludeOnly keys work with lossy pages.\n> I didn't find exact error. But I've added code, which skips\n> excludeOnly keys checks for lossy pages. They aren't going to exclude\n> any lossy page anyway. So, we can save some resources by skipping\n> this.\n\nHmm ... yeah, these test cases are not large enough to exercise any\nlossy-page cases, are they? I doubt we should try to make a new regression\ntest that is that big. (But if there is one already, maybe we could add\nmore test queries with it, instead of creating whole new tables?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 18:03:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 2:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > On Tue, Jan 14, 2020 at 9:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> One thing I'm still not happy about is the comment in\n> >> collectMatchesForHeapRow. v12 failed to touch that at all, so I tried to\n> >> fill it in, but I'm not sure if my explanation is good.\n>\n> > I've tried to rephrase this comment making it better from my point of\n> > view. It's hard for me to be sure about this, since I'm not native\n> > English speaker. I'd like you to take a look on it.\n>\n> Yeah, that's not great as-is. Maybe like\n>\n> + * All scan keys except excludeOnly require at least one entry to match.\n> + * excludeOnly keys are an exception, because their implied\n> + * GIN_CAT_EMPTY_QUERY scanEntry always matches. So return \"true\"\n> + * if all non-excludeOnly scan keys have at least one match.\n\nLooks good to me.\n\n> >> Also, if we know\n> >> that excludeOnly keys are going to be ignored, can we save any work in\n> >> the main loop of that function?\n>\n> > It doesn't look so for me. We still need to collect matches for\n> > consistent function call afterwards.\n>\n> Ah, right.\n>\n> > I also had concerns about how excludeOnly keys work with lossy pages.\n> > I didn't find exact error. But I've added code, which skips\n> > excludeOnly keys checks for lossy pages. They aren't going to exclude\n> > any lossy page anyway. So, we can save some resources by skipping\n> > this.\n>\n> Hmm ... yeah, these test cases are not large enough to exercise any\n> lossy-page cases, are they? I doubt we should try to make a new regression\n> test that is that big. (But if there is one already, maybe we could add\n> more test queries with it, instead of creating whole new tables?)\n\nI've checked that none of existing tests for GIN can produce lossy\nbitmap page with minimal work_mem = '64kB'. 
I've tried to generate a\nsample table with a single integer column to get a lossy page. It appears\nthat we need at least 231425 rows to get one. With wider rows, we\nwould need fewer rows, but I think the total heap size wouldn't\nbe smaller.\n\nSo, I think we don't need such a huge regression test to exercise this corner case.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 18 Jan 2020 00:33:14 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 12:33 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> So, I think we don't need so huge regression test to exercise this corner case.\n\nForgot to mention. I'm going to push v15 if no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 18 Jan 2020 00:34:47 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> On Wed, Jan 15, 2020 at 2:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... yeah, these test cases are not large enough to exercise any\n>> lossy-page cases, are they? I doubt we should try to make a new regression\n>> test that is that big. (But if there is one already, maybe we could add\n>> more test queries with it, instead of creating whole new tables?)\n\n> I've checked that none of existing tests for GIN can produce lossy\n> bitmap page with minimal work_mem = '64kB'. I've tried to generate\n> sample table with single integer column to get lossy page. It appears\n> that we need at least 231425 rows to get it. With wider rows, we\n> would need less number of rows, but I think total heap size wouldn't\n> be less.\n> So, I think we don't need so huge regression test to exercise this corner case.\n\nUgh. Yeah, I don't want a regression test case that big either.\n\nv15 looks good to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Jan 2020 16:48:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 12:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <a.korotkov@postgrespro.ru> writes:\n> > On Wed, Jan 15, 2020 at 2:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm ... yeah, these test cases are not large enough to exercise any\n> >> lossy-page cases, are they? I doubt we should try to make a new regression\n> >> test that is that big. (But if there is one already, maybe we could add\n> >> more test queries with it, instead of creating whole new tables?)\n>\n> > I've checked that none of existing tests for GIN can produce lossy\n> > bitmap page with minimal work_mem = '64kB'. I've tried to generate\n> > sample table with single integer column to get lossy page. It appears\n> > that we need at least 231425 rows to get it. With wider rows, we\n> > would need less number of rows, but I think total heap size wouldn't\n> > be less.\n> > So, I think we don't need so huge regression test to exercise this corner case.\n>\n> Ugh. Yeah, I don't want a regression test case that big either.\n>\n> v15 looks good to me.\n\nThanks! Pushed!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 18 Jan 2020 01:13:33 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoid full GIN index scan when possible"
}
] |
[
{
"msg_contents": "This results in an Assert failure on master and an elog ERROR prior to\nc2fe139c201:\n\ncreate role test_role with login;\ncreate table ref(a int primary key);\ngrant references on ref to test_role;\nset role test_role;\ncreate table t1(a int, b int);\ninsert into t1 values(1,1);\nalter table t1 add constraint t1_b_key foreign key (b) references ref(a);\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nFails in heapam_tuple_satisfies_snapshot() at\nAssert(BufferIsValid(bslot->buffer));\n\nc2fe139c201~1:\nERROR: expected buffer tuple\n\nThe test case is just a variation of the case in [1], but a different\nbug, so reporting it on a different thread.\n\nI've not looked into the cause or when it started happening.\n\n[1] https://www.postgresql.org/message-id/CAK%3D1%3DWrnNmBbe5D9sm3t0a6dnAq3cdbF1vXY816j1wsMqzC8bw%40mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Sun, 24 Mar 2019 23:54:53 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Assert failure when validating foreign keys"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-24 23:54:53 +1300, David Rowley wrote:\n> This results in an Assert failure on master and an elog ERROR prior to\n> c2fe139c201:\n> \n> create role test_role with login;\n> create table ref(a int primary key);\n> grant references on ref to test_role;\n> set role test_role;\n> create table t1(a int, b int);\n> insert into t1 values(1,1);\n> alter table t1 add constraint t1_b_key foreign key (b) references ref(a);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n> Fails in heapam_tuple_satisfies_snapshot() at\n> Assert(BufferIsValid(bslot->buffer));\n> \n> c2fe139c201~1:\n> ERROR: expected buffer tuple\n> \n> The test case is just a variation of the case in [1], but a different\n> bug, so reporting it on a different thread.\n> \n> I've not looked into the cause or when it started happening.\n\nThat's probably my fault somehow, I'll look into it. Got some urgent\nerrands to run first (and it's still early here).\n\nThanks for noticing,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 07:24:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure when validating foreign keys"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-24 23:54:53 +1300, David Rowley wrote:\n> This results in an Assert failure on master and an elog ERROR prior to\n> c2fe139c201:\n> \n> create role test_role with login;\n> create table ref(a int primary key);\n> grant references on ref to test_role;\n> set role test_role;\n> create table t1(a int, b int);\n> insert into t1 values(1,1);\n> alter table t1 add constraint t1_b_key foreign key (b) references ref(a);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> \n> Fails in heapam_tuple_satisfies_snapshot() at\n> Assert(BufferIsValid(bslot->buffer));\n> \n> c2fe139c201~1:\n> ERROR: expected buffer tuple\n> \n> The test case is just a variation of the case in [1], but a different\n> bug, so reporting it on a different thread.\n> \n> I've not looked into the cause or when it started happening.\n\nI think the cause is stupidity of mine. In\nvalidateForeignKeyConstraint() I passed true to the materialize argument\nof ExecFetchSlotHeapTuple(). Which therefore is made independent of\nbuffers. Which this assert then notices. Just changing that to false,\nwhich is correct, fixes the issue for me.\n\nI'm a bit confused as to how we have no tests for this code? Is it just\nthat the left join codepath is \"too good\"?\n\nI've also noticed that we should free the tuple - that doesn't matter\nfor heap, but it sure does for other callers. But uh, is it actually ok\nto validate an entire table's worth of foreign keys without a memory\ncontext reset? I.e. shouldn't we have a memory context that we reset\nafter each iteration?\n\nAlso, why's there no CHECK_FOR_INTERRUPTS()? heap has some internally on\na page level, but that doesn't seem all that granular?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 25 Mar 2019 11:04:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure when validating foreign keys"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-25 11:04:05 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-03-24 23:54:53 +1300, David Rowley wrote:\n> > This results in an Assert failure on master and an elog ERROR prior to\n> > c2fe139c201:\n> > \n> > create role test_role with login;\n> > create table ref(a int primary key);\n> > grant references on ref to test_role;\n> > set role test_role;\n> > create table t1(a int, b int);\n> > insert into t1 values(1,1);\n> > alter table t1 add constraint t1_b_key foreign key (b) references ref(a);\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n> > \n> > Fails in heapam_tuple_satisfies_snapshot() at\n> > Assert(BufferIsValid(bslot->buffer));\n> > \n> > c2fe139c201~1:\n> > ERROR: expected buffer tuple\n> > \n> > The test case is just a variation of the case in [1], but a different\n> > bug, so reporting it on a different thread.\n> > \n> > I've not looked into the cause or when it started happening.\n> \n> I think the cause is stupidity of mine. In\n> validateForeignKeyConstraint() I passed true to the materialize argument\n> of ExecFetchSlotHeapTuple(). Which therefore is made independent of\n> buffers. Which this assert then notices. Just changing that to false,\n> which is correct, fixes the issue for me.\n> \n> I'm a bit confused as to how we have no tests for this code? Is it just\n> that the left join codepath is \"too good\"?\n> \n> I've also noticed that we should free the tuple - that doesn't matter\n> for heap, but it sure does for other callers. But uh, is it actually ok\n> to validate an entire table's worth of foreign keys without a memory\n> context reset? I.e. shouldn't we have a memory context that we reset\n> after each iteration?\n> \n> Also, why's there no CHECK_FOR_INTERRUPTS()? 
heap has some internally on\n> a page level, but that doesn't seem all that granular?\n\nTom pushed a part of this earlier in\ncommit 46e3442c9ec858071d60a1c0fae2e9868aeaa0c8\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2019-04-06 15:09:09 -0400\n\n Fix failures in validateForeignKeyConstraint's slow path.\n\nI've now added a fixed version of the memory context portion of this\npatch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 7 Apr 2019 22:53:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure when validating foreign keys"
}
] |
[
{
"msg_contents": "Hi Robert! >On Tue, Mar 19, 2019 at 2:47 PM Robert Haas\n<robertmhaas(at)gmail(dot)com> wrote: >> how close you were getting to\nrewriting the entire heap. This is the >> one thing I found but did not\nfix; any chance you could make this >> change and update the documentation\nto match? > > >Hi, is anybody working on this? Thank you so much for\nreviewing the patch and sorry for the late reply. Today, I realized that\nyou sent the email for the patch because I took a sick leave from work for\na while. So, I created new patch based on your comments asap. I hope it is\nacceptable to you. :) Please find attached file. Changes - Add new column\n*heap_tuples_written* in the view This column is updated when the phases\nare \"seq scanning heap\", \"index scanning heap\" or \"writing new heap\". - Fix\ndocument - Revised the patch on 280a408b48d5ee42969f981bceb9e9426c3a344c\n\n\n\n\n\n\n\n\n\nRegards,\n\n\n\nTatsuro Yamada",
"msg_date": "Mon, 25 Mar 2019 01:55:18 +0900",
"msg_from": "Tattsu Yama <yamatattsu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] CLUSTER command progress monitor"
}
] |
[
{
"msg_contents": "Hi all,\r\n\r\nWhen I do the following:\r\npostgres=# create table t1 (a int);\r\npostgres=# insert into t1 values(1);\r\npostgres=# create unique index uniq_idx on t1(a);\r\npostgres=# alter table t1 add column b float8 not null default random(), add primary key using index uniq_idx;\r\nERROR: column \"b\" contains null values\r\n\r\nPostgreSQL throws error \"column b contains null values\".\r\n\r\n#########################################\r\nalter table t1 add column b float8 not null default 0, add primary key using index uniq_idx;\r\n\r\nalter table success.\r\n#########################################\r\n\r\nThe reasons for the error are as follows.\r\n\r\nATController provides top level control over the phases.\r\nPhase 1: preliminary examination of commands, create work queue \r\nPhase 2: update system catalogs \r\nPhase 3: scan/rewrite tables as needed \r\n\r\nIn Phase 2, when dealing with \"add column b float8 not null default random()\", the table is marked rewrite.\r\nWhen dealing with \"add primary key using index uniq_idx\", ATExecAddIndexConstraint calls index_check_primary_key.\r\n\r\nThe calling order is as follows.\r\nindex_check_primary_key()\r\n ↓\r\nAlterTableInternal()\r\n ↓\r\nATController()\r\n ↓\r\nATRewriteTables()\r\n ↓\r\nATRewriteTable()\r\n\r\nATRewriteTable check all not-null constraints. Column a and column b need to check NOT NULL.\r\nUnfortunately, at this time, Phase 3 hasn't been done yet.\r\nThe table is not rewrited, just marked rewrite. So, throws error \"column b contains null values\".\r\n\r\nIn Phase 2, if table is marked rewrite, we can do not check whether columns are NOT NULL.\r\nBecause phase 3 will do it.\r\n\r\nHere's a patch to fix this bug.\r\n\r\nBest Regards!",
"msg_date": "Mon, 25 Mar 2019 01:31:51 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE with ADD COLUMN and ADD PRIMARY KEY USING INDEX throws\n spurious \"column contains null values\""
},
{
"msg_contents": "Hi\n\nI did not review patch, just want add link to related bug 15580 and one another -hackers thread:\n\nhttps://www.postgresql.org/message-id/flat/15580-d1a6de5a3d65da51%40postgresql.org\nhttps://www.postgresql.org/message-id/flat/CAOVr7%2B3C9u_ZApjxpccrorvt0aw%3Dk8Ct5FhHRVZA-YO36V3%3Drg%40mail.gmail.com\n\nregards, Sergei\n\n",
"msg_date": "Mon, 25 Mar 2019 15:45:21 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE with ADD COLUMN and ADD PRIMARY KEY USING INDEX\n throws spurious \"column contains null values\""
},
{
"msg_contents": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com> writes:\n> Here's a patch to fix this bug.\n\nI took a look at this patch, but I really dislike it: it adds a mighty\nad-hoc parameter to a whole bunch of functions that shouldn't really\nhave anything to do with the problem. Moreover, I have little confidence\nthat it's really solving the problem and not introducing any new problems\n(such as failure to apply the not-null check when we need to).\n\nI think the real problem is exactly that we've got index_check_primary_key\ndoing its own mini ALTER TABLE in ignorance of what might be happening\nin the outer ALTER TABLE. That's just ripe for order-of-operations\nproblems, seeing that ALTER TABLE has a lot of very careful decisions\nabout which steps have to happen before which other ones. Moreover,\nas this old comment notes, it's a horridly inefficient approach if\nthe table is large:\n\n /*\n * XXX: possible future improvement: when being called from ALTER TABLE,\n * it would be more efficient to merge this with the outer ALTER TABLE, so\n * as to avoid two scans. But that seems to complicate DefineIndex's API\n * unduly.\n */\n\nSo I thought a bit about how to fix that, and realized that we could\neasily adjust the parser to emit AT_SetNotNull subcommands as part of the\nouter ALTER TABLE that has the ADD PRIMARY KEY subcommand. Then,\nindex_check_primary_key doesn't need to do anything at all about adding\nNOT NULL, although it seems like a good safety feature for it to check\nthat somebody else already added that.\n\nSo, attached is a WIP patch that fixes it that way. 
Some notes\nfor review:\n\n* I initially thought that index_check_primary_key could be simplified\nto the point where it *only* throws an error for missing NOT NULL.\nThis soon proved to be wrong, because the comments for the function\nare lies, or at least obsolete: there are multiple scenarios in which\na CREATE TABLE with a PRIMARY KEY option does need this function to\nperform ALTER TABLE SET NOT NULL. Fortunately, that's not so expensive\nin that case, since the table must be empty. So as coded, it throws\nan error if is_alter_table, and otherwise does what it did before.\n\n* We need to fix the order of operations in ALTER TABLE phase 2 so that\nany AT_SetNotNull subcommands happen after the AT_PASS_ADD_COL pass\n(else the column might not be there yet) and before AT_PASS_ADD_INDEX\n(else index_check_primary_key will complain). I did this by putting\nAT_SetNotNull into the AT_PASS_COL_ATTRS pass and moving that to after\nAT_PASS_ADD_COL; it had been before AT_PASS_ADD_COL, but that seems at\nbest random and at worst broken. (AT_SetIdentity is the only existing\nsubcommand using AT_PASS_COL_ATTRS, and it sure seems like it'd make more\nsense to run it after AT_PASS_ADD_COL, so that it can work on a column\nbeing added in the same ALTER. Am I missing something?)\n\n* Some existing regression tests for \"ALTER TABLE ONLY partitioned_table\nADD PRIMARY KEY\" failed. That apparently is supposed to work if all\npartitions already have a suitable unique index and NOT NULL constraint,\nbut it was failing because ATPrepSetNotNull wasn't expecting to be used\nthis way. 
I thought that function was a pretty terrible kluge anyway,\nso what I did was to refactor things so that in this scenario we just\napply checks to see if all the partitions already have suitable NOT NULL.\nNote that this represents looser semantics than what was there before,\nbecause previously you couldn't say \"ALTER TABLE ONLY partitioned_table\nSET NOT NULL\" if there were any partitions; now you can, if the partitions\nall have suitable NOT NULL already. We probably ought to change the error\nmessage to reflect that, but I didn't yet.\n\n* A couple of existing test cases change error messages, like so:\n\n-ERROR: column \"test1\" named in key does not exist\n+ERROR: column \"test1\" of relation \"atacc1\" does not exist\n\nThis is because the added AT_SetNotNull subcommand runs before\nAT_AddIndex, so it's the one that notices that there's not really\nany such column, and historically it's spelled its error message\ndifferently. This change seems all to the good to me, so I didn't\ntry to avoid it.\n\n* I haven't yet added any test case(s) reflecting the bug fix nor\nthe looser semantics for adding NOT NULL to partitioned tables.\nIt does pass check-world as presented.\n\n* I'm not sure whether we want to try to back-patch this, or how\nfar it should go. The misbehavior has been there a long time\n(at least back to 8.4, I didn't check further); so the lack of\nprevious reports means people seldom try to do it. That may\nindicate that it's not worth taking any risks of new bugs to\nsquash this one. (Also, I suspect that it might take a lot of\nwork to port this to before v10, because there are comments\nsuggesting that this code worked a good bit differently before.)\nI do think we should shove this into v12 though.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 17 Apr 2019 23:54:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE with ADD COLUMN and ADD PRIMARY KEY USING INDEX\n throws spurious \"column contains null values\""
},
{
"msg_contents": "On Wed, Apr 17, 2019 at 11:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * I'm not sure whether we want to try to back-patch this, or how\n> far it should go. The misbehavior has been there a long time\n> (at least back to 8.4, I didn't check further); so the lack of\n> previous reports means people seldom try to do it. That may\n> indicate that it's not worth taking any risks of new bugs to\n> squash this one. (Also, I suspect that it might take a lot of\n> work to port this to before v10, because there are comments\n> suggesting that this code worked a good bit differently before.)\n> I do think we should shove this into v12 though.\n\nShoving it into v12 but not back-patching seems like a reasonable\ncompromise, although I have not reviewed the patch or tried to figure\nout how risky it is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Apr 2019 14:32:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE with ADD COLUMN and ADD PRIMARY KEY USING INDEX\n throws spurious \"column contains null values\""
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Apr 17, 2019 at 11:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * I'm not sure whether we want to try to back-patch this, or how\n>> far it should go. The misbehavior has been there a long time\n>> (at least back to 8.4, I didn't check further); so the lack of\n>> previous reports means people seldom try to do it. That may\n>> indicate that it's not worth taking any risks of new bugs to\n>> squash this one. (Also, I suspect that it might take a lot of\n>> work to port this to before v10, because there are comments\n>> suggesting that this code worked a good bit differently before.)\n>> I do think we should shove this into v12 though.\n\n> Shoving it into v12 but not back-patching seems like a reasonable\n> compromise, although I have not reviewed the patch or tried to figure\n> out how risky it is.\n\nHere's a less-WIP patch for that. I fixed up some more stuff:\n\n>> * I initially thought that index_check_primary_key could be simplified\n>> to the point where it *only* throws an error for missing NOT NULL.\n>> This soon proved to be wrong, because the comments for the function\n>> are lies, or at least obsolete: there are multiple scenarios in which\n>> a CREATE TABLE with a PRIMARY KEY option does need this function to\n>> perform ALTER TABLE SET NOT NULL.\n\nI decided that a cleaner way to handle this was to make the parser\ngenerate required ALTER TABLE SET NOT NULL operations in all cases,\nnot just the ALTER TABLE case. This gets rid of the former confused\nsituation wherein transformIndexConstraint forced primary-key columns\nNOT NULL in some situations and abdicated responsibility in others.\n\n>> * We need to fix the order of operations in ALTER TABLE phase 2 so that\n>> any AT_SetNotNull subcommands happen after the AT_PASS_ADD_COL pass\n>> (else the column might not be there yet) and before AT_PASS_ADD_INDEX\n>> (else index_check_primary_key will complain). 
I did this by putting\n>> AT_SetNotNull into the AT_PASS_COL_ATTRS pass and moving that to after\n>> AT_PASS_ADD_COL; it had been before AT_PASS_ADD_COL, but that seems at\n>> best random and at worst broken. (AT_SetIdentity is the only existing\n>> subcommand using AT_PASS_COL_ATTRS, and it sure seems like it'd make more\n>> sense to run it after AT_PASS_ADD_COL, so that it can work on a column\n>> being added in the same ALTER. Am I missing something?)\n\nSure enough, AT_SetIdentity is broken for the case where the column was\njust created in the same ALTER command, as per test case added below.\nAdmittedly, that's a fairly unlikely thing to do, but it should work;\nso the current ordering of these passes is wrong.\n\nBTW, now that we have an AT_PASS_COL_ATTRS pass, it's a bit tempting to\nshove other stuff that's in the nature of change-a-column-attribute into\nit; there are several AT_ subcommands of that sort that are currently in\nAT_PASS_MISC. I didn't do that here though.\n\nAlso, this fixes the issue complained of in\nhttps://postgr.es/m/16115.1555874162@sss.pgh.pa.us\n\nBarring objections I'll commit this tomorrow or so.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 21 Apr 2019 17:21:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE with ADD COLUMN and ADD PRIMARY KEY USING INDEX\n throws spurious \"column contains null values\""
}
] |
[
{
"msg_contents": "Hello all,\n\nLast November snowball added support for Greek language [1]. Following the\ninstructions [2], I wrote a patch that adds fulltext search for Greek in\nPostgres. The patch is attached.\nI would appreciate any feedback that will help in getting this merged.\n\nwith kind regards,\nPanos\n\n[1]\nhttps://github.com/snowballstem/snowball/commits/master/algorithms/greek.sbl\n[2]\nhttps://github.com/postgres/postgres/blob/97c39498e5ca9208d3de5a443a2282923619bf91/src/backend/snowball/README",
"msg_date": "Mon, 25 Mar 2019 13:04:52 +0200",
"msg_from": "Panagiotis Mavrogiorgos <pmav99@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature: Add Greek language fulltext search"
},
{
"msg_contents": "Panagiotis Mavrogiorgos <pmav99@gmail.com> writes:\n> Last November snowball added support for Greek language [1]. Following the\n> instructions [2], I wrote a patch that adds fulltext search for Greek in\n> Postgres. The patch is attached.\n\nCool!\n\n> I would appreciate any feedback that will help in getting this merged.\n\nWe're past the deadline for submitting features for v12, but please\nregister this patch in the first v13 commitfest so that we remember\nabout it when the time comes:\n\nhttps://commitfest.postgresql.org/23/\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Mar 2019 09:53:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature: Add Greek language fulltext search"
},
{
"msg_contents": "On 2019-03-25 12:04, Panagiotis Mavrogiorgos wrote:\n> Last November snowball added support for Greek language [1]. Following\n> the instructions [2], I wrote a patch that adds fulltext search for\n> Greek in Postgres. The patch is attached. \n\nI have committed a full sync from the upstream snowball repository,\nwhich pulled in the new greek stemmer.\n\nCould you please clarify where you got the stopword list from? The\nREADME says those need to be downloaded separately, but I wasn't able to\nfind the download location. It would be good to document this, for\nexample in the commit message. I haven't committed the stopword list yet.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 4 Jul 2019 13:39:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature: Add Greek language fulltext search"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 1:39 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-03-25 12:04, Panagiotis Mavrogiorgos wrote:\n> > Last November snowball added support for Greek language [1]. Following\n> > the instructions [2], I wrote a patch that adds fulltext search for\n> > Greek in Postgres. The patch is attached.\n>\n> I have committed a full sync from the upstream snowball repository,\n> which pulled in the new greek stemmer.\n>\n> Could you please clarify where you got the stopword list from? The\n> README says those need to be downloaded separately, but I wasn't able to\n> find the download location. It would be good to document this, for\n> example in the commit message. I haven't committed the stopword list yet.\n>\n\nThank you Peter,\n\nHere is the repo with the stop-words:\nhttps://github.com/pmav99/greek_stopwords\nThe list is based on an earlier publication with modification by me. All\nthe relevant info is on github.\n\nDisclaimer 1: The list has not been validated by an expert.\n\nDisclaimer 2: There are more stop-words lists on the internet, but they are\nless complete and they also use ancient greek words. Furthermore, my\ntesting showed that snowball needs to handle accents (tonous) and ς (teliko\nsigma) in a special way if you want the stemmer to work with capitalized\nwords too.\n\nhttps://github.com/Xangis/extra-stopwords/blob/master/greek\nhttps://github.com/stopwords-iso/stopwords-el/tree/master/raw\n\nall the best,\nPanagiotis\n\nOn Thu, Jul 4, 2019 at 1:39 PM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2019-03-25 12:04, Panagiotis Mavrogiorgos wrote:\n> Last November snowball added support for Greek language [1]. Following\n> the instructions [2], I wrote a patch that adds fulltext search for\n> Greek in Postgres. The patch is attached. 
\n\nI have committed a full sync from the upstream snowball repository,\nwhich pulled in the new greek stemmer.\n\nCould you please clarify where you got the stopword list from? The\nREADME says those need to be downloaded separately, but I wasn't able to\nfind the download location. It would be good to document this, for\nexample in the commit message. I haven't committed the stopword list yet.Thank you Peter,Here is the repo with the stop-words: https://github.com/pmav99/greek_stopwordsThe list is based on an earlier publication with modification by me. All the relevant info is on github.Disclaimer 1: The list has not been validated by an expert.Disclaimer 2: There are more stop-words lists on the internet, but they are less complete and they also use ancient greek words. Furthermore, my testing showed that snowball needs to handle accents (tonous) and ς (teliko sigma) in a special way if you want the stemmer to work with capitalized words too.https://github.com/Xangis/extra-stopwords/blob/master/greekhttps://github.com/stopwords-iso/stopwords-el/tree/master/rawall the best,Panagiotis",
"msg_date": "Tue, 9 Jul 2019 16:18:00 +0200",
"msg_from": "Panagiotis Mavrogiorgos <pmav99@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature: Add Greek language fulltext search"
},
{
"msg_contents": "On 7/4/19 1:39 PM, Peter Eisentraut wrote:\n> On 2019-03-25 12:04, Panagiotis Mavrogiorgos wrote:\n>> Last November snowball added support for Greek language [1]. Following\n>> the instructions [2], I wrote a patch that adds fulltext search for\n>> Greek in Postgres. The patch is attached. \n> \n> I have committed a full sync from the upstream snowball repository,\n> which pulled in the new greek stemmer.\n> \n> Could you please clarify where you got the stopword list from? The\n> README says those need to be downloaded separately, but I wasn't able to\n> find the download location. It would be good to document this, for\n> example in the commit message. I haven't committed the stopword list yet.\n> \n\nThanks, I noted snowball pushed a new commit related to greek stemmer few days\nafter your sync:\nhttps://github.com/snowballstem/snowball/commit/533602101f963eeb0c38343d94c428ceef740c0c\n\nAs it seems there is no policy for stable release on Snowball, I don't know what\nis the best way to keep in sync :(",
"msg_date": "Thu, 11 Jul 2019 11:55:53 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: Feature: Add Greek language fulltext search"
}
] |
[
{
"msg_contents": "Get rid of backtracking in jsonpath_scan.l\n\nNon-backtracking flex parsers work faster than backtracking ones. So, this\ncommit gets rid of backtracking in jsonpath_scan.l. That required explicit\nhandling of some cases as well as manual backtracking for some cases. More\nregression tests for numerics are added.\n\nDiscussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063\nAuthor: John Naylor, Nikita Gluknov, Alexander Korotkov\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/1d88a75c424664cc85f307a876cde85191d27272\n\nModified Files\n--------------\nsrc/backend/utils/adt/Makefile | 1 +\nsrc/backend/utils/adt/jsonpath_scan.l | 56 +++--\nsrc/test/regress/expected/jsonb_jsonpath.out | 2 +-\nsrc/test/regress/expected/jsonpath.out | 168 +++++++++++++++\nsrc/test/regress/expected/jsonpath_encoding.out | 249 ++++++++++++++++++++++\nsrc/test/regress/expected/jsonpath_encoding_1.out | 237 ++++++++++++++++++++\nsrc/test/regress/parallel_schedule | 2 +-\nsrc/test/regress/serial_schedule | 1 +\nsrc/test/regress/sql/jsonb_jsonpath.sql | 2 +-\nsrc/test/regress/sql/jsonpath.sql | 30 +++\nsrc/test/regress/sql/jsonpath_encoding.sql | 71 ++++++\n11 files changed, 795 insertions(+), 24 deletions(-)\n\n",
"msg_date": "Mon, 25 Mar 2019 12:44:08 +0000",
"msg_from": "Alexander Korotkov <akorotkov@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 8:44 AM Alexander Korotkov\n<akorotkov@postgresql.org> wrote:\n> Discussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063\n\nThis is really a pretty evil link to include in a commit message.\nWhen I clicked on it, it logged me out of my gmail account.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 08:55:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 3:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Mar 25, 2019 at 8:44 AM Alexander Korotkov\n> <akorotkov@postgresql.org> wrote:\n> > Discussion: https://mail.google.com/mail/u/0?ik=a20b091faa&view=om&permmsgid=msg-f%3A1628425344167939063\n>\n> This is really a pretty evil link to include in a commit message.\n> When I clicked on it, it logged me out of my gmail account.\n\nOh, such a shameful oversight!\nThat should be:\nhttps://postgr.es/m/CACPNZCuUXV3jEPFPsRw%2B4AKLvmO6CFWh3OwtH0CJv3w0oXnVoQ%40mail.gmail.com\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Mon, 25 Mar 2019 15:58:12 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "\nOn 3/25/19 8:44 AM, Alexander Korotkov wrote:\n> Get rid of backtracking in jsonpath_scan.l\n>\n> Non-backtracking flex parsers work faster than backtracking ones. So, this\n> commit gets rid of backtracking in jsonpath_scan.l. That required explicit\n> handling of some cases as well as manual backtracking for some cases. More\n> regression tests for numerics are added.\n\n\n\njacana appears to be having trouble with this:\n\n\n2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:6] LOG: server process (PID 8368) was terminated by exception 0xC0000028\n2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:7] DETAIL: Failed process was running: select '$ ? (@ like_regex \"pattern\" flag \"a\")'::jsonpath;\n2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:8] HINT: See C include file \"ntstatus.h\" for a description of the hexadecimal value.\n2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:9] LOG: terminating any other active server processes\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Mar 2019 08:31:15 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/25/19 8:44 AM, Alexander Korotkov wrote:\n>> Get rid of backtracking in jsonpath_scan.l\n\n> jacana appears to be having trouble with this:\n\nI wonder if that's related to the not-very-reproducible failures\nwe've seen on snapper. Can you get a stack trace?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Mar 2019 10:21:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 5:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> > On 3/25/19 8:44 AM, Alexander Korotkov wrote:\n> >> Get rid of backtracking in jsonpath_scan.l\n>\n> > jacana appears to be having trouble with this:\n>\n> I wonder if that's related to the not-very-reproducible failures\n> we've seen on snapper. Can you get a stack trace?\n\n+1\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n",
"msg_date": "Tue, 26 Mar 2019 17:46:47 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "On 2019-Mar-26, Andrew Dunstan wrote:\n\n> On 3/25/19 8:44 AM, Alexander Korotkov wrote:\n> > Get rid of backtracking in jsonpath_scan.l\n> >\n> > Non-backtracking flex parsers work faster than backtracking ones. So, this\n> > commit gets rid of backtracking in jsonpath_scan.l. That required explicit\n> > handling of some cases as well as manual backtracking for some cases. More\n> > regression tests for numerics are added.\n> \n> jacana appears to be having trouble with this:\n> \n> \n> 2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:6] LOG: server process (PID 8368) was terminated by exception 0xC0000028\n\n0xC0000028 is STATUS_BAD_STACK, per\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\nNot sure how credible/useful a stack trace is going to be.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 26 Mar 2019 12:22:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "\nOn 3/26/19 11:22 AM, Alvaro Herrera wrote:\n> On 2019-Mar-26, Andrew Dunstan wrote:\n>\n>> On 3/25/19 8:44 AM, Alexander Korotkov wrote:\n>>> Get rid of backtracking in jsonpath_scan.l\n>>>\n>>> Non-backtracking flex parsers work faster than backtracking ones. So, this\n>>> commit gets rid of backtracking in jsonpath_scan.l. That required explicit\n>>> handling of some cases as well as manual backtracking for some cases. More\n>>> regression tests for numerics are added.\n>> jacana appears to be having trouble with this:\n>>\n>>\n>> 2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:6] LOG: server process (PID 8368) was terminated by exception 0xC0000028\n> 0xC0000028 is STATUS_BAD_STACK, per\n> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n> Not sure how credible/useful a stack trace is going to be.\n>\n\n\nRight, and getting stack traces isn't easy in any case. There is a\ngadget from Google that is supposed to trap exceptions and produce a\nstack trace on the fly in mingw. I'm going to take a look at it,\nalthough loading it might be ... interesting.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Mar 2019 12:36:05 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "On 2019-Mar-26, Alvaro Herrera wrote:\n\n> > 2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:6] LOG: server process (PID 8368) was terminated by exception 0xC0000028\n> \n> 0xC0000028 is STATUS_BAD_STACK, per\n> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n> Not sure how credible/useful a stack trace is going to be.\n\nBTW I think we should update our message to use this URL instead of\nambiguously pointing to \"ntstatus.h\". Also, all the URLs in\nwin32_port.h (except the wine one) are dead.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 26 Mar 2019 13:53:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "\nOn 3/26/19 12:53 PM, Alvaro Herrera wrote:\n> On 2019-Mar-26, Alvaro Herrera wrote:\n>\n>>> 2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:6] LOG: server process (PID 8368) was terminated by exception 0xC0000028\n>> 0xC0000028 is STATUS_BAD_STACK, per\n>> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n>> Not sure how credible/useful a stack trace is going to be.\n> BTW I think we should update our message to use this URL instead of\n> ambiguously pointing to \"ntstatus.h\". Also, all the URLs in\n> win32_port.h (except the wine one) are dead.\n>\n\n\nThat's a fairly awful URL. How stable is it likely to be?\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 26 Mar 2019 13:11:14 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> 0xC0000028 is STATUS_BAD_STACK, per\n>> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n>> Not sure how credible/useful a stack trace is going to be.\n\n> BTW I think we should update our message to use this URL instead of\n> ambiguously pointing to \"ntstatus.h\".\n\nI've never cared for the ntstatus.h reference, but how stable is\nthe URL you suggest going to be? That UUID or whatever it is\ndoes not inspire confidence.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Mar 2019 13:19:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "On 2019-Mar-26, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> >> 0xC0000028 is STATUS_BAD_STACK, per\n> >> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n> >> Not sure how credible/useful a stack trace is going to be.\n> \n> > BTW I think we should update our message to use this URL instead of\n> > ambiguously pointing to \"ntstatus.h\".\n> \n> I've never cared for the ntstatus.h reference, but how stable is\n> the URL you suggest going to be? That UUID or whatever it is\n> does not inspire confidence.\n\nThat's true. Before posting, I looked for a statement about URL\nstability, couldn't find anything. I suppose one currently working URL\nis better than four currently dead URLs.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Tue, 26 Mar 2019 15:00:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Mar-26, Tom Lane wrote:\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>>> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n\n>> I've never cared for the ntstatus.h reference, but how stable is\n>> the URL you suggest going to be? That UUID or whatever it is\n>> does not inspire confidence.\n\n> That's true. Before posting, I looked for a statement about URL\n> stability, couldn't find anything. I suppose one currently working URL\n> is better than four currently dead URLs.\n\nIt looks like we could just point to the parent page,\n\nhttps://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/\n\nThat requires a little more drilling down, but it seems likely to\nremain stable across versions, whereas I strongly suspect the URL\nyou mention is version-specific.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Mar 2019 14:25:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
},
{
"msg_contents": "\nOn 3/26/19 12:36 PM, Andrew Dunstan wrote:\n> On 3/26/19 11:22 AM, Alvaro Herrera wrote:\n>> On 2019-Mar-26, Andrew Dunstan wrote:\n>>\n>>> On 3/25/19 8:44 AM, Alexander Korotkov wrote:\n>>>> Get rid of backtracking in jsonpath_scan.l\n>>>>\n>>>> Non-backtracking flex parsers work faster than backtracking ones. So, this\n>>>> commit gets rid of backtracking in jsonpath_scan.l. That required explicit\n>>>> handling of some cases as well as manual backtracking for some cases. More\n>>>> regression tests for numerics are added.\n>>> jacana appears to be having trouble with this:\n>>>\n>>>\n>>> 2019-03-26 00:49:02.208 EDT [5c99ae9e.20cc:6] LOG: server process (PID 8368) was terminated by exception 0xC0000028\n>> 0xC0000028 is STATUS_BAD_STACK, per\n>> https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/596a1078-e883-4972-9bbc-49e60bebca55\n>> Not sure how credible/useful a stack trace is going to be.\n>>\n>\n> Right, and getting stack traces isn't easy in any case. There is a\n> gadget from Google that is supposed to trap exceptions and produce a\n> stack trace on the fly in mingw. I'm going to take a look at it,\n> although loading it might be ... interesting.\n>\n>\n\nStill working on this. However, I have another data point. On a shiny\nnew Msys2/WS2019 system (on AWS/EC2) this does not reproduce, even\nthough jacana seems to be producing it quite reliably.\n\n\nTo get the backtrace gadget working I think I'm going to have to add\nsome code like:\n\n\n #if defined(WIN32) && defined(LOAD_BACKTRACE)\n\n LoadLibraryA(\"backtrace.dll\");\n\n #endif\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 26 Mar 2019 17:25:16 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Get rid of backtracking in jsonpath_scan.l"
}
] |
[
{
"msg_contents": "When reading another codepath, I happened to notice a few codepaths where we do\npg_malloc() immediately followed by a memset( .. 0, ..), without there being a\njustification (that I can see) for not using pg_malloc0() instead. The attached\npatch changes to pg_malloc0(), and passes make check.\n\ncheers ./daniel",
"msg_date": "Mon, 25 Mar 2019 13:18:05 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "pg_malloc0() instead of pg_malloc()+memset()"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 01:18:05PM +0000, Daniel Gustafsson wrote:\n> When reading another codepath, I happened to notice a few codepaths where we do\n> pg_malloc() immediately followed by a memset( .. 0, ..), without there being a\n> justification (that I can see) for not using pg_malloc0() instead. The attached\n> patch changes to pg_malloc0(), and passes make check.\n\nIf we simplify all of them (and that's not really a big deal), I have\nspotted two extra places on top of what you noticed, one in gist.c\nwhere ROTATEDIST is defined and a second one in tablefunc.c. \n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 26 Mar 2019 17:00:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_malloc0() instead of pg_malloc()+memset()"
},
{
"msg_contents": "On Tuesday, March 26, 2019 9:00 AM, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Mar 25, 2019 at 01:18:05PM +0000, Daniel Gustafsson wrote:\n>\n> > When reading another codepath, I happened to notice a few codepaths where we do\n> > pg_malloc() immediately followed by a memset( .. 0, ..), without there being a\n> > justification (that I can see) for not using pg_malloc0() instead. The attached\n> > patch changes to pg_malloc0(), and passes make check.\n>\n> If we simplify all of them (and that's not really a big deal), I have\n> spotted two extra places on top of what you noticed, one in gist.c\n> where ROTATEDIST is defined and a second one in tablefunc.c.\n\nNice, I had missed them as I my eyes set on pg_malloc(). I've done another pass over\nthe codebase and I can't spot any other on top of the additional ones you found where\nMemSet() in palloc0 is preferrable over memset().\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 26 Mar 2019 09:14:46 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_malloc0() instead of pg_malloc()+memset()"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 09:14:46AM +0000, Daniel Gustafsson wrote:\n> Nice, I had missed them as I my eyes set on pg_malloc(). I've done another pass over\n> the codebase and I can't spot any other on top of the additional ones you found where\n> MemSet() in palloc0 is preferrable over memset().\n\nSame here, committed. Just for the note, I have been just using git\ngrep -A to look at the follow-up lines of any allocation calls.\n--\nMichael",
"msg_date": "Wed, 27 Mar 2019 12:04:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_malloc0() instead of pg_malloc()+memset()"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing some zheap code last week, I noticed that the naming\nof the ExecStore*Tuple() functions seems a bit unfortunate. One\nreason is that we are now using slots in all kinds of places other\nthan the executor, so that the \"Exec\" prefix seems a bit out of place.\nHowever, you could argue that this is OK, on the grounds that the\nfunctions are defined in the executor. What I noticed, though, is\nthat every new table AM is probably going to need its own type of\nslot, and it's natural to define those slot support functions in the\nAM code rather than having every AM shove whatever it's got into\nexecTuples.c.\n\nBut if you do that, then you end up with a function with a name like\nExecStoreZHeapTuple() which isn't in src/backend/executor, and which\nfurthermore will never be called from the executor, because the\nexecutor only knows about a short, hard-coded list of built-in slot\ntypes, and ZHeapTuples are not one of them. That function will,\nrather, only be called from the zheap code. And having a function\nwhose name starts with Exec... that is neither defined in the executor\nnor used in the executor seems wrong.\n\nSo I think we should rename these functions before we get too used to\nthe new names. There is a significant advantage in doing that for v12\nbecause people are already going to have to adjust third-party code to\ncompensate for the fact that we no longer have an ExecStoreTuple()\nfunction any more.\n\nI'm not sure exactly what names would be better. Perhaps just change\nthe \"Exec\" prefix to \"Slot\", e.g. SlotStoreHeapTuple(). Or maybe put\nInSlot at the end, like StoreHeapTupleInSlot(). Or just take those\nfour words - slot, heap, tuple, and store - and put them in any order\nyou like. TupleSlotStoreHeap?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 11:20:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "I agree about taking out the \"Exec\" part of the name.\n\nOn 2019-Mar-25, Robert Haas wrote:\n\n> I'm not sure exactly what names would be better. Perhaps just change\n> the \"Exec\" prefix to \"Slot\", e.g. SlotStoreHeapTuple(). Or maybe put\n> InSlot at the end, like StoreHeapTupleInSlot(). Or just take those\n> four words - slot, heap, tuple, and store - and put them in any order\n> you like. TupleSlotStoreHeap?\n\nHeapStoreTupleInSlot?\n\n(I wonder why not \"Put\" instead of \"Store\".)\n\nShould we keep ExecStoreTuple as a compatibility macro for third party\ncode?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Mar 2019 12:27:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 11:27 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Should we keep ExecStoreTuple as a compatibility macro for third party\n> code?\n\nI think that might be rather dangerous, actually, because slots now\nhave a type, which they didn't before. You have to use the correct\nfunction for the kind of slot you've got.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 11:30:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "On 2019-Mar-25, Robert Haas wrote:\n\n> On Mon, Mar 25, 2019 at 11:27 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> > Should we keep ExecStoreTuple as a compatibility macro for third party\n> > code?\n> \n> I think that might be rather dangerous, actually, because slots now\n> have a type, which they didn't before. You have to use the correct\n> function for the kind of slot you've got.\n\nAh, right. I was thinking of assuming that the un-updated third-party\ncode would always have a slot of type heap, but of course that's not\nguaranteed.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n",
"msg_date": "Mon, 25 Mar 2019 12:49:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-25 11:20:13 -0400, Robert Haas wrote:\n> While reviewing some zheap code last week, I noticed that the naming\n> of the ExecStore*Tuple() functions seems a bit unfortunate. One\n> reason is that we are now using slots in all kinds of places other\n> than the executor, so that the \"Exec\" prefix seems a bit out of place.\n> However, you could argue that this is OK, on the grounds that the\n> functions are defined in the executor. What I noticed, though, is\n> that every new table AM is probably going to need its own type of\n> slot, and it's natural to define those slot support functions in the\n> AM code rather than having every AM shove whatever it's got into\n> execTuples.c.\n\nYea, it's not accurate. Nor was it before this change though - there's\ne.g. plenty executor code that used slots long before v12.\n\n\n> But if you do that, then you end up with a function with a name like\n> ExecStoreZHeapTuple() which isn't in src/backend/executor, and which\n> furthermore will never be called from the executor, because the\n> executor only knows about a short, hard-coded list of built-in slot\n> types, and ZHeapTuples are not one of them. That function will,\n> rather, only be called from the zheap code. And having a function\n> whose name starts with Exec... that is neither defined in the executor\n> nor used in the executor seems wrong.\n\nYea.\n\nI feel if we go there we probably should also rename ExecClearTuple,\nExecMaterializeSlot, ExecCopySlotHeapTuple, ExecCopySlotMinimalTuple,\nExecCopySlot. Although there we probably can add a compat wrapper.\n\n\n> So I think we should rename these functions before we get too used to\n> the new names. There is a significant advantage in doing that for v12\n> because people are already going to have to adjust third-party code to\n> compensate for the fact that we no longer have an ExecStoreTuple()\n> function any more.\n\nIndeed.\n\n\n> I'm not sure exactly what names would be better. 
Perhaps just change\n> the \"Exec\" prefix to \"Slot\", e.g. SlotStoreHeapTuple(). Or maybe put\n> InSlot at the end, like StoreHeapTupleInSlot(). Or just take those\n> four words - slot, heap, tuple, and store - and put them in any order\n> you like. TupleSlotStoreHeap?\n\nI think I might go for slot_store_heap_tuple etc, given that a lot of\nthose accessors have been slot_ for about ever. What do you think?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 08:57:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 11:57 AM Andres Freund <andres@anarazel.de> wrote:\n> I think I might go for slot_store_heap_tuple etc, given that a lot of\n> those accessors have been slot_ for about ever. What do you think?\n\nI don't have a super-strong feeling about it, although changing the\ncase convention might confuse a few people.\n\nMaybe we don't really need the word \"tuple\". Like we could just make\nit slot_store_heap() or SlotStoreHeap(). A slot can only store a\ntuple, after all.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 12:05:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-25 12:05:52 -0400, Robert Haas wrote:\n> On Mon, Mar 25, 2019 at 11:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think I might go for slot_store_heap_tuple etc, given that a lot of\n> > those accessors have been slot_ for about ever. What do you think?\n> \n> I don't have a super-strong feeling about it, although changing the\n> case convention might confuse a few people.\n\nRight, but is that going to be more people that are going to be confused\nby the difference between the already existing slot_ functions and the\nexisting camel-cased stuff?\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 09:32:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Maybe we don't really need the word \"tuple\". Like we could just make\n> it slot_store_heap() or SlotStoreHeap(). A slot can only store a\n> tuple, after all.\n\nI don't think it's wise to think of these things as just \"slots\";\nthat name is way too generic. They are \"tuple slots\", and so that\nword has to stay in the relevant function names.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Mar 2019 12:33:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-25 12:33:38 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Maybe we don't really need the word \"tuple\". Like we could just make\n> > it slot_store_heap() or SlotStoreHeap(). A slot can only store a\n> > tuple, after all.\n> \n> I don't think it's wise to think of these things as just \"slots\";\n> that name is way too generic. They are \"tuple slots\", and so that\n> word has to stay in the relevant function names.\n\nHm. But we already have slot_{getsomeattrs, getallattrs, attisnull,\ngetattr, getsysattr}. But perhaps the att in there is enough addiitional\ninformation?\n\nAlthough now I'm looking at consistency annoyed at slot_attisnull (no r)\nand slot_getattr (r) ;)\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 09:36:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-25 12:33:38 -0400, Tom Lane wrote:\n>> I don't think it's wise to think of these things as just \"slots\";\n>> that name is way too generic. They are \"tuple slots\", and so that\n>> word has to stay in the relevant function names.\n\n> Hm. But we already have slot_{getsomeattrs, getallattrs, attisnull,\n> getattr, getsysattr}. But perhaps the att in there is enough addiitional\n> information?\n\nI don't claim to be entirely innocent in this matter ;-)\n\nIf we're going to rename stuff in this area without concern for avoiding\ninessential code churn, then those are valid targets as well.\n\nBTW, maybe it's worth drawing a naming distinction between\nslot-type-specific and slot-type-independent functions?\n(I assume there are still some of the latter.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 25 Mar 2019 12:45:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-25 12:45:36 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-03-25 12:33:38 -0400, Tom Lane wrote:\n> >> I don't think it's wise to think of these things as just \"slots\";\n> >> that name is way too generic. They are \"tuple slots\", and so that\n> >> word has to stay in the relevant function names.\n> \n> > Hm. But we already have slot_{getsomeattrs, getallattrs, attisnull,\n> > getattr, getsysattr}. But perhaps the att in there is enough addiitional\n> > information?\n> \n> I don't claim to be entirely innocent in this matter ;-)\n> \n> If we're going to rename stuff in this area without concern for avoiding\n> inessential code churn, then those are valid targets as well.\n\nFWIW, I don't have much of a problem with current slot_ names.\n\n\n> BTW, maybe it's worth drawing a naming distinction between\n> slot-type-specific and slot-type-independent functions?\n> (I assume there are still some of the latter.)\n\nHm, I guess that depends on what you classify as type independent. Is\nsomething like slot_getattr type independent, even though it internally\ncalls slot_getsomeattrs, which'll call a callback if additional\ndeforming is necessary?\n\nI'd say, if you exclude functions like that, ExecStoreVirtualTuple(),\nExecStoreAllNullTuple(), slot_getmissingattrs(), ExecSetSlotDescriptor()\nare probably the only ones that have no awareness of the type of a\nslot.\n\nI'm not sure it matters that much however? Unless you store something in\na slot, code normally shouldn't have to care what type a slot is.\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 10:20:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
},
{
"msg_contents": "On Mon, Mar 25, 2019 at 12:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Maybe we don't really need the word \"tuple\". Like we could just make\n> > it slot_store_heap() or SlotStoreHeap(). A slot can only store a\n> > tuple, after all.\n>\n> I don't think it's wise to think of these things as just \"slots\";\n> that name is way too generic. They are \"tuple slots\", and so that\n> word has to stay in the relevant function names.\n\nI suppose there is some potential for confusion with things like\nlogical replication slots, but I think that these are the most\nwidely-used type of slot in the backend, so it's not entirely crazy to\nthink that they deserve a bit of special consideration. I'm not\nviolently opposed to using four words instead of three\n(slot_store_heap_tuple vs. slot_store_heap) but to really spell out\nthe operation in full you'd need to say something like\nHeapTupleTableSlotStoreHeapTuple, and I think that's pretty unwieldy\nfor what's likely to end up being a very common programming idiom.\n\nIt's not crazy that we type 'cd' to change directories rather than\n'chdir' or 'change_directory'.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n",
"msg_date": "Mon, 25 Mar 2019 13:22:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: renaming ExecStoreWhateverTuple"
}
] |
[
{
"msg_contents": "Hi,\n\nFor the tableam work I'd like to remove heapam.h from\nnodeModifyTable.c. The only remaining impediment to that is a call to\nsetLastTid(), which is defined in tid.c but declared in heapam.h.\n\nThat doesn't seem like a particularly accurate location, it doesn't\nreally have that much to do with heap. It seems more like a general\nexecutor facility or something. Does anybody have a good idea where to\nput the declaration?\n\n\nLooking at how this function is used, lead to some confusion on my part.\n\n\nWe currently call setLastTid in ExecInsert():\n\n\tif (canSetTag)\n\t{\n\t\t(estate->es_processed)++;\n\t\tsetLastTid(&slot->tts_tid);\n\t}\n\nAnd Current_last_tid, the variable setLastTid sets, is only used in\ncurrtid_byreloid():\n\n\nDatum\ncurrtid_byreloid(PG_FUNCTION_ARGS)\n{\n\tOid\t\t\treloid = PG_GETARG_OID(0);\n\tItemPointer tid = PG_GETARG_ITEMPOINTER(1);\n\tItemPointer result;\n\tRelation\trel;\n\tAclResult\taclresult;\n\tSnapshot\tsnapshot;\n\n\tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n\tif (!reloid)\n\t{\n\t\t*result = Current_last_tid;\n\t\tPG_RETURN_ITEMPOINTER(result);\n\t}\n\nI've got to say I'm a bit baffled by this interface. If somebody passes\nin a 0 reloid, we just ignore the passed in tid, and return the last tid\ninserted into any table?\n\nI then was even more baffled to find that there's no documentation of\nthis function, nor this special case behaviour, to be found\nanywhere. Not in the docs (which don't mention the function, nor it's\nspecial case behaviour for relation 0), nor in the code.\n\n\nIt's unfortunately used in psqlobdc:\n\n else if ((flag & USE_INSERTED_TID) != 0)\n printfPQExpBuffer(&selstr, \"%s where ctid = (select currtid(0, '(0,0)'))\", load_stmt);\n\nI gotta say, all that currtid code looks to me like it just should be\nripped out. 
It really doesn't make a ton of sense to just walk the tid\nchain for a random tid - without an open snapshot, there's absolutely no\nguarantee that you get back anything meaningful. Nor am I convinced\nit's perfectly alright to just return the latest inserted tid for a\nrelation the user might not have any permission for.\n\nOTOH, we probably can't just break psqlodbc, so we probably have to hold\nour noses a bit longer and just move the prototype elsewhere? But I'm\ninclined to just say that this functionality is going to get ripped out\nsoon, unless somebody from the odbc community works on making it make a\nbit more sense (tests, docs at the very very least).\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Mon, 25 Mar 2019 17:44:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "setLastTid() and currtid()"
},
{
"msg_contents": "Hi Andres,\nSorry for the late reply.\n\nOn 2019/03/26 9:44, Andres Freund wrote:\n> Hi,\n>\n> For the tableam work I'd like to remove heapam.h from\n> nodeModifyTable.c. The only remaining impediment to that is a call to\n> setLastTid(), which is defined in tid.c but declared in heapam.h.\n>\n> That doesn't seem like a particularly accurate location, it doesn't\n> really have that much to do with heap. It seems more like a general\n> executor facility or something. Does anybody have a good idea where to\n> put the declaration?\n>\n>\n> Looking at how this function is used, lead to some confusion on my part.\n>\n>\n> We currently call setLastTid in ExecInsert():\n>\n> \tif (canSetTag)\n> \t{\n> \t\t(estate->es_processed)++;\n> \t\tsetLastTid(&slot->tts_tid);\n> \t}\n>\n> And Current_last_tid, the variable setLastTid sets, is only used in\n> currtid_byreloid():\n>\n>\n> Datum\n> currtid_byreloid(PG_FUNCTION_ARGS)\n> {\n> \tOid\t\t\treloid = PG_GETARG_OID(0);\n> \tItemPointer tid = PG_GETARG_ITEMPOINTER(1);\n> \tItemPointer result;\n> \tRelation\trel;\n> \tAclResult\taclresult;\n> \tSnapshot\tsnapshot;\n>\n> \tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n> \tif (!reloid)\n> \t{\n> \t\t*result = Current_last_tid;\n> \t\tPG_RETURN_ITEMPOINTER(result);\n> \t}\n>\n> I've got to say I'm a bit baffled by this interface. If somebody passes\n> in a 0 reloid, we just ignore the passed in tid, and return the last tid\n> inserted into any table?\n>\n> I then was even more baffled to find that there's no documentation of\n> this function, nor this special case behaviour, to be found\n> anywhere. 
Not in the docs (which don't mention the function, nor it's\n> special case behaviour for relation 0), nor in the code.\n>\n>\n> It's unfortunately used in psqlobdc:\n>\n> else if ((flag & USE_INSERTED_TID) != 0)\n> printfPQExpBuffer(&selstr, \"%s where ctid = (select currtid(0, '(0,0)'))\", load_stmt);\n\nThe above code remains only for PG servers whose version < 8.2.\nPlease remove the code around setLastTid().\n\nregards,\nHiroshi Inoue\n\n> I gotta say, all that currtid code looks to me like it just should be\n> ripped out. It really doesn't make a ton of sense to just walk the tid\n> chain for a random tid - without an open snapshot, there's absolutely no\n> guarantee that you get back anything meaningful. Nor am I convinced\n> it's perfectly alright to just return the latest inserted tid for a\n> relation the user might not have any permission for.\n>\n> OTOH, we probably can't just break psqlodbc, so we probably have to hold\n> our noses a bit longer and just move the prototype elsewhere? But I'm\n> inclined to just say that this functionality is going to get ripped out\n> soon, unless somebody from the odbc community works on making it make a\n> bit more sense (tests, docs at the very very least).\n>\n> Greetings,\n>\n> Andres Freund\n\n",
"msg_date": "Wed, 27 Mar 2019 10:01:08 +0900",
"msg_from": "\"Inoue, Hiroshi\" <h-inoue@dream.email.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-27 10:01:08 +0900, Inoue, Hiroshi wrote:\n> Hi Andres,\n> Sorry for the late reply.\n\nNot late at all. Sorry for *my* late reply :)\n\n\n> On 2019/03/26 9:44, Andres Freund wrote:\n> > Hi,\n> > \n> > For the tableam work I'd like to remove heapam.h from\n> > nodeModifyTable.c. The only remaining impediment to that is a call to\n> > setLastTid(), which is defined in tid.c but declared in heapam.h.\n> > \n> > That doesn't seem like a particularly accurate location, it doesn't\n> > really have that much to do with heap. It seems more like a general\n> > executor facility or something. Does anybody have a good idea where to\n> > put the declaration?\n> > \n> > \n> > Looking at how this function is used, lead to some confusion on my part.\n> > \n> > \n> > We currently call setLastTid in ExecInsert():\n> > \n> > \tif (canSetTag)\n> > \t{\n> > \t\t(estate->es_processed)++;\n> > \t\tsetLastTid(&slot->tts_tid);\n> > \t}\n> > \n> > And Current_last_tid, the variable setLastTid sets, is only used in\n> > currtid_byreloid():\n> > \n> > \n> > Datum\n> > currtid_byreloid(PG_FUNCTION_ARGS)\n> > {\n> > \tOid\t\t\treloid = PG_GETARG_OID(0);\n> > \tItemPointer tid = PG_GETARG_ITEMPOINTER(1);\n> > \tItemPointer result;\n> > \tRelation\trel;\n> > \tAclResult\taclresult;\n> > \tSnapshot\tsnapshot;\n> > \n> > \tresult = (ItemPointer) palloc(sizeof(ItemPointerData));\n> > \tif (!reloid)\n> > \t{\n> > \t\t*result = Current_last_tid;\n> > \t\tPG_RETURN_ITEMPOINTER(result);\n> > \t}\n> > \n> > I've got to say I'm a bit baffled by this interface. If somebody passes\n> > in a 0 reloid, we just ignore the passed in tid, and return the last tid\n> > inserted into any table?\n> > \n> > I then was even more baffled to find that there's no documentation of\n> > this function, nor this special case behaviour, to be found\n> > anywhere. 
Not in the docs (which don't mention the function, nor it's\n> > special case behaviour for relation 0), nor in the code.\n> > \n> > \n> > It's unfortunately used in psqlobdc:\n> > \n> > else if ((flag & USE_INSERTED_TID) != 0)\n> > printfPQExpBuffer(&selstr, \"%s where ctid = (select currtid(0, '(0,0)'))\", load_stmt);\n> \n> The above code remains only for PG servers whose version < 8.2.\n> Please remove the code around setLastTid().\n\nDoes anybody else have concerns about removing this interface? Does\nanybody think we should have a deprecation phase? Should we remove this\nin 12 or 13?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Apr 2019 09:52:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-03-27 10:01:08 +0900, Inoue, Hiroshi wrote:\n>> The above code remains only for PG servers whose version < 8.2.\n>> Please remove the code around setLastTid().\n\n> Does anybody else have concerns about removing this interface? Does\n> anybody think we should have a deprecation phase? Should we remove this\n> in 12 or 13?\n\nI think removing it after feature freeze is not something to do,\nbut +1 for nuking it as soon as the v13 branch opens. Unless\nthere's some important reason we need it to be gone in v12?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 13:27:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "On 2019-Apr-11, Tom Lane wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-03-27 10:01:08 +0900, Inoue, Hiroshi wrote:\n> >> The above code remains only for PG servers whose version < 8.2.\n> >> Please remove the code around setLastTid().\n> \n> > Does anybody else have concerns about removing this interface? Does\n> > anybody think we should have a deprecation phase? Should we remove this\n> > in 12 or 13?\n> \n> I think removing it after feature freeze is not something to do,\n> but +1 for nuking it as soon as the v13 branch opens. Unless\n> there's some important reason we need it to be gone in v12?\n\nUmm ... I'm not sure I agree. We're in feature freeze, not code freeze,\nand while we're not expecting to have any new feature patches pushed,\ncleanup for features that did make the cut is still fair game. As I\nunderstand, this setLastTid stuff would cause trouble if used with a\nnon-core table AM. Furthermore, if nothing uses it, what's the point of\nkeeping it?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 13:52:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-11 13:27:03 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-03-27 10:01:08 +0900, Inoue, Hiroshi wrote:\n> >> The above code remains only for PG servers whose version < 8.2.\n> >> Please remove the code around setLastTid().\n> \n> > Does anybody else have concerns about removing this interface? Does\n> > anybody think we should have a deprecation phase? Should we remove this\n> > in 12 or 13?\n> \n> I think removing it after feature freeze is not something to do,\n> but +1 for nuking it as soon as the v13 branch opens. Unless\n> there's some important reason we need it to be gone in v12?\n\nNo, I don't think there really is. They're bogus and possibly a bit\ndangerous, but that's not really new.\n\nI was mostly just reminded of this when Heikki asked me to improve the\ndocumentation for heap_get_latest_tid/table_get_latest_tid() - and I was\nbriefly wondering whether we could just nuke the whole functionality.\nBut it's still used in nodeTidscan.c:\n\n\t\t/*\n\t\t * For WHERE CURRENT OF, the tuple retrieved from the cursor might\n\t\t * since have been updated; if so, we should fetch the version that is\n\t\t * current according to our snapshot.\n\t\t */\n\t\tif (node->tss_isCurrentOf)\n\t\t\ttable_get_latest_tid(heapRelation, snapshot, &tid);\n\nIf we were able to just get rid of that I think there'd have been a\nstrong case for removing $subject in v12, to avoid exposing something to\nnew AMs that we're going to nuke in v13.\n\nThe only other reason I can see is that there's literally no use for\nthem (bogus and only used by pgodbc when targeting <= 8.2), and that\nthey cost a bit of performance and are the only reason heapam.h is still\nincluded in nodeModifyTable.h (hurting my pride). But that's probably\nnot sufficient reason.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Apr 2019 10:52:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-11 13:52:08 -0400, Alvaro Herrera wrote:\n> As I understand, this setLastTid stuff would cause trouble if used\n> with a non-core table AM.\n\nI'm not sure there'd actually be trouble. I mean, what it does for heap\nis basically meaningless already, so it's not going to be meaningfully\nworse for any other table AM. It's an undocumented odd interface, whose\nimplementation is also ugly, and that'd be a fair reason on its own to\nrip it out though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Apr 2019 11:05:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-11 13:27:03 -0400, Tom Lane wrote:\n>> I think removing it after feature freeze is not something to do,\n>> but +1 for nuking it as soon as the v13 branch opens. Unless\n>> there's some important reason we need it to be gone in v12?\n\n> No, I don't think there really is. They're bogus and possibly a bit\n> dangerous, but that's not really new.\n\n> I was mostly just reminded of this when Heikki asked me to improve the\n> documentation for heap_get_latest_tid/table_get_latest_tid() - and I was\n> briefly wondering whether we could just nuke the whole functionality.\n> But it's still used in nodeTidscan.c:\n\nYeah, if we could simplify the tableam API, that would be sufficient\nreason to remove the stuff in v12, IMO. But if it is outside of that\nAPI, I'd counsel waiting till v13.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 11 Apr 2019 14:06:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 02:06:13PM -0400, Tom Lane wrote:\n> Yeah, if we could simplify the tableam API, that would be sufficient\n> reason to remove the stuff in v12, IMO. But if it is outside of that\n> API, I'd counsel waiting till v13.\n\nYes, I agree that simplifying the table AM interface would be a reason\nfine enough to delete this code for v12. If not, v13 sounds better at\nthis stage.\n--\nMichael",
"msg_date": "Fri, 12 Apr 2019 13:44:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "On Fri, Apr 12, 2019 at 1:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 11, 2019 at 02:06:13PM -0400, Tom Lane wrote:\n> > Yeah, if we could simplify the tableam API, that would be sufficient\n> > reason to remove the stuff in v12, IMO. But if it is outside of that\n> > API, I'd counsel waiting till v13.\n>\n> Yes, I agree that simplifying the table AM interface would be a reason\n> fine enough to delete this code for v12. If not, v13 sounds better at\n> this stage.\n\nNow we are in the dev of v13, so it's time to rip the functions out?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 7 Feb 2020 17:24:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "On Fri, Feb 7, 2020 at 05:24:12PM +0900, Fujii Masao wrote:\n> On Fri, Apr 12, 2019 at 1:44 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Apr 11, 2019 at 02:06:13PM -0400, Tom Lane wrote:\n> > > Yeah, if we could simplify the tableam API, that would be sufficient\n> > > reason to remove the stuff in v12, IMO. But if it is outside of that\n> > > API, I'd counsel waiting till v13.\n> >\n> > Yes, I agree that simplifying the table AM interface would be a reason\n> > fine enough to delete this code for v12. If not, v13 sounds better at\n> > this stage.\n> \n> Now we are in the dev of v13, so it's time to rip the functions out?\n\nWhere are we on this? Can the functions be removed in PG 14?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 13 Oct 2020 14:12:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
},
{
"msg_contents": "On Tue, Oct 13, 2020 at 02:12:53PM -0400, Bruce Momjian wrote:\n> Where are we on this? Can the functions be removed in PG 14?\n\n(Sent this message previously but it got lost after some cross-posting\nacross two lists, issue fixed now.)\n\nI still have a patch lying around to do that, registered in the CF:\nhttps://commitfest.postgresql.org/30/2579/\nAnd here is the latest status of the discussion, based on some study\nof the ODBC driver I have done:\nhttps://www.postgresql.org/message-id/20200626041155.GD1504@paquier.xyz\n\nI would rather gather any future discussions on the other thread.\n--\nMichael",
"msg_date": "Thu, 15 Oct 2020 17:12:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: setLastTid() and currtid()"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have just committed a fix for a crash with the handling of partition\nbounds using column references which has been discussed here:\nhttps://www.postgresql.org/message-id/15668-0377b1981aa1a393@postgresql.org\n\nAnd while discussing on the matter with Amit, the point has been\nraised that default expressions with column references can lead to\nsome funny error messages with the context. For example, take that\nwith an undefined column:\n=# create table foo (a int default (a.a));\nERROR: 42P01: missing FROM-clause entry for table \"a\"\nLINE 1: create table foo (a int default (a.a));\n\nThis confusion is old I think, and reproduces down to 9.4 and older.\nIf using directly a reference from a column's table then things get\ncorrect:\n=# create table foo (a int default (foo.a));\nERROR: 42P10: cannot use column references in default expression\nLOCATION: cookDefault, heap.c:2948\n=# create table foo (a int default (a));\nERROR: 42P10: cannot use column references in default expression\nLOCATION: cookDefault, heap.c:2948\n\nWe have the same problem for partition bounds actually, which is new\nas v12 as partition bound expressions now use the common expression\nmachinery for transformation:\n=# CREATE TABLE list_parted (a int) PARTITION BY LIST (a);\nCREATE TABLE\n=# CREATE TABLE part_list_crash PARTITION OF list_parted\n FOR VALUES IN (somename.somename);\nERROR: 42P01: missing FROM-clause entry for table \"somename\"\nLINE 2: FOR VALUES IN (somename.somename)\n\nOne idea which came from Amit, and it seems to me that it is a good\nidea, would be to have more context-related error messages directly in\ntransformColumnRef(), so as we can discard at an early stage column\nreferences which are part of contexts where there is no meaning to\nhave them. 
The inconsistent part in HEAD is that cookDefault() and\ntransformPartitionBoundValue() already discard column references, so\nif we move those checks at transformation phase we can simplify the\nerror handling post-transformation. This would make the whole thing\nmore consistent.\n\nWhile this takes care of the RTE issues, this has a downside though.\nTake for example this case using an expression with an aggregate and\na column reference: \n=# CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted\n FOR VALUES IN (sum(a));\n-ERROR: aggregate functions are not allowed in partition bound\n+ERROR: cannot use column reference in partition bound expression\n\nSo this would mean that we would first complain of the most inner\nparts of the expression, which is more intuitive actually in my\nopinion. The difference can be seen using the patch attached for\npartition bounds, as I have added more test coverage with a previous\ncommit. We also don't have much tests in the code for default\nexpression patterns, so I have added some.\n\nThe docs of CREATE TABLE also look incorrect to me when it comes to\ndefault expressions. It says the following: \"other columns in the\ncurrent table are not allowed\". However *all* columns are not\nauthorized, including the column which uses the expression.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 26 Mar 2019 11:08:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Misleading errors with column references in default expressions and\n partition bounds"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> One idea which came from Amit, and it seems to me that it is a good\n> idea, would be to have more context-related error messages directly in\n> transformColumnRef(), so as we can discard at an early stage column\n> references which are part of contexts where there is no meaning to\n> have them.\n\n+1 for the general idea, but I find the switch a bit overly verbose.\nDo we really need to force every new EXPR_KIND to visit this spot,\nwhen so few of them have a need to do anything? I'd be a bit inclined\nto simplify it to\n\n\tswitch (pstate->p_expr_kind)\n\t{\n\t\tcase EXPR_KIND_COLUMN_DEFAULT:\n\t\t\tereport(...);\n\t\t\tbreak;\n\t\tcase EXPR_KIND_PARTITION_BOUND:\n\t\t\tereport(...);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\nThat's just a nitpick though.\n\n> While this takes care of the RTE issues, this has a downside though.\n> Take for example this case using an expression with an aggregate and\n> a column reference: \n> =# CREATE TABLE part_bogus_expr_fail PARTITION OF list_parted\n> FOR VALUES IN (sum(a));\n> -ERROR: aggregate functions are not allowed in partition bound\n> +ERROR: cannot use column reference in partition bound expression\n\nI don't see that as an issue.\n\n> The docs of CREATE TABLE also look incorrect to me when it comes to\n> default expressions. It says the following: \"other columns in the\n> current table are not allowed\". However *all* columns are not\n> authorized, including the column which uses the expression.\n\nI think the idea is that trying to reference another column is something\nthat people might well try to do, whereas referencing the DEFAULT's\nown column is obviously silly. In particular the use of \"cross-reference\"\nimplies that another column is what is being referenced. If we dumb it\ndown to just \"references to columns in the current table\", then it's\nconsistent, but it's also completely redundant with the main part of the\nsentence. 
It doesn't help that somebody decided to jam the independent\nissue of subqueries into the same sentence. In short, maybe it'd be\nbetter like this:\n\n ... The value\n is any variable-free expression (in particular, cross-references\n to other columns in the current table are not allowed). Subqueries\n are not allowed either.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Mar 2019 10:03:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Misleading errors with column references in default expressions\n and partition bounds"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 10:03:35AM -0400, Tom Lane wrote:\n> +1 for the general idea, but I find the switch a bit overly verbose.\n> Do we really need to force every new EXPR_KIND to visit this spot,\n> when so few of them have a need to do anything? I'd be a bit inclined\n> to simplify it to\n> \n> \tswitch (pstate->p_expr_kind)\n> \t{\n> \t\tcase EXPR_KIND_COLUMN_DEFAULT:\n> \t\t\tereport(...);\n> \t\t\tbreak;\n> \t\tcase EXPR_KIND_PARTITION_BOUND:\n> \t\t\tereport(...);\n> \t\t\tbreak;\n> \t\tdefault:\n> \t\t\tbreak;\n> \t}\n> \n> That's just a nitpick though.\n\nParseExprKind is an enum, so listing all the options without the\ndefault has the advantage to generate a warning if somebody adds a\nvalue. This way anybody changing this code will need to think about\nit.\n\n> I don't see that as an issue.\n\nThanks!\n\n> I think the idea is that trying to reference another column is something\n> that people might well try to do, whereas referencing the DEFAULT's\n> own column is obviously silly. In particular the use of \"cross-reference\"\n> implies that another column is what is being referenced. If we dumb it\n> down to just \"references to columns in the current table\", then it's\n> consistent, but it's also completely redundant with the main part of the\n> sentence. It doesn't help that somebody decided to jam the independent\n> issue of subqueries into the same sentence. In short, maybe it'd be\n> better like this:\n> \n> ... The value\n> is any variable-free expression (in particular, cross-references\n> to other columns in the current table are not allowed). Subqueries\n> are not allowed either.\n\nOkay, I think I see your point here. That sounds sensible.\n--\nMichael",
"msg_date": "Wed, 27 Mar 2019 12:13:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Misleading errors with column references in default expressions\n and partition bounds"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 12:13:16PM +0900, Michael Paquier wrote:\n> ParseExprKind is an enum, so listing all the options without the\n> default has the advantage to generate a warning if somebody adds a\n> value. This way anybody changing this code will need to think about\n> it.\n\nA bit late, but committed without the case/default.\n\n>> ... The value\n>> is any variable-free expression (in particular, cross-references\n>> to other columns in the current table are not allowed). Subqueries\n>> are not allowed either.\n> \n> Okay, I think I see your point here. That sounds sensible.\n\nAnd I have used this suggestion from Tom as well for the docs.\n--\nMichael",
"msg_date": "Thu, 28 Mar 2019 21:14:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Misleading errors with column references in default expressions\n and partition bounds"
},
{
"msg_contents": "On 2019/03/28 21:14, Michael Paquier wrote:\n> On Wed, Mar 27, 2019 at 12:13:16PM +0900, Michael Paquier wrote:\n>> ParseExprKind is an enum, so listing all the options without the\n>> default has the advantage to generate a warning if somebody adds a\n>> value. This way anybody changing this code will need to think about\n>> it.\n> \n> A bit late, but committed without the case/default.\n> \n>>> ... The value\n>>> is any variable-free expression (in particular, cross-references\n>>> to other columns in the current table are not allowed). Subqueries\n>>> are not allowed either.\n>>\n>> Okay, I think I see your point here. That sounds sensible.\n> \n> And I have used this suggestion from Tom as well for the docs.\n\nThanks Michael for taking care of this.\n\nRegards,\nAmit\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 11:55:48 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Misleading errors with column references in default expressions\n and partition bounds"
}
] |
[
{
"msg_contents": "Hi,\n\nOne of our clients suggested that the installation document[1] lacks description\nabout requriements of installing *-devel packages. For example, postgresqlxx-devel\nis required for using --with-pgsql, and openssl-devel for --with-openssl, and so on,\nbut these are not documented.\n\n[1] http://www.pgpool.net/docs/pgpool-II-3.7.4/en/html/install-pgpool.html\n\nI know the document of PostgreSQL[2] also lacks the description about openssl-devel,\nkerberos-devel, etc. (except to readline-devl). However, it would be convenient\nfor users who want to install Pgpool-II from source code if the required packages\nfor installation are described in the document explicitly.\n\n[2] https://www.postgresql.org/docs/current/install-requirements.html\n\nIs it not worth to consider this?\n\n\nBTW, the Pgpool-II doc[2] says:\n\n--with-memcached=path\n Pgpool-II binaries will use memcached for in memory query cache. You have to install libmemcached. \n\n, but maybe libmemcached-devel is correct instead of libmemcached?\n\nRegards,\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n",
"msg_date": "Tue, 26 Mar 2019 20:38:31 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Improvement of installation document"
},
{
"msg_contents": "Hi,\n\nI apologize that I accidentally sent the following email to this list.\nPlease disregard this.\n\nI am sorry for making a lot of noise.\n\nRegard,\n\nOn Tue, 26 Mar 2019 20:38:31 +0900\nYugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> Hi,\n> \n> One of our clients suggested that the installation document[1] lacks description\n> about requriements of installing *-devel packages. For example, postgresqlxx-devel\n> is required for using --with-pgsql, and openssl-devel for --with-openssl, and so on,\n> but these are not documented.\n> \n> [1] http://www.pgpool.net/docs/pgpool-II-3.7.4/en/html/install-pgpool.html\n> \n> I know the document of PostgreSQL[2] also lacks the description about openssl-devel,\n> kerberos-devel, etc. (except to readline-devl). However, it would be convenient\n> for users who want to install Pgpool-II from source code if the required packages\n> for installation are described in the document explicitly.\n> \n> [2] https://www.postgresql.org/docs/current/install-requirements.html\n> \n> Is it not worth to consider this?\n> \n> \n> BTW, the Pgpool-II doc[2] says:\n> \n> --with-memcached=path\n> Pgpool-II binaries will use memcached for in memory query cache. You have to install libmemcached. \n> \n> , but maybe libmemcached-devel is correct instead of libmemcached?\n> \n> Regards,\n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n> \n> \n> -- \n> Yugo Nagata <nagata@sraoss.co.jp>\n> \n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n",
"msg_date": "Tue, 26 Mar 2019 20:45:19 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Improvement of installation document"
},
{
"msg_contents": "> One of our clients suggested that the installation document[1] lacks description\n> about requriements of installing *-devel packages. For example, postgresqlxx-devel\n> is required for using --with-pgsql, and openssl-devel for --with-openssl, and so on,\n> but these are not documented.\n> \n> [1] http://www.pgpool.net/docs/pgpool-II-3.7.4/en/html/install-pgpool.html\n> \n> I know the document of PostgreSQL[2] also lacks the description about openssl-devel,\n> kerberos-devel, etc. (except to readline-devl). However, it would be convenient\n> for users who want to install Pgpool-II from source code if the required packages\n> for installation are described in the document explicitly.\n> \n> [2] https://www.postgresql.org/docs/current/install-requirements.html\n> \n> Is it not worth to consider this?\n\nI am against the idea.\n\nDevelopment package names could differ according to\ndistributions/OS. For example, the developement package of OpenSSL is\n\"openssl-dev\", not \"openssl-devel\" in Debian or Debian derived\nsystems.\n\nAnother reason is, a user who is installaing software from source code\nshould be familiar enough with the fact that each software requires\ndevelopment libraries.\n\nIn summary adding not-so-complete-list-of-development-package-names to\nour document will give incorrect information to novice users, and will\nbe annoying for skilled users.\n\n> BTW, the Pgpool-II doc[2] says:\n> \n> --with-memcached=path\n> Pgpool-II binaries will use memcached for in memory query cache. You have to install libmemcached. \n> \n> , but maybe libmemcached-devel is correct instead of libmemcached?\n\nI don't think so. \"libmemcached-devel\" is just a package name in a\ncetain Linux distribution. \"libmemcached\" is a more geneal and non\ndistribution dependent term.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n",
"msg_date": "Tue, 26 Mar 2019 22:18:53 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Improvement of installation document"
},
{
"msg_contents": "On Tue, 26 Mar 2019 22:18:53 +0900 (JST)\nTatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > One of our clients suggested that the installation document[1] lacks description\n> > about requriements of installing *-devel packages. For example, postgresqlxx-devel\n> > is required for using --with-pgsql, and openssl-devel for --with-openssl, and so on,\n> > but these are not documented.\n> > \n> > [1] http://www.pgpool.net/docs/pgpool-II-3.7.4/en/html/install-pgpool.html\n> > \n> > I know the document of PostgreSQL[2] also lacks the description about openssl-devel,\n> > kerberos-devel, etc. (except to readline-devl). However, it would be convenient\n> > for users who want to install Pgpool-II from source code if the required packages\n> > for installation are described in the document explicitly.\n> > \n> > [2] https://www.postgresql.org/docs/current/install-requirements.html\n> > \n> > Is it not worth to consider this?\n> \n> I am against the idea.\n> \n> Development package names could differ according to\n> distributions/OS. For example, the developement package of OpenSSL is\n> \"openssl-dev\", not \"openssl-devel\" in Debian or Debian derived\n> systems.\n> \n> Another reason is, a user who is installaing software from source code\n> should be familiar enough with the fact that each software requires\n> development libraries.\n> \n> In summary adding not-so-complete-list-of-development-package-names to\n> our document will give incorrect information to novice users, and will\n> be annoying for skilled users.\n\nOK. I agreed.\n\n# From this viewpoint, it would not be so good that PostgreSQL doc[2]\n# mentions readline-devel...., but this is noa a topic here.\n\n> \n> > BTW, the Pgpool-II doc[2] says:\n> > \n> > --with-memcached=path\n> > Pgpool-II binaries will use memcached for in memory query cache. You have to install libmemcached. \n> > \n> > , but maybe libmemcached-devel is correct instead of libmemcached?\n> \n> I don't think so. 
\"libmemcached-devel\" is just a package name in a\n> cetain Linux distribution. \"libmemcached\" is a more geneal and non\n> distribution dependent term.\n\nThanks for your explaination. I understood it.\n \n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Wed, 27 Mar 2019 10:20:54 +0900",
"msg_from": "Yugo Nagata <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Improvement of installation document"
}
] |
[
{
"msg_contents": "As I mentioned in [1], I've had a few cases recently about auto-vacuum\nnot working. On the other thread, it was all about auto-vacuum being\nconfigured to run too slowly. The other culprit for auto-vacuum not\nworking is when people periodically use pg_stat_reset().\n\nThe problem with pg_stat_reset() is that it zeros n_dead_tup and\nn_mod_since_analyze. If say a user resets the stats on a monthly\nbasis then this can mean that tables that normally receive an\nauto-vacuum any less frequently than once per month could never\nreceive an auto-vacuum... at least not until an anti-wraparound vacuum\ngets hold of it.\n\nThe best I can think to do to try and avoid this is to put a giant\nWARNING in the docs about either not using it or to at least run\nANALYZE after using it.\n\nDoes anyone else think this is a problem worth trying to solve?\n\n[1] https://www.postgresql.org/message-id/CAKJS1f_YbXC2qTMPyCbmsPiKvZYwpuQNQMohiRXLj1r=8_rYvw@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n",
"msg_date": "Wed, 27 Mar 2019 01:53:42 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "Em ter, 26 de mar de 2019 às 09:54, David Rowley\n<david.rowley@2ndquadrant.com> escreveu:\n>\n> As I mentioned in [1], I've had a few cases recently about auto-vacuum\n> not working. On the other thread, it was all about auto-vacuum being\n> configured to run too slowly. The other culprit for auto-vacuum not\n> working is when people periodically use pg_stat_reset().\n>\n> The problem with pg_stat_reset() is that it zeros n_dead_tup and\n> n_mod_since_analyze. If say a user resets the stats on a monthly\n> basis then this can mean that tables that normally receive an\n> auto-vacuum any less frequently than once per month could never\n> receive an auto-vacuum... at least not until an anti-wraparound vacuum\n> gets hold of it.\n>\nIt seems a bug^H^H^H new feature. The problem is if you keep resetting\nstatistic before reaching an ANALYZE threshold. In this case,\nautoVACUUM was never triggered because we don't have stats. The\nconsequence is a huge bloat.\n\n> The best I can think to do to try and avoid this is to put a giant\n> WARNING in the docs about either not using it or to at least run\n> ANALYZE after using it.\n>\n+1. I am afraid it is not sufficient.\n\n> Does anyone else think this is a problem worth trying to solve?\n>\nI don't remember why we didn't consider table without stats to be\nANALYZEd. Isn't it the case to fix autovacuum? Analyze\nautovacuum_count + vacuum_count = 0?\n\nIf at least autovacuum was also time-based, it should mitigate the\nlack of statistic.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n",
"msg_date": "Tue, 26 Mar 2019 12:28:19 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On 2019-03-26 16:28, Euler Taveira wrote:\n> I don't remember why we didn't consider table without stats to be\n> ANALYZEd. Isn't it the case to fix autovacuum? Analyze\n> autovacuum_count + vacuum_count = 0?\n\nWhen the autovacuum system was introduced, we didn't have those columns.\n But now it seems to make sense that a table with autoanalyze_count +\nanalyze_count = 0 should be a candidate for autovacuum even if the write\nstatistics are zero. Obviously, this would have the effect that a\npg_stat_reset() causes an immediate autovacuum for all tables, so maybe\nit's not quite that simple.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 22:28:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On 2019-Mar-27, Peter Eisentraut wrote:\n\n> On 2019-03-26 16:28, Euler Taveira wrote:\n> > I don't remember why we didn't consider table without stats to be\n> > ANALYZEd. Isn't it the case to fix autovacuum? Analyze\n> > autovacuum_count + vacuum_count = 0?\n> \n> When the autovacuum system was introduced, we didn't have those columns.\n> But now it seems to make sense that a table with autoanalyze_count +\n> analyze_count = 0 should be a candidate for autovacuum even if the write\n> statistics are zero. Obviously, this would have the effect that a\n> pg_stat_reset() causes an immediate autovacuum for all tables, so maybe\n> it's not quite that simple.\n\nI'd say it would make them a candidate for auto-analyze; upon completion\nof that, there's sufficient data to determine whether auto-vacuum is\nneeded or not. This sounds like a sensible idea to me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 18:33:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Thu, 28 Mar 2019 at 10:33, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-27, Peter Eisentraut wrote:\n>\n> > On 2019-03-26 16:28, Euler Taveira wrote:\n> > > I don't remember why we didn't consider table without stats to be\n> > > ANALYZEd. Isn't it the case to fix autovacuum? Analyze\n> > > autovacuum_count + vacuum_count = 0?\n> >\n> > When the autovacuum system was introduced, we didn't have those columns.\n> > But now it seems to make sense that a table with autoanalyze_count +\n> > analyze_count = 0 should be a candidate for autovacuum even if the write\n> > statistics are zero. Obviously, this would have the effect that a\n> > pg_stat_reset() causes an immediate autovacuum for all tables, so maybe\n> > it's not quite that simple.\n>\n> I'd say it would make them a candidate for auto-analyze; upon completion\n> of that, there's sufficient data to determine whether auto-vacuum is\n> needed or not. This sounds like a sensible idea to me.\n\nYeah, analyze, not vacuum. It is a bit scary to add new ways for\nauto-vacuum to suddenly have a lot of work to do. When all workers\nare busy it can lead to neglect of other duties. It's true that there\nwon't be much in the way of routine vacuuming work for the database\nthe stats were just reset on, as of course, all the n_dead_tup\ncounters were just reset. However, it could starve other databases of\nvacuum attention. Anti-wraparound vacuums on the current database may\nget neglected too.\n\nI'm not saying let's not do it, I'm just saying we need to think of\nwhat bad things could happen as a result of such a change.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 12:49:02 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 7:49 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> Yeah, analyze, not vacuum. It is a bit scary to add new ways for\n> auto-vacuum to suddenly have a lot of work to do. When all workers\n> are busy it can lead to neglect of other duties. It's true that there\n> won't be much in the way of routine vacuuming work for the database\n> the stats were just reset on, as of course, all the n_dead_tup\n> counters were just reset. However, it could starve other databases of\n> vacuum attention. Anti-wraparound vacuums on the current database may\n> get neglected too.\n>\n> I'm not saying let's not do it, I'm just saying we need to think of\n> what bad things could happen as a result of such a change.\n\n+1. I think that if we documented that pg_stat_reset() is going to\ntrigger an auto-analyze of every table in your system, we'd have some\npeople who didn't read the documentation and unleashed a storm of\nauto-analyze activity, and other people who did read the documentation\nand then intentionally used it to unleash a storm of auto-analyze\nactivity. Neither sounds that great.\n\nI really wish somebody had the time and energy to put some serious\nwork on the problem of autovacuum scheduling in general. Our current\nalgorithm is a huge improvement over what what we had before 8.3, but\nthat was a decade ago. This particular issue strikes me as something\nthat is likely to be hard to solve with an isolated tweak.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Mar 2019 07:59:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Sat, 30 Mar 2019 at 00:59, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 27, 2019 at 7:49 PM David Rowley\n> <david.rowley@2ndquadrant.com> wrote:\n> > Yeah, analyze, not vacuum. It is a bit scary to add new ways for\n> > auto-vacuum to suddenly have a lot of work to do. When all workers\n> > are busy it can lead to neglect of other duties. It's true that there\n> > won't be much in the way of routine vacuuming work for the database\n> > the stats were just reset on, as of course, all the n_dead_tup\n> > counters were just reset. However, it could starve other databases of\n> > vacuum attention. Anti-wraparound vacuums on the current database may\n> > get neglected too.\n> >\n> > I'm not saying let's not do it, I'm just saying we need to think of\n> > what bad things could happen as a result of such a change.\n>\n> +1. I think that if we documented that pg_stat_reset() is going to\n> trigger an auto-analyze of every table in your system, we'd have some\n> people who didn't read the documentation and unleashed a storm of\n> auto-analyze activity, and other people who did read the documentation\n> and then intentionally used it to unleash a storm of auto-analyze\n> activity. Neither sounds that great.\n\nI still think we should start with a warning about pg_stat_reset().\nPeople are surprised by this, and these are just the ones who notice:\n\nhttps://www.postgresql.org/message-id/CAB_myF4sZpxNXdb-x=weLpqBDou6uE8FHtM0FVerPM-1J7phkw@mail.gmail.com\n\nI imagine there are many others just suffering from bloat due to\nauto-vacuum not knowing how many dead tuples there are in the tables.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 04:14:11 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:\n> On Sat, 30 Mar 2019 at 00:59, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Mar 27, 2019 at 7:49 PM David Rowley\n> > <david.rowley@2ndquadrant.com> wrote:\n> > > Yeah, analyze, not vacuum. It is a bit scary to add new ways for\n> > > auto-vacuum to suddenly have a lot of work to do. When all workers\n> > > are busy it can lead to neglect of other duties. It's true that there\n> > > won't be much in the way of routine vacuuming work for the database\n> > > the stats were just reset on, as of course, all the n_dead_tup\n> > > counters were just reset. However, it could starve other databases of\n> > > vacuum attention. Anti-wraparound vacuums on the current database may\n> > > get neglected too.\n> > >\n> > > I'm not saying let's not do it, I'm just saying we need to think of\n> > > what bad things could happen as a result of such a change.\n> >\n> > +1. I think that if we documented that pg_stat_reset() is going to\n> > trigger an auto-analyze of every table in your system, we'd have some\n> > people who didn't read the documentation and unleashed a storm of\n> > auto-analyze activity, and other people who did read the documentation\n> > and then intentionally used it to unleash a storm of auto-analyze\n> > activity. Neither sounds that great.\n> \n> I still think we should start with a warning about pg_stat_reset().\n> People are surprised by this, and these are just the ones who notice:\n> \n> https://www.postgresql.org/message-id/CAB_myF4sZpxNXdb-x=weLpqBDou6uE8FHtM0FVerPM-1J7phkw@mail.gmail.com\n> \n> I imagine there are many others just suffering from bloat due to\n> auto-vacuum not knowing how many dead tuples there are in the tables.\n\nOK, let me step back. Why are people resetting the statistics\nregularly? Based on that purpose, does it make sense to clear the\nstats that effect autovacuum?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Wed, 10 Apr 2019 14:52:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On 2019-Apr-10, Bruce Momjian wrote:\n\n> On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:\n\n> > I still think we should start with a warning about pg_stat_reset().\n> > People are surprised by this, and these are just the ones who notice:\n> > \n> > https://www.postgresql.org/message-id/CAB_myF4sZpxNXdb-x=weLpqBDou6uE8FHtM0FVerPM-1J7phkw@mail.gmail.com\n> > \n> > I imagine there are many others just suffering from bloat due to\n> > auto-vacuum not knowing how many dead tuples there are in the tables.\n> \n> OK, let me step back. Why are people resetting the statistics\n> regularly? Based on that purpose, does it make sense to clear the\n> stats that effect autovacuum?\n\nI agree that we should research that angle. IMO resetting stats should\nbe seriously frowned upon. And if they do need to reset some counters\nfor some valid reason, offer a mechanism that leaves the autovac-\nguiding counters alone.\n\nIMO the answer for $SUBJECT is yes.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 15:33:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "Em qua, 10 de abr de 2019 às 16:33, Alvaro Herrera\n<alvherre@2ndquadrant.com> escreveu:\n>\n> On 2019-Apr-10, Bruce Momjian wrote:\n>\n> > On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:\n>\n> > > I still think we should start with a warning about pg_stat_reset().\n> > > People are surprised by this, and these are just the ones who notice:\n> > >\n> > > https://www.postgresql.org/message-id/CAB_myF4sZpxNXdb-x=weLpqBDou6uE8FHtM0FVerPM-1J7phkw@mail.gmail.com\n> > >\n> > > I imagine there are many others just suffering from bloat due to\n> > > auto-vacuum not knowing how many dead tuples there are in the tables.\n> >\n> > OK, let me step back. Why are people resetting the statistics\n> > regularly? Based on that purpose, does it make sense to clear the\n> > stats that effect autovacuum?\n>\n> I agree that we should research that angle. IMO resetting stats should\n> be seriously frowned upon. And if they do need to reset some counters\n> for some valid reason, offer a mechanism that leaves the autovac-\n> guiding counters alone.\n>\nThen you have to change the way pg_stat_reset() works (it currently\nremoves the hash tables). Even pg_stat_reset_single_table_counters()\ncould cause trouble although it is in a smaller proportion. Reset\nstatistics leaves autovacuum state machine in an invalid state. Since\nreset statistic is a rare situation (at least I don't know monitoring\ntools or practices that regularly execute those functions), would it\nbe worth adding complexity to pg_stat_reset* functions? autovacuum\ncould handle those rare cases just fine.\n\n> IMO the answer for $SUBJECT is yes.\n>\n+1. However, I also suggest a WARNING saying \"autovacuum won't work\nbecause you reset statistics that it depends on\" plus detail \"Consider\nexecuting ANALYZE on all your tables\" / \"Consider executing ANALYZE on\ntable foo.bar\".\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Wed, 10 Apr 2019 20:09:34 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Thu, 11 Apr 2019 at 07:33, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> IMO the answer for $SUBJECT is yes.\n\nHere's what I had in mind.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 11 Apr 2019 13:43:55 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Thu, 11 Apr 2019 at 06:52, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> OK, let me step back. Why are people resetting the statistics\n> regularly? Based on that purpose, does it make sense to clear the\n> stats that effect autovacuum?\n\nI can't speak for everyone, but once upon a time when I first started\nusing PostgreSQL, to monitor the application's use of the database I\nrecorded the output of pg_stat_user_tables once per day and then reset\nthe statistics. It was useful to know the number of inserted tuples,\nseq scans, index scans etc so I could understand the load on the\ndatabase better. Of course, nowadays with LEAD()/LAG() it's pretty\neasy to find the difference from the previous day. I'd have done it\ndifferently if those had existed back then.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 11 Apr 2019 13:49:06 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On 27/03/2019 22:28, Peter Eisentraut wrote:\n> On 2019-03-26 16:28, Euler Taveira wrote:\n>> I don't remember why we didn't consider table without stats to be\n>> ANALYZEd. Isn't it the case to fix autovacuum? Analyze\n>> autovacuum_count + vacuum_count = 0?\n> \n> When the autovacuum system was introduced, we didn't have those columns.\n> But now it seems to make sense that a table with autoanalyze_count +\n> analyze_count = 0 should be a candidate for autovacuum even if the write\n> statistics are zero. Obviously, this would have the effect that a\n> pg_stat_reset() causes an immediate autovacuum for all tables, so maybe\n> it's not quite that simple.\n\nNot just pg_stat_reset() but also on promotion.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n",
"msg_date": "Sat, 13 Apr 2019 11:51:05 +0200",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 08:09:34PM -0300, Euler Taveira wrote:\n>Em qua, 10 de abr de 2019 às 16:33, Alvaro Herrera\n><alvherre@2ndquadrant.com> escreveu:\n>>\n>> On 2019-Apr-10, Bruce Momjian wrote:\n>>\n>> > On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:\n>>\n>> > > I still think we should start with a warning about pg_stat_reset().\n>> > > People are surprised by this, and these are just the ones who notice:\n>> > >\n>> > > https://www.postgresql.org/message-id/CAB_myF4sZpxNXdb-x=weLpqBDou6uE8FHtM0FVerPM-1J7phkw@mail.gmail.com\n>> > >\n>> > > I imagine there are many others just suffering from bloat due to\n>> > > auto-vacuum not knowing how many dead tuples there are in the tables.\n>> >\n>> > OK, let me step back. Why are people resetting the statistics\n>> > regularly? Based on that purpose, does it make sense to clear the\n>> > stats that effect autovacuum?\n>>\n>> I agree that we should research that angle. IMO resetting stats should\n>> be seriously frowned upon. And if they do need to reset some counters\n>> for some valid reason, offer a mechanism that leaves the autovac-\n>> guiding counters alone.\n>>\n>Then you have to change the way pg_stat_reset() works (it currently\n>removes the hash tables). Even pg_stat_reset_single_table_counters()\n>could cause trouble although it is in a smaller proportion. Reset\n>statistics leaves autovacuum state machine in an invalid state. Since\n>reset statistic is a rare situation (at least I don't know monitoring\n>tools or practices that regularly execute those functions), would it\n>be worth adding complexity to pg_stat_reset* functions? autovacuum\n>could handle those rare cases just fine.\n>\n\nYeah, resetting most of the stats but keeping a couple of old values\naround is going to do more harm than good. Even resetting stats for a\nsingle object is annoying when you have to analyze the data, making it\neven more granular by keeping some fields is just complicating it\nfurther ...\n\nThe main reason why people do this is that we only provide cumulative\ncounters, so if you need to monitor how it changed in a given time\nperiod (last hour, day, ...) you need to compute the delta somehow. And\njust resetting the stats is the easiest way to achieve that.\n\n+1 to have a warning about this, and maybe we should point people to\ntools regularly snapshotting the statistics and computing the deltas for\nthem. There's a couple of specialized ones, but even widely available\nmonitoring tools will do that.\n\nIf only we had a way to regularly snapshot the data from within the\ndatabase, and then compute the deltas on that. If only we could insert\ndata from one table into another one a then do some analysics on it,\nwith like small windows moving over the data or something ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Apr 2019 22:42:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On 4/13/19 3:42 PM, Tomas Vondra wrote:\n> If only we had a way to regularly snapshot the data from within the\n> database, and then compute the deltas on that. If only we could insert\n> data from one table into another one a then do some analysics on it,\n> with like small windows moving over the data or something ...\n\nWhy not store deltas separately with their own delta reset command?\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Sun, 14 Apr 2019 09:11:52 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 09:11:52AM -0500, Joe Conway wrote:\n>On 4/13/19 3:42 PM, Tomas Vondra wrote:\n>> If only we had a way to regularly snapshot the data from within the\n>> database, and then compute the deltas on that. If only we could insert\n>> data from one table into another one a then do some analysics on it,\n>> with like small windows moving over the data or something ...\n>\n>Why not store deltas separately with their own delta reset command?\n>\n\nWell, we could do that, but we don't. Essentially, we'd implement some\nsort of RRD, but we'd have to handle cleanup, configuration (how much\nhistory, how frequently to snapshot the deltas). I think the assumption\nis people will do that in a third-party tool.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Apr 2019 19:33:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 2:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> OK, let me step back. Why are people resetting the statistics\n> regularly? Based on that purpose, does it make sense to clear the\n> stats that effect autovacuum?\n>\n\nWhen I've done it (not regularly, thankfully), it was usually because I\nfailed to type \"pg_stat_reset_shared('bgwriter')\" or\n\"pg_stat_statements_reset()\" correctly.\n\nI've also been tempted to do it because storing snapshots with a cron job\nor something requires effort and planning ahead to set up the tables and\ncron and some way to limit the retention, and than running LAG windows\nfunctions over the snapshots requires a re-study of the documentation,\nbecause they are a bit esoteric and I don't use them enough to commit the\nsyntax to memory. I don't want to see pg_statio_user_indexes often enough\nto make elaborate arrangements ahead of time (especially since\ntrack_io_timing columns is missing from it); but when I do want them, I\nwant them. And when I do, I don't want them to be \"since the beginning of\ntime\".\n\nWhen I'm thinking about pg_statio_user_indexes, I am probably not thinking\nabout autovac, since they have about nothing in common with each other.\n(Other than pg_stat_reset operating on both.)\n\nCheers,\n\nJeff\n\nOn Wed, Apr 10, 2019 at 2:52 PM Bruce Momjian <bruce@momjian.us> wrote:\nOK, let me step back. Why are people resetting the statistics\nregularly? Based on that purpose, does it make sense to clear the\nstats that effect autovacuum?When I've done it (not regularly, thankfully), it was usually because I failed to type \"pg_stat_reset_shared('bgwriter')\" or \"pg_stat_statements_reset()\" correctly.I've also been tempted to do it because storing snapshots with a cron job or something requires effort and planning ahead to set up the tables and cron and some way to limit the retention, and than running LAG windows functions over the snapshots requires a re-study of the documentation, because they are a bit esoteric and I don't use them enough to commit the syntax to memory. I don't want to see pg_statio_user_indexes often enough to make elaborate arrangements ahead of time (especially since track_io_timing columns is missing from it); but when I do want them, I want them. And when I do, I don't want them to be \"since the beginning of time\".When I'm thinking about pg_statio_user_indexes, I am probably not thinking about autovac, since they have about nothing in common with each other. (Other than pg_stat_reset operating on both.)Cheers,Jeff",
"msg_date": "Sun, 14 Apr 2019 16:47:09 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should the docs have a warning about pg_stat_reset()?"
}
] |
[
{
"msg_contents": "Good morning everyone,\n\nI am Ilaria Battiston, an aspiring GSoC student, and I would love to have a feedback on the first draft of my Google Summer of Code proposal. The project is \"Develop Performance Farm Database and Website”. You can find any other detail in the attached PDF file :)\n\nThank you,\nIlaria",
"msg_date": "Tue, 26 Mar 2019 14:09:43 +0100",
"msg_from": "\"Ila B.\" <ilaria.battiston@gmail.com>",
"msg_from_op": true,
"msg_subject": "[GSoC 2019] Proposal: Develop Performance Farm Database and Website"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 9:10 AM Ila B. <ilaria.battiston@gmail.com> wrote:\n> I am Ilaria Battiston, an aspiring GSoC student, and I would love to have a feedback on the first draft of my Google Summer of Code proposal. The project is \"Develop Performance Farm Database and Website”. You can find any other detail in the attached PDF file :)\n\nI think there's probably a very large amount of work to be done in\nthis area. Nobody is going to finish it in a summer. Still, there's\nprobably some useful things you could get done in a summer. I think a\nlot will depend on finding a good mentor who is familiar with these\nareas (which I am not). Has anyone expressed an interest?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Mar 2019 08:04:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC 2019] Proposal: Develop Performance Farm Database and\n Website"
},
{
"msg_contents": "On 2019-03-29 13:04, Robert Haas wrote:\n> On Tue, Mar 26, 2019 at 9:10 AM Ila B. <ilaria.battiston@gmail.com> wrote:\n>> I am Ilaria Battiston, an aspiring GSoC student, and I would love to have a feedback on the first draft of my Google Summer of Code proposal. The project is \"Develop Performance Farm Database and Website”. You can find any other detail in the attached PDF file :)\n> \n> I think there's probably a very large amount of work to be done in\n> this area. Nobody is going to finish it in a summer. Still, there's\n> probably some useful things you could get done in a summer. I think a\n> lot will depend on finding a good mentor who is familiar with these\n> areas (which I am not). Has anyone expressed an interest?\n\nMoreover, I have a feeling that have been hearing about work on a\nperformance farm for many years. Perhaps it should be investigated what\nbecame of that work and what the problems were getting it to a working\nstate.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 13:52:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC 2019] Proposal: Develop Performance Farm Database and\n Website"
},
{
"msg_contents": "Hello,\n\nThanks for the answer. This project is on the official PostgreSQL project list of GSoC 2019, and potential mentors are stated there. \n\nI trust mentors’ judgement on outlining the work and the tasks to be done in three months, and there is the previous student’s work to use as example if needed. The project consists in building a database and a website on top of it for users to browse performance data. \n\nLet me know whether there are any specific issues you’re concerned about. \n\nIlaria\n\n> Am 29.03.2019 um 13:52 schrieb Peter Eisentraut <peter.eisentraut@2ndquadrant.com>:\n> \n>> On 2019-03-29 13:04, Robert Haas wrote:\n>>> On Tue, Mar 26, 2019 at 9:10 AM Ila B. <ilaria.battiston@gmail.com> wrote:\n>>> I am Ilaria Battiston, an aspiring GSoC student, and I would love to have a feedback on the first draft of my Google Summer of Code proposal. The project is \"Develop Performance Farm Database and Website”. You can find any other detail in the attached PDF file :)\n>> \n>> I think there's probably a very large amount of work to be done in\n>> this area. Nobody is going to finish it in a summer. Still, there's\n>> probably some useful things you could get done in a summer. I think a\n>> lot will depend on finding a good mentor who is familiar with these\n>> areas (which I am not). Has anyone expressed an interest?\n> \n> Moreover, I have a feeling that have been hearing about work on a\n> performance farm for many years. Perhaps it should be investigated what\n> became of that work and what the problems were getting it to a working\n> state.\n> \n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:01:05 +0100",
"msg_from": "Ilaria <ilaria.battiston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC 2019] Proposal: Develop Performance Farm Database and\n Website"
},
{
"msg_contents": "Hi Ilaria,\n\nEdited for bottom posting. :)\n\nOn Fri, Mar 29, 2019 at 03:01:05PM +0100, Ilaria wrote:\n> > Am 29.03.2019 um 13:52 schrieb Peter Eisentraut <peter.eisentraut@2ndquadrant.com>:\n> > \n> >> On 2019-03-29 13:04, Robert Haas wrote:\n> >>> On Tue, Mar 26, 2019 at 9:10 AM Ila B. <ilaria.battiston@gmail.com> wrote:\n> >>> I am Ilaria Battiston, an aspiring GSoC student, and I would love to have a feedback on the first draft of my Google Summer of Code proposal. The project is \"Develop Performance Farm Database and Website”. You can find any other detail in the attached PDF file :)\n> >> \n> >> I think there's probably a very large amount of work to be done in\n> >> this area. Nobody is going to finish it in a summer. Still, there's\n> >> probably some useful things you could get done in a summer. I think a\n> >> lot will depend on finding a good mentor who is familiar with these\n> >> areas (which I am not). Has anyone expressed an interest?\n> > \n> > Moreover, I have a feeling that have been hearing about work on a\n> > performance farm for many years. Perhaps it should be investigated what\n> > became of that work and what the problems were getting it to a working\n> > state.\n\n> Hello,\n> \n> Thanks for the answer. This project is on the official PostgreSQL project list of GSoC 2019, and potential mentors are stated there. \n> \n> I trust mentors’ judgement on outlining the work and the tasks to be done in three months, and there is the previous student’s work to use as example if needed. The project consists in building a database and a website on top of it for users to browse performance data. \n> \n> Let me know whether there are any specific issues you’re concerned about. \n\nHongyuan, our student last summer, put together a summary of his\nprogress in a GitHub issue:\n\nhttps://github.com/PGPerfFarm/pgperffarm/issues/22\n\n\nWe have systems for proofing (from OSUOSL) and you can also see the\nprototype here:\n\nhttp://140.211.168.111/\n\n\nFor Phase 1, I'd recommend getting familiar with the database schema in\nplace now. Perhaps it can use some tweaking, but I just mean to suggest\nthat it might not be necessary to rebuild it from scratch.\n\nIn Phase 2, we had some difficulty last year about getting the\nauthentication/authorization completely integrated. I think the main\nissue was how to integrate this app while using resources outside of the\ncommunity infrastructure. We may have to continue working around that.\n\nOtherwise, I think the rest make sense. Let us know if you have any\nmore questions.\n\nRegards,\nMark\n\n-- \nMark Wong\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Thu, 4 Apr 2019 09:59:27 -0700",
"msg_from": "Mark Wong <mark@2ndQuadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [GSoC 2019] Proposal: Develop Performance Farm Database and\n Website"
},
{
"msg_contents": "Thank you so much for your answer, it provided a clearer understanding of the task and it was really useful to complete my proposal which I’ve now submitted. I really hope to keep on working with Postgres.\n\nBest of luck to all GSoC students :)\n\nIlaria\n\n> On 4 Apr 2019, at 18:59, Mark Wong <mark@2ndQuadrant.com> wrote:\n> \n> Hi Ilaria,\n> \n> Edited for bottom posting. :)\n> \n> On Fri, Mar 29, 2019 at 03:01:05PM +0100, Ilaria wrote:\n>>> Am 29.03.2019 um 13:52 schrieb Peter Eisentraut <peter.eisentraut@2ndquadrant.com>:\n>>> \n>>>> On 2019-03-29 13:04, Robert Haas wrote:\n>>>>> On Tue, Mar 26, 2019 at 9:10 AM Ila B. <ilaria.battiston@gmail.com> wrote:\n>>>>> I am Ilaria Battiston, an aspiring GSoC student, and I would love to have a feedback on the first draft of my Google Summer of Code proposal. The project is \"Develop Performance Farm Database and Website”. You can find any other detail in the attached PDF file :)\n>>>> \n>>>> I think there's probably a very large amount of work to be done in\n>>>> this area. Nobody is going to finish it in a summer. Still, there's\n>>>> probably some useful things you could get done in a summer. I think a\n>>>> lot will depend on finding a good mentor who is familiar with these\n>>>> areas (which I am not). Has anyone expressed an interest?\n>>> \n>>> Moreover, I have a feeling that have been hearing about work on a\n>>> performance farm for many years. Perhaps it should be investigated what\n>>> became of that work and what the problems were getting it to a working\n>>> state.\n> \n>> Hello,\n>> \n>> Thanks for the answer. This project is on the official PostgreSQL project list of GSoC 2019, and potential mentors are stated there. \n>> \n>> I trust mentors’ judgement on outlining the work and the tasks to be done in three months, and there is the previous student’s work to use as example if needed. The project consists in building a database and a website on top of it for users to browse performance data. \n>> \n>> Let me know whether there are any specific issues you’re concerned about. \n> \n> Hongyuan, our student last summer, put together a summary of his\n> progress in a GitHub issue:\n> \n> https://github.com/PGPerfFarm/pgperffarm/issues/22\n> \n> \n> We have systems for proofing (from OSUOSL) and you can also see the\n> prototype here:\n> \n> http://140.211.168.111/\n> \n> \n> For Phase 1, I'd recommend getting familiar with the database schema in\n> place now. Perhaps it can use some tweaking, but I just mean to suggest\n> that it might not be necessary to rebuild it from scratch.\n> \n> In Phase 2, we had some difficulty last year about getting the\n> authentication/authorization completely integrated. I think the main\n> issue was how to integrate this app while using resources outside of the\n> community infrastructure. We may have to continue working around that.\n> \n> Otherwise, I think the rest make sense. Let us know if you have any\n> more questions.\n> \n> Regards,\n> Mark\n> \n> -- \n> Mark Wong\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n> https://www.2ndQuadrant.com/\n\n\n\n",
"msg_date": "Sun, 7 Apr 2019 21:48:07 +0200",
"msg_from": "\"Ila B.\" <ilaria.battiston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [GSoC 2019] Proposal: Develop Performance Farm Database and\n Website"
}
] |
[
{
"msg_contents": "Hi!\n\nIn caf626b2 the type of vacuum_cost_delay was switched from int to real everywhere, but not in the *RelOpts[] arrays.\n\nFor some reason it continued to build and work, but I think it is better to move vacuum_cost_delay from int to real there too...\n\nPatch attached.\n\nPS. As you can see, the current reloption code is error-prone. To properly change reloptions you have to change code in several different places at once, and nothing warns you if you miss one.\nI am working on reloptions code refactoring now; please join in reviewing my patches. This work is important, as this example shows...",
"msg_date": "Tue, 26 Mar 2019 19:19:50 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": true,
"msg_subject": "[PATCH][HOTFIX] vacuum_cost_delay type change from int to real have\n not been done everywhere"
},
{
"msg_contents": "Nikolay Shaplov <dhyan@nataraj.su> writes:\n> In caf626b2 type of vacuum_cost_delay have been switched from int to real, \n> everywhere, but not in *RelOpts[] arrays.\n\nUgh.\n\n> For some reason it continued to build and work.\n\nI'm not quite sure why it worked either; apparently, the type of that\narray entry doesn't have anything to do with the variable's storage\nformat. The bounds-check code must think it's dealing with an integer,\nbut that doesn't matter either for the values we need.\n\n> PS. As you can see current reloption code is error-prone.\n\nYeah, that was pretty obvious already :-(. Having more than one place\ndefining the type of an option is clearly bogus. I missed that this\nentry was type-specific because you'd really have to go up to the top\nof the array to notice that; and since the type information *is* contained\nin another entry, my bogometer failed to trigger.\n\nFix pushed, thanks for the report!\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 26 Mar 2019 13:38:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH][HOTFIX] vacuum_cost_delay type change from int to real\n have not been done everywhere"
}
] |
[
{
"msg_contents": "Hi,\n\nAs detailed in\nhttps://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\nthe way the backend's basebackup checksum verification works makes its\nerror detection capabilities very dubious.\n\nI think we need to fix this before the next set of backbranch releases,\nor at the very least add a big fat warning that the feature isn't doing\nmuch.\n\nThe more I think about it, the less convinced I am of the method to\navoid the torn page problem using LSNs. To make e.g. the PageIsNew()\ncheck correct, we need to verify that the whole page is zeroes - but we\ncan't use the LSN for that, as it's not on the page. But there very well\ncould be a torn page issue with only the second half of the page being\nwritten back (with the default 8kb pages that can trivially happen as\nthe kernel's pagecache commonly uses 4kb granularity).\n\nI basically think it needs to work like this:\n\n1) Perform the entire set of PageIsVerified() checks *without*\n previously checking the page's LSN, but don't error out.\n\n2) If 1) errored out, ensure that that's because the backend is\n currently writing out the page. That basically requires doing what\n BufferAlloc() does. So I think we'd need to end up with a function\n like:\n\n LockoutBackendWrites():\n buf = ReadBufferWithoutRelcache(relfilenode);\n LWLockAcquire(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n /*\n * Reread page from OS, and recheck. This needs to happen while\n * the IO lock prevents rereading from the OS. Note that we do\n * not want to rely on the buffer contents here, as that could\n * be very old cache contents.\n */\n perform_checksum_check(relfilenode, ERROR);\n\n LWLockRelease(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n ReleaseBuffer(buf);\n\n3) If 2) also failed, then we can be sure that the page is truly\n corrupted.\n\n\nGreetings,\n\nAndres Freund\n\n",
"msg_date": "Tue, 26 Mar 2019 10:08:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "basebackup checksum verification"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> As detailed in\n> https://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\n> the way the backend's basebackup checksum verification works makes its\n> error detection capabilities very dubious.\n\nI disagree that it's 'very dubious', even with your analysis. I thought\nRobert's response was generally good, pointing out that we're talking\nabout this being an issue if the corruption happens in a certain set of\nbytes. That said, I'm happy to see improvements in this area but I'm\nflat out upset about the notion that we must be perfect here- our\nchecksums themselves aren't perfect for catching corruption either.\n\n> I think we need to fix this before the next set of backbranch releases,\n> or at the very least add a big fat warning that the feature isn't doing\n> much.\n\nI disagree about this level of urgency, but if you have a decent idea\nabout how to improve the situation, I'm fully in support of it.\n\n> The more I think about it, the less convinced I am of the method to\n> avoid the torn page problem using LSNs. To make e.g. the PageIsNew()\n> check correct, we need to verify that the whole page is zeroes - but we\n> can't use the LSN for that, as it's not on the page. But there very well\n> could be a torn page issue with only the second half of the page being\n> written back (with the default 8kb pages that can trivially happen as\n> the kernel's pagecache commonly uses 4kb granularity).\n> \n> I basically think it needs to work like this:\n> \n> 1) Perform the entire set of PageIsVerified() checks *without*\n> previously checking the page's LSN, but don't error out.\n\nPerforming the PageIsVerified() checks seems reasonable, I don't see any\ndownside to doing that, so if you'd like to add that, sure, go for it.\n\n> 2) If 1) errored out, ensure that that's because the backend is\n> currently writing out the page. 
That basically requires doing what\n> BufferAlloc() does. So I think we'd need to end up with a function\n> like:\n> \n> LockoutBackendWrites():\n> buf = ReadBufferWithoutRelcache(relfilenode);\n\nThis is going to cause it to be pulled into shared buffers, if it isn't\nalready there, isn't it? That seems less than ideal and isn't it going\nto end up just doing exactly the same PageIsVerified() call, and what\nhappens when that fails? You're going to end up getting an\nereport(ERROR) and long-jump out of here... Depending on the other\ncode, maybe that's something you can manage, but it seems rather tricky\nto me. I do think, as was discussed extensively previously, that the\nbackup *should* continue even in the face of corruption, but there\nshould be warnings issued to notify the user of the issues.\n\n> LWLockAcquire(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n> /*\n> * Reread page from OS, and recheck. This needs to happen while\n> * the IO lock prevents rereading from the OS. Note that we do\n> * not want to rely on the buffer contents here, as that could\n> * be very old cache contents.\n> */\n> perform_checksum_check(relfilenode, ERROR);\n> \n> LWLockRelease(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n> ReleaseBuffer(buf);\n\nI don't particularly like having to lock pages in this way while\nperforming this check, especially with having to read the page into\nshared buffers potentially.\n\nThis also isn't the only approach to dealing with the possibility that the\nLSN itself is corrupted. There are at least two other ways we can improve\nthe situation here- we can keep track of the highest LSN seen, perhaps\non a per-file basis, and then compare those to the end-of-backup LSN,\nand issue a warning or perform a re-check or do something else if we\ndiscover that the LSN found was later than the end-of-backup LSN.\nThat's not perfect, but it's certainly a good improvement over what we\nhave today. 
The other approach would be to track all of the pages\nwhich were skipped and then compare them to the pages in the WAL which\nwere archived during the backup, making sure that all pages which failed\nchecksum exist in the WAL. That should allow us to confirm that the\npage was actually being modified and won't ever be used in the state\nthat we saw it in, since it'll be replayed over by WAL, and therefore we\ndon't have to worry about the LSN or the page itself being corrupt. Of\ncourse, that requires tracking all the pages which are modified by the\nWAL for the duration of the backup, and tracking all the pages which\nfailed checksum and/or other validation, and then performing the\ncross-check. That seems like a fair bit of work for this, but I'm not\nsure that it's avoidable, ultimately.\n\nI'm happy with incremental improvements in this area though, and just\nchecking that the LSN of pages skipped isn't completely insane would\ndefinitely be a good improvement to begin with and might be simple\nenough to back-patch. I don't think back-patching changes like those\nproposed here is a good idea. I don't have any problem adding\nadditional documentation to explain what's being done though, with\nappropriate caveats at how this might not catch all types of corruption\n(we should do the same for the checksum feature itself, if we don't\nalready have such caveats...).\n\nThanks!\n\nStephen",
"msg_date": "Tue, 26 Mar 2019 19:22:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-26 19:22:03 -0400, Stephen Frost wrote:\n> Greetings,\n> \n> * Andres Freund (andres@anarazel.de) wrote:\n> > As detailed in\n> > https://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\n> > the way the backend's basebackup checksum verification works makes its\n> > error detection capabilities very dubious.\n> \n> I disagree that it's 'very dubious', even with your analysis.\n\nI really don't know what to say. The current algorithm is flat out\nbogus.\n\n\n> I thought Robert's response was generally good, pointing out that\n> we're talking about this being an issue if the corruption happens in a\n> certain set of bytes. That said, I'm happy to see improvements in\n> this area but I'm flat out upset about the notion that we must be\n> perfect here- our checksums themselves aren't perfect for catching\n> corruption either.\n\nThe point is that we're not detecting errors that we can detect when\nread outside of basebackup. I really entirely completely fail how that\ncan be defended.\n\nI think we're making promises with this the basebackup feature we're not\neven remotely keeping. I don't understand how you can defend that, given\nthe current state, you can have a basebackup that you took with\nchecksums enabled, and then when actually use that basebackup you get\nchecksum failures. Like it's one thing not to detect all storage\nissues, but if we do detect them after using the basebackup, that's\nreally not ok.\n\n\n> > 2) If 1) errored out, ensure that that's because the backend is\n> > currently writing out the page. That basically requires doing what\n> > BufferAlloc() does. So I think we'd need to end up with a function\n> > like:\n> > \n> > LockoutBackendWrites():\n> > buf = ReadBufferWithoutRelcache(relfilenode);\n> \n> This is going to cause it to be pulled into shared buffers, if it isn't\n> already there, isn't it?\n\nI can't see that being a problem. 
We're only going to enter this path if\nwe encountered a buffer where the checksum was wrong. And either that's\na data corruption event, in which case we don't care about a small\nperformance penalty, or it's a race around writing out the page because\nbasebackup read it while half written - in which case it's pretty\nlikely that the page is still in shared buffers.\n\n\n> That seems less than ideal and isn't it going\n> to end up just doing exactly the same PageIsVerified() call, and what\n> happens when that fails?\n\nThat seems quite easily handled with a different RBM_ mode.\n\n\n\n> > LWLockAcquire(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n> > /*\n> > * Reread page from OS, and recheck. This needs to happen while\n> > * the IO lock prevents rereading from the OS. Note that we do\n> > * not want to rely on the buffer contents here, as that could\n> > * be very old cache contents.\n> > */\n> > perform_checksum_check(relfilenode, ERROR);\n> > \n> > LWLockRelease(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n> > ReleaseBuffer(buf);\n\nThis should be the IO lock, not content lock, sorry. Copy & pasto.\n\n\n> I don't particularly like having to lock pages in this way while\n> performing this check, especially with having to read the page into\n> shared buffers potentially.\n\nGiven it's only the IO lock (see above correction), and only if we can't\nverify the checksum during the race, I fail to see how that can be a\nproblem?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 26 Mar 2019 16:49:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 04:49:21PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-03-26 19:22:03 -0400, Stephen Frost wrote:\n>> Greetings,\n>>\n>> * Andres Freund (andres@anarazel.de) wrote:\n>> > As detailed in\n>> > https://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\n>> > the way the backend's basebackup checksum verification works makes its\n>> > error detection capabilities very dubious.\n>>\n>> I disagree that it's 'very dubious', even with your analysis.\n>\n>I really don't know what to say. The current algorithm is flat out\n>bogus.\n>\n\nBogus might be a bit too harsh, but yeah - failure to reliably detect obviously\ninvalid checksums when the LSN just happens to be high due to randomness is not\na good thing. We'll still detect pages corrupted in other places, but this is\nrather unfortunate.\n\n>\n>> I thought Robert's response was generally good, pointing out that\n>> we're talking about this being an issue if the corruption happens in a\n>> certain set of bytes. That said, I'm happy to see improvements in\n>> this area but I'm flat out upset about the notion that we must be\n>> perfect here- our checksums themselves aren't perfect for catching\n>> corruption either.\n>\n>The point is that we're not detecting errors that we can detect when\n>read outside of basebackup. I really entirely completely fail how that\n>can be defended.\n>\n>I think we're making promises with this the basebackup feature we're not\n>even remotely keeping. I don't understand how you can defend that, given\n>the current state, you can have a basebackup that you took with\n>checksums enabled, and then when actually use that basebackup you get\n>checksum failures. 
Like it's one thing not to detect all storage\n>issues, but if we do detect them after using the basebackup, that's\n>really not ok.\n>\n\nYeah, if basebackup completes without reporting any invalid checksums, but\nrunning pg_verify_checksums on the same backups detects those, that probably\nshould raise some eyebrows.\n\nWe already have such a blind spot, but it's expected to be pretty small\n(essentially pages modified since start of the backup).\n\n>\n>> > 2) If 1) errored out, ensure that that's because the backend is\n>> > currently writing out the page. That basically requires doing what\n>> > BufferAlloc() does. So I think we'd need to end up with a function\n>> > like:\n>> >\n>> > LockoutBackendWrites():\n>> > buf = ReadBufferWithoutRelcache(relfilenode);\n>>\n>> This is going to cause it to be pulled into shared buffers, if it isn't\n>> already there, isn't it?\n>\n>I can't see that being a problem. We're only going to enter this path if\n>we encountered a buffer where the checksum was wrong. And either that's\n>a data corruption event, in which case we don't care about a small\n>performance penalty, or it's a race around writing out the page because\n>basebackup read it while half written - in which case it's pretty\n>likely that the page is still in shared buffers.\n>\n\nYep, I think this is fine. Although in the other thread where this\nfailure mode was discussed, I think we only discussed taking the I/O lock on\nthe buffer, no? But as you say, this should be a rare code path.\n\n>\n>> That seems less than ideal and isn't it going\n>> to end up just doing exactly the same PageIsVerified() call, and what\n>> happens when that fails?\n>\n>That seems quite easily handled with a different RBM_ mode.\n>\n>\n>\n>> > LWLockAcquire(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n>> > /*\n>> > * Reread page from OS, and recheck. This needs to happen while\n>> > * the IO lock prevents rereading from the OS. 
Note that we do\n>> > * not want to rely on the buffer contents here, as that could\n>> > * be very old cache contents.\n>> > */\n>> > perform_checksum_check(relfilenode, ERROR);\n>> >\n>> > LWLockRelease(BufferDescriptorGetContentLock(buf), LW_EXCLUSIVE);\n>> > ReleaseBuffer(buf);\n>\n>This should be the IO lock, not content lock, sorry. Copy & pasto.\n>\n>\n>> I don't particularly like having to lock pages in this way while\n>> performing this check, espectially with having to read the page into\n>> shared buffers potentially.\n>\n>Given it's only the IO lock (see above correction), and only if we can't\n>verify the checksum during the race, I fail to see how that can be a\n>problem?\n>\n\nIMHO not a problem.\n\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 27 Mar 2019 01:10:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Tue, Mar 26, 2019 at 04:49:21PM -0700, Andres Freund wrote:\n> >On 2019-03-26 19:22:03 -0400, Stephen Frost wrote:\n> >>* Andres Freund (andres@anarazel.de) wrote:\n> >>> As detailed in\n> >>> https://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\n> >>> the way the backend's basebackup checksum verification works makes its\n> >>> error detection capabilities very dubious.\n> >>\n> >>I disagree that it's 'very dubious', even with your analysis.\n> >\n> >I really don't know what to say. The current algorithm is flat out\n> >bogus.\n> \n> Bogus might be a bit too harsh, but yeah - failure to reliably detect obviously\n> invalid checksums when the LSN just happens to be high due to randomness is not\n> a good thing. We'll still detect pages corrupted in other places, but this is\n> rather unfortunate.\n\nI'm all for improving it, as I said originally.\n\n> >>I thought Robert's response was generally good, pointing out that\n> >>we're talking about this being an issue if the corruption happens in a\n> >>certain set of bytes. That said, I'm happy to see improvements in\n> >>this area but I'm flat out upset about the notion that we must be\n> >>perfect here- our checksums themselves aren't perfect for catching\n> >>corruption either.\n> >\n> >The point is that we're not detecting errors that we can detect when\n> >read outside of basebackup. I really entirely completely fail how that\n> >can be defended.\n> >\n> >I think we're making promises with this the basebackup feature we're not\n> >even remotely keeping. I don't understand how you can defend that, given\n> >the current state, you can have a basebackup that you took with\n> >checksums enabled, and then when actually use that basebackup you get\n> >checksum failures. 
Like it's one thing not to detect all storage\n> >issues, but if we do detect them after using the basebackup, that's\n> >really not ok.\n> \n> Yeah, if basebackup completes without reporting any invalid checksums, but\n> running pg_verify_checksums on the same backups detects those, that probably\n> should raise some eyebrows.\n\nThat isn't actually what would happen at this point, just so we're\nclear. What Andres is talking about is a solution which would only\nactually work for pg_basebackup, and not for pg_verify_checksums\n(without some serious changes which make it connect to the running\nserver and run various functions to perform the locking that he's\nproposing pg_basebackup do...).\n\n> We already have such blind spot, but it's expected to be pretty small\n> (essentially pages modified since start of the backup).\n\neh..? This is.. more-or-less entirely what's being discussed here:\nexactly how we detect and determine which pages were modified since the\nstart of the backup, and which might have been partially written out\nwhen we tried to read them and therefore fail a checksum check, but it\ndoesn't matter because we don't actually end up using those pages.\n\nI outlined a couple of other approaches to improving that situation,\nwhich would be able to be used with pg_verify_checksums without having\nto connect to the backend, but I'll note that those were completely\nignored, leading me to believe that there's really not much more to\ndiscuss here since other ideas are just not open to being considered.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 26 Mar 2019 20:18:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 5:10 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Bogus might be a bit too harsh, but yeah - failure to reliably detect obviously\n> invalid checksums when the LSN just happens to be high due to randomness is not\n> a good thing. We'll still detect pages corrupted in other places, but this is\n> rather unfortunate.\n\nI have personally seen real world corruption that involved a page\nimage consisting of random noise. Several times. Failing to detect\nblatant corruption is unacceptable IMV.\n\nCan't we do better here without great difficulty? There are plenty of\ngeneric things that we could do that can verify that almost any\ntype of initialized page is at least somewhat sane. For example, you\ncan verify that line pointers indicate that tuples are\nnon-overlapping.\n\nThat said, Andres' approach sounds like the way to go to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 26 Mar 2019 17:23:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-26 20:18:31 -0400, Stephen Frost wrote:\n> > >>I thought Robert's response was generally good, pointing out that\n> > >>we're talking about this being an issue if the corruption happens in a\n> > >>certain set of bytes. That said, I'm happy to see improvements in\n> > >>this area but I'm flat out upset about the notion that we must be\n> > >>perfect here- our checksums themselves aren't perfect for catching\n> > >>corruption either.\n> > >\n> > >The point is that we're not detecting errors that we can detect when\n> > >read outside of basebackup. I really entirely completely fail how that\n> > >can be defended.\n> > >\n> > >I think we're making promises with this the basebackup feature we're not\n> > >even remotely keeping. I don't understand how you can defend that, given\n> > >the current state, you can have a basebackup that you took with\n> > >checksums enabled, and then when actually use that basebackup you get\n> > >checksum failures. Like it's one thing not to detect all storage\n> > >issues, but if we do detect them after using the basebackup, that's\n> > >really not ok.\n> > \n> > Yeah, if basebackup completes without reporting any invalid checksums, but\n> > running pg_verify_checksums on the same backups detects those, that probably\n> > should raise some eyebrows.\n> \n> That isn't actually what would happen at this point, just so we're\n> clear. 
What Andres is talking about is a solution which would only\n> actually work for pg_basebackup, and not for pg_verify_checksums\n> (without some serious changes which make it connect to the running\n> server and run various functions to perform the locking that he's\n> proposing pg_basebackup do...).\n\nWell, I still think it's just plain wrong to do online checksum\nverification outside of the server, and we should just reject adding\nthat as a feature.\n\nBesides the fact that I think we need error detection capabilities at\nleast equal to the backend's, I think all the LSN based\napproaches also have the issue that they'll prevent us from using them\non non WAL logged data. There's ongoing work to move SLRUs into the\nbackend allowing them to be checksummed (Shawn Debnath is IIRC planning\nto propose a patch for v13), and we really should also offer to\nchecksum unlogged tables (and temp tables?) - just because they'd be\ngone after a crash, imo doesn't make it OK to not detect corrupted on\ndisk data outside of a crash. For those things we won't necessarily\nhave LSNs that we can conveniently associate with those buffers -\nmaking LSN based logic harder.\n\n\n> I outlined a couple of other approaches to improving that situation,\n> which would be able to be used with pg_verify_checksums without having\n> to connect to the backend, but I'll note that those were completely\n> ignored, leading me to believe that there's really not much more to\n> discuss here since other ideas are just not open to being considered.\n\nWell, given that we can do an accurate determination without too much\ncode in the basebackup case, I don't see what your proposals gain over\nthat? That's why I didn't comment on them. I'm focusing on the\nbasebackup case, over the online checksum case, because it's released\ncode.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 26 Mar 2019 17:31:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 08:18:31PM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Tue, Mar 26, 2019 at 04:49:21PM -0700, Andres Freund wrote:\n>> >On 2019-03-26 19:22:03 -0400, Stephen Frost wrote:\n>> >>* Andres Freund (andres@anarazel.de) wrote:\n>> >>> As detailed in\n>> >>> https://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\n>> >>> the way the backend's basebackup checksum verification works makes its\n>> >>> error detection capabilities very dubious.\n>> >>\n>> >>I disagree that it's 'very dubious', even with your analysis.\n>> >\n>> >I really don't know what to say. The current algorithm is flat out\n>> >bogus.\n>>\n>> Bogus might be a bit too harsh, but yeah - failure to reliably detect obviously\n>> invalid checksums when the LSN just happens to be high due to randomness is not\n>> a good thing. We'll still detect pages corrupted in other places, but this is\n>> rather unfortunate.\n>\n>I'm all for improving it, as I said originally.\n>\n>> >>I thought Robert's response was generally good, pointing out that\n>> >>we're talking about this being an issue if the corruption happens in a\n>> >>certain set of bytes. That said, I'm happy to see improvements in\n>> >>this area but I'm flat out upset about the notion that we must be\n>> >>perfect here- our checksums themselves aren't perfect for catching\n>> >>corruption either.\n>> >\n>> >The point is that we're not detecting errors that we can detect when\n>> >read outside of basebackup. I really entirely completely fail how that\n>> >can be defended.\n>> >\n>> >I think we're making promises with this the basebackup feature we're not\n>> >even remotely keeping. I don't understand how you can defend that, given\n>> >the current state, you can have a basebackup that you took with\n>> >checksums enabled, and then when actually use that basebackup you get\n>> >checksum failures. 
Like it's one thing not to detect all storage\n>> >issues, but if we do detect them after using the basebackup, that's\n>> >really not ok.\n>>\n>> Yeah, if basebackup completes without reporting any invalid checksums, but\n>> running pg_verify_checksums on the same backups detects those, that probably\n>> should raise some eyebrows.\n>\n>That isn't actually what would happen at this point, just so we're\n>clear. What Andres is talking about is a solution which would only\n>actually work for pg_basebackup, and not for pg_verify_checksums\n>(without some serious changes which make it connect to the running\n>server and run various functions to perform the locking that he's\n>proposing pg_basebackup do...).\n>\n\nI was talking about pg_verify_checksums in offline mode, i.e. when you\ntake a backup and then run pg_verify_checksums on it. I'm pretty sure\nthat does not need to talk to the cluster. Sorry if that was not clear.\n\n>> We already have such blind spot, but it's expected to be pretty small\n>> (essentially pages modified since start of the backup).\n>\n>eh..? This is.. more-or-less entirely what's being discussed here:\n>exactly how we detect and determine which pages were modified since the\n>start of the backup, and which might have been partially written out\n>when we tried to read them and therefore fail a checksum check, but it\n>doesn't matter because we don't actually end up using those pages.\n>\n\nAnd? All I'm saying is that we knew there's a gap that we don't check,\nbut that the understanding was that it's rather small and limited to\nrecently modified pages. 
If we can further reduce that window, that's\ngreat and we should do it, of course.\n\n>I outlined a couple of other approaches to improving that situation,\n>which would be able to be used with pg_verify_checksums without having\n>to connect to the backend, but I'll note that those were completely\n>ignored, leading me to believe that there's really not much more to\n>discuss here since other ideas are just not open to being considered.\n>\n\nUh.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 27 Mar 2019 01:36:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Tue, Mar 26, 2019 at 08:18:31PM -0400, Stephen Frost wrote:\n> >* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> >>On Tue, Mar 26, 2019 at 04:49:21PM -0700, Andres Freund wrote:\n> >>>On 2019-03-26 19:22:03 -0400, Stephen Frost wrote:\n> >>>>* Andres Freund (andres@anarazel.de) wrote:\n> >>>>> As detailed in\n> >>>>> https://postgr.es/m/20190319200050.ncuxejradurjakdc%40alap3.anarazel.de\n> >>>>> the way the backend's basebackup checksum verification works makes its\n> >>>>> error detection capabilities very dubious.\n> >>>>\n> >>>>I disagree that it's 'very dubious', even with your analysis.\n> >>>\n> >>>I really don't know what to say. The current algorithm is flat out\n> >>>bogus.\n> >>\n> >>Bogus might be a bit too harsh, but yeah - failure to reliably detect obviously\n> >>invalid checksums when the LSN just happens to be high due to randomness is not\n> >>a good thing. We'll still detect pages corrupted in other places, but this is\n> >>rather unfortunate.\n> >\n> >I'm all for improving it, as I said originally.\n> >\n> >>>>I thought Robert's response was generally good, pointing out that\n> >>>>we're talking about this being an issue if the corruption happens in a\n> >>>>certain set of bytes. That said, I'm happy to see improvements in\n> >>>>this area but I'm flat out upset about the notion that we must be\n> >>>>perfect here- our checksums themselves aren't perfect for catching\n> >>>>corruption either.\n> >>>\n> >>>The point is that we're not detecting errors that we can detect when\n> >>>read outside of basebackup. I really entirely completely fail how that\n> >>>can be defended.\n> >>>\n> >>>I think we're making promises with this the basebackup feature we're not\n> >>>even remotely keeping. 
I don't understand how you can defend that, given\n> >>>the current state, you can have a basebackup that you took with\n> >>>checksums enabled, and then when actually use that basebackup you get\n> >>>checksum failures. Like it's one thing not to detect all storage\n> >>>issues, but if we do detect them after using the basebackup, that's\n> >>>really not ok.\n> >>\n> >>Yeah, if basebackup completes without reporting any invalid checksums, but\n> >>running pg_verify_checksums on the same backups detects those, that probably\n> >>should raise some eyebrows.\n> >\n> >That isn't actually what would happen at this point, just so we're\n> >clear. What Andres is talking about is a solution which would only\n> >actually work for pg_basebackup, and not for pg_verify_checksums\n> >(without some serious changes which make it connect to the running\n> >server and run various functions to perform the locking that he's\n> >proposing pg_basebackup do...).\n> \n> I was talking about pg_verify_checksums in offline mode, i.e. when you\n> take a backup and then run pg_verify_checksums on it. I'm pretty sure\n> that does not need to talk to the cluster. Sorry if that was not clear.\n\nTo make that work, you'd have to take a backup, then restore it, then\nbring PG up and have it replay all of the outstanding WAL, then shut\ndown PG cleanly, and *then* you could run pg_verify_checksums on it.\n\n> >>We already have such blind spot, but it's expected to be pretty small\n> >>(essentially pages modified since start of the backup).\n> >\n> >eh..? This is.. more-or-less entirely what's being discussed here:\n> >exactly how we detect and determine which pages were modified since the\n> >start of the backup, and which might have been partially written out\n> >when we tried to read them and therefore fail a checksum check, but it\n> >doesn't matter because we don't actually end up using those pages.\n> \n> And? 
All I'm saying is that we knew there's a gap that we don't check,\n> but that the understanding was that it's rather small and limited to\n> recently modified pages. If we can further reduce that window, that's\n> great and we should do it, of course.\n\nThe point that Andres is making is that in the face of corruption of\ncertain particular bytes, the set of pages that we check can be much\nless than the overall size of the DB (or that of what was recently\nmodified), and we end up skipping a lot of other pages due to the\ncorruption rather than because they've been recently modified because\nthe determination of what's been recently modified is based on those\nbytes. With the changes being made to pg_checksums, we'd at least\nreport back those pages as having been 'skipped', but better would be if\nwe could accurately determine that the pages were recently modified and\ntherefore what we saw was a torn page.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 26 Mar 2019 20:51:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-03-26 20:18:31 -0400, Stephen Frost wrote:\n> > > >>I thought Robert's response was generally good, pointing out that\n> > > >>we're talking about this being an issue if the corruption happens in a\n> > > >>certain set of bytes. That said, I'm happy to see improvements in\n> > > >>this area but I'm flat out upset about the notion that we must be\n> > > >>perfect here- our checksums themselves aren't perfect for catching\n> > > >>corruption either.\n> > > >\n> > > >The point is that we're not detecting errors that we can detect when\n> > > >read outside of basebackup. I really entirely completely fail how that\n> > > >can be defended.\n> > > >\n> > > >I think we're making promises with this the basebackup feature we're not\n> > > >even remotely keeping. I don't understand how you can defend that, given\n> > > >the current state, you can have a basebackup that you took with\n> > > >checksums enabled, and then when actually use that basebackup you get\n> > > >checksum failures. Like it's one thing not to detect all storage\n> > > >issues, but if we do detect them after using the basebackup, that's\n> > > >really not ok.\n> > > \n> > > Yeah, if basebackup completes without reporting any invalid checksums, but\n> > > running pg_verify_checksums on the same backups detects those, that probably\n> > > should raise some eyebrows.\n> > \n> > That isn't actually what would happen at this point, just so we're\n> > clear. 
What Andres is talking about is a solution which would only\n> > actually work for pg_basebackup, and not for pg_verify_checksums\n> > (without some serious changes which make it connect to the running\n> > server and run various functions to perform the locking that he's\n> > proposing pg_basebackup do...).\n> \n> Well, I still think it's just plain wrong to do online checksum\n> verification outside of the server, and we should just reject adding\n> that as a feature.\n\nI get that, and I disagree with it.\n\n> Besides the fact that I think having at precisely equal or more error\n> detection capabilities than the backend, I think all the LSN based\n> approaches also have the issue that they'll prevent us from using them\n> on non WAL logged data. There's ongoing work to move SLRUs into the\n> backend allowing them to be checksummed (Shawn Debnath is IIRC planning\n> to propose a patch for v13), and we also really should offer to also\n> checksum unlogged tables (and temp tables?) - just because they'd be\n> gone after a crash, imo doesn't make it OK to not detect corrupted on\n> disk data outside of a crash. For those things we won't necessarily\n> have LSNs that we can conveniently can associate with those buffers -\n> making LSN based logic harder.\n\nI'm kinda guessing that the SLRUs are still going to be WAL'd. If\nthat's changing, I'd be very curious to hear the details.\n\nAs for unlogged tables and temp tables, it's an interesting idea to\nchecksum them but far less valuable by the very nature of what those are\nused for. Further, it's utterly useless to checksum as part of backup,\nlike what pg_basebackup is actually doing, and is actually valuable to\nskip over them rather than back them up, since otherwise we'd back them\nup, and then restore them ... and then remove them immediately during\nrecovery from the backup state. 
Having a way to verify checksum on\nunlogged tables or temp tables using some *other* tool or background\nprocess could be valuable, provided it doesn't cause any issues for\nongoing operations.\n\n> > I outlined a couple of other approaches to improving that situation,\n> > which would be able to be used with pg_verify_checksums without having\n> > to connect to the backend, but I'll note that those were completely\n> > ignored, leading me to believe that there's really not much more to\n> > discuss here since other ideas are just not open to being considered.\n> \n> Well, given that we can do an accurate determination without too much\n> code in the basebackup case, I don't see what your proposals gain over\n> that? That's why I didn't comment on them. I'm focusing on the\n> basebackup case, over the online checksum case, because it's released\n> code.\n\nI'm fine with improving the basebackup case, but I don't agree at all\nwith the idea that all checksum validation must be exclusively done in a\nbackend process.\n\nI'm also not convinced that these changes to pg_basebackup will be free\nof issues that may impact users in a negative way, making me concerned\nthat we're going to end up doing more harm than good with such a change\nbeing back-patched. Simply comparing the skipped LSNs to the\nend-of-backup LSN seems much less invasive when it comes to this core\ncode, and certainly increases the chances quite a bit that we'll detect\nan issue with corruption in the LSN.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 26 Mar 2019 21:01:27 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-26 21:01:27 -0400, Stephen Frost wrote:\n> I'm also not convinced that these changes to pg_basebackup will be free\n> of issues that may impact users in a negative way, making me concerned\n> that we're going to end up doing more harm than good with such a change\n> being back-patched. Simply comparing the skipped LSNs to the\n> end-of-backup LSN seems much less invasive when it comes to this core\n> code, and certainly increases the chances quite a bit that we'll detect\n> an issue with corruption in the LSN.\n\nYea, in the other thread we'd discussed that that might be the correct\ncourse for backpatch, at least initially. But I think the insert/replay\nLSN would be the correct LSN to compare to in the basebackup.c case?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 26 Mar 2019 18:04:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-03-26 21:01:27 -0400, Stephen Frost wrote:\n> > I'm also not convinced that these changes to pg_basebackup will be free\n> > of issues that may impact users in a negative way, making me concerned\n> > that we're going to end up doing more harm than good with such a change\n> > being back-patched. Simply comparing the skipped LSNs to the\n> > end-of-backup LSN seems much less invasive when it comes to this core\n> > code, and certainly increases the chances quite a bit that we'll detect\n> > an issue with corruption in the LSN.\n> \n> Yea, in the other thread we'd discussed that that might be the correct\n> course for backpatch, at least initially. But I think the insert/replay\n> LSN would be the correct LSN to compare to in the basebackup.c case?\n\nYes, it seems like that could be done in the basebackup case and would\navoid the need to track the skipped LSNs, since you could just look up\nthe insert/replay LSN at the time and do the comparison right away.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 26 Mar 2019 21:07:20 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
},
{
"msg_contents": "On Tue, Mar 26, 2019 at 05:23:01PM -0700, Peter Geoghegan wrote:\n> I have personally seen real world corruption that involved a page\n> image consisting of random noise. Several times. Failing to detect\n> blatant corruption is unacceptable IMV.\n\nYeah, I have seen that as well. If we have a tool not able to detect\nchecksum failures in any reliable and robust way, then we don't have\nsomething that qualifies as a checksum verification tool.\n--\nMichael",
"msg_date": "Thu, 28 Mar 2019 22:48:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: basebackup checksum verification"
}
] |
[
{
"msg_contents": "Hello, my name is Victor Kuvshiev.\n\nCurrently I'm third-year student of Petrozavodsk State University, studying\ninformation systems and technologies.\n\nI have relatively good knowledge of HTML, CSS and Python also have some\nskills in javascript language.\n\nexample of my works: ruletka, console game in Lua that stores data in\nPostgreSQL: https://github.com/kloun/ruletka\n\nI can spend 5-6 hours in a day for project. Currently I don't have any\nother work in the summer.\n\nSelected interested project\nhttps://wiki.postgresql.org/wiki/GSoC_2019#Develop_Performance_Farm_Database_and_Website_.282019.29",
"msg_date": "Tue, 26 Mar 2019 21:59:24 +0300",
"msg_from": "Victor Kukshiev <andrey0bolkonsky@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fwd: Gsoc proposal perffarn"
}
] |
[
{
"msg_contents": "Compute XID horizon for page level index vacuum on primary.\n\nPreviously the xid horizon was only computed during WAL replay. That\nhad two major problems:\n1) It relied on knowing what the table pointed to looks like. That was\n easy enough before the introducing of tableam (we knew it had to be\n heap, although some trickery around logging the heap relfilenodes\n was required). But to properly handle table AMs we need\n per-database catalog access to look up the AM handler, which\n recovery doesn't allow.\n2) Not knowing the xid horizon also makes it hard to support logical\n decoding on standbys. When on a catalog table, we need to be able\n to conflict with slots that have an xid horizon that's too old. But\n computing the horizon by visiting the heap only works once\n consistency is reached, but we always need to be able to detect\n conflicts.\n\nThere's also a secondary problem, in that the current method performs\nredundant work on every standby. But that's counterbalanced by\npotentially computing the value when not necessary (either because\nthere's no standby, or because there's no connected backends).\n\nSolve 1) and 2) by moving computation of the xid horizon to the\nprimary and by involving tableam in the computation of the horizon.\n\nTo address the potentially increased overhead, increase the efficiency\nof the xid horizon computation for heap by sorting the tids, and\neliminating redundant buffer accesses. When prefetching is available,\nadditionally perform prefetching of buffers. As this is more of a\nmaintenance task, rather than something routinely done in every read\nonly query, we add an arbitrary 10 to the effective concurrency -\nthereby using IO concurrency, when not globally enabled. 
That's\npossibly not the perfect formula, but seems good enough for now.\n\nBumps WAL format, as latestRemovedXid is now part of the records, and\nthe heap's relfilenode isn't anymore.\n\nAuthor: Andres Freund, Amit Khandekar, Robert Haas\nReviewed-By: Robert Haas\nDiscussion:\n https://postgr.es/m/20181212204154.nsxf3gzqv3gesl32@alap3.anarazel.de\n https://postgr.es/m/20181214014235.dal5ogljs3bmlq44@alap3.anarazel.de\n https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/558a9165e081d1936573e5a7d576f5febd7fb55a\n\nModified Files\n--------------\nsrc/backend/access/hash/hash_xlog.c | 153 +--------------------\nsrc/backend/access/hash/hashinsert.c | 17 ++-\nsrc/backend/access/heap/heapam.c | 221 +++++++++++++++++++++++++++++++\nsrc/backend/access/heap/heapam_handler.c | 1 +\nsrc/backend/access/index/genam.c | 37 ++++++\nsrc/backend/access/nbtree/nbtpage.c | 8 +-\nsrc/backend/access/nbtree/nbtxlog.c | 156 +---------------------\nsrc/backend/access/rmgrdesc/hashdesc.c | 5 +-\nsrc/backend/access/rmgrdesc/nbtdesc.c | 3 +-\nsrc/include/access/genam.h | 5 +\nsrc/include/access/hash_xlog.h | 2 +-\nsrc/include/access/heapam.h | 4 +\nsrc/include/access/nbtxlog.h | 3 +-\nsrc/include/access/tableam.h | 19 +++\nsrc/include/access/xlog_internal.h | 2 +-\nsrc/tools/pgindent/typedefs.list | 1 +\n16 files changed, 316 insertions(+), 321 deletions(-)\n\n",
"msg_date": "Wed, 27 Mar 2019 00:06:45 +0000",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgsql: Compute XID horizon for page level index vacuum on primary."
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 1:06 PM Andres Freund <andres@anarazel.de> wrote:\n> Compute XID horizon for page level index vacuum on primary.\n\nHi Andres,\n\nI have a virtual machine running FreeBSD 12.0 on i386 on which\ncontrib/test_decoding consistently self-deadlocks in the \"rewrite\"\ntest, with the stack listed below. You can see that we wait for a\nshare lock that we already hold exclusively. Peter Geoghegan spotted\nthe problem: this code path shouldn't access syscache, or at least not\nfor a catalog table. He suggested something along these lines:\n\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -6977,7 +6977,10 @@ heap_compute_xid_horizon_for_tuples(Relation rel,\n * simplistic, but at the moment there is no evidence of that\nor any idea\n * about what would work better.\n */\n- io_concurrency =\nget_tablespace_io_concurrency(rel->rd_rel->reltablespace);\n+ if (IsCatalogRelation(rel))\n+ io_concurrency = 1;\n+ else\n+ io_concurrency =\nget_tablespace_io_concurrency(rel->rd_rel->reltablespace);\n prefetch_distance = Min((io_concurrency) + 10, MAX_IO_CONCURRENCY);\n\n /* Start prefetching. 
*/\n\nIndeed that seems to fix the problem for me.\n\n* frame #0: 0x28c04ca1 libc.so.7`__sys__umtx_op + 5\n frame #1: 0x28bed0ab libc.so.7`sem_clockwait_np + 283\n frame #2: 0x28bed1ae libc.so.7`sem_wait + 62\n frame #3: 0x0858d837 postgres`PGSemaphoreLock(sema=0x290141a8) at\npg_sema.c:316\n frame #4: 0x08678b94 postgres`LWLockAcquire(lock=0x295365a4,\nmode=LW_SHARED) at lwlock.c:1244\n frame #5: 0x08639d8f postgres`LockBuffer(buffer=129, mode=1) at\nbufmgr.c:3565\n frame #6: 0x08187f7d postgres`_bt_getbuf(rel=0x31c3e95c, blkno=1,\naccess=1) at nbtpage.c:806\n frame #7: 0x081887f9 postgres`_bt_getroot(rel=0x31c3e95c,\naccess=1) at nbtpage.c:323\n frame #8: 0x081932aa postgres`_bt_search(rel=0x31c3e95c,\nkey=0xffbfad00, bufP=0xffbfb31c, access=1, snapshot=0x08b27c58) at\nnbtsearch.c:99\n frame #9: 0x08195bfc postgres`_bt_first(scan=0x31db73a0,\ndir=ForwardScanDirection) at nbtsearch.c:1246\n frame #10: 0x08190f96 postgres`btgettuple(scan=0x31db73a0,\ndir=ForwardScanDirection) at nbtree.c:245\n frame #11: 0x0817d3fa postgres`index_getnext_tid(scan=0x31db73a0,\ndirection=ForwardScanDirection) at indexam.c:550\n frame #12: 0x0817d6a8 postgres`index_getnext_slot(scan=0x31db73a0,\ndirection=ForwardScanDirection, slot=0x31df2320) at indexam.c:642\n frame #13: 0x0817b4c9\npostgres`systable_getnext(sysscan=0x31df242c) at genam.c:450\n frame #14: 0x0887e3a3 postgres`ScanPgRelation(targetRelId=1213,\nindexOK=true, force_non_historic=false) at relcache.c:365\n frame #15: 0x088742e1 postgres`RelationBuildDesc(targetRelId=1213,\ninsertIt=true) at relcache.c:1055\n frame #16: 0x0887356a\npostgres`RelationIdGetRelation(relationId=1213) at relcache.c:2030\n frame #17: 0x080d7ac5 postgres`relation_open(relationId=1213,\nlockmode=1) at relation.c:59\n frame #18: 0x081cc2b6 postgres`table_open(relationId=1213,\nlockmode=1) at table.c:43\n frame #19: 0x0886597b\npostgres`SearchCatCacheMiss(cache=0x31c2b200, nkeys=1,\nhashValue=1761185739, hashIndex=3, v1=1663, v2=0, v3=0, v4=0) 
at\ncatcache.c:1357\n frame #20: 0x088622db\npostgres`SearchCatCacheInternal(cache=0x31c2b200, nkeys=1, v1=1663,\nv2=0, v3=0, v4=0) at catcache.c:1299\n frame #21: 0x08862354 postgres`SearchCatCache1(cache=0x31c2b200,\nv1=1663) at catcache.c:1167\n frame #22: 0x0888406a postgres`SearchSysCache1(cacheId=61,\nkey1=1663) at syscache.c:1119\n frame #23: 0x088834de postgres`get_tablespace(spcid=1663) at spccache.c:136\n frame #24: 0x08883617\npostgres`get_tablespace_io_concurrency(spcid=0) at spccache.c:217\n frame #25: 0x08155a82\npostgres`heap_compute_xid_horizon_for_tuples(rel=0x31cbee40,\ntids=0x31df146c, nitems=3) at heapam.c:6980\n frame #26: 0x0817b09d\npostgres`table_compute_xid_horizon_for_tuples(rel=0x31cbee40,\nitems=0x31df146c, nitems=3) at tableam.h:708\n frame #27: 0x0817b03a\npostgres`index_compute_xid_horizon_for_tuples(irel=0x31c3e95c,\nhrel=0x31cbee40, ibuf=129, itemnos=0xffbfbb8c, nitems=3) at\ngenam.c:306\n frame #28: 0x0818ae92 postgres`_bt_delitems_delete(rel=0x31c3e95c,\nbuf=129, itemnos=0xffbfbb8c, nitems=3, heapRel=0x31cbee40) at\nnbtpage.c:1111\n frame #29: 0x0818405b postgres`_bt_vacuum_one_page(rel=0x31c3e95c,\nbuffer=129, heapRel=0x31cbee40) at nbtinsert.c:2270\n frame #30: 0x08180a4f postgres`_bt_findinsertloc(rel=0x31c3e95c,\ninsertstate=0xffbfcce0, checkingunique=true, stack=0x00000000,\nheapRel=0x31cbee40) at nbtinsert.c:736\n frame #31: 0x0817f40c postgres`_bt_doinsert(rel=0x31c3e95c,\nitup=0x31db69f4, checkUnique=UNIQUE_CHECK_YES, heapRel=0x31cbee40) at\nnbtinsert.c:281\n frame #32: 0x08190416 postgres`btinsert(rel=0x31c3e95c,\nvalues=0xffbfce54, isnull=0xffbfce34, ht_ctid=0x31db42e4,\nheapRel=0x31cbee40, checkUnique=UNIQUE_CHECK_YES,\nindexInfo=0x31db67dc) at nbtree.c:203\n frame #33: 0x0817c173\npostgres`index_insert(indexRelation=0x31c3e95c, values=0xffbfce54,\nisnull=0xffbfce34, heap_t_ctid=0x31db42e4, heapRelation=0x31cbee40,\ncheckUnique=UNIQUE_CHECK_YES, indexInfo=0x31db67dc) at indexam.c:212\n frame #34: 
0x0823cca4\npostgres`CatalogIndexInsert(indstate=0x31db9228, heapTuple=0x31db42e0)\nat indexing.c:140\n frame #35: 0x0823cd72\npostgres`CatalogTupleUpdate(heapRel=0x31cbee40, otid=0x31db42e4,\ntup=0x31db42e0) at indexing.c:215\n frame #36: 0x088768ed\npostgres`RelationSetNewRelfilenode(relation=0x31c2fb38,\npersistence='p', freezeXid=0, minmulti=0) at relcache.c:3508\n frame #37: 0x0823b5df postgres`reindex_index(indexId=2672,\nskip_constraint_checks=true, persistence='p', options=0) at\nindex.c:3700\n frame #38: 0x0823bf00 postgres`reindex_relation(relid=1262,\nflags=18, options=0) at index.c:3946\n frame #39: 0x08320063 postgres`finish_heap_swap(OIDOldHeap=1262,\nOIDNewHeap=16580, is_system_catalog=true, swap_toast_by_content=true,\ncheck_constraints=false, is_internal=true, frozenXid=673,\ncutoffMulti=1, newrelpersistence='p') at cluster.c:1673\n frame #40: 0x0831f5a3\npostgres`rebuild_relation(OldHeap=0x31c2ff68, indexOid=0,\nverbose=false) at cluster.c:629\n frame #41: 0x0831eecd postgres`cluster_rel(tableOid=1262,\nindexOid=0, options=0) at cluster.c:435\n frame #42: 0x083f7c1d postgres`vacuum_rel(relid=1262,\nrelation=0x28b4b9dc, params=0xffbfd670) at vacuum.c:1743\n frame #43: 0x083f6f87 postgres`vacuum(relations=0x31d6c1cc,\nparams=0xffbfd670, bstrategy=0x31d6c090, isTopLevel=true) at\nvacuum.c:372\n frame #44: 0x083f6837 postgres`ExecVacuum(pstate=0x31ca2c90,\nvacstmt=0x28b4ba54, isTopLevel=true) at vacuum.c:175\n frame #45: 0x0869f145\npostgres`standard_ProcessUtility(pstmt=0x28b4bb18, queryString=\"VACUUM\nFULL pg_database;\", context=PROCESS_UTILITY_TOPLEVEL,\nparams=0x00000000, queryEnv=0x00000000, dest=0x28b4bc90,\ncompletionTag=\"\") at utility.c:670\n frame #46: 0x0869e68e postgres`ProcessUtility(pstmt=0x28b4bb18,\nqueryString=\"VACUUM FULL pg_database;\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x00000000,\nqueryEnv=0x00000000, dest=0x28b4bc90, completionTag=\"\") at\nutility.c:360\n frame #47: 0x0869ddfb 
postgres`PortalRunUtility(portal=0x31bee090,\npstmt=0x28b4bb18, isTopLevel=true, setHoldSnapshot=false,\ndest=0x28b4bc90, completionTag=\"\") at pquery.c:1175\n frame #48: 0x0869ce02 postgres`PortalRunMulti(portal=0x31bee090,\nisTopLevel=true, setHoldSnapshot=false, dest=0x28b4bc90,\naltdest=0x28b4bc90, completionTag=\"\") at pquery.c:1321\n frame #49: 0x0869c363 postgres`PortalRun(portal=0x31bee090,\ncount=2147483647, isTopLevel=true, run_once=true, dest=0x28b4bc90,\naltdest=0x28b4bc90, completionTag=\"\") at pquery.c:796\n frame #50: 0x08696e68\npostgres`exec_simple_query(query_string=\"VACUUM FULL pg_database;\") at\npostgres.c:1215\n frame #51: 0x08695eec postgres`PostgresMain(argc=1,\nargv=0x31be6658, dbname=\"contrib_regression\", username=\"munro\") at\npostgres.c:4247\n frame #52: 0x085aedb0 postgres`BackendRun(port=0x31be1000) at\npostmaster.c:4399\n frame #53: 0x085adf9a postgres`BackendStartup(port=0x31be1000) at\npostmaster.c:4090\n frame #54: 0x085accd5 postgres`ServerLoop at postmaster.c:1703\n frame #55: 0x085a9d95 postgres`PostmasterMain(argc=8,\nargv=0xffbfe608) at postmaster.c:1376\n frame #56: 0x0849ec92 postgres`main(argc=8, argv=0xffbfe608) at main.c:228\n frame #57: 0x080bf5eb postgres`_start1(cleanup=0x28b1e540, argc=8,\nargv=0xffbfe608) at crt1_c.c:73\n frame #58: 0x080bf4b8 postgres`_start at crt1_s.S:49\n\n(lldb) print num_held_lwlocks\n(int) $0 = 1\n(lldb) print held_lwlocks[0]\n(LWLockHandle) $1 = {\n lock = 0x295365a4\n mode = LW_EXCLUSIVE\n}\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Mar 2019 17:34:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On 2019-03-28 17:34:52 +1300, Thomas Munro wrote:\n> On Wed, Mar 27, 2019 at 1:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > Compute XID horizon for page level index vacuum on primary.\n> \n> Hi Andres,\n> \n> I have a virtual machine running FreeBSD 12.0 on i386 on which\n> contrib/test_decoding consistently self-deadlocks in the \"rewrite\"\n> test, with the stack listed below. You can see that we wait for a\n> share lock that we already hold exclusively. Peter Geoghegan spotted\n> the problem: this code path shouldn't access syscache, or at least not\n> for a catalog table. He suggested something along these lines:\n> \n> --- a/src/backend/access/heap/heapam.c\n> +++ b/src/backend/access/heap/heapam.c\n> @@ -6977,7 +6977,10 @@ heap_compute_xid_horizon_for_tuples(Relation rel,\n> * simplistic, but at the moment there is no evidence of that\n> or any idea\n> * about what would work better.\n> */\n> - io_concurrency =\n> get_tablespace_io_concurrency(rel->rd_rel->reltablespace);\n> + if (IsCatalogRelation(rel))\n> + io_concurrency = 1;\n> + else\n> + io_concurrency =\n> get_tablespace_io_concurrency(rel->rd_rel->reltablespace);\n> prefetch_distance = Min((io_concurrency) + 10, MAX_IO_CONCURRENCY);\n\nHm, good catch. I don't like this fix very much (even if it were\ncommented), but I don't have a great idea right now. I'm mildly\ninclined to take effective_io_concurrency into account even if we can't\nuse get_tablespace_io_concurrency - that should be doable from a\nlocking POV.\n\nDo you want to apply the above?\n\n- Andres\n\n\n",
"msg_date": "Thu, 28 Mar 2019 05:28:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 5:28 AM Andres Freund <andres@anarazel.de> wrote:\n> Hm, good catch. I don't like this fix very much (even if it were\n> commented), but I don't have a great idea right now.\n\nThat was just a POC, to verify the problem. Not a proposal.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Mar 2019 08:24:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn March 28, 2019 11:24:46 AM EDT, Peter Geoghegan <pg@bowt.ie> wrote:\n>On Thu, Mar 28, 2019 at 5:28 AM Andres Freund <andres@anarazel.de>\n>wrote:\n>> Hm, good catch. I don't like this fix very much (even if it were\n>> commented), but I don't have a great idea right now.\n>\n>That was just a POC, to verify the problem. Not a proposal.\n\nI'm mildly inclined to push a commented version of this. And add an open items entry. Alternatively I'm thinking of just not taking the tablespace setting into account.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 28 Mar 2019 11:27:27 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 8:27 AM Andres Freund <andres@anarazel.de> wrote:\n> I'm mildly inclined to push a commented version of this. And add a open items entry. Alternatively I'm thinking of just but taking the tablespace setting into account.\n\nI would just hard code something if there needs to be a temporary band-aid fix.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Mar 2019 08:30:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 8:30 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would just hard code something if there needs to be a temporary band-aid fix.\n\nI also suggest that you remove the #include for heapam_xlog.h from\nboth nbtxlog.c and hash_xlog.c.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Mar 2019 18:05:52 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Wed, 27 Mar 2019 at 00:06, Andres Freund <andres@anarazel.de> wrote:\n\n> Compute XID horizon for page level index vacuum on primary.\n>\n> Previously the xid horizon was only computed during WAL replay.\n\n\nThis commit message was quite confusing. It took me a while to realize this\nrelates to btree index deletes and that what you mean is that we are\ncalculating the latestRemovedXid for index entries. That is related to but\nnot the same thing as the horizon itself. So now I understand the \"was computed\nonly during WAL replay\" since it seemed obvious that the xmin horizon was\ncalculated regularly on the master, but as you say the latestRemovedXid\nwas not.\n\nNow I understand, I'm happy that you've moved this from redo into mainline.\nAnd you've optimized it, which is also important (since performance was the\noriginal objection and why it was placed in redo). I can see you've removed\nduplicate code in hash indexes as well, which is good.\n\nThe term \"xid horizon\" was only used once in the code in PG11. That usage\nappears to be a typo, since in many other places the term \"xmin horizon\" is\nused to mean the point at which we can finally mark tuples as dead. Now we\nhave some new, undocumented APIs that use the term \"xid horizon\" yet still\ncode that refers to \"xmin horizon\", with neither term being defined. I'm\nhoping you'll do some later cleanup of that to avoid confusion.\n\nWhile trying to understand this, I see there is an even better way to\noptimize this. Since we are removing dead index tuples, we could alter the\nkilled index tuple interface so that it returns the xmax of the tuple being\nmarked as killed, rather than just a boolean to say it is dead. Indexes can\nthen mark the killed tuples with the xmax that killed them rather than just\na hint bit. 
This is possible since the index tuples are dead and cannot be\nused to follow the htid to the heap, so the htid is redundant and so the\nblock number of the tid could be overwritten with the xmax, zeroing the\nitemid. Each killed item we mark with its xmax means one less heap fetch we\nneed to perform when we delete the page - it's possible we optimize that\naway completely by doing this.\n\nSince this point of the code is clearly going to be a performance issue it\nseems like something we should do now.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 29 Mar 2019 09:37:11 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn 2019-03-29 09:37:11 +0000, Simon Riggs wrote:\n> This commit message was quite confusing. It took me a while to realize this\n> relates to btree index deletes and that what you mean is that we are\n> calculcating the latestRemovedXid for index entries. That is related to but\n> not same thing as the horizon itself.\n\nWell, it's the page level horizon...\n\n\n> While trying to understand this, I see there is an even better way to\n> optimize this. Since we are removing dead index tuples, we could alter the\n> killed index tuple interface so that it returns the xmax of the tuple being\n> marked as killed, rather than just a boolean to say it is dead.\n\nWouldn't that quite possibly result in additional and unnecessary\nconflicts? Right now the page level horizon is computed whenever the\npage is actually reused, rather than when an item is marked as\ndeleted. As it stands right now, the computed horizons are commonly very\n\"old\", because of that delay, leading to lower rates of conflicts.\n\n\n> Indexes can then mark the killed tuples with the xmax that killed them\n> rather than just a hint bit. This is possible since the index tuples\n> are dead and cannot be used to follow the htid to the heap, so the\n> htid is redundant and so the block number of the tid could be\n> overwritten with the xmax, zeroing the itemid. Each killed item we\n> mark with its xmax means one less heap fetch we need to perform when\n> we delete the page - it's possible we optimize that away completely by\n> doing this.\n\nThat's far from a trivial feature imo. It seems quite possible that we'd\nend up with increased overhead, because the current logic can get away\nwith only doing hint bit style writes - but would that be true if we\nstarted actually replacing the item pointers? Because I don't see any\nguarantee they couldn't cross a page boundary etc? 
So I think we'd need\nto do WAL logging during index searches, which seems prohibitively\nexpensive.\n\nAnd I'm also doubtful it's worth it because:\n\n> Since this point of the code is clearly going to be a performance issue it\n> seems like something we should do now.\n\nI've tried quite a bit to find a workload where this matters, but after\navoiding redundant buffer accesses by sorting, and prefetching I was\nunable to do so. What workload do you see where this would really be\nbad? Without the performance optimization I'd found a very minor\nregression by trying to force the heap visits to happen in a pretty\nrandom order, but after sorting that went away. I'm sure it's possible\nto find a case on overloaded rotational disks where you'd find a small\nregression, but I don't think it'd be particularly bad.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 29 Mar 2019 08:29:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Fri, 29 Mar 2019 at 15:29, Andres Freund <andres@anarazel.de> wrote:\n\n\n> On 2019-03-29 09:37:11 +0000, Simon Riggs wrote:\n>\n\n\n> > While trying to understand this, I see there is an even better way to\n> > optimize this. Since we are removing dead index tuples, we could alter\n> the\n> > killed index tuple interface so that it returns the xmax of the tuple\n> being\n> > marked as killed, rather than just a boolean to say it is dead.\n>\n> Wouldn't that quite possibly result in additional and unnecessary\n> conflicts? Right now the page level horizon is computed whenever the\n> page is actually reused, rather than when an item is marked as\n> deleted. As it stands right now, the computed horizons are commonly very\n> \"old\", because of that delay, leading to lower rates of conflicts.\n>\n\nI wasn't suggesting we change when the horizon is calculated, so no change\nthere.\n\nThe idea was to cache the data for later use, replacing the hint bit with a\nhint xid.\n\nThat won't change the rate of conflicts, up or down - but it does avoid I/O.\n\n\n> > Indexes can then mark the killed tuples with the xmax that killed them\n> > rather than just a hint bit. This is possible since the index tuples\n> > are dead and cannot be used to follow the htid to the heap, so the\n> > htid is redundant and so the block number of the tid could be\n> > overwritten with the xmax, zeroing the itemid. Each killed item we\n> > mark with its xmax means one less heap fetch we need to perform when\n> > we delete the page - it's possible we optimize that away completely by\n> > doing this.\n>\n> That's far from a trivial feature imo. It seems quite possible that we'd\n> end up with increased overhead, because the current logic can get away\n> with only doing hint bit style writes - but would that be true if we\n> started actually replacing the item pointers? Because I don't see any\n> guarantee they couldn't cross a page boundary etc? 
So I think we'd need\n> to do WAL logging during index searches, which seems prohibitively\n> expensive.\n>\n\nDon't see that.\n\nI was talking about reusing the first 4 bytes of an index tuple's\nItemPointerData,\nwhich is the first field of an index tuple. Index tuples are MAXALIGNed, so\nI can't see how that would ever cross a page boundary.\n\n\n> And I'm also doubtful it's worth it because:\n>\n> > Since this point of the code is clearly going to be a performance issue\n> it\n> > seems like something we should do now.\n>\n> I've tried quite a bit to find a workload where this matters, but after\n> avoiding redundant buffer accesses by sorting, and prefetching I was\n> unable to do so. What workload do you see where this would be really be\n> bad? Without the performance optimization I'd found a very minor\n> regression by trying to force the heap visits to happen in a pretty\n> random order, but after sorting that went away. I'm sure it's possible\n> to find a case on overloaded rotational disks where you'd find a small\n> regression, but I don't think it'd be particularly bad.\n>\n\nThe code can do literally hundreds of random I/Os in an 8192 blocksize.\nWhat happens with 16 or 32kB?\n\n\"Small regression\" ?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 29 Mar 2019 15:58:14 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn 2019-03-29 15:58:14 +0000, Simon Riggs wrote:\n> On Fri, 29 Mar 2019 at 15:29, Andres Freund <andres@anarazel.de> wrote:\n> > That's far from a trivial feature imo. It seems quite possible that we'd\n> > end up with increased overhead, because the current logic can get away\n> > with only doing hint bit style writes - but would that be true if we\n> > started actually replacing the item pointers? Because I don't see any\n> > guarantee they couldn't cross a page boundary etc? So I think we'd need\n> > to do WAL logging during index searches, which seems prohibitively\n> > expensive.\n> >\n> \n> Don't see that.\n> \n> I was talking about reusing the first 4 bytes of an index tuple's\n> ItemPointerData,\n> which is the first field of an index tuple. Index tuples are MAXALIGNed, so\n> I can't see how that would ever cross a page boundary.\n\nThey're 8 bytes, and MAXALIGN often is 4 bytes:\n\nstruct ItemPointerData {\n BlockIdData ip_blkid; /* 0 4 */\n OffsetNumber ip_posid; /* 4 2 */\n\n /* size: 6, cachelines: 1, members: 2 */\n /* last cacheline: 6 bytes */\n};\n\nstruct IndexTupleData {\n ItemPointerData t_tid; /* 0 6 */\n short unsigned int t_info; /* 6 2 */\n\n /* size: 8, cachelines: 1, members: 2 */\n /* last cacheline: 8 bytes */\n};\n\nSo as a whole they definitely can cross sector boundaries. You might be\nable to argue your way out of that by saying that the blkid is going to\nbe aligned, but that's not that trivial, as t_info isn't guaranteed\nthat.\n\nBut even so, you can't have unlogged changes that you then rely on. Even\nif there's no torn page issue. 
Currently BTP_HAS_GARBAGE and\nItemIdMarkDead() are treated as hints - if we want to guarantee all\nthese are accurate, I don't quite see how we'd get around WAL logging\nthose.\n\n\n> > And I'm also doubtful it's worth it because:\n> >\n> > > Since this point of the code is clearly going to be a performance issue\n> > it\n> > > seems like something we should do now.\n> >\n> > I've tried quite a bit to find a workload where this matters, but after\n> > avoiding redundant buffer accesses by sorting, and prefetching I was\n> > unable to do so. What workload do you see where this would be really be\n> > bad? Without the performance optimization I'd found a very minor\n> > regression by trying to force the heap visits to happen in a pretty\n> > random order, but after sorting that went away. I'm sure it's possible\n> > to find a case on overloaded rotational disks where you'd find a small\n> > regression, but I don't think it'd be particularly bad.\n\n> The code can do literally hundreds of random I/Os in an 8192 blocksize.\n> What happens with 16 or 32kB?\n\nIt's really hard to construct such cases after the sorting changes, but\nobviously not impossible. But to make it actually painful you need a\nworkload where the implied randomness of accesses isn't already a major\nbottleneck - and that's hard.\n\nThis has been discussed publicly for a few months...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 29 Mar 2019 09:12:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Fri, 29 Mar 2019 at 16:12, Andres Freund <andres@anarazel.de> wrote:\n\n\n> On 2019-03-29 15:58:14 +0000, Simon Riggs wrote:\n> > On Fri, 29 Mar 2019 at 15:29, Andres Freund <andres@anarazel.de> wrote:\n> > > That's far from a trivial feature imo. It seems quite possible that\n> we'd\n> > > end up with increased overhead, because the current logic can get away\n> > > with only doing hint bit style writes - but would that be true if we\n> > > started actually replacing the item pointers? Because I don't see any\n> > > guarantee they couldn't cross a page boundary etc? So I think we'd need\n> > > to do WAL logging during index searches, which seems prohibitively\n> > > expensive.\n> > >\n> >\n> > Don't see that.\n> >\n> > I was talking about reusing the first 4 bytes of an index tuple's\n> > ItemPointerData,\n> > which is the first field of an index tuple. Index tuples are MAXALIGNed,\n> so\n> > I can't see how that would ever cross a page boundary.\n>\n> They're 8 bytes, and MAXALIGN often is 4 bytes:\n>\n\nxids are 4 bytes, so we're good.\n\nIf MAXALIGN could ever be 2 bytes, we'd have a problem.\n\nSo as a whole they definitely can cross sector boundaries. You might be\n> able to argue your way out of that by saying that the blkid is going to\n> be aligned, but that's not that trivial, as t_info isn't guaranteed\n> that.\n>\n> But even so, you can't have unlogged changes that you then rely on. Even\n> if there's no torn page issue. Currently BTP_HAS_GARBAGE and\n> ItemIdMarkDead() are treated as hints - if we want to guarantee all\n> these are accurate, I don't quite see how we'd get around WAL logging\n> those.\n>\n\nYou can have unlogged changes that you rely on - that is exactly how hints\nwork.\n\nIf the hint is lost, we do the I/O. 
Worst case it would be the same as what\nyou have now.\n\nI'm talking about saving many I/Os - this doesn't need to provably avoid\nall I/Os to work, it's incremental benefit all the way.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 29 Mar 2019 16:20:54 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": " On Fri, Mar 29, 2019 at 9:12 AM Andres Freund <andres@anarazel.de> wrote:\n> But even so, you can't have unlogged changes that you then rely on. Even\n> if there's no torn page issue. Currently BTP_HAS_GARBAGE and\n> ItemIdMarkDead() are treated as hints - if we want to guarantee all\n> these are accurate, I don't quite see how we'd get around WAL logging\n> those.\n\nIt might be possible to WAL-log the _bt_check_unique() item killing.\nThat seems to be much more effective than the similar and better known\nkill_prior_tuple optimization in practice. I don't think that that\nshould be in scope for v12, though. I for one am satisfied with your\nexplanation.\n\n> > The code can do literally hundreds of random I/Os in an 8192 blocksize.\n> > What happens with 16 or 32kB?\n>\n> It's really hard to construct such cases after the sorting changes, but\n> obviously not impossible. But to make it actually painful you need a\n> workload where the implied randomness of accesses isn't already a major\n> bottleneck - and that's hard.\n\nThere is also the fact that in many cases you'll just have accessed\nthe same buffers from within _bt_check_unique() anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 29 Mar 2019 09:21:27 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On 2019-03-29 16:20:54 +0000, Simon Riggs wrote:\n> On Fri, 29 Mar 2019 at 16:12, Andres Freund <andres@anarazel.de> wrote:\n> \n> \n> > On 2019-03-29 15:58:14 +0000, Simon Riggs wrote:\n> > > On Fri, 29 Mar 2019 at 15:29, Andres Freund <andres@anarazel.de> wrote:\n> > > > That's far from a trivial feature imo. It seems quite possible that\n> > we'd\n> > > > end up with increased overhead, because the current logic can get away\n> > > > with only doing hint bit style writes - but would that be true if we\n> > > > started actually replacing the item pointers? Because I don't see any\n> > > > guarantee they couldn't cross a page boundary etc? So I think we'd need\n> > > > to do WAL logging during index searches, which seems prohibitively\n> > > > expensive.\n> > > >\n> > >\n> > > Don't see that.\n> > >\n> > > I was talking about reusing the first 4 bytes of an index tuple's\n> > > ItemPointerData,\n> > > which is the first field of an index tuple. Index tuples are MAXALIGNed,\n> > so\n> > > I can't see how that would ever cross a page boundary.\n> >\n> > They're 8 bytes, and MAXALIGN often is 4 bytes:\n> >\n> \n> xids are 4 bytes, so we're good.\n\nI literally went on to explain why that's not sufficient? You can't\n*just* replace the block number with an xid. You *also* need to set a\nflag denoting that it's an xid and dead now. Which can't fit in the same\n4 bytes. You either have to set it in the IndexTuple's t_info, or or in\nthe ItemIdData's lp_flags. And both can be on a different sectors. If\nthe flag persists, and the xid doesn't you're going to interpret a block\nnumber as an xid - not great; but even worse, if the xid survives, but\nthe flag doesn't, you're going to access the xid as a block - definitely\nnot ok, because you're going to return wrong results.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 29 Mar 2019 09:32:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 4:27 AM Andres Freund <andres@anarazel.de> wrote:\n> On March 28, 2019 11:24:46 AM EDT, Peter Geoghegan <pg@bowt.ie> wrote:\n> >On Thu, Mar 28, 2019 at 5:28 AM Andres Freund <andres@anarazel.de>\n> >wrote:\n> >> Hm, good catch. I don't like this fix very much (even if it were\n> >> commented), but I don't have a great idea right now.\n> >\n> >That was just a POC, to verify the problem. Not a proposal.\n>\n> I'm mildly inclined to push a commented version of this. And add a open items entry. Alternatively I'm thinking of just but taking the tablespace setting into account.\n\nI didn't understand that last sentence.\n\nHere's an attempt to write a suitable comment for the quick fix. And\nI suppose effective_io_concurrency is a reasonable default.\n\nIt's pretty hard to think of a good way to get your hands on the real\nvalue safely from here. I wondered if there was a way to narrow this\nto just GLOBALTABLESPACE_OID since that's where pg_tablespace lives,\nbut that doesn't work, we access other catalog too in that path.\n\nHmm, it seems a bit odd that 0 is supposed to mean \"disable issuance\nof asynchronous I/O requests\" according to config.sgml, but here 0\nwill prefetch 10 buffers.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Sat, 30 Mar 2019 23:32:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 6:33 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I didn't understand that last sentence.\n>\n> Here's an attempt to write a suitable comment for the quick fix. And\n> I suppose effective_io_concurrency is a reasonable default.\n>\n> It's pretty hard to think of a good way to get your hands on the real\n> value safely from here. I wondered if there was a way to narrow this\n> to just GLOBALTABLESPACE_OID since that's where pg_tablespace lives,\n> but that doesn't work, we access other catalog too in that path.\n>\n> Hmm, it seems a bit odd that 0 is supposed to mean \"disable issuance\n> of asynchronous I/O requests\" according to config.sgml, but here 0\n> will prefetch 10 buffers.\n\nMmmph. I'm starting to think we're not going to get a satisfactory\nresult here unless we make this controlled by something other than\neffective_io_concurrency. There's just no reason to suppose that the\nsame setting that we use to control prefetching for bitmap index scans\nis also going to be right for what's basically a bulk operation.\n\nInterestingly, Dilip Kumar ran into similar issues recently while\nworking on bulk processing for undo records for zheap. 
In that case,\nyou definitely want to prefetch the undo aggressively, because you're\nreading it front to back and backwards scans suck without prefetching.\nAnd you possibly also want to prefetch the data pages to which the\nundo that you are prefetching applies, but maybe not as aggressively\nbecause you're going to be doing a WAL write for each data page and\nflooding the system with too many reads could be counterproductive, at\nleast if pg_wal and the rest of $PGDATA are not on separate spindles.\nAnd even if they are, it's possible that as you suck in undo pages and\nthe zheap pages that they need to update, you might evict dirty pages,\ngenerating write activity against the data directory.\n\nOverall I'm inclined to think that we're making the same mistake here\nthat we did with work_mem, namely, assuming that you can control a\nbunch of different prefetching behaviors with a single GUC and things\nwill be OK. Let's just create a new GUC for this and default it to 10\nor something and go home.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 30 Mar 2019 11:44:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 8:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Overall I'm inclined to think that we're making the same mistake here\n> that we did with work_mem, namely, assuming that you can control a\n> bunch of different prefetching behaviors with a single GUC and things\n> will be OK. Let's just create a new GUC for this and default it to 10\n> or something and go home.\n\nI agree. If you invent a new GUC, then everybody notices, and it\nusually has to be justified quite rigorously. There is a strong\nincentive to use an existing GUC, if only because the problem that\nthis creates is harder to measure than the supposed problem that it\navoids. This can perversely work against the goal of making the system\neasy to use. Stretching the original definition of a GUC is bad.\n\nI take issue with the general assumption that not adding a GUC at\nleast makes things easier for users. In reality, it depends entirely\non the situation at hand.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 30 Mar 2019 12:19:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Sun, Mar 31, 2019 at 8:20 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Mar 30, 2019 at 8:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Overall I'm inclined to think that we're making the same mistake here\n> > that we did with work_mem, namely, assuming that you can control a\n> > bunch of different prefetching behaviors with a single GUC and things\n> > will be OK. Let's just create a new GUC for this and default it to 10\n> > or something and go home.\n>\n> I agree. If you invent a new GUC, then everybody notices, and it\n> usually has to be justified quite rigorously. There is a strong\n> incentive to use an existing GUC, if only because the problem that\n> this creates is harder to measure than the supposed problem that it\n> avoids. This can perversely work against the goal of making the system\n> easy to use. Stretching the original definition of a GUC is bad.\n>\n> I take issue with the general assumption that not adding a GUC at\n> least makes things easier for users. In reality, it depends entirely\n> on the situation at hand.\n\nI'm not sure I understand why this is any different from the bitmap\nheapscan case though, or in fact why we are adding 10 in this case.\nIn both cases we will soon be reading the referenced buffers, and it\nmakes sense to queue up prefetch requests for the blocks if they\naren't already in shared buffers. In both cases, the number of\nprefetch requests we want to send to the OS is somehow linked to the\namount of IO requests we think the OS can handle concurrently at once\n(since that's one factor determining how fast it drains them), but\nit's not necessarily the same as that number, AFAICS. It's useful to\nqueue some number of prefetch requests even if you have no IO\nconcurrency at all (a single old school spindle), just because the OS\nwill chew on that queue in the background while we're also doing\nstuff, which is probably what that \"+ 10\" is expressing. 
But that\nseems to apply to bitmap heapscan too, doesn't it?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sun, 31 Mar 2019 10:33:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn March 30, 2019 5:33:12 PM EDT, Thomas Munro <thomas.munro@gmail.com> wrote:\n>On Sun, Mar 31, 2019 at 8:20 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>> On Sat, Mar 30, 2019 at 8:44 AM Robert Haas <robertmhaas@gmail.com>\n>wrote:\n>> > Overall I'm inclined to think that we're making the same mistake\n>here\n>> > that we did with work_mem, namely, assuming that you can control a\n>> > bunch of different prefetching behaviors with a single GUC and\n>things\n>> > will be OK. Let's just create a new GUC for this and default it to\n>10\n>> > or something and go home.\n>>\n>> I agree. If you invent a new GUC, then everybody notices, and it\n>> usually has to be justified quite rigorously. There is a strong\n>> incentive to use an existing GUC, if only because the problem that\n>> this creates is harder to measure than the supposed problem that it\n>> avoids. This can perversely work against the goal of making the\n>system\n>> easy to use. Stretching the original definition of a GUC is bad.\n>>\n>> I take issue with the general assumption that not adding a GUC at\n>> least makes things easier for users. In reality, it depends entirely\n>> on the situation at hand.\n>\n>I'm not sure I understand why this is any different from the bitmap\n>heapscan case though, or in fact why we are adding 10 in this case.\n>In both cases we will soon be reading the referenced buffers, and it\n>makes sense to queue up prefetch requests for the blocks if they\n>aren't already in shared buffers. In both cases, the number of\n>prefetch requests we want to send to the OS is somehow linked to the\n>amount of IO requests we think the OS can handle concurrently at once\n>(since that's one factor determining how fast it drains them), but\n>it's not necessarily the same as that number, AFAICS. 
It's useful to\n>queue some number of prefetch requests even if you have no IO\n>concurrency at all (a single old school spindle), just because the OS\n>will chew on that queue in the background while we're also doing\n>stuff, which is probably what that \"+ 10\" is expressing. But that\n>seems to apply to bitmap heapscan too, doesn't it?\n\nThe index page deletion code does work on behalf of multiple backends; bitmap scans don't. If your system is busy it makes sense to limit resource usage of per-backend work, but not really of work on shared resources like page reuse. A bit like work_mem vs. maintenance_work_mem.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 30 Mar 2019 17:45:05 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Fri, 29 Mar 2019 at 16:32, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-03-29 16:20:54 +0000, Simon Riggs wrote:\n> > On Fri, 29 Mar 2019 at 16:12, Andres Freund <andres@anarazel.de> wrote:\n> >\n> >\n> > > On 2019-03-29 15:58:14 +0000, Simon Riggs wrote:\n> > > > On Fri, 29 Mar 2019 at 15:29, Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > > That's far from a trivial feature imo. It seems quite possible that\n> > > we'd\n> > > > > end up with increased overhead, because the current logic can get\n> away\n> > > > > with only doing hint bit style writes - but would that be true if\n> we\n> > > > > started actually replacing the item pointers? Because I don't see\n> any\n> > > > > guarantee they couldn't cross a page boundary etc? So I think we'd\n> need\n> > > > > to do WAL logging during index searches, which seems prohibitively\n> > > > > expensive.\n> > > > >\n> > > >\n> > > > Don't see that.\n> > > >\n> > > > I was talking about reusing the first 4 bytes of an index tuple's\n> > > > ItemPointerData,\n> > > > which is the first field of an index tuple. Index tuples are\n> MAXALIGNed,\n> > > so\n> > > > I can't see how that would ever cross a page boundary.\n> > >\n> > > They're 8 bytes, and MAXALIGN often is 4 bytes:\n> > >\n> >\n> > xids are 4 bytes, so we're good.\n>\n> I literally went on to explain why that's not sufficient? You can't\n> *just* replace the block number with an xid. You *also* need to set a\n> flag denoting that it's an xid and dead now. Which can't fit in the same\n> 4 bytes. You either have to set it in the IndexTuple's t_info, or or in\n> the ItemIdData's lp_flags. And both can be on a different sectors. 
If\n> the flag persists, and the xid doesn't you're going to interpret a block\n> number as an xid - not great; but even worse, if the xid survives, but\n> the flag doesn't, you're going to access the xid as a block - definitely\n> not ok, because you're going to return wrong results.\n>\n\nYes, I agree, I was thinking the same thing after my last comment, but was\nafk. The idea requires the atomic update of at least 4 bytes plus at least\n1 bit and so would require at least 8byte MAXALIGN to be useful. Your other\npoints suggesting that multiple things all need to be set accurately for\nthis to work aren't correct. The idea was that we would write a hint that\nwould avoid later I/O, so the overall idea is still viable.\n\nAnyway, thinking some more, I think the whole idea of generating\nlastRemovedXid is moot and there are better ways in the future of doing\nthis that would avoid a performance issue altogether. Clearly not PG12.\n\nThe main issue relates to the potential overhead of moving this to the\nmaster. I agree its the right thing to do, but we should have some way of\nchecking it is not a performance issue in practice.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 1 Apr 2019 06:59:19 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 11:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's an attempt to write a suitable comment for the quick fix. And\n> I suppose effective_io_concurrency is a reasonable default.\n\nPushed.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Apr 2019 09:37:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn 2019-03-30 11:44:36 -0400, Robert Haas wrote:\n> On Sat, Mar 30, 2019 at 6:33 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I didn't understand that last sentence.\n> >\n> > Here's an attempt to write a suitable comment for the quick fix. And\n> > I suppose effective_io_concurrency is a reasonable default.\n> >\n> > It's pretty hard to think of a good way to get your hands on the real\n> > value safely from here. I wondered if there was a way to narrow this\n> > to just GLOBALTABLESPACE_OID since that's where pg_tablespace lives,\n> > but that doesn't work, we access other catalog too in that path.\n> >\n> > Hmm, it seems a bit odd that 0 is supposed to mean \"disable issuance\n> > of asynchronous I/O requests\" according to config.sgml, but here 0\n> > will prefetch 10 buffers.\n> \n> Mmmph. I'm starting to think we're not going to get a satisfactory\n> result here unless we make this controlled by something other than\n> effective_io_concurrency. There's just no reason to suppose that the\n> same setting that we use to control prefetching for bitmap index scans\n> is also going to be right for what's basically a bulk operation.\n> \n> Interestingly, Dilip Kumar ran into similar issues recently while\n> working on bulk processing for undo records for zheap. 
In that case,\n> you definitely want to prefetch the undo aggressively, because you're\n> reading it front to back and backwards scans suck without prefetching.\n> And you possibly also want to prefetch the data pages to which the\n> undo that you are prefetching applies, but maybe not as aggressively\n> because you're going to be doing a WAL write for each data page and\n> flooding the system with too many reads could be counterproductive, at\n> least if pg_wal and the rest of $PGDATA are not on separate spindles.\n> And even if they are, it's possible that as you suck in undo pages and\n> the zheap pages that they need to update, you might evict dirty pages,\n> generating write activity against the data directory.\n\nI'm not yet convinced it's necessary to create a new GUC, but also not\nstrongly opposed. I've created an open items issue for it, so we don't\nforget.\n\n\n> Overall I'm inclined to think that we're making the same mistake here\n> that we did with work_mem, namely, assuming that you can control a\n> bunch of different prefetching behaviors with a single GUC and things\n> will be OK. Let's just create a new GUC for this and default it to 10\n> or something and go home.\n\nI agree that we needed to split work_mem, but a) that was far less clear\nfor many years b) there was no logic to use more work_mem in\nmaintenance-y cases...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 1 Apr 2019 18:26:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn 2019-04-01 18:26:59 -0700, Andres Freund wrote:\n> I'm not yet convinced it's necessary to create a new GUC, but also not\n> strongly opposed. I've created an open items issue for it, so we don't\n> forget.\n\nMy current inclination is to not do anything for v12. Robert, do you\ndisagree?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 May 2019 09:15:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Wed, May 1, 2019 at 12:15 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-04-01 18:26:59 -0700, Andres Freund wrote:\n> > I'm not yet convinced it's necessary to create a new GUC, but also not\n> > strongly opposed. I've created an open items issue for it, so we don't\n> > forget.\n>\n> My current inclination is to not do anything for v12. Robert, do you\n> disagree?\n\nNot strongly enough to argue about it very hard. The current behavior\nis a little weird, but it's a long way from being the weirdest thing\nwe ship, and it appears that we have no tangible evidence that it\ncauses a problem in practice.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 May 2019 12:34:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, May 1, 2019 at 12:15 PM Andres Freund <andres@anarazel.de> wrote:\n>> My current inclination is to not do anything for v12. Robert, do you\n>> disagree?\n\n> Not strongly enough to argue about it very hard. The current behavior\n> is a little weird, but it's a long way from being the weirdest thing\n> we ship, and it appears that we have no tangible evidence that it\n> causes a problem in practice.\n\nI think there's nothing that fails to suck about a hardwired \"+ 10\".\n\nWe should either remove that and use effective_io_concurrency as-is,\nor decide that it's worth having a separate GUC for bulk operations.\nAt this stage of the cycle I'd incline to the former, but if somebody\nis excited enough to prepare a patch for a new GUC, I wouldn't push\nback on that solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 May 2019 12:50:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Wed, May 1, 2019 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Not strongly enough to argue about it very hard. The current behavior\n> > is a little weird, but it's a long way from being the weirdest thing\n> > we ship, and it appears that we have no tangible evidence that it\n> > causes a problem in practice.\n>\n> I think there's nothing that fails to suck about a hardwired \"+ 10\".\n\nIt avoids a performance regression without adding another GUC.\n\nThat may not be enough reason to keep it like that, but it is one\nthing that does fail to suck.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 May 2019 13:10:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Thu, May 2, 2019 at 5:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, May 1, 2019 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Not strongly enough to argue about it very hard. The current behavior\n> > > is a little weird, but it's a long way from being the weirdest thing\n> > > we ship, and it appears that we have no tangible evidence that it\n> > > causes a problem in practice.\n> >\n> > I think there's nothing that fails to suck about a hardwired \"+ 10\".\n>\n> It avoids a performance regression without adding another GUC.\n>\n> That may not be enough reason to keep it like that, but it is one\n> thing that does fail to suck.\n\nThis is listed as an open item to resolve for 12. IIUC the options on\nthe table are:\n\n1. Do nothing, and ship with effective_io_concurrency + 10.\n2. Just use effective_io_concurrency without the hardwired boost.\n3. Switch to a new GUC maintenance_io_concurrency (or some better name).\n\nThe rationale for using a different number is that this backend is\nworking on behalf of multiple sessions, so you might want to give it\nsome more juice, much like maintenance_work_mem.\n\nI vote for option 3. I have no clue how to set it, but at least users\nhave a fighting chance of experimenting and figuring it out that way.\nI volunteer to write the patch if we get a consensus.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2019 12:01:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 12:01:07 +1200, Thomas Munro wrote:\n> On Thu, May 2, 2019 at 5:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, May 1, 2019 at 12:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > Not strongly enough to argue about it very hard. The current behavior\n> > > > is a little weird, but it's a long way from being the weirdest thing\n> > > > we ship, and it appears that we have no tangible evidence that it\n> > > > causes a problem in practice.\n> > >\n> > > I think there's nothing that fails to suck about a hardwired \"+ 10\".\n> >\n> > It avoids a performance regression without adding another GUC.\n> >\n> > That may not be enough reason to keep it like that, but it is one\n> > thing that does fail to suck.\n> \n> This is listed as an open item to resolve for 12. IIUC the options on\n> the table are:\n> \n> 1. Do nothing, and ship with effective_io_concurrency + 10.\n> 2. Just use effective_io_concurrency without the hardwired boost.\n> 3. Switch to a new GUC maintenance_io_concurrency (or some better name).\n> \n> The rationale for using a different number is that this backend is\n> working on behalf of multiple sessions, so you might want to give it\n> some more juice, much like maintenance_work_mem.\n> \n> I vote for option 3. I have no clue how to set it, but at least users\n> have a fighting chance of experimenting and figuring it out that way.\n> I volunteer to write the patch if we get a consensus.\n\nI'd personally, unsurprisingly perhaps, go with 1 for v12. I think 3 is\nalso a good option - it's easy to imagine later using it for\nVACUUM, ANALYZE and the like. I think 2 is a bad idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 17:11:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-15 12:01:07 +1200, Thomas Munro wrote:\n>> This is listed as an open item to resolve for 12. IIUC the options on\n>> the table are:\n>> \n>> 1. Do nothing, and ship with effective_io_concurrency + 10.\n>> 2. Just use effective_io_concurrency without the hardwired boost.\n>> 3. Switch to a new GUC maintenance_io_concurrency (or some better name).\n>> \n>> I vote for option 3. I have no clue how to set it, but at least users\n>> have a fighting chance of experimenting and figuring it out that way.\n>> I volunteer to write the patch if we get a consensus.\n\n> I'd personally, unsurprisingly perhaps, go with 1 for v12. I think 3 is\n> also a good option - it's easy to imagine to later use it for for\n> VACUUM, ANALYZE and the like. I think 2 is a bad idea.\n\nFWIW, I also agree with settling for #1 at this point. A new GUC would\nmake more sense if we have multiple use-cases for it, which we probably\nwill at some point, but not today. I'm concerned that if we invent a\nGUC now, we might find out that it's not really usable for other cases\nin future (e.g., default value is no good for other cases). It's the\nold story that inventing an API with only one use-case in mind leads\nto a bad API.\n\nSo yeah, let's leave this be for now, ugly as it is. Improving it\ncan be future work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 11:53:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
},
{
"msg_contents": "On Thu, May 16, 2019 at 3:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-15 12:01:07 +1200, Thomas Munro wrote:\n> >> This is listed as an open item to resolve for 12. IIUC the options on\n> >> the table are:\n> >>\n> >> 1. Do nothing, and ship with effective_io_concurrency + 10.\n> >> 2. Just use effective_io_concurrency without the hardwired boost.\n> >> 3. Switch to a new GUC maintenance_io_concurrency (or some better name).\n> >>\n> >> I vote for option 3. I have no clue how to set it, but at least users\n> >> have a fighting chance of experimenting and figuring it out that way.\n> >> I volunteer to write the patch if we get a consensus.\n>\n> > I'd personally, unsurprisingly perhaps, go with 1 for v12. I think 3 is\n> > also a good option - it's easy to imagine to later use it for for\n> > VACUUM, ANALYZE and the like. I think 2 is a bad idea.\n>\n> FWIW, I also agree with settling for #1 at this point. A new GUC would\n> make more sense if we have multiple use-cases for it, which we probably\n> will at some point, but not today. I'm concerned that if we invent a\n> GUC now, we might find out that it's not really usable for other cases\n> in future (e.g., default value is no good for other cases). It's the\n> old story that inventing an API with only one use-case in mind leads\n> to a bad API.\n>\n> So yeah, let's leave this be for now, ugly as it is. Improving it\n> can be future work.\n\nCool, I moved it to the resolved section.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2019 09:44:53 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Compute XID horizon for page level index vacuum on\n primary."
}
] |
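The prefetch-depth choices enumerated in the thread above (ship with the hardwired `+ 10`, use `effective_io_concurrency` as-is, or introduce a separate GUC) can be sketched as a small model. This is an illustrative Python sketch, not PostgreSQL source; the default values are assumptions for the example (a `maintenance_io_concurrency` GUC with default 10 is what option 3 later became):

```python
# Illustrative model of the three options discussed for choosing the
# prefetch depth used while computing the XID horizon (not PostgreSQL code).
def prefetch_target(option, effective_io_concurrency=1,
                    maintenance_io_concurrency=10):
    if option == 1:
        # Option 1 (what shipped in v12): hardwired "+ 10" boost, so even
        # effective_io_concurrency = 0 still queues 10 prefetch requests.
        return effective_io_concurrency + 10
    if option == 2:
        # Option 2: reuse effective_io_concurrency without any boost.
        return effective_io_concurrency
    if option == 3:
        # Option 3: a dedicated GUC for bulk/maintenance work, analogous
        # to maintenance_work_mem vs work_mem.
        return maintenance_io_concurrency
    raise ValueError(option)

print(prefetch_target(1, effective_io_concurrency=0))  # 10
```

The oddity Thomas points out falls straight out of option 1: a setting of 0 is documented to disable prefetching, yet the hardwired boost still prefetches 10 buffers.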
[
{
"msg_contents": "Hello hackers, \nThis email is regarding the Postgres pg_stat_statements extension. \nI noticed that enabling pg_stat_statements can affect performance. I thought that changing the pg_stat_statements.track parameter to 'none' could reduce this overhead without requiring a restart to remove it from shared_preload_libraries. Changing this config did not improve performance as I expected. Looking over the code, I noticed that pg_stat_statements is not checking if it is enabled before executing the post_parse_analyze_hook function. Other hooks that require access to the pg_stat_statements query hash table (through the pgss_store function) check for pgss_enabled. \nWould it make sense to check for pgss_enabled in the post_parse_analyze_hook function?\n \n**Patching**\nMaking this change drastically improved performance while pg_stat_statements.track was set to NONE. This change allows me to more effectively enable/disable pg_stat_statements without requiring a restart. \nExample patch:\n@@ -783,8 +783,8 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query)\n /* Assert we didn't do this already */\n Assert(query->queryId == 0);\n \n- /* Safety check... */\n- if (!pgss || !pgss_hash)\n+ /* Safety check...and ensure that pgss is enabled before we do any work */\n+ if (!pgss || !pgss_hash || !pgss_enabled())\n return;\n\n**Simple Mini Benchmark**\nI ran a simple test on my local machine with this spec: 16 core/32 GB memory/Windows Server 2016.\nThe simple query I used was 'select 1'. I called pg_stat_statements_reset() before each simple query to clear the pg_stat_statements query hash. The majority of the latency happens the first time a query is run. \nMedian runtime of 100 simple queries in milliseconds: \n\t\tPGSS loaded (ms)\tPGSS loaded + this patch (ms)\ntrack = top 0.53\t\t\t0.55\ntrack = none 0.41\t\t\t0.20\n\nPGSS not loaded: 0.18ms\n\n--\nRaymond Martin\nramarti@microsoft.com\nAzure Database for PostgreSQL\n\n\n\n",
"msg_date": "Wed, 27 Mar 2019 00:33:49 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "minimizing pg_stat_statements performance overhead"
},
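The guard Raymond proposes can be modelled in miniature. The sketch below is a Python transliteration for illustration only (the real code is C inside the extension); the `PGSS_TRACK_*` constants, `nested_level`, and `pgss_store` stand-ins are assumptions mirroring names from pg_stat_statements.c, and the point is simply that the parse-analyze hook bails out before any jumbling work when tracking is off:

```python
# Miniature Python model (assumed to mirror pg_stat_statements' C logic)
# of the proposed early exit in the post_parse_analyze hook.
PGSS_TRACK_NONE, PGSS_TRACK_TOP, PGSS_TRACK_ALL = range(3)

class Pgss:
    def __init__(self, track=PGSS_TRACK_TOP):
        self.track = track
        self.nested_level = 0   # > 0 while inside a nested statement
        self.hash_ready = True  # stands in for the pgss/pgss_hash pointers
        self.stored = 0         # counts how many times pgss_store "ran"

    def enabled(self):
        # Same shape as the pgss_enabled() check: ALL always tracks,
        # TOP tracks only top-level (non-nested) statements.
        return (self.track == PGSS_TRACK_ALL or
                (self.track == PGSS_TRACK_TOP and self.nested_level == 0))

    def post_parse_analyze(self, query):
        # The patch: also test enabled() here, not just the pointers.
        if not self.hash_ready or not self.enabled():
            return              # skip jumbling/normalization entirely
        self.stored += 1        # stands in for pgss_store(...)

pgss = Pgss(track=PGSS_TRACK_NONE)
pgss.post_parse_analyze("select 1")
print(pgss.stored)  # 0: no work done while track = none
```

With the extra condition, setting `pg_stat_statements.track = none` skips the hook's work without a restart, which is exactly the behavior the benchmark numbers above demonstrate.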
{
"msg_contents": "Hello Raymond,\n\n> Would it make sense to check for pgss_enabled in the post_parse_analyze_hook function?\n\nProbably.\n\n> -\t/* Safety check... */\n> -\tif (!pgss || !pgss_hash)\n> +\t/* Safety check...and ensure that pgss is enabled before we do any work */\n> +\tif (!pgss || !pgss_hash || !pgss_enabled())\n> \t\treturn;\n>\n> **Simple Mini Benchmark**\n> I ran a simple test on my local machine with this spec: 16 core/32 GB memory/Windows Server 2016.\n> The simple query I used was 'select 1'.\n\n> I called pg_stat_statements_reset() before each simple query to clear \n> the pg_stat_statements query hash.\n\nThis sentence seems to suggest that reset is called before each 'select \n1'? I assume it is before each test run.\n\n> The majority of the latency happens the first time a query is run.\n\n> Median runtime of 100 simple queries in milliseconds:\n> \t\tPGSS loaded (ms)\tPGSS loaded + this patch (ms)\n> track = top\t0.53\t\t\t0.55\n> track = none\t0.41\t\t\t0.20\n>\n> PGSS not loaded: 0.18ms\n\nThis means 0.0018 ms latency per transaction, which seems rather fast; on \nmy laptop I have typically 0.0XX ms...\n\nI could not reproduce these results on my ubuntu laptop. Could you be more \nprecise about the test? Did you use pgbench? Did it run in parallel? What \noptions were used? What is the test script?\n\nI tried with \"pgbench\" on one thread on a local socket directory \nconnection on a 11.2 server:\n\n sh> vi one.sql # SELECT 1;\n sh> pgbench -n -T 100 -P 1 -M prepared -f one.sql\n\nAnd I had the following latencies:\n\n pgss not loaded: 0.026 ms\n pgss top: 0.026/0.027 ms\n pgss none: 0.027 ms\n\nThe effect is minimal. 
More precise per-second analysis suggests a few \npercent.\n\nOk, maybe my habit of -M prepared would hide some of the processing cost, \nso:\n\n sh> pgbench -n -T 100 -P 1 -f one.sql\n\n pgss top: 0.035 ms\n pgss none: 0.035 ms\n pgss dropped but loaded: 0.035 ms\n pgss not loaded: 0.032 ms\n\nThere I have an impact of 10% in these ideal testing conditions wrt \nlatency where the DB does basically nothing, which would not warrant \ndisabling pg_stat_statements given the great service this extension \nbrings to performance analysis.\n\nNote that this does not mean that the patch should not be applied, it \nlooks like an oversight, but really I do not have the performance \ndegradation you are suggesting.\n\n-- \nFabien.",
"msg_date": "Wed, 27 Mar 2019 07:20:04 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi,\nthe part that hurts in terms of performance is:\n\n\tif (jstate.clocations_count > 0)\n\t\tpgss_store(pstate->p_sourcetext,\n\t\t\t query->queryId,\n\t\t\t query->stmt_location,\n\t\t\t query->stmt_len,\n\t\t\t 0,\n\t\t\t 0,\n\t\t\t 0,\n\t\t\t NULL,\n\t\t\t &jstate);\n\nthat writes the query text to disk when it has at least one parameter ...\nComputing the QueryId should stay very cheap and can be very useful when\nused in conjunction with\nhttps://www.postgresql-archive.org/Feature-improvement-can-we-add-queryId-for-pg-catalog-pg-stat-activity-view-td6077275.html#a6077602\nfor wait events sampling.\n\nI would propose to fix this by\n\tif (jstate.clocations_count > 0 && pgss_enabled())\n\t\tpgss_store(pstate->p_sourcetext,\n ...\n\nRegards\nPAscal\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 27 Mar 2019 00:47:20 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi Pascal,\nThanks for your feedback! I like your ideas. \n\n>the part that hurts in terms or performances is:\n>\n>\tif (jstate.clocations_count > 0)\n>\t\tpgss_store(pstate->p_sourcetext,\n\nI agree that this is the typically the most expensive part, but query normalization and hashing can also start becoming expensive with larger queries. \n\n>that writes the query text to disk, when it has at less one parameter ...\n>Computing the QueryId should stay very small and can be very usefull when used in conjonction with\n>https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.postgresql-archive.org%2FFeature-improvement-can-we-add-queryId-for-pg-catalog-pg-stat-activity-view-td6077275.html%23a6077602&data=02%7C01%7Cramarti%40microsoft.com%7Cfaa866abf1d5478e9a2208d6b2887cc4%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636892696615564063&sdata=pWbyVleHceAHoNTMzb5oHGzois5yDaMpEHKmappTIwk%3D&reserved=0\n>for wait events sampling.\n\nI also agree that the query id can be very useful! In cases where query id is required, pg_stat_statements can be enabled. \nMy intent of turning tracking off is to minimize the performance impact of pgss as much as possible and the thread above states: \"PGSS jumble query logic is not bullet proof and may take time then impact the perf\".\n\nI believe your fix is a great step forward, but checking enabled at the very beginning would lead to better performance. This also follows the paradigm set by the rest of the pg_stat_statements codebase.\nIn the future, if we want only the query ID to be calculated maybe we can add another option for that?\n\n--\nRaymond Martin\nramarti@microsoft.com\nAzure Database for PostgreSQL\n\n\n\n\n",
"msg_date": "Wed, 27 Mar 2019 18:24:41 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Your fix is probably the best one.\nMaybe this could be considered a bug and backported to previous versions\n...\n\nRegards\nPAscal\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 27 Mar 2019 13:17:02 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi Fabien, \nThank you for your time. Apologies for not being more specific about my testing methodology.\n\n> > PGSS not loaded: 0.18ms\n>\n> This means 0.0018 ms latency per transaction, which seems rather fast, on my laptop I have typically 0.0XX ms...\n\nThis actually means 0.18 milliseconds. I agree that this is a bit high, so I instead created an Ubuntu VM to get results that would align with yours. \n\n> I could not reproduce these results on my ubuntu laptop. Could you be more precise about the test? Did you use pgbench? Did it run in parallel? What options were used? What is the test script?\n\nI did not use pgbench. It is important to call pg_stat_statements_reset before every query. This simulates a user that is performing distinct and non-repeated queries on their database. If you use prepared statements or the same set of queries each time, you would remove the contention on the pgss query text file. \nI re-tested this on an Ubuntu machine with 4 cores and 14 GB RAM. I did not run it in parallel. I used a python script that implements the following logic: \n\t- select pg_stat_statements_reset() -- this is important because we are making pgss treat the 'select 1' like a new query which it has not cached into pgss_hash. \n\t- time 'select 1'\nRepeat 100 times for each configuration. 
\n\nHere are my Ubuntu results (in milliseconds):\n pgss unloaded\n Mean: 0.076\n Standard Deviation: 0.038\n\n pgss.track=none\n Mean: 0.099\n Standard Deviation: 0.040\n \n pgss.track=top \n Mean: 0.098\n Standard Deviation: 0.107\n\n pgss.track=none + patch\n Mean: 0.078\n Standard Deviation: 0.042\n\nThe results are less noticeable, but I still see about a 20% performance improvement here.\n\n> There I have an impact of 10% in these ideal testing conditions wrt latency where the DB does basically nothing, thus which would not warrant to disable pg_stat_statements given the great service this extension brings to performance analysis.\n\nI agree that pg_stat_statements should not be disabled based on these performance results. \n\n> Note that this does not mean that the patch should not be applied, it looks like an oversight, but really I do not have the performance degradation you are suggesting.\n\nI appreciate your input and I want to come up with a canonical test that makes this contention more obvious. \nUnfortunately, it is difficult because the criteria that cause this slowdown (large query sizes and distinct non-repeated queries) are difficult to reproduce with pgbench. I would be open to any suggestions here. \n\nSo even though the performance gains in this specific scenario are not as great, do you still think it would make sense to submit a patch like this? \n\n--\nRaymond Martin\nramarti@microsoft.com\nAzure Database for PostgreSQL\n\n\n",
"msg_date": "Wed, 27 Mar 2019 22:15:52 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
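Raymond's methodology (reset pg_stat_statements, run a never-before-seen statement, record its latency, repeat, then summarize) reduces to collecting per-query timings and computing summary statistics. A minimal sketch of that harness follows; the database calls are stubbed out as no-op lambdas since the exact driver used isn't stated in the thread, so this only illustrates the timing and aggregation shape:

```python
import statistics
import time

def timed_ms(fn):
    # Wall-clock one call and return the elapsed time in milliseconds.
    t0 = time.perf_counter()
    fn()
    return (time.perf_counter() - t0) * 1000.0

def run_benchmark(execute, reset, runs=100):
    # reset() stands in for "select pg_stat_statements_reset()" and
    # execute() for "select 1"; resetting first makes each run hit the
    # uncached path in pgss_hash, as described above.
    samples = [ (reset(), timed_ms(execute))[1] for _ in range(runs) ]
    return statistics.mean(samples), statistics.stdev(samples)

# Stubbed "database" so the sketch is self-contained and runnable.
mean_ms, stdev_ms = run_benchmark(execute=lambda: None, reset=lambda: None)
print(mean_ms >= 0.0 and stdev_ms >= 0.0)  # True
```

In a real run the two lambdas would issue the SQL through whatever driver is at hand; the important design point is that the reset happens outside the timed region, so only the query under test contributes to each sample.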
{
"msg_contents": "my test case:\n\ndrop table a;\ncreate table a ();\n\nDO\n$$\nDECLARE\ni int;\nBEGIN\nfor i in 1..20\nloop\nexecute 'alter table a add column a'||i::text||' int';\nend loop;\nEND\n$$;\n\nselect pg_stat_statements_reset();\nset pg_stat_statements.track='none';\n\nDO\n$$\nDECLARE\ni int;\nj int;\nBEGIN\nfor j in 1..20\nloop\nfor i in 1..20\nloop\nexecute 'select a'||i::text||',a'||j::text||' from a where 1=2';\nend loop;\nend loop;\nEND\n$$;\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 27 Mar 2019 15:24:42 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "\nHello Raymond,\n\n>> Note that this does not mean that the patch should not be applied, it \n>> looks like an oversight, but really I do not have the performance \n>> degradation you are suggesting.\n>\n> I appreciate your input and I want to come up with a canonical test that \n> makes this contention more obvious. Unfortunately, it is difficult \n> because the criteria that causes this slow down (large query sizes and \n> distinct non-repeated queries) are difficult to reproduce with pgbench. \n> I would be open to any suggestions here.\n>\n> So even though the performance gains in this specific scenario are not \n> as great, do you still think it would make sense to submit a patch like \n> this?\n\nSure, it definitely makes sense to reduce the overhead when the extension \nis disabled. I wanted to understand the source of performance issue, and \nyour explanations where not enough for reproducing it.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 28 Mar 2019 07:07:22 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi Fabien, \n\n> Sure, it definitely makes sense to reduce the overhead when the extension is disabled. I wanted to understand the source of performance issue, and your explanations where not enough for reproducing it.\nThanks again Fabien. I am attaching the patch to this email in the hope of getting it approved during the next commit fest. \nI will continue trying to find a simple performance test to exemplify the performance degradation that I have seen with more complex workloads. \n\n--\nRaymond Martin\nramarti@Microsoft.com\nAzure Database for PostgreSQL",
"msg_date": "Mon, 1 Apr 2019 18:28:36 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi,\n\nit seems that your patch is not readable.\nIf you want it to be included in a commitfest, you should add it by yourself\nin https://commitfest.postgresql.org/\n\nNot sure that there is any room left in pg12 commitfest.\n\nRegard\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 1 Apr 2019 13:04:07 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Re: Raymond Martin 2019-04-01 <BN8PR21MB121708579A3782866DF1F745B1550@BN8PR21MB1217.namprd21.prod.outlook.com>\n> Thanks again Fabien. I am attaching the patch to this email in the hope of getting it approved during the next commit fest. \n\nRaymond,\n\nyou sent the patch as UTF-16, could you re-send it as plain ascii?\n\nChristoph\n\n\n",
"msg_date": "Tue, 2 Apr 2019 11:37:29 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "On Tue, Apr 2, 2019 at 5:37 AM Christoph Berg <myon@debian.org> wrote:\n> Re: Raymond Martin 2019-04-01 <BN8PR21MB121708579A3782866DF1F745B1550@BN8PR21MB1217.namprd21.prod.outlook.com>\n> > Thanks again Fabien. I am attaching the patch to this email in the hope of getting it approved during the next commit fest.\n>\n> you sent the patch as UTF-16, could you re-send it as plain ascii?\n\nOne thing that needs some thought here is what happens if the value of\npgss_enabled() changes. For example we don't want a situation where\nif the value changes from off to on between one stage of processing\nand another, the server crashes.\n\nI don't know whether that's a risk here or not; it's just something to\nthink about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 3 Apr 2019 10:26:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Robert Haas wrote\n> On Tue, Apr 2, 2019 at 5:37 AM Christoph Berg <\n\n> myon@\n\n> > wrote:\n>> Re: Raymond Martin 2019-04-01 <\n\n> BN8PR21MB121708579A3782866DF1F745B1550@.outlook\n\n> >\n>> > Thanks again Fabien. I am attaching the patch to this email in the hope\n>> of getting it approved during the next commit fest.\n>>\n>> you sent the patch as UTF-16, could you re-send it as plain ascii?\n> \n> One thing that needs some thought here is what happens if the value of\n> pgss_enabled() changes. For example we don't want a situation where\n> if the value changes from off to on between one stage of processing\n> and another, the server crashes.\n> \n> I don't know whether that's a risk here or not; it's just something to\n> think about.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nHi, here is a simple test where I commented that line\nin pgss_post_parse_analyze\n to force return; (as if pgss_enabled() was disabled)\nbut kept pgss_enabled() every where else \n\n\t/* Safety check... */\n//\tif (!pgss || !pgss_hash || !pgss_enabled())\n\t\treturn;\n\nThis works without crash as you can see here after:\n\n\npostgres=# select pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\n\npostgres=# show pg_stat_statements.track;\n pg_stat_statements.track\n--------------------------\n all\n(1 row)\n\n\npostgres=# create table a(id int);\nCREATE TABLE\n\n\npostgres=# select * from a where id=1;\n id\n----\n(0 rows)\n\n\npostgres=# select queryid,query,calls from pg_stat_statements;\n queryid | query | calls\n---------------------+-------------------------------+-------\n 1033669194118974675 | show pg_stat_statements.track | 1\n 3022461129400094363 | create table a(id int) | 1\n(2 rows)\n\nregards\nPAscal\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 3 Apr 2019 13:22:24 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi Christoph, \n\n> you sent the patch as UTF-16, could you re-send it as plain ascii?\n\nApologies. I re-attached the plain ascii version. \n\n--\nRaymond Martin\nramarti@microsoft.com\nAzure Database for PostgreSQL",
"msg_date": "Wed, 3 Apr 2019 23:20:03 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "From: Robert Haas\n>> \n>> One thing that needs some thought here is what happens if the value of\n>> pgss_enabled() changes. For example we don't want a situation where \n>> if the value changes from off to on between one stage of processing \n>> and another, the server crashes.\n>> \n>> I don't know whether that's a risk here or not; it's just something to \n>> think about.\nThis is definitely an important consideration for this change. A hook could \nhave the implicit assumption that a query ID is always generated. \n\nFrom: PAscal\n> Hi, here is a simple test where I commented that line in pgss_post_parse_analyze\n> to force return; (as if pgss_enabled() was disabled) but kept pgss_enabled() every where else \n>\n>\t/* Safety check... */\n> //\tif (!pgss || !pgss_hash || !pgss_enabled())\n>\t\treturn;\n>\n> This works without crash as you can see here after:\n\nIn theory, the rest of the hooks look solid.\nAs mentioned above, I think the major concern would be a hook that depends \non a variable generated in pgss_post_parse_analyze. Two hooks \n(pgss_ExecutorStart, pgss_ExecutorEnd) depend on the query ID \ngenerated from pgss_post_parse_analyze. Fortunately, both of these \nfunctions already check for query ID before doing work.\n\nI really appreciate you putting this change into practice. \nGreat to see your results align with mine. \nThanks Pascal!!!\n\n--\nRaymond Martin\nramarti@microsoft.com\nAzure Database for PostgreSQL\n\n\n",
"msg_date": "Thu, 4 Apr 2019 00:19:28 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "CF entry created \nhttps://commitfest.postgresql.org/23/2092/\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 8 Apr 2019 13:20:37 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "Hi, \r\nApologies, but I had already created a commit here: https://commitfest.postgresql.org/23/2080/ . \r\nAny preference on which to keep?\r\n\r\nThanks, \r\nRaymond Martin \r\nramarti@microsoft.com",
"msg_date": "Mon, 15 Apr 2019 19:26:33 +0000",
"msg_from": "Raymond Martin <ramarti@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "\nHello Raymond,\n\n>> Sure, it definitely makes sense to reduce the overhead when the extension is disabled. I wanted to understand the source of performance issue, and your explanations where not enough for reproducing it.\n> Thanks again Fabien. I am attaching the patch to this email in the hope of getting it approved during the next commit fest.\n> I will continue trying to find a simple performance test to exemplify the performance degradation that I have seen with more complex workloads.\n\nPatch applies and compiles cleanly. Global and local make check ok.\n\nThe patch adds an early exit in one of the hook when pgss is not enabled \non a given query. This seems to be a long time oversight of some earlier \nadditions which only had some (small or large depending) performance \nimpact.\n\nAbout the comment \"...and\" -> \"... and\" (add a space)\n\nOtherwise all is well.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 15 Jul 2019 23:03:45 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "RE: minimizing pg_stat_statements performance overhead"
},
{
"msg_contents": "On Wed, 2019-04-03 at 23:20 +0000, Raymond Martin wrote:\n> Hi Christoph, \n> \n> > you sent the patch as UTF-16, could you re-send it as plain ascii?\n> \n> Apologies. I re-attached the plain ascii version. \n\nCommitted. Thanks!\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 19 Jul 2019 13:50:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: minimizing pg_stat_statements performance overhead"
}
] |
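The early exit this thread converges on — pgss_post_parse_analyze returning before any query-fingerprinting work when pg_stat_statements.track = 'none' — can be sketched outside the server. Below is a toy Python model of that guard only; the function names, the hash, and the normalization are illustrative stand-ins, not the extension's actual C code (real pg_stat_statements walks the parse tree and replaces constants with $1, $2, ... to compute the query id):

```python
import hashlib

def normalize(query: str) -> str:
    # Toy stand-in for query normalization; the real extension walks the
    # parse tree and substitutes placeholders for constants.
    return " ".join(query.lower().split())

def post_parse_analyze(query: str, track_enabled: bool):
    # Patched behavior: bail out before doing ANY fingerprinting work when
    # tracking is disabled. The pre-patch oversight was paying this cost
    # even with pg_stat_statements.track = 'none'.
    if not track_enabled:
        return None
    return hashlib.sha256(normalize(query).encode()).hexdigest()

# Disabled tracking skips the expensive path entirely.
assert post_parse_analyze("select a1,a2 from a where 1=2", False) is None

# Enabled tracking maps equivalent spellings to one id.
qid1 = post_parse_analyze("select a1, a2 from a where 1=2", True)
qid2 = post_parse_analyze("SELECT a1,  a2 FROM a WHERE 1=2", True)
assert qid1 == qid2
```

This also shows why legrand's many-distinct-queries test is the interesting workload: with tracking on, every distinct query pays the normalization cost, so skipping it when disabled is where the ~20% gain comes from.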
[
{
"msg_contents": "Hi,\n\nI'm trying to use the PostgreSQL roles system as the user base for a web\napplication. The common wisdom seems to be Don't Do This, because it\nrequires a connection per-user which doesn't scale. However, thinking it\nthrough, I'm wondering it there might be a workaround using \"sandbox\ntransactions\", a scheme where a connection pooler connects as a superuser,\nbut immediately runs a\n\nSET LOCAL ROLE 'joe_regular_user';\n\nThe problem with this of course is that the user could then just issue a\nRESET ROLE and go back to superuser.\n\nWhat would be the implications of adding a NO RESET clause to SET LOCAL\nROLE? If the user were to ever end the local transaction, the system would\nneed to kick them out of the connection, and they would need to reconnect\ninside another sandbox transaction. Could this work? How hard would it\nbe, and what are the security implications?\n\nThanks,\nEric\n\nHi,I'm trying to use the PostgreSQL roles system as the user base for a web application. The common wisdom seems to be Don't Do This, because it requires a connection per-user which doesn't scale. However, thinking it through, I'm wondering it there might be a workaround using \"sandbox transactions\", a scheme where a connection pooler connects as a superuser, but immediately runs aSET LOCAL ROLE 'joe_regular_user';The problem with this of course is that the user could then just issue a RESET ROLE and go back to superuser.What would be the implications of adding a NO RESET clause to SET LOCAL ROLE? If the user were to ever end the local transaction, the system would need to kick them out of the connection, and they would need to reconnect inside another sandbox transaction. Could this work? How hard would it be, and what are the security implications?Thanks,Eric",
"msg_date": "Wed, 27 Mar 2019 01:40:10 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "SET LOCAL ROLE NO RESET -- sandbox transactions"
},
{
"msg_contents": "On 3/27/19 2:40 AM, Eric Hanson wrote:\n\n> What would be the implications of adding a NO RESET clause to SET LOCAL\n> ROLE?\n\nThere's a part of this that seems to be a special case of the\nGUC-protected-by-cookie idea discussed a bit in [1] and [2]\n(which is still an idea that I like).\n\nRegards,\n-Chap\n\n[1]\nhttps://www.postgresql.org/message-id/59127E4E.8090705%40anastigmatix.net\n\n[2]\nhttps://www.postgresql.org/message-id/CA%2BTgmoYOz%2BZmOteahrduJCc8RT8GEgC6PNXLwRzJPObmHGaurg%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 27 Mar 2019 12:23:41 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: SET LOCAL ROLE NO RESET -- sandbox transactions"
},
{
"msg_contents": "These seem like much better ideas than mine. :-) Thanks.\n\nDid anything ever come of these ideas? Do you have a sense of the level of\ncommunity support around these ideas?\n\nThanks,\nEric\n\nOn Wed, Mar 27, 2019 at 11:23 AM Chapman Flack <chap@anastigmatix.net>\nwrote:\n\n> On 3/27/19 2:40 AM, Eric Hanson wrote:\n>\n> > What would be the implications of adding a NO RESET clause to SET LOCAL\n> > ROLE?\n>\n> There's a part of this that seems to be a special case of the\n> GUC-protected-by-cookie idea discussed a bit in [1] and [2]\n> (which is still an idea that I like).\n>\n> Regards,\n> -Chap\n>\n> [1]\n> https://www.postgresql.org/message-id/59127E4E.8090705%40anastigmatix.net\n>\n> [2]\n>\n> https://www.postgresql.org/message-id/CA%2BTgmoYOz%2BZmOteahrduJCc8RT8GEgC6PNXLwRzJPObmHGaurg%40mail.gmail.com\n>\n\nThese seem like much better ideas than mine. :-) Thanks.Did anything ever come of these ideas? Do you have a sense of the level of community support around these ideas?Thanks,EricOn Wed, Mar 27, 2019 at 11:23 AM Chapman Flack <chap@anastigmatix.net> wrote:On 3/27/19 2:40 AM, Eric Hanson wrote:\n\n> What would be the implications of adding a NO RESET clause to SET LOCAL\n> ROLE?\n\nThere's a part of this that seems to be a special case of the\nGUC-protected-by-cookie idea discussed a bit in [1] and [2]\n(which is still an idea that I like).\n\nRegards,\n-Chap\n\n[1]\nhttps://www.postgresql.org/message-id/59127E4E.8090705%40anastigmatix.net\n\n[2]\nhttps://www.postgresql.org/message-id/CA%2BTgmoYOz%2BZmOteahrduJCc8RT8GEgC6PNXLwRzJPObmHGaurg%40mail.gmail.com",
"msg_date": "Fri, 29 Mar 2019 03:45:04 -0500",
"msg_from": "Eric Hanson <eric@aquameta.com>",
"msg_from_op": true,
"msg_subject": "Re: SET LOCAL ROLE NO RESET -- sandbox transactions"
}
] |
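One shape the GUC-protected-by-cookie idea could take can be modeled in a few lines: the pooler that drops privileges keeps a random cookie to itself, and a reset succeeds only when that cookie is presented. Everything below is hypothetical — there is no such PostgreSQL syntax or API today; this is only a sketch of the protocol the thread is discussing:

```python
import secrets

class Session:
    """Toy model of a pooled connection with a cookie-protected role."""

    def __init__(self):
        self.role = "pooler_superuser"
        self._cookie = None

    def set_local_role(self, role):
        # The pooler drops privileges and keeps the cookie for itself;
        # the web application's end user never sees it.
        self._cookie = secrets.token_hex(16)
        self.role = role
        return self._cookie

    def reset_role(self, cookie=None):
        # Reset succeeds only with the matching cookie, so a user inside
        # the sandbox cannot climb back to the pooler's role.
        if self._cookie is not None and cookie != self._cookie:
            raise PermissionError("role is cookie-protected; RESET denied")
        self.role = "pooler_superuser"
        self._cookie = None

session = Session()
token = session.set_local_role("joe_regular_user")
try:
    session.reset_role()  # the sandboxed user's attempt
except PermissionError:
    pass
assert session.role == "joe_regular_user"
session.reset_role(token)  # the pooler, holding the cookie, may reset
assert session.role == "pooler_superuser"
```

The design point is that the secret never crosses the trust boundary: only the component that issued SET ROLE ever holds the cookie, which is what makes the sandbox safe without kicking the connection.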
[
{
"msg_contents": "Hello hackers,\n\npostgres=> select txid_status(txid_current() + 3);\nERROR: transaction ID 627 is in the future\npostgres=> select txid_status(txid_current() + 2);\nERROR: transaction ID 627 is in the future\npostgres=> select txid_status(txid_current() + 1);\n txid_status\n-------------\n in progress\n(1 row)\n\nIf you keep asking for txid_status(txid_current() + 1) in new\ntransactions, you eventually hit:\n\nERROR: could not access status of transaction 32768\nDETAIL: Could not read from file \"pg_xact/0000\" at offset 8192: No error: 0.\n\nI think the fix is:\n\n--- a/src/backend/utils/adt/txid.c\n+++ b/src/backend/utils/adt/txid.c\n@@ -129,7 +129,7 @@ TransactionIdInRecentPast(uint64 xid_with_epoch,\nTransactionId *extracted_xid)\n\n /* If the transaction ID is in the future, throw an error. */\n if (xid_epoch > now_epoch\n- || (xid_epoch == now_epoch && xid > now_epoch_last_xid))\n+ || (xid_epoch == now_epoch && xid >= now_epoch_last_xid))\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"transaction ID %s is in the future\",\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2019 19:55:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "txid_status() off-by-one error"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 7:55 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> If you keep asking for txid_status(txid_current() + 1) in new\n> transactions, you eventually hit:\n>\n> ERROR: could not access status of transaction 32768\n> DETAIL: Could not read from file \"pg_xact/0000\" at offset 8192: No error: 0.\n\n> - || (xid_epoch == now_epoch && xid > now_epoch_last_xid))\n> + || (xid_epoch == now_epoch && xid >= now_epoch_last_xid))\n\nPushed and back-patched, along with renaming of that variable, s/last/next/.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Mar 2019 21:49:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: txid_status() off-by-one error"
}
] |
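The rename in the committed fix (s/last/next/) is the key to the bug: the comparison value is the *next* xid to be assigned, so equality already means "in the future" — that xid's pg_xact (clog) page may not exist yet, which is exactly the "could not access status of transaction" failure shown above. A simplified Python model of the range check in TransactionIdInRecentPast, with epoch handling reduced to plain integers (the real C code packs epoch and xid into 64 bits and deals with 32-bit wraparound):

```python
def in_recent_past(xid_epoch, xid, now_epoch, now_epoch_next_xid):
    # Patched comparison: an xid EQUAL to the next-to-be-assigned xid is
    # already "in the future"; the old strict '>' let it slip through.
    if xid_epoch > now_epoch or (
        xid_epoch == now_epoch and xid >= now_epoch_next_xid
    ):
        return False  # caller raises "transaction ID ... is in the future"
    return True

# Suppose txid_current() just returned 626, so the next xid is 627.
assert in_recent_past(0, 626, 0, 627)        # our own xid is queryable
assert not in_recent_past(0, 627, 0, 627)    # txid_current() + 1: future
assert not in_recent_past(0, 629, 0, 627)    # txid_current() + 3: future
assert not in_recent_past(1, 5, 0, 627)      # any later epoch: future
```

With the old `>` comparison the second assertion would fail: xid 627 passed the check, txid_status() reported it "in progress", and once the probe crossed a clog page boundary it turned into the read error from pg_xact/0000.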
[
{
"msg_contents": "Hi hackers,\n\nAttached is sketch of small patch that fixes several edge cases with\nautovacuum. Long story short autovacuum never comes to append only tables,\nkilling large productions.\n\nFirst case, mine.\n\nhttps://www.postgresql.org/message-id/CAC8Q8tLBeAxR%2BBXWuKK%2BHP5m8tEVYn270CVrDvKXt%3D0PkJTY9g%40mail.gmail.com\n\nWe had a table we were appending and wanted Index Only Scan to work. For it\nto work, you need to call VACUUM manually, since VACUUM is the only way to\nmark pages all visible, and autovacuum never comes to append only tables.\nWe were clever to invent a workflow without dead tuples and it painfully\nbit us.\n\nSecond case, just read in the news.\nhttps://mailchimp.com/what-we-learned-from-the-recent-mandrill-outage/\n\nMandrill has 6TB append only table that autovacuum probably never vacuumed.\nThen anti-wraparound came and production went down. If autovacuum did its\njob before that last moment, it would probably be okay.\n\nIdea: look not on dead tuples, but on changes, just like ANALYZE does.\nIt's my first patch on Postgres, it's probably all wrong but I hope it\nhelps you get the idea.\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Wed, 27 Mar 2019 23:54:57 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Berserk Autovacuum (let's save next Mandrill)"
},
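The asymmetry the sketch targets sits in the trigger formulas: autoanalyze compares *changes* (inserts + updates + deletes, via n_mod_since_analyze) against its threshold, while autovacuum compares only *dead tuples*, so a pure-append table trips the first and never the second. A rough Python model under the default GUC values (autovacuum_vacuum_scale_factor 0.2 / threshold 50; analyze 0.1 / 50); the final assertion labels the *proposed* rule, an assumption of what the attached sketch intends, not current server behavior:

```python
def autovacuum_needed(dead_tuples, reltuples,
                      scale_factor=0.2, threshold=50):
    # Stock trigger: only dead tuples count, so an append-only table
    # (dead_tuples == 0) never qualifies.
    return dead_tuples > threshold + scale_factor * reltuples

def autoanalyze_needed(changed_tuples, reltuples,
                       scale_factor=0.1, threshold=50):
    # Analyze already counts inserts, updates, and deletes.
    return changed_tuples > threshold + scale_factor * reltuples

# A billion-row append-only table: analyze fires, vacuum never does...
reltuples, inserted, dead = 1_000_000_000, 1_000_000_000, 0
assert autoanalyze_needed(inserted, reltuples)
assert not autovacuum_needed(dead, reltuples)

# ...while the proposed rule would feed the changes counter into the
# vacuum test too, so the same table eventually gets vacuumed/frozen.
assert autovacuum_needed(inserted, reltuples)
```

Until then, the only things that ever visit such a table are a manual VACUUM or the anti-wraparound vacuum at autovacuum_freeze_max_age, which is the failure mode described above.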
{
"msg_contents": "On 2019-Mar-27, Darafei \"Komяpa\" Praliaskouski wrote:\n\n> Attached is sketch of small patch that fixes several edge cases with\n> autovacuum. Long story short autovacuum never comes to append only tables,\n> killing large productions.\n\nYeah, autovac is not coping with these scenarios (and probably others).\nHowever, rather than taking your patch's idea verbatim, I think we\nshould have autovacuum use separate actions for those two (wildly\ndifferent) scenarios. For example:\n\n* certain tables would have some sort of partial scan that sets the\n visibility map. There's no reason to invoke the whole vacuuming\n machinery. I don't think this is limited to append-only tables, but\n rather those are just the ones that are affected the most.\n\n* tables nearing wraparound danger should use the (yet to be committed)\n option to skip index cleaning, which makes the cleanup action faster.\n Again, no need for complete vacuuming.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 18:31:58 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nчт, 28 мар. 2019 г. в 00:32, Alvaro Herrera <alvherre@2ndquadrant.com>:\n\n> On 2019-Mar-27, Darafei \"Komяpa\" Praliaskouski wrote:\n>\n> > Attached is sketch of small patch that fixes several edge cases with\n> > autovacuum. Long story short autovacuum never comes to append only\n> tables,\n> > killing large productions.\n>\n> Yeah, autovac is not coping with these scenarios (and probably others).\n> However, rather than taking your patch's idea verbatim, I think we\n> should have autovacuum use separate actions for those two (wildly\n> different) scenarios. For example:\n>\n> * certain tables would have some sort of partial scan that sets the\n> visibility map. There's no reason to invoke the whole vacuuming\n> machinery. I don't think this is limited to append-only tables, but\n> rather those are just the ones that are affected the most.\n>\n\nWhat other machinery runs on VACUUM invocation that is not wanted there?\nSince Postgres 11 index cleanup is already skipped on append-only tables.\n\n\n> * tables nearing wraparound danger should use the (yet to be committed)\n> option to skip index cleaning, which makes the cleanup action faster.\n> Again, no need for complete vacuuming.\n>\n\n\"Nearing wraparound\" is too late already. In Amazon, reading table from gp2\nafter you exhausted your IOPS burst budget is like reading a floppy drive,\nyou have to freeze a lot earlier than you hit several terabytes of unfrozen\ndata, or you're dead like Mandrill's Search and Url tables from the link I\nshared.\n\n\n>\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nHi,чт, 28 мар. 2019 г. в 00:32, Alvaro Herrera <alvherre@2ndquadrant.com>:On 2019-Mar-27, Darafei \"Komяpa\" Praliaskouski wrote:\n\n> Attached is sketch of small patch that fixes several edge cases with\n> autovacuum. 
Long story short autovacuum never comes to append only tables,\n> killing large productions.\n\nYeah, autovac is not coping with these scenarios (and probably others).\nHowever, rather than taking your patch's idea verbatim, I think we\nshould have autovacuum use separate actions for those two (wildly\ndifferent) scenarios. For example:\n\n* certain tables would have some sort of partial scan that sets the\n visibility map. There's no reason to invoke the whole vacuuming\n machinery. I don't think this is limited to append-only tables, but\n rather those are just the ones that are affected the most.What other machinery runs on VACUUM invocation that is not wanted there?Since Postgres 11 index cleanup is already skipped on append-only tables. \n* tables nearing wraparound danger should use the (yet to be committed)\n option to skip index cleaning, which makes the cleanup action faster.\n Again, no need for complete vacuuming.\"Nearing wraparound\" is too late already. In Amazon, reading table from gp2 after you exhausted your IOPS burst budget is like reading a floppy drive, you have to freeze a lot earlier than you hit several terabytes of unfrozen data, or you're dead like Mandrill's Search and Url tables from the link I shared. \n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa",
"msg_date": "Thu, 28 Mar 2019 00:41:42 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On 2019-Mar-28, Darafei \"Komяpa\" Praliaskouski wrote:\n\n\n> чт, 28 мар. 2019 г. в 00:32, Alvaro Herrera <alvherre@2ndquadrant.com>:\n> \n> > On 2019-Mar-27, Darafei \"Komяpa\" Praliaskouski wrote:\n\n> > * certain tables would have some sort of partial scan that sets the\n> > visibility map. There's no reason to invoke the whole vacuuming\n> > machinery. I don't think this is limited to append-only tables, but\n> > rather those are just the ones that are affected the most.\n> \n> What other machinery runs on VACUUM invocation that is not wanted there?\n> Since Postgres 11 index cleanup is already skipped on append-only tables.\n\nWell, I think it would be useful to set all-visible earlier than waiting\nfor a vacuum to be necessary, even for tables that are not append-only.\nSo if you think about this just for the append-only table, you leave\nmoney on the table.\n\n> > * tables nearing wraparound danger should use the (yet to be committed)\n> > option to skip index cleaning, which makes the cleanup action faster.\n> > Again, no need for complete vacuuming.\n> \n> \"Nearing wraparound\" is too late already. In Amazon, reading table from gp2\n> after you exhausted your IOPS burst budget is like reading a floppy drive,\n> you have to freeze a lot earlier than you hit several terabytes of unfrozen\n> data, or you're dead like Mandrill's Search and Url tables from the link I\n> shared.\n\nOK, then start freezing tuples in the cheap mode (skip index updates)\nearlier than that. I suppose a good question is when to start.\n\n\nI wonder if Mandrill's problem is related to Mailchimp raising the\nfreeze_max_age to a point where autovac did not have enough time to\nreact with an emergency vacuum. If you keep raising that value because\nthe vacuums cause problems for you (they block DDL), there's something\nwrong.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 27 Mar 2019 19:01:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 28 Mar 2019 at 11:01, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-28, Darafei \"Komяpa\" Praliaskouski wrote:\n> > \"Nearing wraparound\" is too late already. In Amazon, reading table from gp2\n> > after you exhausted your IOPS burst budget is like reading a floppy drive,\n> > you have to freeze a lot earlier than you hit several terabytes of unfrozen\n> > data, or you're dead like Mandrill's Search and Url tables from the link I\n> > shared.\n>\n> OK, then start freezing tuples in the cheap mode (skip index updates)\n> earlier than that. I suppose a good question is when to start.\n\nI thought recently that it would be good to have some sort of\npro-active auto-vacuum mode that made use of idle workers. Probably\nthere would need to be some mode flag that mentioned which workers\nwere in proactive mode so that these could be cancelled when more\npressing work came in. I don't have an idea exactly of what\n\"pro-active\" would actually be defined as, but I know that when the\nsingle transaction ID is consumed that causes terra bytes of tables to\nsuddenly need an anti-wraparound vacuum, then it's not a good\nsituation to be in. Perhaps getting to some percentage of\nautovacuum_freeze_max_age could be classed as pro-active.\n\n> I wonder if Mandrill's problem is related to Mailchimp raising the\n> freeze_max_age to a point where autovac did not have enough time to\n> react with an emergency vacuum. If you keep raising that value because\n> the vacuums cause problems for you (they block DDL), there's something\n> wrong.\n\nI have seen some very high autovacuum_freeze_max_age settings\nrecently. It would be interesting to know what they had theirs set to.\nI see they mentioned \"Search and Url tables\". 
I can imagine \"search\"\nnever needs any UPDATEs, so quite possibly those were append-only, in\nwhich case the anti-wraparound vacuum would have had quite a lot of\nwork on its hands since possibly every page needed frozen. A table\nreceiving regular auto-vacuums from dead tuples would likely get some\npages frozen during those.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 12:36:24 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
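The "percentage of autovacuum_freeze_max_age" suggestion above can be stated as a pair of predicates. The 0.5 fraction below is purely illustrative (nothing in the thread fixes a number); the 200 million default for autovacuum_freeze_max_age is the server's stock setting:

```python
def antiwraparound_forced(relfrozenxid_age, freeze_max_age=200_000_000):
    # Today's hard trigger: once crossed, vacuum is forced on the table
    # regardless of the I/O budget available at that moment.
    return relfrozenxid_age > freeze_max_age

def proactive_freeze_wanted(relfrozenxid_age, freeze_max_age=200_000_000,
                            fraction=0.5):
    # Hypothetical idle-worker rule: start cheap freezing work much
    # earlier, spreading the cost out instead of paying it all at once.
    return relfrozenxid_age > fraction * freeze_max_age

age = 150_000_000
assert not antiwraparound_forced(age)   # today: nothing happens yet
assert proactive_freeze_wanted(age)     # proposed: an idle worker starts now
```

The gap between the two predicates is the window in which terabytes of all-frozen work could be done gradually, rather than in one emergency pass when the hard limit is hit.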
{
"msg_contents": "чт, 28 мар. 2019 г. в 01:01, Alvaro Herrera <alvherre@2ndquadrant.com>:\n\n> On 2019-Mar-28, Darafei \"Komяpa\" Praliaskouski wrote:\n>\n>\n> > чт, 28 мар. 2019 г. в 00:32, Alvaro Herrera <alvherre@2ndquadrant.com>:\n> >\n> > > On 2019-Mar-27, Darafei \"Komяpa\" Praliaskouski wrote:\n>\n> > > * certain tables would have some sort of partial scan that sets the\n> > > visibility map. There's no reason to invoke the whole vacuuming\n> > > machinery. I don't think this is limited to append-only tables, but\n> > > rather those are just the ones that are affected the most.\n> >\n> > What other machinery runs on VACUUM invocation that is not wanted there?\n> > Since Postgres 11 index cleanup is already skipped on append-only tables.\n>\n> Well, I think it would be useful to set all-visible earlier than waiting\n> for a vacuum to be necessary, even for tables that are not append-only.\n> So if you think about this just for the append-only table, you leave\n> money on the table.\n>\n\nThing is, problem does not exist for non-append-only tables, they're going\nto be vacuumed after 50 rows got updated, automatically.\n\n\n>\n> > > * tables nearing wraparound danger should use the (yet to be committed)\n> > > option to skip index cleaning, which makes the cleanup action faster.\n> > > Again, no need for complete vacuuming.\n> >\n> > \"Nearing wraparound\" is too late already. In Amazon, reading table from\n> gp2\n> > after you exhausted your IOPS burst budget is like reading a floppy\n> drive,\n> > you have to freeze a lot earlier than you hit several terabytes of\n> unfrozen\n> > data, or you're dead like Mandrill's Search and Url tables from the link\n> I\n> > shared.\n>\n> OK, then start freezing tuples in the cheap mode (skip index updates)\n> earlier than that. I suppose a good question is when to start.\n>\n\nAttached (autovacuum_berserk_v1.patch)\n code achieves that. 
For append-only tables since\nhttps://commitfest.postgresql.org/16/952/ vacuum skips index cleanup if no\nupdates happened. You just need to trigger it, and it already will be\n\"cheap\".\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Thu, 28 Mar 2019 08:34:34 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 2:36 AM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Thu, 28 Mar 2019 at 11:01, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> >\n> > On 2019-Mar-28, Darafei \"Komяpa\" Praliaskouski wrote:\n> > > \"Nearing wraparound\" is too late already. In Amazon, reading table\n> from gp2\n> > > after you exhausted your IOPS burst budget is like reading a floppy\n> drive,\n> > > you have to freeze a lot earlier than you hit several terabytes of\n> unfrozen\n> > > data, or you're dead like Mandrill's Search and Url tables from the\n> link I\n> > > shared.\n> >\n> > OK, then start freezing tuples in the cheap mode (skip index updates)\n> > earlier than that. I suppose a good question is when to start.\n>\n> I thought recently that it would be good to have some sort of\n> pro-active auto-vacuum mode that made use of idle workers.\n\n\nProblem with \"idle\" is that it never happens on system that are going to\nwraparound on their lifetime. This has to be a part of normal database\nfunctioning.\n\nWhy not select a table that has inserts, updates and deletes for autovacuum\njust like we do for autoanalyze, not only deletes and updates like we do\nnow?\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nOn Thu, Mar 28, 2019 at 2:36 AM David Rowley <david.rowley@2ndquadrant.com> wrote:On Thu, 28 Mar 2019 at 11:01, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Mar-28, Darafei \"Komяpa\" Praliaskouski wrote:\n> > \"Nearing wraparound\" is too late already. In Amazon, reading table from gp2\n> > after you exhausted your IOPS burst budget is like reading a floppy drive,\n> > you have to freeze a lot earlier than you hit several terabytes of unfrozen\n> > data, or you're dead like Mandrill's Search and Url tables from the link I\n> > shared.\n>\n> OK, then start freezing tuples in the cheap mode (skip index updates)\n> earlier than that. 
I suppose a good question is when to start.\n\n> I thought recently that it would be good to have some sort of\n> pro-active auto-vacuum mode that made use of idle workers.\n\nProblem with \"idle\" is that it never happens on systems that are going to\nwrap around in their lifetime. This has to be a part of normal database\nfunctioning.\n\nWhy not select a table that has inserts, updates and deletes for autovacuum\njust like we do for autoanalyze, not only deletes and updates like we do\nnow?\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Thu, 28 Mar 2019 12:03:47 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 28 Mar 2019 at 22:04, Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n>\n> On Thu, Mar 28, 2019 at 2:36 AM David Rowley <david.rowley@2ndquadrant.com> wrote:\n>> I thought recently that it would be good to have some sort of\n>> pro-active auto-vacuum mode that made use of idle workers.\n>\n> Problem with \"idle\" is that it never happens on system that are going to wraparound on their lifetime. This has to be a part of normal database functioning.\n\nI'd say auto-vacuum is configured to run too slowly if you never have\nan idle worker.  The chances that it happens to be running at exactly\nthe right speed to keep up with demand must be about close to nil.\n\n> Why not select a table that has inserts, updates and deletes for autovacuum just like we do for autoanalyze, not only deletes and updates like we do now?\n\nSounds like a good idea, although I do agree with Alvaro when he\nmentions that it would be good to only invoke a worker that was only\ngoing to freeze tuples and not look at the indexes. I've not looked at\nit, but there's a patch [1] in the current CF for that.  I'd say a\ngood course of action would be to review that, then write a patch with\na new bool flag in relation_needs_vacanalyze for \"freezeonly\" and have\nauto-vacuum invoke vacuum in this new freeze-only mode if freezeonly\nis set and dovacuum is not.\n\nAny patch not in the current CF is already PG13 or beyond.  Having at\nleast a freeze-only vacuum mode may ease some pain, even if it still\nneeds to be done manually for anyone finding themselves in a similar\nsituation as Mailchimp.\n\nThe idea I was mentioning was more targeted to ease the sudden rush of\nauto-vacuum activity when suddenly a bunch of large tables require an\nanti-wraparound vacuum all at once.\n\n[1] https://commitfest.postgresql.org/22/1817/\n\n-- \n David Rowley                   http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 22:32:21 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 6:32 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Thu, 28 Mar 2019 at 22:04, Darafei \"Komяpa\" Praliaskouski\n> <me@komzpa.net> wrote:\n> >\n> > On Thu, Mar 28, 2019 at 2:36 AM David Rowley <david.rowley@2ndquadrant.com> wrote:\n> >> I thought recently that it would be good to have some sort of\n> >> pro-active auto-vacuum mode that made use of idle workers.\n> >\n> > Problem with \"idle\" is that it never happens on system that are going to wraparound on their lifetime. This has to be a part of normal database functioning.\n>\n> I'd say auto-vacuum is configured to run too slowly if you never have\n> an idle worker.  The chances that it happens to be running at exactly\n> the right speed to keep up with demand must be about close to nil.\n>\n> > Why not select a table that has inserts, updates and deletes for autovacuum just like we do for autoanalyze, not only deletes and updates like we do now?\n>\n> Sounds like a good idea, although I do agree with Alvaro when he\n> mentions that it would be good to only invoke a worker that was only\n> going to freeze tuples and not look at the indexes.\n\nInvoking autovacuum on a table based on inserts, not only deletes\nand updates, seems a good idea to me. But in this case, I think that we\ncan not only freeze tuples but also update the visibility map even when\nsetting all-visible. Roughly speaking, I think vacuum does the\nfollowing operations.\n\n1. heap vacuum\n2. HOT pruning\n3. freezing tuples\n4. updating visibility map (all-visible and all-frozen)\n5. index vacuum/cleanup\n6. truncation\n\nWith the proposed patch[1] we can control whether to do 5 or not. In addition\nto that, another proposed patch[2] allows us to control 6.\n\nFor append-only tables (and similar tables), what we periodically want\nto do would be 3 and 4 (possibly we can do 2 as well). 
So maybe we\nneed to have both an option of (auto)vacuum to control whether to do 1\nand something like a new autovacuum threshold (or an option) to invoke\nthe vacuum that disables 1, 5 and 6. The vacuum that does only 2, 3\nand 4 would be much cheaper than today's vacuum, and anti-wraparound\nvacuum would be able to skip almost all pages.\n\n[1] https://commitfest.postgresql.org/22/1817/\n[2] https://commitfest.postgresql.org/22/1981/\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 28 Mar 2019 19:28:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 12:32 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Thu, 28 Mar 2019 at 22:04, Darafei \"Komяpa\" Praliaskouski\n> <me@komzpa.net> wrote:\n> >\n> > On Thu, Mar 28, 2019 at 2:36 AM David Rowley <\n> david.rowley@2ndquadrant.com> wrote:\n> >> I thought recently that it would be good to have some sort of\n> >> pro-active auto-vacuum mode that made use of idle workers.\n> >\n> > Problem with \"idle\" is that it never happens on system that are going to\n> wraparound on their lifetime. This has to be a part of normal database\n> functioning.\n>\n> I'd say auto-vacuum is configured to run too slowly if you never have\n> an idle worker.  The chances that it happens to be running at exactly\n> the right speed to keep up with demand must be about close to nil.\n>\n> > Why not select a table that has inserts, updates and deletes for\n> autovacuum just like we do for autoanalyze, not only deletes and updates\n> like we do now?\n>\n> Sounds like a good idea, although I do agree with Alvaro when he\n> mentions that it would be good to only invoke a worker that was only\n> going to freeze tuples and not look at the indexes.\n\n\nThis is the current behavior of VACUUM on tables without dead tuples, already.\nThe issue is that nothing triggers this VACUUM apart from a user performing\nVACUUM manually, or a super late anti-wraparound vacuum.\n\n> Any patch not in the current CF is already PG13 or beyond. 
Having at\n> least a freeze-only vacuum mode may ease some pain, even if it still\n> needs to be done manually for anyone finding themselves in a similar\n> situation as Mailchimp.\n>\n\nIf you're in wraparound halt with a super large table on Amazon gp2, nothing\nwill help you - the issue is, there's no option to \"rewrite all of it quickly\".\nThe burst limit lets you feel the shared drive as if it was an SSD on most of\nyour load, but reading and re-writing all the storage gets throttled, and\nthere's no option to escape this quickly.\n\nThe process that freezes and marks all-visible pages has to run in parallel\nand at the speed of your backend pushing pages to disk, maybe lagging\nbehind a bit - but not up to \"we need to rescan all the table\".\n\n\n>\n> The idea I was mentioning was more targeted to ease the sudden rush of\n> auto-vacuum activity when suddenly a bunch of large tables require an\n> anti-wraparound vacuum all at once.\n>\n> [1] https://commitfest.postgresql.org/22/1817/\n>\n> --\n> David Rowley                   http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Thu, 28 Mar 2019 14:36:58 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\n> > Why not select a table that has inserts, updates and deletes for\nautovacuum just like we do for autoanalyze, not only deletes and updates\nlike we do now?\n\n> >\n> > Sounds like a good idea, although I do agree with Alvaro when he\n> > mentions that it would be good to only invoke a worker that was only\n> > going to freeze tuples and not look at the indexes.\n>\n> The invoking autovacuum on table based on inserts, not only deletes\n> and updates, seems good idea to me. But in this case, I think that we\n> can not only freeze tuples but also update visibility map even when\n> setting all-visible. Roughly speaking I think vacuum does the\n> following operations.\n>\n> 1. heap vacuum\n\n2. HOT pruning\n>\nIs it worth skipping it if we're writing a page anyway for the sake of hint\nbits and new xids? This will all be no-op anyway on append-only tables and\nhappen only when we actually need something?\n\n\n> 3. freezing tuples\n> 4. updating visibility map (all-visible and all-frozen)\n>\nThese two are needed, and current autovacuum launch process does not take\ninto account that this is also needed for non-dead tuples.\n\n\n> 5. index vacuum/cleanup\n>\nThere is a separate patch for that. But, since\nhttps://commitfest.postgresql.org/16/952/ for almost a year already\nPostgres skips index cleanup on tables without new dead tuples, so this\ncase is taken care of already?\n\n\n> 6. truncation\n>\nThis shouldn't be a heavy operation?\n\n\n>\n> With the proposed patch[1] we can control to do 5 or not. In addition\n> to that, another proposed patch[2] allows us to control 6.\n>\n> For append-only tables (and similar tables), what we periodically want\n> to do would be 3 and 4 (possibly we can do 2 as well). So maybe we\n> need to have both an option of (auto)vacuum to control whether to do 1\n> and something like a new autovacuum threshold (or an option) to invoke\n> the vacuum that disables 1, 5 and 6. 
The vacuum that does only 2, 3\n> and 4 would be much cheaper than today's vacuum and anti-wraparound\n> vacuum would be able to skip almost all pages.\n>\n\nWhy will we want to get rid of 1? It's a no-op from a write perspective, and\ndoing it now saves a scan if it's not a no-op.\n\nWhy make it faster in emergency situations when the situation can be made\nnon-emergency from the very beginning instead?\n\n\n>\n> [1] https://commitfest.postgresql.org/22/1817/\n> [2] https://commitfest.postgresql.org/22/1981/\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> NIPPON TELEGRAPH AND TELEPHONE CORPORATION\n> NTT Open Source Software Center\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Thu, 28 Mar 2019 14:58:02 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 8:58 PM Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n>\n> Hi,\n>\n> > > Why not select a table that has inserts, updates and deletes for autovacuum just like we do for autoanalyze, not only deletes and updates like we do now?\n>>\n>> >\n>> > Sounds like a good idea, although I do agree with Alvaro when he\n>> > mentions that it would be good to only invoke a worker that was only\n>> > going to freeze tuples and not look at the indexes.\n>>\n>> The invoking autovacuum on table based on inserts, not only deletes\n>> and updates, seems good idea to me. But in this case, I think that we\n>> can not only freeze tuples but also update visibility map even when\n>> setting all-visible. Roughly speaking I think vacuum does the\n>> following operations.\n>>\n>> 1. heap vacuum\n>>\n>> 2. HOT pruning\n>\n> Is it worth skipping it if we're writing a page anyway for the sake of hint bits and new xids? This will all be no-op anyway on append-only tables and happen only when we actually need something?\n>\n\nYeah, these operations are required only when the table has actual\ngarbage. IOW, append-only tables never require them.\n\n>>\n>> 3. freezing tuples\n>> 4. updating visibility map (all-visible and all-frozen)\n>\n> These two are needed, and current autovacuum launch process does not take into account that this is also needed for non-dead tuples.\n>\n>>\n>> 5. index vacuum/cleanup\n>\n> There is a separate patch for that. But, since https://commitfest.postgresql.org/16/952/ for almost a year already Postgres skips index cleanup on tables without new dead tuples, so this case is taken care of already?\n\nI think that's not enough. The feature \"GUC for cleanup index\nthreshold\" allows us to skip only index cleanup when there are fewer\ninsertions than the fraction of the total number of heap tuples since\nthe last index cleanup. Therefore it helps only append-only tables (and\nit supports only the btree index for now). 
We still have to do index\nvacuuming even if the table has just a few dead tuples. The proposed\npatch[1] helps this situation; vacuum can run while skipping index\nvacuuming and index cleanup.\n\n>\n>>\n>> 6. truncation\n>\n> This shouldn't be a heavy operation?\n>\n\nI don't think so. This could take AccessExclusiveLock on the table and\ntake a long time with a large shared buffer, as reported on that\nthread[2].\n\n>>\n>>\n>> With the proposed patch[1] we can control to do 5 or not. In addition\n>> to that, another proposed patch[2] allows us to control 6.\n>>\n>> For append-only tables (and similar tables), what we periodically want\n>> to do would be 3 and 4 (possibly we can do 2 as well). So maybe we\n>> need to have both an option of (auto)vacuum to control whether to do 1\n>> and something like a new autovacuum threshold (or an option) to invoke\n>> the vacuum that disables 1, 5 and 6. The vacuum that does only 2, 3\n>> and 4 would be much cheaper than today's vacuum and anti-wraparound\n>> vacuum would be able to skip almost all pages.\n>\n>\n> Why will we want to get rid of 1? It's a noop from write perspective and saves a scan to do it if it's not noop.\n>\n\nBecause that's for tables that have many inserts but have some\nupdates/deletes. I think that this strategy would help not only\nappend-only tables but also such tables.\n\n> Why make it faster in emergency situations when situation can be made non-emergency from the very beginning instead?\n>\n\nI don't understand the meaning of \"situation can be made non-emergency\nfrom the very beginning\". Could you please elaborate on that?\n\n\n>> [1] https://commitfest.postgresql.org/22/1817/\n>> [2] https://commitfest.postgresql.org/22/1981/\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 29 Mar 2019 00:42:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 28, 2019 at 12:36:24PM +1300, David Rowley wrote:\n> On Thu, 28 Mar 2019 at 11:01, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > I wonder if Mandrill's problem is related to Mailchimp raising the\n> > freeze_max_age to a point where autovac did not have enough time to\n> > react with an emergency vacuum. If you keep raising that value because\n> > the vacuums cause problems for you (they block DDL), there's something\n> > wrong.\n>\n> I have seen some very high autovacuum_freeze_max_age settings\n> recently. It would be interesting to know what they had theirs set to.\n> I see they mentioned \"Search and Url tables\". I can imagine \"search\"\n> never needs any UPDATEs, so quite possibly those were append-only, in\n> which case the anti-wraparound vacuum would have had quite a lot of\n> work on its hands since possibly every page needed frozen. A table\n> receiving regular auto-vacuums from dead tuples would likely get some\n> pages frozen during those.\n\nBy the way, the Routine Vacuuming chapter of the documentation says:\n\n\"The sole disadvantage of increasing autovacuum_freeze_max_age (and\nvacuum_freeze_table_age along with it) is that the pg_xact and\npg_commit_ts subdirectories of the database cluster will take more space\n\n[...]\n\nIf [pg_xact and pg_commit_ts taking 0.5 and 20 GB, respectively]\nis trivial compared to your total database size, setting\nautovacuum_freeze_max_age to its maximum allowed value is recommended.\"\n\nMaybe this should be qualified with \"unless you have trouble with your\nautovacuum keeping up\" or so; or generally reworded?\n\n\nMichael\n\n\n",
"msg_date": "Fri, 29 Mar 2019 10:06:06 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Mar 27, 2019 at 5:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> * certain tables would have some sort of partial scan that sets the\n> visibility map.  There's no reason to invoke the whole vacuuming\n> machinery.  I don't think this is limited to append-only tables, but\n> rather those are just the ones that are affected the most.\n\nI think this is a really good idea, but in order for it to work well I\nthink we would need to have some kind of estimate of autovacuum\npressure.\n\nIf we know that we're currently fairly on top of things, and there is\nnot much for autovacuum to do, periodically vacuuming a chunk of some\ntable that has a lot of unset visibility-map bits is probably a good\nidea.  However, we can't possibly guess how aggressively to do this if\nwe have no idea how long it's going to be before we need to vacuum\nthat table for real.  If the number of XIDs remaining until the table\ngets a wraparound vacuum is X, and the number of XIDs being consumed\nper day is Y, we can estimate that in roughly X/Y days, we're going to\nneed to do a wraparound vacuum.  That value might be in the range of\nmonths, or in the range of hours.\n\nIf it's months, we probably want to limit vacuum to working at a pretty\nslow rate, say 1% of the table size per hour or something.  If it's in\nhours, we need to be a lot more aggressive.  Right now we have no\ninformation to tell us which of those things is the case, so we'd just\nbe shooting in the dark.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 30 Mar 2019 12:11:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sun, Mar 31, 2019 at 1:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Mar 27, 2019 at 5:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > * certain tables would have some sort of partial scan that sets the\n> > visibility map. There's no reason to invoke the whole vacuuming\n> > machinery. I don't think this is limited to append-only tables, but\n> > rather those are just the ones that are affected the most.\n>\n> I think this is a really good idea, but in order for it to work well I\n> think we would need to have some kind of estimate of autovacuum\n> pressure.\n>\n> If we know that we're currently fairly on top of things, and there is\n> not much for autovacuum to do, periodically vacuuming a chunk of some\n> table that has a lot of unset visibility-map bits is probably a good\n> idea. However, we can't possibly guess how aggressively to do this if\n> we have no idea how long it's going to be before we need to vacuum\n> that table for real. If the number of XIDs remaining until the table\n> gets a wraparound vacuum is X, and the number of XIDs being consumed\n> per day is Y, we can estimate that in roughly X/Y days, we're going to\n> need to do a wraparound vacuum. That value might be in the range of\n> months, or in the range of hours.\n>\n> If it's months, we probably want limit vacuum to working at a pretty\n> slow rate, say 1% of the table size per hour or something. If it's in\n> hours, we need to be a lot more aggressive. Right now we have no\n> information to tell us which of those things is the case, so we'd just\n> be shooting in the dark.\n\nSawada-san presented some ideas in his PGCon 2018 talk that may be related.\n\nhttps://www.pgcon.org/2018/schedule/attachments/488_Vacuum_More_Efficient_Than_Ever\n\n(slide 32~)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Sun, 31 Mar 2019 01:23:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On 27/03/2019 21:54, Darafei \"Komяpa\" Praliaskouski wrote:\n> Hi hackers,\n> \n> Attached is sketch of small patch that fixes several edge cases with\n> autovacuum. Long story short autovacuum never comes to append only\n> tables, killing large productions.\n> \n> First case, mine.\n> https://www.postgresql.org/message-id/CAC8Q8tLBeAxR%2BBXWuKK%2BHP5m8tEVYn270CVrDvKXt%3D0PkJTY9g%40mail.gmail.com\n> \n> We had a table we were appending and wanted Index Only Scan to work. For\n> it to work, you need to call VACUUM manually, since VACUUM is the only\n> way to mark pages all visible, and autovacuum never comes to append only\n> tables. We were clever to invent a workflow without dead tuples and it\n> painfully bit us.\n> \n> Second case, just read in the news.\n> https://mailchimp.com/what-we-learned-from-the-recent-mandrill-outage/\n> \n> Mandrill has 6TB append only table that autovacuum probably never\n> vacuumed. Then anti-wraparound came and production went down. If\n> autovacuum did its job before that last moment, it would probably be okay.\n> \n> Idea: look not on dead tuples, but on changes, just like ANALYZE does.\n> It's my first patch on Postgres, it's probably all wrong but I hope it\n> helps you get the idea.\n\nThis was suggested and rejected years ago:\nhttps://www.postgresql.org/message-id/b970f20f-f096-2d3a-6c6d-ee887bd30cfb@2ndquadrant.fr\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n\n\n",
"msg_date": "Sat, 30 Mar 2019 17:55:20 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 6:43 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> >> 1. heap vacuum\n> >>\n> >> 2. HOT pruning\n> >\n> > Is it worth skipping it if we're writing a page anyway for the sake of\n> hint bits and new xids? This will all be no-op anyway on append-only tables\n> and happen only when we actually need something?\n> >\n>\n> Yeah, these operations are required only when the table has actual\n> garbage. IOW, append-only tables never require them.\n>\n> >>\n> >> 3. freezing tuples\n> >> 4. updating visibility map (all-visible and all-frozen)\n> >\n> > These two are needed, and current autovacuum launch process does not\n> take into account that this is also needed for non-dead tuples.\n> >\n> >>\n> >> 5. index vacuum/cleanup\n> >\n> > There is a separate patch for that. But, since\n> https://commitfest.postgresql.org/16/952/ for almost a year already\n> Postgres skips index cleanup on tables without new dead tuples, so this\n> case is taken care of already?\n>\n> I think that's not enough. The feature \"GUC for cleanup index\n> threshold\" allows us to skip only index cleanup when there are less\n> insertion than the fraction of the total number of heap tuples since\n> last index cleanup. Therefore it helps only append-only tables (and\n> supporting only btree index for now). We still have to do index\n> vacuuming even if the table has just a few dead tuple. 
The proposed\n> patch[1] helps this situation; vacuum can run while skipping index\n> vacuuming and index cleanup.\n>\n\nSo, the patch I posted can be technically applied after\nhttps://commitfest.postgresql.org/22/1817/ gets merged?\n\nThe change with my patch is that a table with 49 insertions and one delete:\n - previously will wait for 49 more deletes by default (and ignore\ninsertions), and only then clean up both table and indexes.\n - with patch will freeze/update VM for insertions, and scan the index.\n\nIn my experience only the btree index requires a slow full index scan;\nthat's why only it was in the \"GUC for cleanup index\nthreshold\" patch. Is that wrong - do more index types do a full index scan on\nvacuum after deletion of a single tuple?\n\n\n\n> >> 6. truncation\n> >\n> > This shouldn't be a heavy operation?\n> >\n>\n> I don't think so. This could take AccessExclusiveLock on the table and\n> take a long time with large shared buffer as per reported on that\n> thread[2].\n>\n\nWhile this can be a useful optimization, I believe it is out of scope for\nthis patch. I want to fix vacuum never coming to append-only tables without\nbreaking other behaviors. Truncation is likely a case of enough dead tuples\nto trigger a vacuum via currently existing mechanisms.\n\n\n> >>\n> >>\n> >> With the proposed patch[1] we can control to do 5 or not. In addition\n> >> to that, another proposed patch[2] allows us to control 6.\n> >>\n> >> For append-only tables (and similar tables), what we periodically want\n> >> to do would be 3 and 4 (possibly we can do 2 as well). So maybe we\n> >> need to have both an option of (auto)vacuum to control whether to do 1\n> >> and something like a new autovacuum threshold (or an option) to invoke\n> >> the vacuum that disables 1, 5 and 6. 
The vacuum that does only 2, 3\n> >> and 4 would be much cheaper than today's vacuum and anti-wraparound\n> >> vacuum would be able to skip almost pages.\n> >\n> >\n> > Why will we want to get rid of 1? It's a noop from write perspective and\n> saves a scan to do it if it's not noop.\n> >\n>\n> Because that's for tables that have many inserts but have some\n> updates/deletes. I think that this strategy would help not only\n> append-only tables but also such tables.\n>\n\nHow much do we save by skipping a heap vacuum on almost-append-only table,\nwhere amount of updates is below 50 which is current threshold?\n\n\n>\n> > Why make it faster in emergency situations when situation can be made\n> non-emergency from the very beginning instead?\n> >\n>\n> I don't understand the meaning of \"situation can be made non-emergency\n> from the very beginning\". Could you please elaborate on that?\n>\n\nLet's imagine a simple append-only workflow on current default settings\nPostgres. You create a table, and start inserting tuples, one per\ntransaction. Let's imagine a page fits 50 tuples (my case for taxi movement\ndata), and Amazon gp2 storage which caps you say at 1000 IOPS in non-burst\nmode.\nAnti-wrap-around-auto-vacuum (we need a drawing of misreading of this term\nwith a crossed out car bent in Space) will be triggered\nin autovacuum_freeze_max_age inserts, 200000000 by default. That converts\ninto 4000000 pages, or around 32 GB. It will be the first vacuum ever on\nthat table, since no other mechanism triggers it, and if it steals all the\navailable IOPS, it will finish in 200000000/50 /1000 = 4000 seconds,\nkilling prod for over an hour.\n\nTelemetry workloads can easily generate 32 GB of data a day (I've seen\nmore, but let's stick to that number). 
Production going down for an hour a\nday isn't good and I consider it an emergency.\n\nNow, two ways to fix it that reading the documentation leads you to while\nyou're sleepily trying to get prod back:\n - raise autovacuum_freeze_max_age so VACUUM keeps sleeping;\n - rewrite code to use batching to insert more tuples at once.\n\nWe don't have a better recommendation mechanism for settings, and\nexperience in tuning autovacuum in the right direction comes at the cost of a\njob or company to people :)\n\nBoth ways don't fix the problem but just delay the inevitable. The ratio of \"one\nhour of vacuum per day of operation\" holds, you just delay it.\nLet's say we had the same thing with 1000-record batched inserts, and moved\nautovacuum_freeze_max_age to the highest possible value. How much will the\ndowntime last?\n\n2**31 (max tid) * 1000 (tuples per tid) / 50 (tuples in page) / 1000 (pages\nper second) / 86400 (seconds in day) = 49 days.\n\nThis matches the highest estimation in Mandrill's report, so that might be what\nhappened to them.\n\nThis all would not be needed if autovacuum came after 50 inserted tuples.\nIt would just mark the page as all-visible and all-frozen and be gone, while\nit's still in memory. This would get rid of the emergency altogether.\n\nIs this elaborate enough disaster scenario? :)\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 31 Mar 2019 13:12:21 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
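The back-of-the-envelope arithmetic in the disaster scenario above can be checked with a short sketch. The 50 tuples/page density and the 1000 IOPS cap are the message's own workload assumptions, not PostgreSQL constants; only the 8192-byte block size is a PostgreSQL default.

```python
# Sketch of the estimates from the message above; page density and IOPS
# are the scenario's assumptions, not PostgreSQL constants.
TUPLES_PER_PAGE = 50      # assumed heap density for the example rows
PAGES_PER_SECOND = 1000   # Amazon gp2 non-burst IOPS cap from the message
BLOCK_SIZE = 8192         # default PostgreSQL block size in bytes

def first_vacuum_seconds(inserted_tuples):
    """Rough duration of the first vacuum scan over a never-vacuumed table."""
    pages = inserted_tuples / TUPLES_PER_PAGE
    return pages / PAGES_PER_SECOND

# Default autovacuum_freeze_max_age (200M) triggers the first-ever vacuum:
print(first_vacuum_seconds(200_000_000))                 # 4000.0 seconds
print(200_000_000 / TUPLES_PER_PAGE * BLOCK_SIZE / 1e9)  # ~32.8 GB of heap
```

Under these assumptions the anti-wraparound vacuum indeed arrives with roughly 32 GB of never-scanned heap and monopolizes the volume for over an hour.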
{
"msg_contents": ">\n> By the way, the Routine Vacuuming chapter of the documentation says:\n>\n> \"The sole disadvantage of increasing autovacuum_freeze_max_age (and\n> vacuum_freeze_table_age along with it) is that the pg_xact and\n> pg_commit_ts subdirectories of the database cluster will take more space\n>\n> [...]\n>\n> If [pg_xact and pg_commit_ts taking 0.5 and 20 GB, respectively]\n> is trivial compared to your total database size, setting\n> autovacuum_freeze_max_age to its maximum allowed value is recommended.\"\n>\n> Maybe this should be qualified with \"unless you have trouble with your\n> autovacuum keeping up\" or so; or generally reworded?\n\n\nThis recommendation is in the mindset of \"wraparound never happens\".\nIf your database is large, you have more chances to hit it painfully, and\nif it's append-only even more so.\n\nAlternative point of \"if your database is super large and actively written,\nyou may want to set autovacuum_freeze_max_age to even smaller values so\nthat autovacuum load is more evenly spread over time\" may be needed.\n\n\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 31 Mar 2019 13:19:53 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
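The 0.5 GB and 20 GB figures quoted from the documentation follow from per-transaction storage costs: pg_xact keeps 2 commit-status bits per transaction, and pg_commit_ts keeps roughly 10 bytes (timestamp plus replication origin) per transaction, at autovacuum_freeze_max_age's maximum of two billion:

```python
# Where the documentation's "0.5 and 20 GB" figures come from.
# pg_xact: 2 status bits per transaction; pg_commit_ts: roughly 10 bytes
# (an 8-byte timestamp plus a 2-byte replication origin) per transaction.
MAX_FREEZE_AGE = 2_000_000_000  # maximum allowed autovacuum_freeze_max_age

pg_xact_bytes = MAX_FREEZE_AGE * 2 // 8
pg_commit_ts_bytes = MAX_FREEZE_AGE * 10

print(pg_xact_bytes / 2**30)       # ~0.47 GiB, i.e. "about half a gigabyte"
print(pg_commit_ts_bytes / 2**30)  # ~18.6 GiB, i.e. "about 20 GB"
```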
{
"msg_contents": ">\n> If it's months, we probably want limit vacuum to working at a pretty\n> slow rate, say 1% of the table size per hour or something. If it's in\n> hours, we need to be a lot more aggressive. Right now we have no\n> information to tell us which of those things is the case, so we'd just\n> be shooting in the dark.\n\n\nThing is, you don't need to spread out your vacuum in time if the rate of\nvacuuming matches rate of table growth. Can we mark tuples/pages as\nall-visible and all-frozen say, the moment they're pushed out of\nshared_buffers?\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 31 Mar 2019 13:30:12 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": ">\n>\n> > Idea: look not on dead tuples, but on changes, just like ANALYZE does.\n> > It's my first patch on Postgres, it's probably all wrong but I hope it\n> > helps you get the idea.\n>\n> This was suggested and rejected years ago:\n>\n> https://www.postgresql.org/message-id/b970f20f-f096-2d3a-6c6d-ee887bd30cfb@2ndquadrant.fr\n\n\nThank you for sharing the link. I've read through the thread and see you\nposted two patches, the first being similar but different from mine, and the\nsecond being about a different matter.\n\nI don't see \"rejected\" there, just a common distraction of \"you should also\nconsider this\" and a time-out leading to \"returned with feedback\" at the end.\n\nThing is, we have dead large productions and post-mortems now because your patch\nwasn't pushed back in 2016, so the situation is different. Let's push at least the\nfirst of your two patches, or mine.\n\nWhich one is better and why?\n\nI believe mine, as it just follows a pattern already established and proven\nin autoanalyze. If vacuum comes and is unable to harvest some dead tuples, it\nwill come over again in your case, and just sleep until it gets new dead\ntuples in mine, which looks better to me - there's no dead loop in case\nsome dead tuples are stuck forever.\nIf someone thinks yours is better we may also consider it for autoanalyze?\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 31 Mar 2019 13:40:07 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
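The ANALYZE-style trigger discussed in the message above has a simple shape: fire when the number of changed tuples since the last run exceeds a base threshold plus a fraction of the table size. A minimal sketch, using the existing autovacuum_analyze_threshold/scale_factor defaults (applying this to inserted tuples for vacuum is the proposal, not current behavior):

```python
# ANALYZE-style trigger: base threshold plus a fraction of table size.
# Defaults mirror autovacuum_analyze_threshold (50) and
# autovacuum_analyze_scale_factor (0.1).
def needs_autorun(changes_since_last, reltuples,
                  threshold=50, scale_factor=0.1):
    return changes_since_last > threshold + scale_factor * reltuples

# 100 changes trip the threshold on a small table...
print(needs_autorun(100, reltuples=100))        # True
# ...but not on a large one, where the scale factor dominates.
print(needs_autorun(100, reltuples=1_000_000))  # False
```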
{
"msg_contents": ">\n> The invoking autovacuum on table based on inserts, not only deletes\n> and updates, seems good idea to me. But in this case, I think that we\n> can not only freeze tuples but also update visibility map even when\n> setting all-visible. Roughly speaking I think vacuum does the\n> following operations.\n>\n> 1. heap vacuum\n> 2. HOT pruning\n> 3. freezing tuples\n> 4. updating visibility map (all-visible and all-frozen)\n> 5. index vacuum/cleanup\n> 6. truncation\n>\n> With the proposed patch[1] we can control to do 5 or not. In addition\n> to that, another proposed patch[2] allows us to control 6.\n>\n\n[1] is committed, [2] nears commit. Seems we have now all the infra to\nteach autovacuum to run itself based on inserts and not hurt anybody?\n\n...\n\n> [1] https://commitfest.postgresql.org/22/1817/\n> [2] https://commitfest.postgresql.org/22/1981/\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sat, 6 Apr 2019 10:56:16 +0300",
"msg_from": "Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, Apr 6, 2019 at 9:56 AM Darafei \"Komяpa\" Praliaskouski <me@komzpa.net>\nwrote:\n\n> The invoking autovacuum on table based on inserts, not only deletes\n>> and updates, seems good idea to me. But in this case, I think that we\n>> can not only freeze tuples but also update visibility map even when\n>> setting all-visible. Roughly speaking I think vacuum does the\n>> following operations.\n>>\n>> 1. heap vacuum\n>> 2. HOT pruning\n>> 3. freezing tuples\n>> 4. updating visibility map (all-visible and all-frozen)\n>> 5. index vacuum/cleanup\n>> 6. truncation\n>>\n>> With the proposed patch[1] we can control to do 5 or not. In addition\n>> to that, another proposed patch[2] allows us to control 6.\n>>\n>\n> [1] is committed, [2] nears commit. Seems we have now all the infra to\n> teach autovacuum to run itself based on inserts and not hurt anybody?\n>\n> ...\n>\n>> [1] https://commitfest.postgresql.org/22/1817/\n>> [2] https://commitfest.postgresql.org/22/1981/\n>>\n>\n>\nReading the thread and the patch, I generally agree that:\n1. With the current infrastructure having auto vacuum periodically scan\nappend-only tables for freezing would be good, and\n2. I can't think of any cases where this would be a bad thing.\n\nAlso I am not 100% convinced that the problems are avoidable by setting the\nwraparound prevention thresholds low enough. In cases where one is doing\nlarge bulk inserts all the time, vacuum freeze could have a lot of work to\ndo, and in some cases I could imagine IO storms making that difficult.\n\nI plan to run some benchmarks on this to try to assess performance impact\nof this patch in standard pgbench scenarios. I will also try to come up with\nsome other benchmarks in append-only workloads.\n\n\n>\n> --\n> Darafei Praliaskouski\n> Support me: http://patreon.com/komzpa\n>\n\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Wed, 10 Apr 2019 15:14:04 +0200",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n\n> Alternative point of \"if your database is super large and actively written,\n> you may want to set autovacuum_freeze_max_age to even smaller values so\n> that autovacuum load is more evenly spread over time\" may be needed.\n\nI don't think it's helpful to force emergency vacuuming more frequently;\nquite the contrary, it's likely to cause even more issues. We should\ntweak autovacuum to perform freezing more preemptively instead.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:13:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn April 10, 2019 8:13:06 AM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n>\n>> Alternative point of \"if your database is super large and actively\n>written,\n>> you may want to set autovacuum_freeze_max_age to even smaller values\n>so\n>> that autovacuum load is more evenly spread over time\" may be needed.\n>\n>I don't think it's helpful to force emergency vacuuming more\n>frequently;\n>quite the contrary, it's likely to cause even more issues. We should\n>tweak autovacuum to perform freezing more preemtively instead.\n\nI still think the fundamental issue with making vacuum less painful is that all the indexes have to be read entirely, even if there's not much work (say millions of rows frozen, hundreds removed). Without that issue we could vacuum much more frequently. And do it properly in insert only workloads.\n\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 10 Apr 2019 08:21:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 5:21 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On April 10, 2019 8:13:06 AM PDT, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> >On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n> >\n> >> Alternative point of \"if your database is super large and actively\n> >written,\n> >> you may want to set autovacuum_freeze_max_age to even smaller values\n> >so\n> >> that autovacuum load is more evenly spread over time\" may be needed.\n> >\n> >I don't think it's helpful to force emergency vacuuming more\n> >frequently;\n> >quite the contrary, it's likely to cause even more issues. We should\n> >tweak autovacuum to perform freezing more preemtively instead.\n>\n> I still think the fundamental issue with making vacuum less painful is\n> that the all indexes have to be read entirely. Even if there's not much\n> work (say millions of rows frozen, hundreds removed). Without that issue we\n> could vacuum much more frequently. And do it properly in insert only\n> workloads.\n>\n\nSo I see a couple of issues here and wondering what the best approach is.\n\nThe first is to just skip lazy_cleanup_index if no rows were removed. Is\nthis the approach you have in mind? Or is that insufficient?\n\nThe second approach would be to replace the whole idea of this patch with a\nlazy freeze worker which would basically periodically do a vacuum freeze on\nrelations matching certain criteria. This could have a lower max workers\nthan autovacuum and therefore less of a threat in terms of total IO usage.\n\nThoughts?\n\n>\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. 
Please excuse my brevity.\n>\n>\n>\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Thu, 11 Apr 2019 11:25:29 +0200",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 11:25:29AM +0200, Chris Travers wrote:\n> On Wed, Apr 10, 2019 at 5:21 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On April 10, 2019 8:13:06 AM PDT, Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n> >On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n> >\n> >> Alternative point of \"if your database is super large and actively\n> >written,\n> >> you may want to set autovacuum_freeze_max_age to even smaller values\n> >so\n> >> that autovacuum load is more evenly spread over time\" may be needed.\n> >\n> >I don't think it's helpful to force emergency vacuuming more\n> >frequently;\n> >quite the contrary, it's likely to cause even more issues. We should\n> >tweak autovacuum to perform freezing more preemtively instead.\n>\n> I still think the fundamental issue with making vacuum less painful is\n> that the all indexes have to be read entirely. Even if there's not much\n> work (say millions of rows frozen, hundreds removed). Without that issue\n> we could vacuum much more frequently. And do it properly in insert only\n> workloads.\n>\n> So I see a couple of issues here and wondering what the best approach is.\n> The first is to just skip lazy_cleanup_index if no rows were removed. Is\n> this the approach you have in mind? Or is that insufficient?\n\nI don't think that's what Andres had in mind, as he explicitly mentioned\nremoved rows. So just skipping lazy_cleanup_index when no tuples were\ndeleted would not help in that case.\n\nWhat I think we could do is simply leave the tuple pointers in the table\n(and indexes) when there are only very few of them, and only do the\nexpensive table/index cleanup once there's enough of them.\n\n> The second approach would be to replace the whole idea of this patch with\n> a lazy freeze worker which would basically periodically do a vacuum freeze\n> on relations matching certain criteria. 
This could have a lower max\n> workers than autovacuum and therefore less of a threat in terms of total\n> IO usage.\n> Thoughts?\n>\n\nNot sure. I find it rather difficult to manage more and more different\ntypes of cleanup workers.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 13 Apr 2019 21:50:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 6:13 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n>\n> > Alternative point of \"if your database is super large and actively\n> written,\n> > you may want to set autovacuum_freeze_max_age to even smaller values so\n> > that autovacuum load is more evenly spread over time\" may be needed.\n>\n> I don't think it's helpful to force emergency vacuuming more frequently;\n> quite the contrary, it's likely to cause even more issues. We should\n> tweak autovacuum to perform freezing more preemtively instead.\n>\n\nOkay. What would be your recommendation for the case of Mandrill running\ncurrent Postgres 11? Which parameters shall they tune and to which values?\n\n\n\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 14 Apr 2019 15:51:05 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": ">\n>\n> >I don't think it's helpful to force emergency vacuuming more\n> >frequently;\n> >quite the contrary, it's likely to cause even more issues. We should\n> >tweak autovacuum to perform freezing more preemtively instead.\n>\n> I still think the fundamental issue with making vacuum less painful is\n> that the all indexes have to be read entirely. Even if there's not much\n> work (say millions of rows frozen, hundreds removed). Without that issue we\n> could vacuum much more frequently. And do it properly in insert only\n> workloads.\n>\n\nDeletion of hundreds of rows on default settings will cause the same\nbehavior now.\nIf there was 0 updates currently the index cleanup will be skipped.\n\nhttps://commitfest.postgresql.org/22/1817/ got merged. This means\nAutovacuum can have two separate thresholds - the current, on dead tuples,\ntriggering the VACUUM same way it triggers it now, and a new one, on\ninserted tuples only, triggering VACUUM (INDEX_CLEANUP FALSE)?\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa",
"msg_date": "Sun, 14 Apr 2019 15:58:55 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 4:51 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Thu, Apr 11, 2019 at 11:25:29AM +0200, Chris Travers wrote:\n> > On Wed, Apr 10, 2019 at 5:21 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On April 10, 2019 8:13:06 AM PDT, Alvaro Herrera\n> > <alvherre@2ndquadrant.com> wrote:\n> > >On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n> > >\n> > >> Alternative point of \"if your database is super large and actively\n> > >written,\n> > >> you may want to set autovacuum_freeze_max_age to even smaller values\n> > >so\n> > >> that autovacuum load is more evenly spread over time\" may be needed.\n> > >\n> > >I don't think it's helpful to force emergency vacuuming more\n> > >frequently;\n> > >quite the contrary, it's likely to cause even more issues. We should\n> > >tweak autovacuum to perform freezing more preemtively instead.\n> >\n> > I still think the fundamental issue with making vacuum less painful is\n> > that the all indexes have to be read entirely. Even if there's not much\n> > work (say millions of rows frozen, hundreds removed). Without that issue\n> > we could vacuum much more frequently. And do it properly in insert only\n> > workloads.\n> >\n> > So I see a couple of issues here and wondering what the best approach is.\n> > The first is to just skip lazy_cleanup_index if no rows were removed. Is\n> > this the approach you have in mind? Or is that insufficient?\n>\n> I don't think that's what Andres had in mind, as he explicitly mentioned\n> removed rows. So just skipping lazy_cleanup_index when there were no\n> deleted would not help in that case.\n>\n> What I think we could do is simply leave the tuple pointers in the table\n> (and indexes) when there are only very few of them, and only do the\n> expensive table/index cleanup once there's anough of them.\n\nYeah, we now have an infrastructure that skips index vacuuming by\nleaving the tuples pointers. So we then can have a threshold for\nautovacuum to invoke index vacuuming. Or an another idea is to delete\nindex entries more actively by index looking up instead of scanning\nthe whole index. It's proposed[1].\n\n[1] I couldn't get the URL of the thread right now for some reason but\nthe thread subject is \" [WIP] [B-Tree] Retail IndexTuple deletion\".\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 15 Apr 2019 10:15:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, Apr 15, 2019 at 10:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sun, Apr 14, 2019 at 4:51 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Thu, Apr 11, 2019 at 11:25:29AM +0200, Chris Travers wrote:\n> > > On Wed, Apr 10, 2019 at 5:21 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On April 10, 2019 8:13:06 AM PDT, Alvaro Herrera\n> > > <alvherre@2ndquadrant.com> wrote:\n> > > >On 2019-Mar-31, Darafei \"Komяpa\" Praliaskouski wrote:\n> > > >\n> > > >> Alternative point of \"if your database is super large and actively\n> > > >written,\n> > > >> you may want to set autovacuum_freeze_max_age to even smaller values\n> > > >so\n> > > >> that autovacuum load is more evenly spread over time\" may be needed.\n> > > >\n> > > >I don't think it's helpful to force emergency vacuuming more\n> > > >frequently;\n> > > >quite the contrary, it's likely to cause even more issues. We should\n> > > >tweak autovacuum to perform freezing more preemtively instead.\n> > >\n> > > I still think the fundamental issue with making vacuum less painful is\n> > > that the all indexes have to be read entirely. Even if there's not much\n> > > work (say millions of rows frozen, hundreds removed). Without that issue\n> > > we could vacuum much more frequently. And do it properly in insert only\n> > > workloads.\n> > >\n> > > So I see a couple of issues here and wondering what the best approach is.\n> > > The first is to just skip lazy_cleanup_index if no rows were removed. Is\n> > > this the approach you have in mind? Or is that insufficient?\n> >\n> > I don't think that's what Andres had in mind, as he explicitly mentioned\n> > removed rows. So just skipping lazy_cleanup_index when there were no\n> > deleted would not help in that case.\n> >\n> > What I think we could do is simply leave the tuple pointers in the table\n> > (and indexes) when there are only very few of them, and only do the\n> > expensive table/index cleanup once there's anough of them.\n>\n> Yeah, we now have an infrastructure that skips index vacuuming by\n> leaving the tuples pointers. So we then can have a threshold for\n> autovacuum to invoke index vacuuming. Or an another idea is to delete\n> index entries more actively by index looking up instead of scanning\n> the whole index. It's proposed[1].\n>\n> [1] I couldn't get the URL of the thread right now for some reason but\n> the thread subject is \" [WIP] [B-Tree] Retail IndexTuple deletion\".\n\nNow I got https://www.postgresql.org/message-id/425db134-8bba-005c-b59d-56e50de3b41e%40postgrespro.ru\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 15 Apr 2019 10:31:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sun, Apr 14, 2019 at 9:59 PM Darafei \"Komяpa\" Praliaskouski\n<me@komzpa.net> wrote:\n>>\n>>\n>> >I don't think it's helpful to force emergency vacuuming more\n>> >frequently;\n>> >quite the contrary, it's likely to cause even more issues. We should\n>> >tweak autovacuum to perform freezing more preemtively instead.\n>>\n>> I still think the fundamental issue with making vacuum less painful is that the all indexes have to be read entirely. Even if there's not much work (say millions of rows frozen, hundreds removed). Without that issue we could vacuum much more frequently. And do it properly in insert only workloads.\n>\n>\n> Deletion of hundreds of rows on default settings will cause the same behavior now.\n> If there was 0 updates currently the index cleanup will be skipped.\n>\n> https://commitfest.postgresql.org/22/1817/ got merged. This means Autovacuum can have two separate thresholds - the current, on dead tuples, triggering the VACUUM same way it triggers it now, and a new one, on inserted tuples only, triggering VACUUM (INDEX_CLEANUP FALSE)?\n>\n\nAgreed.\n\nTo invoke autovacuum even on insert-only tables we would need check\nthe number of inserted tuples since last vacuum. I think we can keep\ntrack of the number of inserted tuples since last vacuum to the stats\ncollector and add the threshold to invoke vacuum with INDEX_CLEANUP =\nfalse. If an autovacuum worker confirms that the number of inserted\ntuples exceeds the threshold it invokes vacuum with INDEX_CLEANUP =\nfalse. However if the number of dead tuples also exceeds the\nautovacuum thresholds (autovacuum_vacuum_threshold and\nautovacuum_vacuum_scale_factor) it should invoke vacuum with\nINDEX_CLEANUP = true. Therefore new threshold makes sense only when\nit's lower than the autovacuum thresholds.\n\nI guess we can have one new GUC parameter to control scale factor.\nSince only relatively large tables will require this feature we might\nnot need the threshold based the number of tuples.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 23 Jul 2019 17:21:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
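The two-threshold scheme described in the message above (the ordinary dead-tuple check takes precedence; otherwise a cheap insert-driven vacuum skips index cleanup and, per later patch versions in this thread, freezes aggressively) can be sketched roughly as follows. This is an illustrative Python sketch, not PostgreSQL's actual C code; the function name, parameter names, and return strings are invented here, and the defaults mirror values discussed elsewhere in the thread.

```python
# Illustrative sketch of the proposed autovacuum decision, NOT the actual
# PostgreSQL implementation. Defaults mirror values discussed in this
# thread (threshold 50, scale factor 0.2, insert threshold 10,000,000).
def autovacuum_action(dead_tuples, inserted_since_vacuum, reltuples,
                      vacuum_threshold=50, vacuum_scale_factor=0.2,
                      insert_threshold=10_000_000):
    """Return the vacuum an autovacuum worker would launch, or None."""
    # Existing rule: enough dead tuples -> full vacuum with index cleanup.
    if dead_tuples > vacuum_threshold + vacuum_scale_factor * reltuples:
        return "VACUUM (INDEX_CLEANUP TRUE)"
    # Proposed rule: many inserts but few dead tuples -> cheap vacuum that
    # skips index cleanup (and freezes, per the later patch versions).
    if inserted_since_vacuum > insert_threshold:
        return "VACUUM (INDEX_CLEANUP FALSE, FREEZE)"
    return None
```

As the message notes, the insert threshold only matters when it fires before the dead-tuple threshold would; the ordering of the two checks above encodes that precedence.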
{
"msg_contents": "On Tue, Jul 23, 2019 at 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> To invoke autovacuum even on insert-only tables we would need check\n> the number of inserted tuples since last vacuum. I think we can keep\n> track of the number of inserted tuples since last vacuum to the stats\n> collector and add the threshold to invoke vacuum with INDEX_CLEANUP =\n> false. If an autovacuum worker confirms that the number of inserted\n> tuples exceeds the threshold it invokes vacuum with INDEX_CLEANUP =\n> false. However if the number of dead tuples also exceeds the\n> autovacuum thresholds (autovacuum_vacuum_threshold and\n> autovacuum_vacuum_scale_factor) it should invoke vacuum with\n> INDEX_CLEANUP = true. Therefore new threshold makes sense only when\n> it's lower than the autovacuum thresholds.\n>\n> I guess we can have one new GUC parameter to control scale factor.\n> Since only relatively large tables will require this feature we might\n> not need the threshold based the number of tuples.\n>\n\nGenerally speaking, having more guc's for autovacuum and that too\nwhich are in some way dependent on existing guc's sounds bit scary,\nbut OTOH whatever you wrote makes sense and can help the scenarios\nwhich this thread is trying to deal with. Have you given any thought\nto what Alvaro mentioned up-thread \"certain tables would have some\nsort of partial scan that sets the visibility map. There's no reason\nto invoke the whole vacuuming machinery. I don't think this is\nlimited to append-only tables, but\nrather those are just the ones that are affected the most.\"?\n\nThis thread seems to be stalled for the reason that we don't have a\nclear consensus on what is the right solution for the problem being\ndiscussed. Alvaro, anyone has any thoughts on how we can move forward\nwith this work?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Sep 2019 16:49:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Sep 10, 2019 at 8:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 23, 2019 at 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > To invoke autovacuum even on insert-only tables we would need check\n> > the number of inserted tuples since last vacuum. I think we can keep\n> > track of the number of inserted tuples since last vacuum to the stats\n> > collector and add the threshold to invoke vacuum with INDEX_CLEANUP =\n> > false. If an autovacuum worker confirms that the number of inserted\n> > tuples exceeds the threshold it invokes vacuum with INDEX_CLEANUP =\n> > false. However if the number of dead tuples also exceeds the\n> > autovacuum thresholds (autovacuum_vacuum_threshold and\n> > autovacuum_vacuum_scale_factor) it should invoke vacuum with\n> > INDEX_CLEANUP = true. Therefore new threshold makes sense only when\n> > it's lower than the autovacuum thresholds.\n> >\n> > I guess we can have one new GUC parameter to control scale factor.\n> > Since only relatively large tables will require this feature we might\n> > not need the threshold based the number of tuples.\n> >\n>\n> Generally speaking, having more guc's for autovacuum and that too\n> which are in some way dependent on existing guc's sounds bit scary,\n> but OTOH whatever you wrote makes sense and can help the scenarios\n> which this thread is trying to deal with. Have you given any thought\n> to what Alvaro mentioned up-thread \"certain tables would have some\n> sort of partial scan that sets the visibility map. There's no reason\n> to invoke the whole vacuuming machinery. I don't think this is\n> limited to append-only tables, but\n> rather those are just the ones that are affected the most.\"?\n>\n\nSpeaking of partial scan I've considered before that we could use WAL\nto find which pages have garbage much and not all-visible pages. We\ncan vacuum only a particular part of table that is most effective of\ngarbage collection instead of whole tables. I've shared some results\nof that at PGCon and it's still in PoC state.\n\nAlso, to address the issue of updating VM of mostly-append-only tables\nI considered some possible solutions:\n\n1. Using INDEX_CLEANUP = false and TRUNCATE = false vacuum does hot\npruning, vacuuming table and updating VM. In addition to updating VM\nwe need to do other two operations but since the mostly-insert-only\ntables would have less garbage the hot pruning and vacuuming table\nshould be light workload. This is what I proposed on up-thread.\n2. This may have already been discussed before but we could update\nVM when hot pruning during SELECT operation. Since this affects SELECT\nperformance it should be enabled on only particular tables by user\nrequest.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 13 Sep 2019 12:18:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Tue, Sep 10, 2019 at 8:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Generally speaking, having more guc's for autovacuum and that too\n> > which are in some way dependent on existing guc's sounds bit scary,\n> > but OTOH whatever you wrote makes sense and can help the scenarios\n> > which this thread is trying to deal with. Have you given any thought\n> > to what Alvaro mentioned up-thread \"certain tables would have some\n> > sort of partial scan that sets the visibility map. There's no reason\n> > to invoke the whole vacuuming machinery. I don't think this is\n> > limited to append-only tables, but\n> > rather those are just the ones that are affected the most.\"?\n> >\n>\n> Speaking of partial scan I've considered before that we could use WAL\n> to find which pages have garbage much and not all-visible pages. We\n> can vacuum only a particular part of table that is most effective of\n> garbage collection instead of whole tables. I've shared some results\n> of that at PGCon and it's still in PoC state.\n>\n> Also, to address the issue of updating VM of mostly-append-only tables\n> I considered some possible solutions:\n>\n> 1. Using INDEX_CLEANUP = false and TRUNCATE = false vacuum does hot\n> pruning, vacuuming table and updating VM. In addition to updating VM\n> we need to do other two operations but since the mostly-insert-only\n> tables would have less garbage the hot pruning and vacuuming table\n> should be light workload. This is what I proposed on up-thread.\n>\n\nYes, this is an option, but it might be better if we can somehow avoid\ntriggering the vacuum machinery.\n\n> 2. This may have already been discussed before but we could update\n> VM when hot pruning during SELECT operation. Since this affects SELECT\n> performance it should be enabled on only particular tables by user\n> request.\n>\n\nYeah, doing anything additional in SELECT's can be tricky and think of\na case where actually there is nothing to prune on-page, in that case\nalso if we run the visibility checks and then mark the visibility map,\nthen it can be a noticeable overhead. OTOH, I think this will be a\none-time overhead because after the first scan the visibility map will\nbe updated and future scans don't need to update visibility map unless\nsomeone has updated that page. I was wondering why not do this during\nwrite workloads. For example, when Insert operation finds that there\nis no space in the current page and it has to move to next page, it\ncan check if the page (that doesn't have space to accommodate current\ntuple) can be marked all-visible. In this case, we would have already\ndone the costly part of an operation which is to Read/Lock the buffer.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 Sep 2019 09:52:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nThis patch is currently in \"needs review\" state, but that seems quite\nwrong - there's been a lot of discussions about how we might improve\nbehavior for append-only-tables, but IMO there's no clear consensus nor\na patch that we might review.\n\nSo I think this should be either \"waiting on author\" or maybe \"rejected\nwith feedback\". Is there any chance of getting a reviewable patch in the\ncurrent commitfest? If not, I propose to mark it as RWF.\n\nI still hope we can improve this somehow in time for PG13. The policy is\nnot to allow new non-trivial patches in the last CF, but hopefully this\nmight be considered an old patch.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 7 Jan 2020 19:05:24 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-01-07 at 19:05 +0100, Tomas Vondra wrote:\n> This patch is currently in \"needs review\" state, but that seems quite\n> wrong - there's been a lot of discussions about how we might improve\n> behavior for append-only-tables, but IMO there's no clear consensus nor\n> a patch that we might review.\n> \n> So I think this should be either \"waiting on author\" or maybe \"rejected\n> with feedback\". Is there any chance of getting a reviewable patch in the\n> current commitfest? If not, I propose to mark it as RWF.\n> \n> I still hope we can improve this somehow in time for PG13. The policy is\n> not to allow new non-trivial patches in the last CF, but hopefully this\n> might be considered an old patch.\n\nI think that no conclusion was reached because there are *many* things\nthat could be improved, and *many* interesting and ambitious ideas were\nvented.\n\nBut I think it would be good to have *something* that addresses the immediate\nproblem (INSERT-only tables are autovacuumed too late), as long as\nthat does not have negative side-effects or blocks further improvements.\n\nI don't feel totally well with the very simplistic approach of this\npatch (use the same metric to trigger autoanalyze and autovacuum),\nbut what about this:\n\n- a new table storage option autovacuum_vacuum_insert_threshold,\n perhaps a GUC of the same name, by default deactivated.\n\n- if tabentry->tuples_inserted exceeds this threshold, but not one\n of the others, lauch autovacuum with index_cleanup=off.\n\nHow would you feel about that?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 02 Mar 2020 14:57:03 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-02 at 14:57 +0100, I wrote:\n> But I think it would be good to have *something* that addresses the immediate\n> problem (INSERT-only tables are autovacuumed too late), as long as\n> that does not have negative side-effects or blocks further improvements.\n> \n> I don't feel totally well with the very simplistic approach of this\n> patch (use the same metric to trigger autoanalyze and autovacuum),\n> but what about this:\n> \n> - a new table storage option autovacuum_vacuum_insert_threshold,\n> perhaps a GUC of the same name, by default deactivated.\n> \n> - if tabentry->tuples_inserted exceeds this threshold, but not one\n> of the others, lauch autovacuum with index_cleanup=off.\n\nAs a more substantial base for discussion, here is a patch that:\n\n- introduces a GUC and reloption \"autovacuum_vacuum_insert_limit\",\n default 10000000\n\n- introduces a statistics counter \"inserts_since_vacuum\" per table\n that gets reset to 0 after vacuum\n\n- causes autovacuum to run without cleaning up indexes if\n inserts_since_vacuum > autovacuum_vacuum_insert_limit\n and there is no other reason for an autovacuum\n\nNo doc patch is included yet, and perhaps the new counter should\nbe shown in \"pg_stat_user_tables\".\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 03 Mar 2020 16:28:57 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-03 at 16:28 +0100, Laurenz Albe wrote:\n> As a more substantial base for discussion, here is a patch that:\n> \n> - introduces a GUC and reloption \"autovacuum_vacuum_insert_limit\",\n> default 10000000\n> \n> - introduces a statistics counter \"inserts_since_vacuum\" per table\n> that gets reset to 0 after vacuum\n> \n> - causes autovacuum to run without cleaning up indexes if\n> inserts_since_vacuum > autovacuum_vacuum_insert_limit\n> and there is no other reason for an autovacuum\n\nI just realized that the exercise is pointless unless that\nautovacuum also runs with FREEZE on.\n\nUpdated patch attached.\n\nYours,\nLaurenz Albe",
"msg_date": "Wed, 04 Mar 2020 16:15:47 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 5 Mar 2020 at 04:15, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> I just realized that the exercise is pointless unless that\n> autovacuum also runs with FREEZE on.\n\nI think we need to move forward with doing something to cope with\nINSERT-only tables not being auto-vacuumed.\n\nI think the patch you have is something along the lines to what I'd\nhave imagined we should do. However, there are a few things that I'd\ndo a different way.\n\n1. I'd go for 2 new GUCs and reloptions.\nautovacuum_vacuum_insert_threshold (you're currently calling this\nautovacuum_vacuum_insert_limit. I don't see why the word \"limit\" is\nrelevant here). The other GUC I think should be named\nautovacuum_vacuum_insert_scale_factor and these should work exactly\nthe same way as autovacuum_vacuum_threshold and\nautovacuum_vacuum_scale_factor, but be applied in a similar way to the\nvacuum settings, but only be applied after we've checked to ensure the\ntable is not otherwise eligible to be vacuumed.\n\n2. I believe you're right in setting the freeze_min_age to 0 rather\nthan freeze_min_age. My understanding of freeze_min_age is that we\ndon't too pro-actively freeze tuples as in many workloads, a freshly\nINSERTed tuple is more likely to receive an UPDATE than an old tuple\nis. e.g something like new orders having their status updated various\ntimes until the order is complete, where it's not updated ever again.\nIn the INSERT-only case, there seems to be not much reason not to just\nfreeze right away.\n\n3. The name \"insert_only\" does not seem the best for the new boolean\nvariable that you're using in various places. That name seems to be\ntoo closely related to our current intended use case. Maybe\nskip_index_cleanup is more to the point.\n\n4. Are you sure you mean \"Maximum\" here? Isn't it the minimum? At\nleast it will be once you add both options. Otherwise, I think Maximum\nis not the correct word. Perhaps \"The threshold\"\n\n+ {\"autovacuum_vacuum_insert_limit\", PGC_SIGHUP, AUTOVACUUM,\n+ gettext_noop(\"Maximum number of tuple inserts prior to vacuum.\"),\n+ NULL\n+ },\n\n\n5. I think the new field in this struct should be named vacuum_insert_threshold\n\n@@ -252,6 +252,7 @@ typedef struct AutoVacOpts\n {\n bool enabled;\n int vacuum_threshold;\n+ int vacuum_ins_limit;\n\n6. Are you sure you meant to default this to 50?\n\nindex e58e4788a8..9d96d58ed2 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -598,6 +598,8 @@\n #autovacuum_naptime = 1min # time between autovacuum runs\n #autovacuum_vacuum_threshold = 50 # min number of row updates before\n # vacuum\n+#autovacuum_vacuum_insert_limit = 50 # max number of row inserts before\n+ # vacuum\n\nSeems excessive given there's no scale factor in the current patch.\n\n7. I know you know.... missing docs... would be good to get those.\n\n8. Should we care when setting the insert counter back to 0 if\nauto-vacuum has skipped pages?\n\n9. You should add a new column to the pg_stat_all_tables view to allow\nvisibility of the insert since the last vacuum. The column should be\nnamed n_ins_since_vacuum. This seems like the best combination of\nn_mod_since_analyze and n_tup_ins.\n\n10. I'm slightly worried about the case where we don't quite trigger a\nnormal vacuum but trigger a vacuum due to INSERTs then skip cleaning\nup the indexes but proceed to leave dead index entries causing indexes\nto become bloated. It does not seem impossible that given the right\nbalance of INSERTs and UPDATE/DELETEs that this could happen every\ntime and the indexes would just become larger and larger.\n\nIt's pretty easy to see this in action with:\n\ncreate extension if not exists pgstattuple;\ncreate table t0 (a int primary key);\nalter table t0 set (autovacuum_enabled=off);\ninsert into t0 select generate_Series(1,1000000);\n\ndelete from t0 where a&1=0; vacuum (index_cleanup off) t0; insert into\nt0 select generate_series(2,1000000,2); select * from\npgstattuple('t0'),pg_relation_size('t0') as t0_size; select n_dead_tup\nfrom pg_stat_all_tables where relid = 't0'::regclass; -- repeat this a\nfew times and watch the indexes bloat\n\n11. We probably do also need to debate if we want this on or off by\ndefault. I'd have leaned towards enabling by default if I'd not\npersonally witnessed the fact that people rarely* increase auto-vacuum\nto run faster than the standard cost settings. I've seen hundreds of\nservers over the years with all workers busy for days on something\nthey'll never finish quickly enough. We increased those settings 10x\nin PG12, so there will be fewer people around suffering from that now,\nbut even after having reduced the vacuum_cost_delay x10 over the PG11\nsettings, it's by no means fast enough for everyone. I've mixed\nfeelings about giving auto-vacuum more work to do for those people, so\nperhaps the best option is to keep this off by default so as not to\naffect the people who don't tune auto-vacuum. They'll just suffer the\npain all at once when they hit max freeze age instead of more\ngradually with the additional load on the workers. At least adding\nthis feature gives the people who do tune auto-vacuum some ability to\nhandle read-only tables in some sane way.\n\nAn alternative way of doing it would be to set the threshold to some\nnumber of million tuples and set the scale_factor to 0.2 so that it\nonly has an effect on larger tables, of which generally people only\nhave a smallish number of.\n\n\n* My opinion may be biased as the sample of people did arrive asking for help\n\n\n",
"msg_date": "Thu, 5 Mar 2020 19:40:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
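Point 1 of the review above proposes that the insert-driven trigger mirror the existing threshold/scale_factor formula. A rough sketch of that check, with illustrative names and defaults (the 10,000,000 / 0.2 values come from suggestions in this thread, not from any committed behaviour):

```python
# Sketch of an insert-driven trigger with both a flat threshold and a
# scale factor, mirroring autovacuum_vacuum_threshold /
# autovacuum_vacuum_scale_factor. Names and defaults are illustrative.
def insert_vacuum_needed(inserts_since_vacuum, reltuples,
                         insert_threshold=10_000_000,
                         insert_scale_factor=0.2):
    # Per the review, this would be checked only after the table has
    # been found not otherwise eligible for a normal vacuum.
    return inserts_since_vacuum > (insert_threshold
                                   + insert_scale_factor * reltuples)
```

With a large flat threshold plus a scale factor, the trigger only fires on sizable tables, which matches the alternative floated in point 11.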
{
"msg_contents": "Hi,\n\nThanks Laurenz for taking action on this and writing a better patch\nthan my initial.\nThis will help avoid both Mandrill-like downtimes and get Index Only\nScan just work on large telemetry databases like the one I was\nresponsible for back when I was in Juno.\n\nOn Thu, Mar 5, 2020 at 9:40 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 5 Mar 2020 at 04:15, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > I just realized that the exercise is pointless unless that\n> > autovacuum also runs with FREEZE on.\n\n> 8. Should we care when setting the insert counter back to 0 if\n> auto-vacuum has skipped pages?\n\nI believe it would be enough just to leave a comment about this in code.\n\n> 10. I'm slightly worried about the case where we don't quite trigger a\n> normal vacuum but trigger a vacuum due to INSERTs then skip cleaning\n> up the indexes but proceed to leave dead index entries causing indexes\n> to become bloated. It does not seem impossible that given the right\n> balance of INSERTs and UPDATE/DELETEs that this could happen every\n> time and the indexes would just become larger and larger.\n\nCan we not reset statistics about dead tuples upon index-skipping\nvacuum, since we didn't really take care of them?\n\n> 11. We probably do also need to debate if we want this on or off by\n> default. I'd have leaned towards enabling by default if I'd not\n> personally witnessed the fact that people rarely* increase auto-vacuum\n> to run faster than the standard cost settings. I've seen hundreds of\n> servers over the years with all workers busy for days on something\n> they'll never finish quickly enough. We increased those settings 10x\n> in PG12, so there will be fewer people around suffering from that now,\n> but even after having reduced the vacuum_cost_delay x10 over the PG11\n> settings, it's by no means fast enough for everyone. I've mixed\n> feelings about giving auto-vacuum more work to do for those people, so\n> perhaps the best option is to keep this off by default so as not to\n> affect the people who don't tune auto-vacuum. They'll just suffer the\n> pain all at once when they hit max freeze age instead of more\n> gradually with the additional load on the workers. At least adding\n> this feature gives the people who do tune auto-vacuum some ability to\n> handle read-only tables in some sane way.\n\nThat's exactly the situation we're trying to avoid with this patch.\nSuffering all at once takes large production deployments down for\nweeks, and that gets into news.\nIn current cloud setups it's plain impossible to read the whole\ndatabase at all, let alone rewrite, with IO budgets.\nI say we should enable this setting by default.\nIf my calculations are correct, that will freeze the table once each\nnew gigabyte of data is written, which is usually fitting into burst\nread thresholds.\n\n\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\n\n",
"msg_date": "Thu, 5 Mar 2020 13:16:22 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
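[Archive note] Darafei's "once each new gigabyte" estimate above reduces to simple arithmetic. This is only an illustration of the claim, not code from the patch; the 10-million-tuple threshold matches the default discussed later in the thread, and the ~100-byte average heap row is an assumption:

```python
def bytes_between_insert_vacuums(insert_threshold, avg_row_bytes):
    """Approximate heap bytes written between two insert-triggered vacuums."""
    return insert_threshold * avg_row_bytes

# With a 10-million-tuple threshold and an assumed ~100-byte row,
# one vacuum cycle corresponds to roughly one gigabyte of new heap data.
written = bytes_between_insert_vacuums(10_000_000, 100)
print(written / 10**9)  # 1.0 (GB per insert-triggered vacuum)
```

So under these assumptions each insert-driven freeze run covers about a gigabyte of freshly written data, which is where the burst-read argument comes from.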
{
"msg_contents": "On Thu, 2020-03-05 at 19:40 +1300, David Rowley wrote:\n> I think we need to move forward with doing something to cope with\n> INSERT-only tables not being auto-vacuumed.\n> \n> I think the patch you have is something along the lines to what I'd\n> have imagined we should do. However, there are a few things that I'd\n> do a different way.\n\nThanks for the review, and that's good news.\n\n> 1. I'd go for 2 new GUCs and reloptions.\n> autovacuum_vacuum_insert_threshold (you're currently calling this\n> autovacuum_vacuum_insert_limit. I don't see why the word \"limit\" is\n> relevant here). The other GUC I think should be named\n> autovacuum_vacuum_insert_scale_factor and these should work exactly\n> the same way as autovacuum_vacuum_threshold and\n> autovacuum_vacuum_scale_factor, but be applied in a similar way to the\n> vacuum settings, but only be applied after we've checked to ensure the\n> table is not otherwise eligible to be vacuumed.\n\nYes, \"threshold\" is better than \"limit\" I have renamed the GUC and\nthe reloption.\n\nI disagree about the scale_factor (and have not added it to the\nupdated version of the patch). If we have a scale_factor, then the\ntime between successive autovacuum runs would increase as the table\ngets bigger, which defeats the purpose of reducing the impact of each\nautovacuum run.\n\nSince autovacuum skips pages where it has nothing to do, we can expect\nthat runs on a large table won't be much more expensive than runs on a\nsmaller table, right?\n\n> 3. The name \"insert_only\" does not seem the best for the new boolean\n> variable that you're using in various places. That name seems to be\n> too closely related to our current intended use case. 
Maybe\n> skip_index_cleanup is more to the point.\n\nI originally called the variable \"skip_indexes\", but when I decided\nthat such vacuum runs also aggressively freeze the table, I thought\nthat the name was misleading and renamed it.\n\nI won't put up a fight about this, though.\n\n> 4. Are you sure you mean \"Maximum\" here? Isn't it the minimum? At\n> least it will be once you add both options. Otherwise, I think Maximum\n> is not the correct word. Perhaps \"The threshold\"\n> \n> + {\"autovacuum_vacuum_insert_limit\", PGC_SIGHUP, AUTOVACUUM,\n> + gettext_noop(\"Maximum number of tuple inserts prior to vacuum.\"),\n> + NULL\n> + },\n\nI had actually been debating whether to use \"maximum\" or \"minimum\".\nI realize now that this strange uncertainty stems from the fact that\nthere is (yet) only a single parameter to govern this.\n\nThe updated patch describes the GUC as\n\"Number of tuple inserts prior to vacuum.\"\n\n> 5. I think the new field in this struct should be named vacuum_insert_threshold\n> \n> @@ -252,6 +252,7 @@ typedef struct AutoVacOpts\n> {\n> bool enabled;\n> int vacuum_threshold;\n> + int vacuum_ins_limit;\n\nI agree as above, renamed.\n\n> 6. Are you sure you meant to default this to 50?\n> \n> index e58e4788a8..9d96d58ed2 100644\n> --- a/src/backend/utils/misc/postgresql.conf.sample\n> +++ b/src/backend/utils/misc/postgresql.conf.sample\n> @@ -598,6 +598,8 @@\n> #autovacuum_naptime = 1min # time between autovacuum runs\n> #autovacuum_vacuum_threshold = 50 # min number of row updates before\n> # vacuum\n> +#autovacuum_vacuum_insert_limit = 50 # max number of row inserts before\n> + # vacuum\n> \n> Seems excessive given there's no scale factor in the current patch.\n\nThat was a mistake.\nI chose 10000000 as the actual default value, but forgot to put the\nsame value into \"postgresql.conf\".\n\n> 7. I know you know.... missing docs... 
would be good to get those.\n\nThe updated version of the patch has documentation.\n\nI just wanted to get a feeling if my patch would be killed cold before\nI went to the effort of writing documentation.\n\n> 8. Should we care when setting the insert counter back to 0 if\n> auto-vacuum has skipped pages?\n\nSince this is only an approximate value anyway, I decided not to care.\nI don't know if that is acceptable.\n\n> 9. You should add a new column to the pg_stat_all_tables view to allow\n> visibility of the insert since the last vacuum. The column should be\n> named n_ins_since_vacuum. This seems like the best combination of\n> n_mod_since_analyze and n_tup_ins.\n\nDone.\n\n> 10. I'm slightly worried about the case where we don't quite trigger a\n> normal vacuum but trigger a vacuum due to INSERTs then skip cleaning\n> up the indexes but proceed to leave dead index entries causing indexes\n> to become bloated. It does not seem impossible that given the right\n> balance of INSERTs and UPDATE/DELETEs that this could happen every\n> time and the indexes would just become larger and larger.\n\nI understand.\n\nThis might particularly be a problem with larger tables, where\na normal autovacuum is rare because of the scale_factor.\n\nPerhaps we can take care of the problem by *not* skipping index\ncleanup if \"changes_since_analyze\" is substantially greater than 0.\n\nWhat do you think?\n\n> 11. We probably do also need to debate if we want this on or off by\n> default. I'd have leaned towards enabling by default if I'd not\n> personally witnessed the fact that people rarely* increase auto-vacuum\n> to run faster than the standard cost settings. I've seen hundreds of\n> servers over the years with all workers busy for days on something\n> they'll never finish quickly enough. 
We increased those settings 10x\n> in PG12, so there will be fewer people around suffering from that now,\n> but even after having reduced the vacuum_cost_delay x10 over the PG11\n> settings, it's by no means fast enough for everyone. I've mixed\n> feelings about giving auto-vacuum more work to do for those people, so\n> perhaps the best option is to keep this off by default so as not to\n> affect the people who don't tune auto-vacuum. They'll just suffer the\n> pain all at once when they hit max freeze age instead of more\n> gradually with the additional load on the workers. At least adding\n> this feature gives the people who do tune auto-vacuum some ability to\n> handle read-only tables in some sane way.\n> \n> An alternative way of doing it would be to set the threshold to some\n> number of million tuples and set the scale_factor to 0.2 so that it\n> only has an effect on larger tables, of which generally people only\n> have a smallish number of.\n\nYes, I think that disabling this by default defeats the purpose.\n\nKnowledgeable people can avoid the problem today by manually scheduling\nVACUUM runs on insert-only tables, and the functionality proposed here\nis specifically to improve the lives of people who don't know enough\nto tune autovacuum.\n\nMy original idea was to set the threshold to 10 million and have no scale\nfactor.\n\n\nUpdated patch attached.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 05 Mar 2020 15:27:31 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 05, 2020 at 03:27:31PM +0100, Laurenz Albe wrote:\n> On Thu, 2020-03-05 at 19:40 +1300, David Rowley wrote:\n> > 1. I'd go for 2 new GUCs and reloptions.\n> > autovacuum_vacuum_insert_scale_factor and these should work exactly\n> \n> I disagree about the scale_factor (and have not added it to the\n> updated version of the patch). If we have a scale_factor, then the\n> time between successive autovacuum runs would increase as the table\n> gets bigger, which defeats the purpose of reducing the impact of each\n> autovacuum run.\n\nI would vote to include scale factor. You're right that a nonzero scale factor\nwould cause vacuum to run with geometrically decreasing frequency. The same\nthing currently happens with autoanalyze as a table grows in size. I found\nthat our monthly-partitioned tables were being analyzed too infrequently\ntowards the end of the month. (At the beginning of the month, 10% is 2.4 hours\nworth of timeseries data, but at the end of the month 10% is 3 days, which was\nan issue since queries against the previous day's data could see rowcount\nestimates near zero.)\nIf someone wanted to avoid that, they'd set scale_factor=0. I think this patch\nshould parallel what's already in place, and we can add documentation for the\nbehavior if need be. 
Possibly scale_factor should default to zero, which I\nthink might make sense since insert-only tables seem to be the main target of\nthis patch.\n\n> +++ b/doc/src/sgml/maintenance.sgml\n> + <para>\n> + Tables that have received more than\n> + <xref linkend=\"guc-autovacuum-vacuum-insert-threshold\"/>\n> + inserts since they were last vacuumed and are not eligible for vacuuming\n> + based on the above criteria will be vacuumed to reduce the impact of a future\n> + anti-wraparound vacuum run.\n> + Such a vacuum will aggressively freeze tuples, and it will not clean up dead\n> + index tuples.\n\n\"BUT will not clean ..\"\n\n> +++ b/src/backend/postmaster/autovacuum.c\n> +\t\t/*\n> +\t\t * If the number of inserted tuples exceeds the limit\n\nI would say \"exceeds the threshold\"\n\nThanks for working on this; we would use this feature on our insert-only\ntables.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 5 Mar 2020 11:27:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
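[Archive note] The trigger condition being discussed mirrors the existing dead-tuple formula in autovacuum.c, threshold + scale_factor * reltuples. A minimal sketch of the proposed insert-side check follows; the parameter names follow David's suggested GUCs and the defaults are values floated in this thread, not settled ones, and this is illustrative Python, not the patch's C code:

```python
def vacuum_threshold(threshold, scale_factor, reltuples):
    """Same shape as the existing dead-tuple formula:
    threshold + scale_factor * reltuples."""
    return threshold + scale_factor * reltuples

def needs_insert_vacuum(inserts_since_vacuum, reltuples,
                        insert_threshold=10_000_000,
                        insert_scale_factor=0.0):
    """Fire an insert-driven (freeze-heavy) vacuum once the tuples inserted
    since the last vacuum exceed the computed threshold.  A scale factor of
    zero reproduces the flat 10-million-tuple behavior proposed upthread."""
    return inserts_since_vacuum > vacuum_threshold(
        insert_threshold, insert_scale_factor, reltuples)
```

For example, with insert_scale_factor = 0.2 a 50-million-tuple table would not be vacuumed until 20 million inserts accumulate, which is exactly the growing-interval behavior the thread is weighing against a flat threshold.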
{
"msg_contents": "On Fri, 6 Mar 2020 at 03:27, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Thu, 2020-03-05 at 19:40 +1300, David Rowley wrote:\n> > 1. I'd go for 2 new GUCs and reloptions.\n> > autovacuum_vacuum_insert_threshold (you're currently calling this\n> > autovacuum_vacuum_insert_limit. I don't see why the word \"limit\" is\n> > relevant here). The other GUC I think should be named\n> > autovacuum_vacuum_insert_scale_factor and these should work exactly\n> > the same way as autovacuum_vacuum_threshold and\n> > autovacuum_vacuum_scale_factor, but be applied in a similar way to the\n> > vacuum settings, but only be applied after we've checked to ensure the\n> > table is not otherwise eligible to be vacuumed.\n>\n> I disagree about the scale_factor (and have not added it to the\n> updated version of the patch). If we have a scale_factor, then the\n> time between successive autovacuum runs would increase as the table\n> gets bigger, which defeats the purpose of reducing the impact of each\n> autovacuum run.\n\nMy view here is not really to debate what logically makes the most\nsense. I don't really think for a minute that the current\nauto-vacuums scale_factor and thresholds are perfect for the job. It's\ntrue that the larger a table becomes, the less often it'll be\nvacuumed, but these are control knobs that people have become\naccustomed to and I don't really think that making an exception for\nthis is warranted. Perhaps we can zero out the scale factor by\ndefault and set the threshold into the millions of tuples. We can have\npeople chime in on what they think about that and why once the code is\nwritten and even perhaps committed.\n\nLack of a scale_factor does leave people who regularly truncate their\n\"append-only\" tables out in the cold a bit. Perhaps they'd like\nindex-only scans to kick in soon after they truncate without having to\nwait for 10 million tuples, or so.\n\n> > 10. 
I'm slightly worried about the case where we don't quite trigger a\n> > normal vacuum but trigger a vacuum due to INSERTs then skip cleaning\n> > up the indexes but proceed to leave dead index entries causing indexes\n> > to become bloated. It does not seem impossible that given the right\n> > balance of INSERTs and UPDATE/DELETEs that this could happen every\n> > time and the indexes would just become larger and larger.\n>\n> I understand.\n>\n> This might particularly be a problem with larger tables, where\n> a normal autovacuum is rare because of the scale_factor.\n>\n> Perhaps we can take care of the problem by *not* skipping index\n> cleanup if \"changes_since_analyze\" is substantially greater than 0.\n>\n> What do you think?\n\nWell, there is code that skips the index scans when there are 0 dead\ntuples found in the heap. If the table is truly INSERT-only then it\nwon't do any harm since we'll skip the index scan anyway. I think\nit's less risky to clean the indexes. If we skip that then there will\nbe a group of people who will suffer from index bloat due to this, no\nmatter if they realise it or not.\n\n> > 11. We probably do also need to debate if we want this on or off by\n> > default. I'd have leaned towards enabling by default if I'd not\n> > personally witnessed the fact that people rarely* increase auto-vacuum\n> > to run faster than the standard cost settings. I've seen hundreds of\n> > servers over the years with all workers busy for days on something\n> > they'll never finish quickly enough. We increased those settings 10x\n> > in PG12, so there will be fewer people around suffering from that now,\n> > but even after having reduced the vacuum_cost_delay x10 over the PG11\n> > settings, it's by no means fast enough for everyone. I've mixed\n> > feelings about giving auto-vacuum more work to do for those people, so\n> > perhaps the best option is to keep this off by default so as not to\n> > affect the people who don't tune auto-vacuum. 
They'll just suffer the\n> > pain all at once when they hit max freeze age instead of more\n> > gradually with the additional load on the workers. At least adding\n> > this feature gives the people who do tune auto-vacuum some ability to\n> > handle read-only tables in some sane way.\n> >\n> > An alternative way of doing it would be to set the threshold to some\n> > number of million tuples and set the scale_factor to 0.2 so that it\n> > only has an effect on larger tables, of which generally people only\n> > have a smallish number of.\n>\n> Yes, I think that disabling this by default defeats the purpose.\n\nPerhaps the solution to that is somewhere else then. I can picture\nsome sort of load average counters for auto-vacuum and spamming the\nlogs with WARNINGs if we maintain high enough load for long enough,\nbut we'd likely be better off completely overhauling the vacuum cost\nsettings to be a percentage of total effort rather than some fixed\nspeed. That would allow more powerful servers to run vacuum more\nquickly and it would also run more quickly during low load periods.\nWe'd just need to sample now and again how long vacuuming a series of\npages takes, then sleep for a time based on how long that took. That's\nnot for this patch though.\n\n> Updated patch attached.\n\nThanks. I've not looked yet as I really think we need a scale_factor\nfor this. I'm interested to hear what others think. So far both\nJustin and I think it's a good idea.\n\n\n",
"msg_date": "Fri, 6 Mar 2020 10:52:33 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
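[Archive note] David's point that a nonzero scale factor spaces vacuums further apart as an append-only table grows can be made concrete. This sketch (an illustration only, not patch code) advances the table size by one trigger interval per run:

```python
def insert_vacuum_sizes(start_tuples, threshold, scale_factor, runs):
    """Table size (in tuples) at each successive insert-triggered vacuum of
    a pure append-only table: each run fires after another
    threshold + scale_factor * current_size inserts have accumulated."""
    sizes = []
    size = start_tuples
    for _ in range(runs):
        size += threshold + scale_factor * size
        sizes.append(size)
    return sizes

# scale_factor = 0 keeps the interval constant, while 0.2 grows it by
# roughly 20% per run -- the geometrically decreasing frequency
# discussed in the thread.
```

Starting from a million tuples with threshold 1000 and scale factor 0.2, the first run fires after 201,000 inserts and the next after 241,200, so each interval is about 20% longer than the last.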
{
"msg_contents": "Thanks, Justin, for the review.\nI have applied the changes where still applicable.\n\nOn Fri, 2020-03-06 at 10:52 +1300, David Rowley wrote:\n> On Fri, 6 Mar 2020 at 03:27, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > On Thu, 2020-03-05 at 19:40 +1300, David Rowley wrote:\n> > > 1. I'd go for 2 new GUCs and reloptions.\n> > > autovacuum_vacuum_insert_threshold (you're currently calling this\n> > > autovacuum_vacuum_insert_limit. I don't see why the word \"limit\" is\n> > > relevant here). The other GUC I think should be named\n> > > autovacuum_vacuum_insert_scale_factor and these should work exactly\n> > > the same way as autovacuum_vacuum_threshold and\n> > > autovacuum_vacuum_scale_factor, but be applied in a similar way to the\n> > > vacuum settings, but only be applied after we've checked to ensure the\n> > > table is not otherwise eligible to be vacuumed.\n> > \n> > I disagree about the scale_factor (and have not added it to the\n> > updated version of the patch). If we have a scale_factor, then the\n> > time between successive autovacuum runs would increase as the table\n> > gets bigger, which defeats the purpose of reducing the impact of each\n> > autovacuum run.\n> \n> My view here is not really to debate what logically makes the most\n> sense. I don't really think for a minute that the current\n> auto-vacuums scale_factor and thresholds are perfect for the job. It's\n> true that the larger a table becomes, the less often it'll be\n> vacuumed, but these are control knobs that people have become\n> accustomed to and I don't really think that making an exception for\n> this is warranted. Perhaps we can zero out the scale factor by\n> default and set the threshold into the millions of tuples. We can have\n> people chime in on what they think about that and why once the code is\n> written and even perhaps committed.\n\nOk, I submit. 
My main desire was to keep the number of new GUCs as\nlow as reasonably possible, but making the feature tunable along the\nknown and \"trusted\" lines may be a good thing.\n\nThe new parameter is called \"autovacuum_vacuum_insert_scale_factor\".\n\n> Lack of a scale_factor does leave people who regularly truncate their\n> \"append-only\" tables out in the cold a bit. Perhaps they'd like\n> index-only scans to kick in soon after they truncate without having to\n> wait for 10 million tuples, or so.\n\nThat point I don't see.\nTruncating a table resets the counters to 0.\n\n> > > 10. I'm slightly worried about the case where we don't quite trigger a\n> > > normal vacuum but trigger a vacuum due to INSERTs then skip cleaning\n> > > up the indexes but proceed to leave dead index entries causing indexes\n> > > to become bloated. It does not seem impossible that given the right\n> > > balance of INSERTs and UPDATE/DELETEs that this could happen every\n> > > time and the indexes would just become larger and larger.\n> > \n> > Perhaps we can take care of the problem by *not* skipping index\n> > cleanup if \"changes_since_analyze\" is substantially greater than 0.\n> > \n> > What do you think?\n> \n> Well, there is code that skips the index scans when there are 0 dead\n> tuples found in the heap. If the table is truly INSERT-only then it\n> won't do any harm since we'll skip the index scan anyway. I think\n> it's less risky to clean the indexes. If we skip that then there will\n> be a group of people will suffer from index bloat due to this, no\n> matter if they realise it or not.\n\nOh I didn't know that.\n\nIn that case it is better to have this vacuum process indexes as well.\nI have changed the patch so that it freezes tuples, but does not skip\nindex cleanup.\n\nBetter err on the side of caution.\n\n> > Yes, I think that disabling this by default defeats the purpose.\n> \n> Perhaps the solution to that is somewhere else then. 
I can picture\n> some sort of load average counters for auto-vacuum and spamming the\n> logs with WARNINGs if we maintain high enough load for long enough,\n> but we'd likely be better just completely overhauling the vacuum cost\n> settings to be a percentage of total effort rather than some fixed\n> speed. That would allow more powerful servers to run vacuum more\n> quickly and it would also run more quickly during low load periods.\n> We'd just need to sample now and again how long vacuuming a series of\n> page takes then sleep for a time based on how long that took. That's\n> not for this patch though.\n\nRight.\n\n\nUpdated patch attached.\n\nYours,\nLaurenz Albe",
"msg_date": "Fri, 06 Mar 2020 15:45:50 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, 7 Mar 2020 at 03:45, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> Thanks, Justin, for the review.\n> I have applied the changes where still applicable.\n>\n> On Fri, 2020-03-06 at 10:52 +1300, David Rowley wrote:\n> > Lack of a scale_factor does leave people who regularly truncate their\n> > \"append-only\" tables out in the cold a bit. Perhaps they'd like\n> > index-only scans to kick in soon after they truncate without having to\n> > wait for 10 million tuples, or so.\n>\n> That point I don't see.\n> Truncating a table resets the counters to 0.\n\nThe scenario there is that if we don't have any\nautovacuum_vacuum_insert_scale_factor and we set the threshold to 10\nmillion tuples. The user truncates the table on a monthly basis and\nnearer to the end of the month the tuples accumulates around 100\nmillion tuples, roughly 3.2 million are inserted per day, so\nauto-vacuum kicks in for this table around once every 3 days. At the\nstart of the month, the table is truncated and it begins refilling.\nThe n_ins_since_vacuum is reset to 0 during the truncate. Meanwhile,\nthe table is being queried constantly and it takes 3 days for us to\nvacuum the table again. Queries hitting the table are unable to use\nIndex Only Scans for 3 days. The DBAs don't have a lot of control\nover this.\n\nI think we can help users with that by giving them a bit more control\nover when auto-vacuum will run for the table. scale_factor and\nthreshold.\n\n> Updated patch attached.\n\nGreat. I'll have a look.\n\n\n",
"msg_date": "Tue, 10 Mar 2020 09:56:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
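[Archive note] The monthly-truncation scenario above is simple arithmetic; this illustrative sketch just restates it (the numbers come from David's example, not from measurements):

```python
def days_until_insert_vacuum(insert_threshold, inserts_per_day):
    """Days an append-only table waits for its next insert-triggered vacuum,
    e.g. right after TRUNCATE has reset n_ins_since_vacuum to zero."""
    return insert_threshold / inserts_per_day

# David's example: ~3.2 million inserts/day against a flat 10-million-tuple
# threshold leaves index-only scans degraded for roughly three days after
# each monthly truncate.
print(days_until_insert_vacuum(10_000_000, 3_200_000))  # 3.125
```

A per-table scale factor or a lower reloption threshold would shrink that window, which is the control-knob argument being made here.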
{
"msg_contents": "On Tue, 10 Mar 2020 at 09:56, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 7 Mar 2020 at 03:45, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > Updated patch attached.\n>\n> Great. I'll have a look.\n\nI don't really have many complaints about the v4 patch. However,\nduring my pass of it, I did note down a few things that you might want\nto have a look at.\n\n1. Do we need to change documentation on freeze_min_age to mention\nthat it does not apply in all cases? I'm leaning towards not changing\nthis as `VACUUM FREEZE` is also an exception to this, which I don't\nsee mentioned.\n\n2. Perhaps the documentation in maintenance.sgml should mention that\nthe table will be vacuumed with the equivalent of having\nvacuum_freeze_min_age = 0, instead of:\n\n\"Such a vacuum will aggressively freeze tuples.\"\n\naggressive is the wrong word here. We call it an aggressive vacuum if\nwe disable page skipping, not for setting the vacuum_freeze_min_age to\n0.\n\nSee heap_vacuum_rel()\n\n/*\n* We request an aggressive scan if the table's frozen Xid is now older\n* than or equal to the requested Xid full-table scan limit; or if the\n* table's minimum MultiXactId is older than or equal to the requested\n* mxid full-table scan limit; or if DISABLE_PAGE_SKIPPING was specified.\n*/\naggressive = TransactionIdPrecedesOrEquals(onerel->rd_rel->relfrozenxid,\n xidFullScanLimit);\naggressive |= MultiXactIdPrecedesOrEquals(onerel->rd_rel->relminmxid,\n mxactFullScanLimit);\nif (params->options & VACOPT_DISABLE_PAGE_SKIPPING)\naggressive = true;\n\n3. The following DEBUG3 elog should be updated to include the new values:\n\nelog(DEBUG3, \"%s: vac: %.0f (threshold %.0f), anl: %.0f (threshold %.0f)\",\nNameStr(classForm->relname),\nvactuples, vacthresh, anltuples, anlthresh);\n\nSomeone might be confused at why auto-vacuum is running if you don't\nput those in.\n\n4. 
This would be nicer if you swapped the order of the operands to the\n< condition and replaced the operator with >. That'll match the way it\nis done above.\n\n/*\n* If the number of inserted tuples exceeds the threshold and no\n* vacuum is necessary for other reasons, run an \"insert-only\" vacuum\n* that freezes aggressively.\n*/\nif (!(*dovacuum) && vacinsthresh < tabentry->inserts_since_vacuum)\n{\n*dovacuum = true;\n*freeze_all = true;\n}\n\nIt would also be nicer if you assigned the value of\ntabentry->inserts_since_vacuum to a variable, so as to match what the\nother code there is doing. That'll also make the change for #3 neater.\n\n5. The following text:\n\n A threshold similar to the above is calculated from\n <xref linkend=\"guc-autovacuum-vacuum-insert-threshold\"/> and\n <xref linkend=\"guc-autovacuum-vacuum-insert-scale-factor\"/>.\n Tables that have received more inserts than the calculated threshold\n since they were last vacuumed (and are not eligible for vacuuming for\n other reasons) will be vacuumed to reduce the impact of a future\n anti-wraparound vacuum run.\n\nI think \"... will be vacuumed with the equivalent of having <xref\nlinkend=\"guc-vacuum-freeze-min-age\"/> set to <literal>0</literal>\".\nI'm not sure we need to mention the reduction of impact to\nanti-wraparound vacuums.\n\n6. Please run the regression tests and make sure they pass. The\n\"rules\" test is currently failing due to the new column in\n\"pg_stat_all_tables\"\n\nApart from the above, does anyone else have objections or concerns\nwith the patch? I'd like to take a serious look at pushing it once\nthe above points are resolved.\n\n\n",
"msg_date": "Tue, 10 Mar 2020 13:53:42 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-10 at 09:56 +1300, David Rowley wrote:\n> > > Lack of a scale_factor does leave people who regularly truncate their\n> > > \"append-only\" tables out in the cold a bit. Perhaps they'd like\n> > > index-only scans to kick in soon after they truncate without having to\n> > > wait for 10 million tuples, or so.\n> > \n> > That point I don't see.\n> > Truncating a table resets the counters to 0.\n> \n> The scenario there is that if we don't have any\n> autovacuum_vacuum_insert_scale_factor and we set the threshold to 10\n> million tuples. The user truncates the table on a monthly basis and\n> nearer to the end of the month the tuples accumulates around 100\n> million tuples, roughly 3.2 million are inserted per day, so\n> auto-vacuum kicks in for this table around once every 3 days. At the\n> start of the month, the table is truncated and it begins refilling.\n> The n_ins_since_vacuum is reset to 0 during the truncate. Meanwhile,\n> the table is being queried constantly and it takes 3 days for us to\n> vacuum the table again. Queries hitting the table are unable to use\n> Index Only Scans for 3 days. The DBAs don't have a lot of control\n> over this.\n> \n> I think we can help users with that by giving them a bit more control\n> over when auto-vacuum will run for the table. scale_factor and\n> threshold.\n\nOh, that's a good point.\nI only thought about anti-wraparound vacuum, but the feature might be useful\nfor index-only scans as well.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 10 Mar 2020 04:09:24 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "> +++ b/src/backend/utils/misc/postgresql.conf.sample\n> +#autovacuum_vacuum_insert_threshold = 10000000\t# min number of row inserts\n> +\t\t\t\t\t# before vacuum\n\nSimilar to a previous comment [0] about reloptions or GUC:\n\nCan we say \"threshold number of insertions before vacuum\" ?\n..or \"maximum number of insertions before triggering autovacuum\"\n\n-- \nJustin\n\n[0] https://www.postgresql.org/message-id/602873766faa0e9200a60dcc26dc10c636761d5d.camel%40cybertec.at\n\n\n",
"msg_date": "Tue, 10 Mar 2020 00:00:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 6 Mar 2020 at 23:46, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> Thanks, Justin, for the review.\n> I have applied the changes where still applicable.\n>\n> On Fri, 2020-03-06 at 10:52 +1300, David Rowley wrote:\n> > On Fri, 6 Mar 2020 at 03:27, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > On Thu, 2020-03-05 at 19:40 +1300, David Rowley wrote:\n> > > > 1. I'd go for 2 new GUCs and reloptions.\n> > > > autovacuum_vacuum_insert_threshold (you're currently calling this\n> > > > autovacuum_vacuum_insert_limit. I don't see why the word \"limit\" is\n> > > > relevant here). The other GUC I think should be named\n> > > > autovacuum_vacuum_insert_scale_factor and these should work exactly\n> > > > the same way as autovacuum_vacuum_threshold and\n> > > > autovacuum_vacuum_scale_factor, but be applied in a similar way to the\n> > > > vacuum settings, but only be applied after we've checked to ensure the\n> > > > table is not otherwise eligible to be vacuumed.\n> > >\n> > > I disagree about the scale_factor (and have not added it to the\n> > > updated version of the patch). If we have a scale_factor, then the\n> > > time between successive autovacuum runs would increase as the table\n> > > gets bigger, which defeats the purpose of reducing the impact of each\n> > > autovacuum run.\n> >\n> > My view here is not really to debate what logically makes the most\n> > sense. I don't really think for a minute that the current\n> > auto-vacuums scale_factor and thresholds are perfect for the job. It's\n> > true that the larger a table becomes, the less often it'll be\n> > vacuumed, but these are control knobs that people have become\n> > accustomed to and I don't really think that making an exception for\n> > this is warranted. Perhaps we can zero out the scale factor by\n> > default and set the threshold into the millions of tuples. 
We can have\n> > people chime in on what they think about that and why once the code is\n> > written and even perhaps committed.\n>\n> Ok, I submit. My main desire was to keep the number of new GUCs as\n> low as reasonably possible, but making the feature tunable along the\n> known and \"trusted\" lines may be a good thing.\n>\n> The new parameter is called \"autovacuum_vacuum_insert_scale_factor\".\n>\n> > Lack of a scale_factor does leave people who regularly truncate their\n> > \"append-only\" tables out in the cold a bit. Perhaps they'd like\n> > index-only scans to kick in soon after they truncate without having to\n> > wait for 10 million tuples, or so.\n>\n> That point I don't see.\n> Truncating a table resets the counters to 0.\n>\n> > > > 10. I'm slightly worried about the case where we don't quite trigger a\n> > > > normal vacuum but trigger a vacuum due to INSERTs then skip cleaning\n> > > > up the indexes but proceed to leave dead index entries causing indexes\n> > > > to become bloated. It does not seem impossible that given the right\n> > > > balance of INSERTs and UPDATE/DELETEs that this could happen every\n> > > > time and the indexes would just become larger and larger.\n> > >\n> > > Perhaps we can take care of the problem by *not* skipping index\n> > > cleanup if \"changes_since_analyze\" is substantially greater than 0.\n> > >\n> > > What do you think?\n> >\n> > Well, there is code that skips the index scans when there are 0 dead\n> > tuples found in the heap. If the table is truly INSERT-only then it\n> > won't do any harm since we'll skip the index scan anyway. I think\n> > it's less risky to clean the indexes. 
If we skip that then there will\n> > be a group of people will suffer from index bloat due to this, no\n> > matter if they realise it or not.\n\n+1\n\nFYI actually vacuum could perform index cleanup phase (i.g.\nPROGRESS_VACUUM_PHASE_INDEX_CLEANUP phase) on a table even if it's a\ntruly INSERT-only table, depending on\nvacuum_cleanup_index_scale_factor. Anyway, I also agree with not\ndisabling index cleanup in insert-only vacuum case, because it could\nbecome not only a cause of index bloat but also a big performance\nissue. For example, if autovacuum on a table always run without index\ncleanup, gin index on that table will accumulate insertion tuples in\nits pending list and will be cleaned up by a backend process while\ninserting new tuple, not by a autovacuum process. We can disable index\nvacuum by index_cleanup storage parameter per tables, so it would be\nbetter to defer these settings to users.\n\nI have one question about this patch from architectural perspective:\nhave you considered to use autovacuum_vacuum_threshold and\nautovacuum_vacuum_scale_factor also for this purpose? That is, we\ncompare the threshold computed by these values to not only the number\nof dead tuples but also the number of inserted tuples. If the number\nof dead tuples exceeds the threshold, we trigger autovacuum as usual.\nOn the other hand if the number of inserted tuples exceeds, we trigger\nautovacuum with vacuum_freeze_min_age = 0. I'm concerned that how user\nconsider the settings of newly added two parameters. We will have in\ntotal 4 parameters. Amit also was concerned about that[1].\n\nI think this idea also works fine. In insert-only table case, since\nonly the number of inserted tuples gets increased, only one threshold\n(that is, threshold computed by autovacuum_vacuum_threshold and\nautovacuum_vacuum_scale_factor) is enough to trigger autovacuum. 
And\nin the mostly-insert table case, we can already trigger\nautovacuum in current PostgreSQL, since we have some dead tuples.\nBut if we want to trigger autovacuum more frequently by the number of\nnewly inserted tuples, we can set that threshold lower while\nconsidering only the number of inserted tuples.\n\nAnd I briefly looked at this patch:\n\n@@ -2889,7 +2898,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,\n tab->at_params.truncate = VACOPT_TERNARY_DEFAULT;\n /* As of now, we don't support parallel vacuum for autovacuum */\n tab->at_params.nworkers = -1;\n- tab->at_params.freeze_min_age = freeze_min_age;\n+ tab->at_params.freeze_min_age = freeze_all ? 0 : freeze_min_age;\n tab->at_params.freeze_table_age = freeze_table_age;\n tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;\n tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;\n\nI think we can set multixact_freeze_min_age to 0 as well.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BrCxS_Pg4GdSa6G8ESOTHK%2BjDVgqYd_dnO07rGNaewKA%40mail.gmail.com\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\n\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 10 Mar 2020 18:14:36 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
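[Editor's note: the single-threshold scheme Sawada proposes in the message above can be sketched in a few lines. This is an illustrative sketch only, not PostgreSQL source: the function name is made up for this example, while the formula and the defaults of 50 and 0.2 follow the documented autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor GUCs.]

```python
# Illustrative sketch of reusing the existing autovacuum threshold for
# both dead and inserted tuples, as proposed above. Not PostgreSQL
# source; the function name is hypothetical.

def autovacuum_decision(reltuples, dead_tuples, inserted_tuples,
                        vac_threshold=50, vac_scale_factor=0.2):
    """Return (dovacuum, freeze_aggressively)."""
    # Same formula PostgreSQL documents for the dead-tuple trigger:
    # threshold = autovacuum_vacuum_threshold
    #             + autovacuum_vacuum_scale_factor * reltuples
    vacthresh = vac_threshold + vac_scale_factor * reltuples
    if dead_tuples > vacthresh:
        return True, False   # ordinary vacuum, as today
    if inserted_tuples > vacthresh:
        return True, True    # insert-driven vacuum with vacuum_freeze_min_age = 0
    return False, False
```

With these defaults, a one-million-row insert-only table would be vacuumed after roughly 200,050 inserts; the question debated downthread is whether one pair of knobs like this is enough, or whether insert-driven vacuums need their own threshold and scale factor.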
{
"msg_contents": "On Tue, 2020-03-10 at 13:53 +1300, David Rowley wrote:\n> 1. Do we need to change documentation on freeze_min_age to mention\n> that it does not apply in all cases? I'm leaning towards not changing\n> this as `VACUUM FREEZE` is also an exception to this, which I don't\n> see mentioned.\n\nI agree with that. Too little documentation is bad, but too much of\nit can also confuse and make it hard to find the needle in the haystack.\n\n> 2. Perhaps the documentation in maintenance.sgml should mention that\n> the table will be vacuumed with the equivalent of having\n> vacuum_freeze_min_age = 0, instead of:\n> \n> \"Such a vacuum will aggressively freeze tuples.\"\n> \n> aggressive is the wrong word here. We call it an aggressive vacuum if\n> we disable page skipping, not for setting the vacuum_freeze_min_age to\n> 0.\n\nAgreed, see below.\n\n> 3. The following DEBUG3 elog should be updated to include the new values:\n> \n> elog(DEBUG3, \"%s: vac: %.0f (threshold %.0f), anl: %.0f (threshold %.0f)\",\n> NameStr(classForm->relname),\n> vactuples, vacthresh, anltuples, anlthresh);\n\nDone.\n\n> Someone might be confused at why auto-vacuum is running if you don't\n> put those in.\n> \n> 4. This would be nicer if you swapped the order of the operands to the\n> < condition and replaced the operator with >. That'll match the way it\n> is done above.\n> \n> /*\n> * If the number of inserted tuples exceeds the threshold and no\n> * vacuum is necessary for other reasons, run an \"insert-only\" vacuum\n> * that freezes aggressively.\n> */\n> if (!(*dovacuum) && vacinsthresh < tabentry->inserts_since_vacuum)\n> {\n> *dovacuum = true;\n> *freeze_all = true;\n> }\n> \n> It would also be nicer if you assigned the value of\n> tabentry->inserts_since_vacuum to a variable, so as to match what the\n> other code there is doing. That'll also make the change for #3 neater.\n\nChanged that way.\n\n> 5. 
The following text:\n> \n> A threshold similar to the above is calculated from\n> <xref linkend=\"guc-autovacuum-vacuum-insert-threshold\"/> and\n> <xref linkend=\"guc-autovacuum-vacuum-insert-scale-factor\"/>.\n> Tables that have received more inserts than the calculated threshold\n> since they were last vacuumed (and are not eligible for vacuuming for\n> other reasons) will be vacuumed to reduce the impact of a future\n> anti-wraparound vacuum run.\n> \n> I think \"... will be vacuumed with the equivalent of having <xref\n> linkend=\"guc-vacuum-freeze-min-age\"/> set to <literal>0</literal>\".\n> I'm not sure we need to mention the reduction of impact to\n> anti-wraparound vacuums.\n\nDone like that.\n\nI left in the explanation of the purpose of this setting.\nUnderstanding the purpose of the GUCs will make it easier to tune them\ncorrectly.\n\n> 6. Please run the regression tests and make sure they pass. The\n> \"rules\" test is currently failing due to the new column in\n> \"pg_stat_all_tables\"\n\nOops, sorry. I ran pgindent, but forgot to re-run the regression tests.\n\nDone.\n\n\nAttached is V5, which also fixes the bug discovered by Masahiko Sawada.\nHe made an interesting suggestion which we should consider before committing.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 10 Mar 2020 20:07:03 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-10 at 00:00 -0500, Justin Pryzby wrote:\n> > +++ b/src/backend/utils/misc/postgresql.conf.sample\n> > +#autovacuum_vacuum_insert_threshold = 10000000 # min number of row inserts\n> > + # before vacuum\n> \n> Similar to a previous comment [0] about reloptions or GUC:\n> \n> Can we say \"threshold number of insertions before vacuum\" ?\n> ..or \"maximum number of insertions before triggering autovacuum\"\n\nHmm. I copied the wording from \"autovacuum_vacuum_threshold\".\n\nSince the parameters have similar semantics, a different wording\nwould be confusing.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 10 Mar 2020 20:08:39 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-10 at 18:14 +0900, Masahiko Sawada wrote:\n\nThanks for the review and your thoughts!\n\n> FYI actually vacuum could perform index cleanup phase (i.g.\n> PROGRESS_VACUUM_PHASE_INDEX_CLEANUP phase) on a table even if it's a\n> truly INSERT-only table, depending on\n> vacuum_cleanup_index_scale_factor. Anyway, I also agree with not\n> disabling index cleanup in insert-only vacuum case, because it could\n> become not only a cause of index bloat but also a big performance\n> issue. For example, if autovacuum on a table always run without index\n> cleanup, gin index on that table will accumulate insertion tuples in\n> its pending list and will be cleaned up by a backend process while\n> inserting new tuple, not by a autovacuum process. We can disable index\n> vacuum by index_cleanup storage parameter per tables, so it would be\n> better to defer these settings to users.\n\nThanks for the confirmation.\n\n> I have one question about this patch from architectural perspective:\n> have you considered to use autovacuum_vacuum_threshold and\n> autovacuum_vacuum_scale_factor also for this purpose? That is, we\n> compare the threshold computed by these values to not only the number\n> of dead tuples but also the number of inserted tuples. If the number\n> of dead tuples exceeds the threshold, we trigger autovacuum as usual.\n> On the other hand if the number of inserted tuples exceeds, we trigger\n> autovacuum with vacuum_freeze_min_age = 0. I'm concerned that how user\n> consider the settings of newly added two parameters. We will have in\n> total 4 parameters. Amit also was concerned about that[1].\n> \n> I think this idea also works fine. In insert-only table case, since\n> only the number of inserted tuples gets increased, only one threshold\n> (that is, threshold computed by autovacuum_vacuum_threshold and\n> autovacuum_vacuum_scale_factor) is enough to trigger autovacuum. 
And\n> in mostly-insert table case, in the first place, we can trigger\n> autovacuum even in current PostgreSQL, since we have some dead tuples.\n> But if we want to trigger autovacuum more frequently by the number of\n> newly inserted tuples, we can set that threshold lower while\n> considering only the number of inserted tuples.\n\nI am torn.\n\nOn the one hand it would be wonderful not to have to add yet more GUCs\nto the already complicated autovacuum configuration. It already confuses\ntoo many users.\n\nOn the other hand that will lead to unnecessary vacuums for small\ntables.\nWorse, the progression caused by the comparatively large scale\nfactor may make it vacuum large tables too seldom.\n\nI'd be grateful if somebody knowledgeable could throw his or her opinion\ninto the scales.\n\n> And I briefly looked at this patch:\n> \n> @@ -2889,7 +2898,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,\n> tab->at_params.truncate = VACOPT_TERNARY_DEFAULT;\n> /* As of now, we don't support parallel vacuum for autovacuum */\n> tab->at_params.nworkers = -1;\n> - tab->at_params.freeze_min_age = freeze_min_age;\n> + tab->at_params.freeze_min_age = freeze_all ? 0 : freeze_min_age;\n> tab->at_params.freeze_table_age = freeze_table_age;\n> tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;\n> tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;\n> \n> I think we can set multixact_freeze_min_age to 0 as well.\n\nUgh, yes, that is a clear oversight.\nI have fixed it in the latest version.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 10 Mar 2020 20:17:54 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 08:17, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2020-03-10 at 18:14 +0900, Masahiko Sawada wrote:\n> > I have one question about this patch from architectural perspective:\n> > have you considered to use autovacuum_vacuum_threshold and\n> > autovacuum_vacuum_scale_factor also for this purpose? That is, we\n> > compare the threshold computed by these values to not only the number\n> > of dead tuples but also the number of inserted tuples. If the number\n> > of dead tuples exceeds the threshold, we trigger autovacuum as usual.\n> > On the other hand if the number of inserted tuples exceeds, we trigger\n> > autovacuum with vacuum_freeze_min_age = 0. I'm concerned that how user\n> > consider the settings of newly added two parameters. We will have in\n> > total 4 parameters. Amit also was concerned about that[1].\n> >\n> > I think this idea also works fine. In insert-only table case, since\n> > only the number of inserted tuples gets increased, only one threshold\n> > (that is, threshold computed by autovacuum_vacuum_threshold and\n> > autovacuum_vacuum_scale_factor) is enough to trigger autovacuum. And\n> > in mostly-insert table case, in the first place, we can trigger\n> > autovacuum even in current PostgreSQL, since we have some dead tuples.\n> > But if we want to trigger autovacuum more frequently by the number of\n> > newly inserted tuples, we can set that threshold lower while\n> > considering only the number of inserted tuples.\n>\n> I am torn.\n>\n> On the one hand it would be wonderful not to have to add yet more GUCs\n> to the already complicated autovacuum configuration. 
It already confuses\n> too many users.\n>\n> On the other hand that will lead to unnecessary vacuums for small\n> tables.\n> Worse, the progression caused by the comparatively large scale\n> factor may make it vacuum large tables too seldom.\n\nI think we really need to discuss what the default values for these\nINSERT-only vacuums should be before we can decide if we need two\nfurther GUCs to control the feature. Right now the default is 0.0 on\nthe scale factor and a threshold of 10 million tuples. I'm not saying\nthose are good or bad values, but if they are good, then they're\npretty different from the normal threshold of 50 and the normal scale\nfactor of 0.2, so (assuming the delete/update thresholds are\nalso good) we need the additional GUCs.\n\nIf someone wants to put forward a case for making the defaults more\nsimilar, then perhaps we can consider merging the options. One case\nmight be the fact that we want INSERT-only tables to benefit from\nIndex Only Scans more often than after 10 million inserts.\n\nAs for pros and cons, feel free to add to the following list:\n\nFor new GUCs/reloptions:\n1. Gives users more control over this new auto-vacuum behaviour\n2. The new feature can be completely disabled. This might be very\nuseful for people who suffer from auto-vacuum starvation.\n\nAgainst new GUCs/reloptions:\n1. Adds more code, documentation and maintenance.\n2. Adds more complexity to auto-vacuum configuration.\n\nAs for my opinion, I'm leaning towards keeping the additional options.\nI think if we were just adding auto-vacuum to core code now, then I'd\nbe voting to keep the configuration as simple as possible. However,\nthat's far from the case, and we do have over a decade of people that\nhave gotten used to how auto-vacuum currently behaves. 
Many people are\nunlikely to even notice the change, but some will, and then there will\nbe another group of people who want to turn it off, and that group\nmight be upset when we tell them that they can't, at least not without\nflipping the big red \"autovacuum\" switch into the off position (of\nwhich, I'm pretty hesitant to recommend that anyone ever does).\n\n\n",
"msg_date": "Wed, 11 Mar 2020 10:32:47 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 04:17, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2020-03-10 at 18:14 +0900, Masahiko Sawada wrote:\n>\n> Thanks for the review and your thoughts!\n>\n> > FYI actually vacuum could perform index cleanup phase (i.g.\n> > PROGRESS_VACUUM_PHASE_INDEX_CLEANUP phase) on a table even if it's a\n> > truly INSERT-only table, depending on\n> > vacuum_cleanup_index_scale_factor. Anyway, I also agree with not\n> > disabling index cleanup in insert-only vacuum case, because it could\n> > become not only a cause of index bloat but also a big performance\n> > issue. For example, if autovacuum on a table always run without index\n> > cleanup, gin index on that table will accumulate insertion tuples in\n> > its pending list and will be cleaned up by a backend process while\n> > inserting new tuple, not by a autovacuum process. We can disable index\n> > vacuum by index_cleanup storage parameter per tables, so it would be\n> > better to defer these settings to users.\n>\n> Thanks for the confirmation.\n>\n> > I have one question about this patch from architectural perspective:\n> > have you considered to use autovacuum_vacuum_threshold and\n> > autovacuum_vacuum_scale_factor also for this purpose? That is, we\n> > compare the threshold computed by these values to not only the number\n> > of dead tuples but also the number of inserted tuples. If the number\n> > of dead tuples exceeds the threshold, we trigger autovacuum as usual.\n> > On the other hand if the number of inserted tuples exceeds, we trigger\n> > autovacuum with vacuum_freeze_min_age = 0. I'm concerned that how user\n> > consider the settings of newly added two parameters. We will have in\n> > total 4 parameters. Amit also was concerned about that[1].\n> >\n> > I think this idea also works fine. 
In insert-only table case, since\n> > only the number of inserted tuples gets increased, only one threshold\n> > (that is, threshold computed by autovacuum_vacuum_threshold and\n> > autovacuum_vacuum_scale_factor) is enough to trigger autovacuum. And\n> > in mostly-insert table case, in the first place, we can trigger\n> > autovacuum even in current PostgreSQL, since we have some dead tuples.\n> > But if we want to trigger autovacuum more frequently by the number of\n> > newly inserted tuples, we can set that threshold lower while\n> > considering only the number of inserted tuples.\n>\n> I am torn.\n>\n> On the one hand it would be wonderful not to have to add yet more GUCs\n> to the already complicated autovacuum configuration. It already confuses\n> too many users.\n>\n> On the other hand that will lead to unnecessary vacuums for small\n> tables.\n> Worse, the progression caused by the comparatively large scale\n> factor may make it vacuum large tables too seldom.\n>\n\nI might be missing your point, but could you elaborate on that: in what\nkind of case do you think this leads to unnecessary vacuums?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Mar 2020 12:00:41 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, 2020-03-11 at 12:00 +0900, Masahiko Sawada wrote:\n> > > I have one question about this patch from architectural perspective:\n> > > have you considered to use autovacuum_vacuum_threshold and\n> > > autovacuum_vacuum_scale_factor also for this purpose?\n> >\n> > I am torn.\n> > \n> > On the one hand it would be wonderful not to have to add yet more GUCs\n> > to the already complicated autovacuum configuration. It already confuses\n> > too many users.\n> > \n> > On the other hand that will lead to unnecessary vacuums for small\n> > tables.\n> > Worse, the progression caused by the comparatively large scale\n> > factor may make it vacuum large tables too seldom.\n> \n> I might be missing your point but could you elaborate on that in what\n> kind of case you think this lead to unnecessary vacuums?\n\nIf you have an insert-only table that has 100000 entries, it will get\nvacuumed roughly every 20000 new entries. The impact is probably too\nlittle to care, but it will increase the contention for the three\nautovacuum workers available by default.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 11 Mar 2020 05:24:28 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
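[Editor's note: the "roughly every 20000 new entries" figure in the message above follows directly from the autovacuum trigger formula. A back-of-envelope check, assuming the existing defaults autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor = 0.2 were reused for inserts; the function name is just for illustration.]

```python
# Back-of-envelope check of "vacuumed roughly every 20000 new entries"
# for a 100000-row insert-only table under the existing defaults.

def inserts_until_vacuum(reltuples, threshold=50, scale_factor=0.2):
    # Documented autovacuum trigger formula, applied to inserted tuples.
    return threshold + scale_factor * reltuples

print(inserts_until_vacuum(100_000))  # 20050.0, i.e. roughly every 20000 inserts
```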
{
"msg_contents": "On Wed, 11 Mar 2020 at 13:24, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Wed, 2020-03-11 at 12:00 +0900, Masahiko Sawada wrote:\n> > > > I have one question about this patch from architectural perspective:\n> > > > have you considered to use autovacuum_vacuum_threshold and\n> > > > autovacuum_vacuum_scale_factor also for this purpose?\n> > >\n> > > I am torn.\n> > >\n> > > On the one hand it would be wonderful not to have to add yet more GUCs\n> > > to the already complicated autovacuum configuration. It already confuses\n> > > too many users.\n> > >\n> > > On the other hand that will lead to unnecessary vacuums for small\n> > > tables.\n> > > Worse, the progression caused by the comparatively large scale\n> > > factor may make it vacuum large tables too seldom.\n> >\n> > I might be missing your point but could you elaborate on that in what\n> > kind of case you think this lead to unnecessary vacuums?\n>\n> If you have an insert-only table that has 100000 entries, it will get\n> vacuumed roughly every 20000 new entries. The impact is probably too\n> little to care, but it will increase the contention for the three\n> autovacuum workers available by default.\n\nThe same is true for a read-write table, right? If that becomes a\nproblem, it's a misconfiguration, and the user should increase these\nvalues just like we do for read-write tables.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 11 Mar 2020 14:59:32 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 17:24, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Wed, 2020-03-11 at 12:00 +0900, Masahiko Sawada wrote:\n> > I might be missing your point but could you elaborate on that in what\n> > kind of case you think this lead to unnecessary vacuums?\n>\n> If you have an insert-only table that has 100000 entries, it will get\n> vacuumed roughly every 20000 new entries. The impact is probably too\n> little to care, but it will increase the contention for the three\n> autovacuum workers available by default.\n\nI guess that depends on your definition of unnecessary. If you want\nIndex Only Scans, then those settings don't seem unreasonable. If you\nwant it just to reduce the chances or impact of an anti-wraparound\nvacuum then likely it's a bit too often.\n\nI understand this patch was born due to the anti-wraparound case, but\nshould we really just ignore the Index Only Scan case?\n\n\n",
"msg_date": "Thu, 12 Mar 2020 17:07:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, 11 Mar 2020 at 19:00, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Wed, 11 Mar 2020 at 13:24, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > If you have an insert-only table that has 100000 entries, it will get\n> > vacuumed roughly every 20000 new entries. The impact is probably too\n> > little to care, but it will increase the contention for the three\n> > autovacuum workers available by default.\n>\n> The same is true for read-write table, right? If that becomes a\n> problem, it's a mis-configuration and user should increase these\n> values just like when we set these values for read-write tables.\n\nIt is true that if vacuum has more to do than it can do, then\nsomething is not configured correctly.\n\nI imagine Laurenz set the scale factor to 0.0 and the threshold to 10\nmillion to reduce the chances that someone will encounter that\nproblem. I mentioned somewhere upthread that I commonly used to see\nproduction servers running with the standard vacuum_cost_limit of 200\nand the (pre-PG12) autovacuum_vacuum_cost_delay of 20. Generally, it\ndidn't go well for them. autovacuum_vacuum_cost_delay is now 2 by\ndefault, so auto-vacuum in PG12 and beyond runs 10x faster, but it's\nstill pretty conservative and it'll still need another bump in several\nyears when hardware is faster than it is today. So, by no means did\nthat 10x increase mean that nobody will suffer from auto-vacuum\nstarvation ever again.\n\nNow, perhaps it remains to be seen if adding additional work onto\nauto-vacuum will help or hinder those people. If their auto-vacuum\ncan just keep up until the cluster is old enough to need\nanti-wraparound vacuums and then falls massively behind, then perhaps\nthis is a good thing as they might notice at some point before their\nserver explodes in the middle of the night. By that time they might\nhave become complacent. 
Additionally, I think this is pretty well\naligned to the case mentioned in the subject line of this email. We\nnow have a freeze map, so performing vacuums to freeze tuples twice as\noften is not really much more expensive in total than doing that\nvacuuming half as often. Even tables (e.g log tables) that are never\nqueried won't become much more costly to maintain. In the meantime,\nfor tables that do receive queries, then we're more likely to get an\nindex-only scan.\n\nPerhaps a good way to decide what the scale_factor should be set to\nshould depend on the run-time of an Index Only Scan, vs an Index Scan.\n\ncreate table ios (a int, b text);\ninsert into ios select x,x::text from generate_series(1,1000000)x;\ncreate index on ios (a);\nvacuum analyze ios;\n\nexplain (analyze, buffers) select a from ios order by a; -- on 2nd exec\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using ios_a_idx on ios (cost=0.42..25980.42\nrows=1000000 width=4) (actual time=0.035..212.602 rows=1000000\nloops=1)\n Heap Fetches: 0\n Buffers: shared hit=2736\n Planning Time: 0.095 ms\n Execution Time: 246.864 ms\n(5 rows)\n\nset enable_indexonlyscan=0;\nexplain (analyze, buffers) select a from ios order by a;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ios_a_idx on ios (cost=0.42..31388.42 rows=1000000\nwidth=4) (actual time=0.036..451.381 rows=1000000 loops=1)\n Buffers: shared hit=8140\n Planning Time: 0.089 ms\n Execution Time: 486.582 ms\n(4 rows)\n\nSo about twice as fast with the IOS. When it's going to be beneficial\nto perform the vacuum will depend on the reads to insert ratio. I'm\nstarting to think that we should set the scale_factor to something\nlike 0.3 and the threshold to 50. Is anyone strongly against that? 
Or\nLaurenz, are you really set on the 10 million threshold?\n\n\n",
"msg_date": "Thu, 12 Mar 2020 17:47:57 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 2020-03-12 at 17:47 +1300, David Rowley wrote:\n> I'm starting to think that we should set the scale_factor to something\n> like 0.3 and the threshold to 50. Is anyone strongly against that? Or\n> Laurenz, are you really set on the 10 million threshold?\n\nThese values are almost the same as \"autovacuum_vacuum_scale_factor\"\nand \"autovacuum_vacuum_threshold\", so you actually agree with Masahiko\nwith the exception that you want it tunable separately.\n\nI don't like the high scale factor.\n\nIf your insert-only table was last vacuumed when it had 500 million rows,\nthe next autovacuum will freeze 150 million tuples, which is a lot.\nThe impact will be less than that of an anti-wraparound vacuum because\nit is not as persistent, but if our 150 million tuple autovacuum backs\ndown because it hits a lock or gets killed by the DBA, that is also not\ngood, since it will just come again.\nAnd the bigger the vacuum run is, the more likely it is to meet an obstacle.\n\nSo I think that large insert-only tables should be vacuumed more often\nthan that. If the number of tuples that have to be frozen is small,\nthe vacuum run will be short and is less likely to cause problems.\nThat is why I chose a scale factor of 0 here.\n\n\nBut I totally see your point about index-only scans.\n\nI think the problem is that this insert-only autovacuum serves two masters:\n1. preventing massive anti-wraparound vacuum that severely impacts the system\n2. maintaining the visibility map for index-only scans\n\nI thought of the first case when I chose the parameter values.\n\nI am afraid that we cannot come up with one setting that fits all, so I\nadvocate a setting that targets the first problem, which I think is more\nimportant (and was the motivation for this thread).\n\nI could add a paragraph to the documentation that tells people how to\nconfigure the parameters if they want to use it to get index-only scans.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 12 Mar 2020 06:38:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 12 Mar 2020 at 18:38, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Thu, 2020-03-12 at 17:47 +1300, David Rowley wrote:\n> > Laurenz, are you really set on the 10 million threshold?\n>\n> These values are almost the same as \"autovacuum_vacuum_scale_factor\"\n> and \"autovacuum_vacuum_threshold\", so you actually agree with Masahiko\n> with the exception that you want it tunable separately.\n>\n> I don't like the high scale factor.\n>\n> If your insert-only table was last vacuumed when it had 500 million rows,\n> the next autovacuum will freeze 150 million tuples, which is a lot.\n> The impact will be less than that of an anti-wraparound vacuum because\n> it is not as persistent, but if our 150 million tuple autovacuum backs\n> down because it hits a lock or gets killed by the DBA, that is also not\n> good, since it will just come again.\n> And the bigger the vacuum run is, the more likely it is to meet an obstacle.\n>\n> So I think that large insert-only tables should be vacuumed more often\n> than that. If the number of tuples that have to be frozen is small,\n> the vacuum run will be short and is less likely to cause problems.\n> That is why I chose a scale factor of 0 here.\n\nThat's a good point. If those 150 million inserts were done one per\ntransaction, then it wouldn't take many more tuples before wraparound\nvacuums occur more often than insert vacuums. The only way I see\naround that is to a) configure it the way you'd like, or b) add yet\nanother GUC and reloption to represent how close to\nautovacuum_freeze_max_age / autovacuum_multixact_freeze_max_age the\ntable is. I'm not very excited about adding yet another GUC, plus\nanti-wraparound vacuums already occur 10 times more often than they\nneed to. 
If we added such a GUC and set it to, say, 0.1, then they'd\nhappen 100 times more often than needed before actual wraparound\noccurs.\n\nI'm starting to see now why you were opposed to the scale_factor in\nthe first place.\n\nI really think this is a problem with the design of the\nthreshold and scale_factor system. I used to commonly see people with\nlarger tables zeroing out the scale_factor and setting a reasonable\nthreshold or dropping the scale_factor down to some fraction of a\npercent. I don't really have any better design in mind though, at\nleast not one that does not require adding new vacuum options.\n\n\n",
"msg_date": "Thu, 12 Mar 2020 19:14:12 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 12 Mar 2020 at 14:38, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Thu, 2020-03-12 at 17:47 +1300, David Rowley wrote:\n> > I'm starting to think that we should set the scale_factor to something\n> > like 0.3 and the threshold to 50. Is anyone strongly against that? Or\n> > Laurenz, are you really set on the 10 million threshold?\n>\n> These values are almost the same as \"autovacuum_vacuum_scale_factor\"\n> and \"autovacuum_vacuum_threshold\", so you actually agree with Masahiko\n> with the exception that you want it tunable separately.\n>\n> I don't like the high scale factor.\n>\n> If your insert-only table was last vacuumed when it had 500 million rows,\n> the next autovacuum will freeze 150 million tuples, which is a lot.\n> The impact will be less than that of an anti-wraparound vacuum because\n> it is not as persistent, but if our 150 million tuple autovacuum backs\n> down because it hits a lock or gets killed by the DBA, that is also not\n> good, since it will just come again.\n> And the bigger the vacuum run is, the more likely it is to meet an obstacle.\n>\n> So I think that large insert-only tables should be vacuumed more often\n> than that. If the number of tuples that have to be frozen is small,\n> the vacuum run will be short and is less likely to cause problems.\n> That is why I chose a scale factor of 0 here.\n\nThe reason why you want to add new GUC parameters is to use different\ndefault values for insert-update table case and insert-only table\ncase? I think I understand the pros and cons of adding separate\nparameters, but I still cannot understand use cases where we cannot\nhandle without separate parameters.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Mar 2020 15:49:26 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 12 Mar 2020 at 19:50, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n> The reason why you want to add new GUC parameters is to use different\n> default values for insert-update table case and insert-only table\n> case?\n\nYes, but in particular so it can be completely disabled easily.\n\n> I think I understand the pros and cons of adding separate\n> parameters, but I still cannot understand use cases where we cannot\n> handle without separate parameters.\n\nThat's a lot of negatives. I think I understand that you don't feel\nthat additional GUCs are worth it?\n\nLaurenz highlighted a seemingly very valid reason that the current\nGUCs cannot be reused. Namely, say the table has 1 billion rows, if we\nuse the current scale factor of 0.2, then we'll run an insert-only\nvacuum every 200 million rows. If those INSERTs are one per\ntransaction then the new feature does nothing as the wraparound vacuum\nwill run instead. Since this feature was born due to large insert-only\ntables, this concern seems very valid to me.\n\n\n",
"msg_date": "Thu, 12 Mar 2020 20:28:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
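The arithmetic behind David's 1-billion-row example can be sketched as follows. This is an illustration only, not PostgreSQL source code; the trigger formula `threshold + scale_factor * reltuples` mirrors how the existing autovacuum thresholds are documented, and the function name is invented for the sketch:

```python
# Illustrative sketch (not PostgreSQL source): why reusing the existing
# scale factor defeats the feature on a large insert-only table.
# The documented autovacuum trigger is:
#     vacuum threshold = autovacuum_vacuum_threshold
#                      + autovacuum_vacuum_scale_factor * reltuples

def insert_vacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Inserts needed before an insert-triggered vacuum would fire."""
    return threshold + scale_factor * reltuples

AUTOVACUUM_FREEZE_MAX_AGE = 200_000_000  # default anti-wraparound limit (XIDs)

reltuples = 1_000_000_000
needed = insert_vacuum_trigger(reltuples)
print(f"{needed:,.0f}")  # 200,000,050 inserts before the insert vacuum fires

# With one INSERT per transaction, the anti-wraparound vacuum at
# 200 million XIDs fires first, so the insert trigger adds nothing.
print(needed >= AUTOVACUUM_FREEZE_MAX_AGE)  # True
```

With one row per transaction, XIDs consumed and rows inserted advance in lockstep, which is exactly the case where the reused scale factor buys nothing.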
{
"msg_contents": "On Thu, 12 Mar 2020 at 16:28, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 12 Mar 2020 at 19:50, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> > The reason why you want to add new GUC parameters is to use different\n> > default values for insert-update table case and insert-only table\n> > case?\n>\n> Yes, but in particular so it can be completely disabled easily.\n>\n> > I think I understand the pros and cons of adding separate\n> > parameters, but I still cannot understand use cases where we cannot\n> > handle without separate parameters.\n>\n> That's a lot of negatives. I think I understand that you don't feel\n> that additional GUCs are worth it?\n>\n> Laurenz highlighted a seemingly very valid reason that the current\n> GUCs cannot be reused. Namely, say the table has 1 billion rows, if we\n> use the current scale factor of 0.2, then we'll run an insert-only\n> vacuum every 200 million rows. If those INSERTs are one per\n> transaction then the new feature does nothing as the wraparound vacuum\n> will run instead. Since this feature was born due to large insert-only\n> tables, this concern seems very valid to me.\n\nYeah, I understand and agree that since most people would use default\nvalues we can reduce mis-configuration cases by adding separate GUCs\nthat have appropriate default values for that purpose but on the other\nhand I'm not sure it's worth that we cover the large insert-only table\ncase by adding separate GUCs in spite of being able to cover it even\nby existing two GUCs. If we want to disable this feature on the\nparticular table, we can have a storage parameter that means not to\nconsider the number of inserted tuples rather than having multiple\nGUCs that allows us to fine tuning. And IIUC even in the above case, I\nthink that if we trigger insert-only vacuum by comparing the number of\ninserted tuples to the threshold computed by existing threshold and\nscale factor, we can cover it. 
But since you and Laurenz already\nagreed to adding two GUCs I'm not going to insist on that.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 12 Mar 2020 21:43:17 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 13 Mar 2020 at 01:43, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Thu, 12 Mar 2020 at 16:28, David Rowley <dgrowleyml@gmail.com> wrote:\n> > Laurenz highlighted a seemingly very valid reason that the current\n> > GUCs cannot be reused. Namely, say the table has 1 billion rows, if we\n> > use the current scale factor of 0.2, then we'll run an insert-only\n> > vacuum every 200 million rows. If those INSERTs are one per\n> > transaction then the new feature does nothing as the wraparound vacuum\n> > will run instead. Since this feature was born due to large insert-only\n> > tables, this concern seems very valid to me.\n>\n> Yeah, I understand and agree that since most people would use default\n> values we can reduce mis-configuration cases by adding separate GUCs\n> that have appropriate default values for that purpose but on the other\n> hand I'm not sure it's worth that we cover the large insert-only table\n> case by adding separate GUCs in spite of being able to cover it even\n> by existing two GUCs.\n\nIn light of the case above, do you have an alternative suggestion?\n\n> If we want to disable this feature on the\n> particular table, we can have a storage parameter that means not to\n> consider the number of inserted tuples rather than having multiple\n> GUCs that allows us to fine tuning. And IIUC even in the above case, I\n> think that if we trigger insert-only vacuum by comparing the number of\n> inserted tuples to the threshold computed by existing threshold and\n> scale factor, we can cover it.\n\nSo you're suggesting we drive the insert-vacuums from existing\nscale_factor and threshold? What about the 1 billion row table\nexample above?\n\n\n",
"msg_date": "Fri, 13 Mar 2020 09:10:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 2020-03-13 at 09:10 +1300, David Rowley wrote:\n> So you're suggesting we drive the insert-vacuums from existing\n> scale_factor and threshold? What about the 1 billion row table\n> example above?\n\nI am still not 100% certain if that is really realistic.\nTransactions that insert only a single row are probably the\nexception in large insert-only tables.\n\nBut I think that we probably always can find a case where any given\nparameter setting is not so great, so in order to get ahead\nlet's decide on something that is not right out stupid.\nChanging the defaults later is always an option.\n\nSo the three options are:\n\n1. introduce no new parameters and trigger autovacuum if the number\n of inserts exceeds the regular vacuum threshold.\n\n2. introduce the new parameters with high base threshold and zero scale factor.\n\n3. introduce the new parameters with low base threshold and high scale factor.\n\nI think all three are viable.\nIf nobody else wants to weigh in, throw a coin.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 13 Mar 2020 01:19:33 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
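For a rough feel of how Laurenz's three options behave, here is a small sketch. The concrete parameter values are assumptions taken from numbers floated earlier in the thread (10 million / 0.0 for Laurenz's patch, 50 / 0.3 for David's earlier suggestion), not committed defaults:

```python
# Sketch: inserts required to trigger an insert vacuum under each of the
# three proposed options, for tables of different sizes. Parameter values
# come from the thread discussion, not from any committed default.

def trigger_point(reltuples, threshold, scale_factor):
    return threshold + scale_factor * reltuples

options = {
    "1: reuse existing GUCs (50, 0.2)":    (50, 0.2),
    "2: high base, zero scale (10M, 0.0)": (10_000_000, 0.0),
    "3: low base, high scale (50, 0.3)":   (50, 0.3),
}

for reltuples in (1_000_000, 100_000_000, 1_000_000_000):
    print(f"table of {reltuples:,} rows:")
    for name, (thresh, scale) in options.items():
        print(f"  {name}: vacuum after "
              f"{trigger_point(reltuples, thresh, scale):,.0f} inserts")
```

Option 2 triggers after a constant 10 million inserts regardless of table size, which is why it vacuums huge insert-only tables much sooner than the scale-factor-based options.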
{
"msg_contents": "On Fri, Mar 13, 2020 at 3:19 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Fri, 2020-03-13 at 09:10 +1300, David Rowley wrote:\n> > So you're suggesting we drive the insert-vacuums from existing\n> > scale_factor and threshold? What about the 1 billion row table\n> > example above?\n>\n> I am still not 100% certain if that is really realistic.\n> Transactions that insert only a single row are probably the\n> exception in large insert-only tables.\n>\n> But I think that we probably always can find a case where any given\n> parameter setting is not so great, so in order to get ahead\n> let's decide on something that is not right out stupid.\n> Changing the defaults later is always an option.\n>\n> So the three options are:\n>\n> 1. introduce no new parameters and trigger autovacuum if the number\n> of inserts exceeds the regular vacuum threshold.\n>\n> 2. introduce the new parameters with high base threshold and zero scale factor.\n\nBoth of these look good to me. 1 is approach in my initial patch\nsketch, 2 is approach taken by Laurenz.\nValues I think in when considering vacuum is \"how many megabytes of\ntable aren't frozen/visible\" (since that's what translates into\nprocessing time knowing io limits of storage), and \"how many pages\naren't yet vacuumed\".\n\nThreshold in Laurenz's patch was good enough for my taste - it's\nbasically \"vacuum after every gigabyte\", and that's exactly what we\nimplemented when working around this issue manually. There's enough\nchance that latest gigabyte is in RAM and vacuum will be super fast on\nit; reading a gigabyte of data is not a showstopper for most\ncontemporary physical and cloud environments I can think of. If\nreading a gigabyte is a problem already then wraparound is a\nguaranteed disaster.\n\nAbout index only scan, this threshold seems good enough too. There's a\ngood chance last gig is already in RAM, and previous data was\nprocessed with previous vacuum. 
Anyway - with this patch Index Only\nScan starts actually working :)\n\nI'd vote for 2 with a note \"rip it off all together later and redesign\nscale factors and thresholds system to something more easily\ngraspable\". Whoever needs to cancel the new behavior for some reason\nwill have a knob then, and patch is laid out already.\n\n> 3. introduce the new parameters with low base threshold and high scale factor.\n\nThis looks bad to me. \"the bigger the table, the longer we wait\" does\nnot look good for me for something designed as a measure preventing\nissues with big tables.\n\n> I think all three are viable.\n> If nobody else wants to weigh in, throw a coin.\n>\n> Yours,\n> Laurenz Albe\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\n\n",
"msg_date": "Fri, 13 Mar 2020 12:05:58 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Mar 11, 2020 at 10:32:47AM +1300, David Rowley wrote:\n> 2. The new feature can be completely disabled. This might be very\n> useful for people who suffer from auto-vacuum starvation.\n\nOn Thu, Mar 12, 2020 at 08:28:05PM +1300, David Rowley wrote:\n> Yes, but in particular so it can be completely disabled easily.\n\nHow is it disabled ? By setting scale_factor=100 ?\n\n+ { \n+ \"autovacuum_vacuum_insert_scale_factor\", \n+ \"Number of tuple inserts prior to vacuum as a fraction of reltuples\", \n+ RELOPT_KIND_HEAP | RELOPT_KIND_TOAST, \n+ ShareUpdateExclusiveLock \n+ }, \n+ -1, 0.0, 100.0 \n\nNote, vacuum_cleanup_index_scale_factor uses max: 1e10\nSee 4d54543efa5eb074ead4d0fadb2af4161c943044\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 13 Mar 2020 07:00:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 2020-03-13 at 12:05 +0300, Darafei \"Komяpa\" Praliaskouski wrote:\n> 1. introduce no new parameters and trigger autovacuum if the number\n> > of inserts exceeds the regular vacuum threshold.\n> > \n> > 2. introduce the new parameters with high base threshold and zero scale factor.\n> \n> Both of these look good to me. 1 is approach in my initial patch\n> sketch, 2 is approach taken by Laurenz.\n> Values I think in when considering vacuum is \"how many megabytes of\n> table aren't frozen/visible\" (since that's what translates into\n> processing time knowing io limits of storage), and \"how many pages\n> aren't yet vacuumed\".\n> \n> Threshold in Laurenz's patch was good enough for my taste - it's\n> basically \"vacuum after every gigabyte\", and that's exactly what we\n> implemented when working around this issue manually. There's enough\n> chance that latest gigabyte is in RAM and vacuum will be super fast on\n> it; reading a gigabyte of data is not a showstopper for most\n> contemporary physical and cloud environments I can think of. If\n> reading a gigabyte is a problem already then wraparound is a\n> guaranteed disaster.\n> \n> About index only scan, this threshold seems good enough too. There's a\n> good chance last gig is already in RAM, and previous data was\n> processed with previous vacuum. Anyway - with this patch Index Only\n> Scan starts actually working :)\n> \n> I'd vote for 2 with a note \"rip it off all together later and redesign\n> scale factors and thresholds system to something more easily\n> graspable\". Whoever needs to cancel the new behavior for some reason\n> will have a knob then, and patch is laid out already.\n> \n> > 3. introduce the new parameters with low base threshold and high scale factor.\n> \n> This looks bad to me. 
\"the bigger the table, the longer we wait\" does\n> not look good for me for something designed as a measure preventing\n> issues with big tables.\n\nThanks for the feedback.\n\nIt looks like we have a loose consensus on #2, i.e. my patch.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 13 Mar 2020 13:04:16 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 2020-03-13 at 07:00 -0500, Justin Pryzby wrote:\n> > 2. The new feature can be completely disabled. This might be very\n> > useful for people who suffer from auto-vacuum starvation.\n> \n> > Yes, but in particular so it can be completely disabled easily.\n> \n> How is it disabled ? By setting scale_factor=100 ?\n> \n> + { \n> + \"autovacuum_vacuum_insert_scale_factor\", \n> + \"Number of tuple inserts prior to vacuum as a fraction of reltuples\", \n> + RELOPT_KIND_HEAP | RELOPT_KIND_TOAST, \n> + ShareUpdateExclusiveLock \n> + }, \n> + -1, 0.0, 100.0 \n> \n> Note, vacuum_cleanup_index_scale_factor uses max: 1e10\n> See 4d54543efa5eb074ead4d0fadb2af4161c943044\n\nBy setting the threshold very high, or by setting the scale factor to 100.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 13 Mar 2020 13:07:52 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
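To make the "how is it disabled" answer concrete, a quick sketch of the two knobs. The bounds follow the reloption snippet quoted in Justin's message (scale factor max 100.0); the formula and function name are illustrative assumptions, not PostgreSQL source:

```python
# Sketch: either knob pushes the insert-vacuum trigger far out of reach.
# scale_factor max of 100.0 follows the reloption bounds quoted above.

def insert_vacuum_trigger(reltuples, threshold, scale_factor):
    return threshold + scale_factor * reltuples

reltuples = 1_000_000

# Disabling via scale_factor = 100: the table needs 100x its own row
# count (plus the base threshold) in fresh inserts before a vacuum fires.
print(f"{insert_vacuum_trigger(reltuples, 10_000_000, 100.0):,.0f}")
# 110,000,000 inserts for a 1,000,000-row table

# Disabling via a very high base threshold works the same way.
print(f"{insert_vacuum_trigger(reltuples, 2_000_000_000, 0.0):,.0f}")
# 2,000,000,000 inserts
```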
{
"msg_contents": "On Tue, Mar 10, 2020 at 01:53:42PM +1300, David Rowley wrote:\n> 2. Perhaps the documentation in maintenance.sgml should mention that\n> the table will be vacuumed with the equivalent of having\n> vacuum_freeze_min_age = 0, instead of:\n> \n> \"Such a vacuum will aggressively freeze tuples.\"\n> \n> aggressive is the wrong word here. We call it an aggressive vacuum if\n> we disable page skipping, not for setting the vacuum_freeze_min_age to\n> 0.\n\nPossible it would be better to run VACUUM *without* freeze_min_age=0 ? (I get\nconfused and have to spend 20min re-reading the vacuum GUC docs every time I\ndeal with this stuff, so maybe I'm off).\n\nAs I understand, the initial motivation of this patch was to avoid disruptive\nanti-wraparound vacuums on insert-only table. But if vacuum were triggered at\nall, it would freeze the oldest tuples, which is all that's needed; especially\nsince fd31cd2651 \"Don't vacuum all-frozen pages.\", those pages would never need\nto be vacuumed again. Recently written tuples wouldn't be frozen, which is ok,\nthey're handled next time.\n\nAnother motivation of the patch is to allow indexonly scan, for which the\nplanner looks at pages' \"relallvisible\" fraction (and at execution if a page\nisn't allvisible, visits the heap). Again, that happens if vacuum were run at\nall. Again, some pages won't be marked allvisible, which is fine, they're\nhandled next time.\n\nI think freeze_min_age=0 could negatively affect people who have insert-mostly\ntables (I'm not concerned, but that includes us). If they consistently hit the\nautovacuum insert threshold before the cleanup threshold for updated/deleted\ntuples, any updated/deleted tuples would be frozen, which would be\nwasteful: \n\n|One disadvantage of decreasing vacuum_freeze_min_age is that it might cause\n|VACUUM to do useless work: freezing a row version is a waste of time if the row\n|is modified soon thereafter (causing it to acquire a new XID). 
So the setting\n|should be large enough that rows are not frozen until they are unlikely to\n|change any more.\n\nSo my question is if autovacuum triggered by insert threshold should trigger\nVACUUM with the same settings as a vacuum due to deleted tuples. I realize the\nDBA could just configure the thresholds so they'd hit vacuum for cleaning dead\ntuples, so my suggestion maybe just improves the case with the default\nsettings. It's possible to set the reloption autovacuum_freeze_min_age, which\nI think supports the idea of running a vacuum normally and letting it (and the\nDBA) decide what do with existing logic.\n\nAlso, there was a discussion about index cleanup with the conclusion that it\nwas safer not to skip it, since otherwise indexes might bloat. I think that's\nright, since vacuum for cleanup is triggered by the number of dead heap tuples.\nTo skip index cleanup, I think you'd want a metric for\nn_dead_since_index_cleanup. (Or maybe analyze could track dead index tuples\nand trigger vacuum of each index separately).\n\nHaving now played with the patch, I'll suggest that 10000000 is too high a\nthreshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\nmuch lower (100000?) or use (0.2 * n_ins + 50) like the other autovacuum GUC. \n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 13 Mar 2020 13:44:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-13 13:44:42 -0500, Justin Pryzby wrote:\n> As I understand, the initial motivation of this patch was to avoid disruptive\n> anti-wraparound vacuums on insert-only table. But if vacuum were triggered at\n> all, it would freeze the oldest tuples, which is all that's needed; especially\n> since fd31cd2651 \"Don't vacuum all-frozen pages.\", those pages would never need\n> to be vacuumed again. Recently written tuples wouldn't be frozen, which is ok,\n> they're handled next time.\n> \n> Another motivation of the patch is to allow indexonly scan, for which the\n> planner looks at pages' \"relallvisible\" fraction (and at execution if a page\n> isn't allvisible, visits the heap). Again, that happens if vacuum were run at\n> all. Again, some pages won't be marked allvisible, which is fine, they're\n> handled next time.\n> \n> I think freeze_min_age=0 could negatively affect people who have insert-mostly\n> tables (I'm not concerned, but that includes us). If they consistently hit the\n> autovacuum insert threshold before the cleanup threshold for updated/deleted\n> tuples, any updated/deleted tuples would be frozen, which would be\n> wasteful: \n\nI think that's a valid concern.\n\n\n> |One disadvantage of decreasing vacuum_freeze_min_age is that it might cause\n> |VACUUM to do useless work: freezing a row version is a waste of time if the row\n> |is modified soon thereafter (causing it to acquire a new XID). So the setting\n> |should be large enough that rows are not frozen until they are unlikely to\n> |change any more.\n\nI think the overhead here might be a bit overstated. Once a page is\ndirtied (or already dirty) during vacuum, and we freeze a single row\n(necessating WAL logging), there's not really a good reason to not also\nfreeze the rest of the row on that page. The added cost for freezing\nanother row is miniscule compared to the \"constant\" cost of freezing\nanything on the page. 
It's of course different if there are otherwise\nno tuples worth freezing on the page (not uncommon). But there's really\nno reason for that to be the case:\n\nAfaict the only problem with more aggressively freezing when we touch\n(beyond hint bits) the page anyway is that we commonly end up with\nmultiple WAL records for the same page:\n\n1) lazy_scan_heap()->heap_page_prune() will log a XLOG_HEAP2_CLEAN record, but leave\n itemids in place most of the time\n2) lazy_scan_heap()->log_heap_freeze() will log a XLOG_HEAP2_FREEZE_PAGE record\n3a) if no indexes exist/index cleanup is disabled:\n lazy_vacuum_page()->lazy_vacuum_page() will log a XLOG_HEAP2_CLEAN\n record, removing dead tuples (including itemids)\n3b) if indexes need to be cleaned up,\n lazy_vacuum_heap()->lazy_vacuum_page() will log a XLOG_HEAP2_CLEAN\n\nwhich is not nice. It likely is worth merging xl_heap_freeze_page into\nxl_heap_clean, and having heap pruning always freeze once it decides to\ndirty a page.\n\nWe could probably always prune dead tuples as part of heap_prune_chain()\nif there's no indexes - but I'm doubtful it's worth it, since there'll\nbe few tables with lots of dead tuples that don't have indexes.\n\nMerging 3b's WAL record would be harder, I think.\n\n\nThere's also a significant source of additional WAL records here, one\nthat I think should really not have been introduced:\n\n4) HeapTupleSatisfiesVacuum() called both by heap_prune_chain(), and\n lazy_scan_heap() will often trigger a WAL record when the checksums or\n wal_log_hint_bits are enabled. If the page hasn't been modified in the\n current checkpoint window (extremely common for VACUUM, reasonably\n common for opportunistic pruning), we will log a full page write.\n\n Imo this really should have been avoided when checksums were added,\n that's a pretty substantial and unnecessary increase in overhead.\n\n\nIt's probably overkill to tie fixing the 'insert only' case to improving\nthe WAL logging for vacuuming / pruning. 
But it'd certainly would\nlargely remove the tradeoff discussed here, by removing additional\noverhead of freezing in tables that are also updated.\n\n\n> Also, there was a discussion about index cleanup with the conclusion that it\n> was safer not to skip it, since otherwise indexes might bloat. I think that's\n> right, since vacuum for cleanup is triggered by the number of dead heap tuples.\n> To skip index cleanup, I think you'd want a metric for\n> n_dead_since_index_cleanup. (Or maybe analyze could track dead index tuples\n> and trigger vacuum of each index separately).\n> \n> Having now played with the patch, I'll suggest that 10000000 is too high a\n> threshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\n> much lower (100000?) or use (0.2 * n_ins + 50) like the other autovacuum GUC.\n\nISTM that the danger of regressing workloads due to suddenly repeatedly\nscanning huge indexes that previously were never / rarely scanned is\nsignificant (if there's a few dead tuples, otherwise most indexes will\nbe able to skip the scan since the vacuum_cleanup_index_scale_factor\nintroduction)).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Mar 2020 14:38:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 2020-03-13 at 13:44 -0500, Justin Pryzby wrote:\n> Possible it would be better to run VACUUM *without* freeze_min_age=0 ? (I get\n> confused and have to spend 20min re-reading the vacuum GUC docs every time I\n> deal with this stuff, so maybe I'm off).\n> \n> As I understand, the initial motivation of this patch was to avoid disruptive\n> anti-wraparound vacuums on insert-only table. But if vacuum were triggered at\n> all, it would freeze the oldest tuples, which is all that's needed; especially\n> since fd31cd2651 \"Don't vacuum all-frozen pages.\", those pages would never need\n> to be vacuumed again. Recently written tuples wouldn't be frozen, which is ok,\n> they're handled next time.\n\nFreezing tuples too early is wasteful if the tuples get updated or deleted\nsoon after, but based on the assumption that an autovacuum triggered by insert\nis dealing with an insert-mostly table, it is not that wasteful.\n\nIf we didn't freeze all tuples, it is easy to envision a situation where\nbulk data loads load several million rows in a few transactions, which\nwould trigger a vacuum. With the normal vacuum_freeze_min_age, that vacuum\nwould do nothing at all. It is better if each vacuum freezes some rows,\nin other words, if it does some of the anti-wraparound work.\n\n> Another motivation of the patch is to allow indexonly scan, for which the\n> planner looks at pages' \"relallvisible\" fraction (and at execution if a page\n> isn't allvisible, visits the heap). Again, that happens if vacuum were run at\n> all. Again, some pages won't be marked allvisible, which is fine, they're\n> handled next time.\n\nYes, freezing is irrelevant with respect to index only scans, but it helps\nwith mitigating the impact of anti-wraparound vacuum runs.\n\n> I think freeze_min_age=0 could negatively affect people who have insert-mostly\n> tables (I'm not concerned, but that includes us). 
If they consistently hit the\n> autovacuum insert threshold before the cleanup threshold for updated/deleted\n> tuples, any updated/deleted tuples would be frozen, which would be\n> wasteful: \n\nI don't get that. Surely tuples whose xmax is committed won't be frozen.\n\n> So my question is if autovacuum triggered by insert threshold should trigger\n> VACUUM with the same settings as a vacuum due to deleted tuples. I realize the\n> DBA could just configure the thresholds so they'd hit vacuum for cleaning dead\n> tuples, so my suggestion maybe just improves the case with the default\n> settings. It's possible to set the reloption autovacuum_freeze_min_age, which\n> I think supports the idea of running a vacuum normally and letting it (and the\n> DBA) decide what do with existing logic.\n\nYes, the DBA can explicitly set vacuum_freeze_min_age to 0.\n\nBut for one DBA who understands his or her workload well enough, and who knows\nthe workings of autovacuum well enough to do that kind of tuning, there are\n99 DBAs who don't, and it is the goal of the patch (expressed in the subject)\nto make things work for those people who go with the default.\n\nAnd I believe that is better achieved with freezing as many tuples as possible.\n\n> Also, there was a discussion about index cleanup with the conclusion that it\n> was safer not to skip it, since otherwise indexes might bloat. I think that's\n> right, since vacuum for cleanup is triggered by the number of dead heap tuples.\n> To skip index cleanup, I think you'd want a metric for\n> n_dead_since_index_cleanup. (Or maybe analyze could track dead index tuples\n> and trigger vacuum of each index separately).\n\nYes, I think we pretty much all agree on that.\n\n> Having now played with the patch, I'll suggest that 10000000 is too high a\n> threshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\n> much lower (100000?) 
or use (0.2 * n_ins + 50) like the other autovacuum GUC.\n\nThere is the concern that that might treat large table to seldom.\n\nI am curious - what were the findings that led you to think that 10000000\nis too high?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 13 Mar 2020 22:48:27 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 02:38:51PM -0700, Andres Freund wrote:\n> > |One disadvantage of decreasing vacuum_freeze_min_age is that it might cause\n> > |VACUUM to do useless work: freezing a row version is a waste of time if the row\n> > |is modified soon thereafter (causing it to acquire a new XID). So the setting\n> > |should be large enough that rows are not frozen until they are unlikely to\n> > |change any more.\n> \n> I think the overhead here might be a bit overstated. Once a page is\n\nCould you clarify if you mean the language in docs in general or specifically\nin the context of this patch ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 13 Mar 2020 19:10:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, Mar 13, 2020 at 02:38:51PM -0700, Andres Freund wrote:\n> > Having now played with the patch, I'll suggest that 10000000 is too high a\n> > threshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\n> > much lower (100000?) or use (0.2 * n_ins + 50) like the other autovacuum GUC.\n> \n> ISTM that the danger of regressing workloads due to suddenly repeatedly\n> scanning huge indexes that previously were never / rarely scanned is\n> significant\n\nYou're right - at one point, I was going to argue to skip index cleanup, and I\nthink wrote that before I finished convincing myself why it wasn't ok to skip.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Mar 2020 05:01:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 13 Mar 2020 at 05:11, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 13 Mar 2020 at 01:43, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Thu, 12 Mar 2020 at 16:28, David Rowley <dgrowleyml@gmail.com> wrote:\n> > > Laurenz highlighted a seemingly very valid reason that the current\n> > > GUCs cannot be reused. Namely, say the table has 1 billion rows, if we\n> > > use the current scale factor of 0.2, then we'll run an insert-only\n> > > vacuum every 200 million rows. If those INSERTs are one per\n> > > transaction then the new feature does nothing as the wraparound vacuum\n> > > will run instead. Since this feature was born due to large insert-only\n> > > tables, this concern seems very valid to me.\n> >\n> > Yeah, I understand and agree that since most people would use default\n> > values we can reduce mis-configuration cases by adding separate GUCs\n> > that have appropriate default values for that purpose but on the other\n> > hand I'm not sure it's worth that we cover the large insert-only table\n> > case by adding separate GUCs in spite of being able to cover it even\n> > by existing two GUCs.\n>\n> In light of the case above, do you have an alternative suggestion?\n>\n> > If we want to disable this feature on the\n> > particular table, we can have a storage parameter that means not to\n> > consider the number of inserted tuples rather than having multiple\n> > GUCs that allows us to fine tuning. And IIUC even in the above case, I\n> > think that if we trigger insert-only vacuum by comparing the number of\n> > inserted tuples to the threshold computed by existing threshold and\n> > scale factor, we can cover it.\n>\n> So you're suggesting we drive the insert-vacuums from existing\n> scale_factor and threshold? 
What about the 1 billion row table\n> example above?\n\nMy suggestion is the initial approach proposed by Justin; comparing\nthe number of inserted tuples to the threshold computed by\nautovacuum_vacum_threshold and autovacuum_vacuum_scale_factor in order\nto trigger autovacuum. But as discussed, there is a downside; if the\nnumber of inserted tuples are almost the same as, but a little larger\nthan, the number of dead tuples, we will trigger insert-only vacuum\nbut it's wasteful.\n\nThere is already a consensus on introducing new 2 parameters, but as\nthe second idea I'd like to add one (or two) GUC(s) to my suggestion,\nsay autovacuum_vacuum_freeze_insert_ratio; this parameter is the ratio\nof the number of inserted tuples for total number of tuples modified\nand inserted, in order to trigger insert-only vacuum. For example,\nsuppose the table has 1,000,000 tuples and we set threshold = 0,\nscale_factor = 0.2 and freeze_insert_ratio = 0.9, we will trigger\nnormal autovacuum when n_dead_tup + n_ins_since_vacuum > 200,000, but\nwe will instead trigger insert-only autovacuum, which is a vacuum with\nvacuum_freeze_min_age = 0, when n_ins_since_vacuum > 180,000 (=200,000\n* 0.9). IOW if 90% of modified tuples are insertions, we freeze tuples\naggressively. If we want to trigger insert-only vacuum only on\ninsert-only table we can set freeze_insert_ratio = 1.0. The down side\nof this idea is that we cannot disable autovacuum triggered by the\nnumber of inserted, although we might be able to introduce more one\nGUC that controls whether to include the number of inserted tuples for\ntriggering autovacuum (say, autovacuum_vacuum_triggered_by_insert =\non|off). The pros of this idea would be that we can ensure that\ninsert-only vacuum will run only in the case where the ratio of\ninsertion is large enough.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Mar 2020 12:53:43 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
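Masahiko's freeze_insert_ratio proposal above can be sketched as follows. All GUC names and the decision logic here paraphrase the proposal in the message; nothing in this sketch is committed PostgreSQL behavior:

```python
# Sketch of the proposed freeze_insert_ratio logic (hypothetical GUCs).
# With threshold = 0, scale_factor = 0.2, freeze_insert_ratio = 0.9 on a
# 1,000,000-row table: normal vacuum past 200,000 modified tuples, but an
# insert-only vacuum (vacuum_freeze_min_age = 0) once inserts alone
# exceed 180,000 (= 200,000 * 0.9).

def autovacuum_kind(n_live, n_dead, n_ins,
                    threshold=0, scale_factor=0.2,
                    freeze_insert_ratio=0.9):
    limit = threshold + scale_factor * n_live
    if n_ins > limit * freeze_insert_ratio:
        return "insert-only vacuum (vacuum_freeze_min_age = 0)"
    if n_dead + n_ins > limit:
        return "normal vacuum"
    return "no vacuum"

print(autovacuum_kind(1_000_000, 0, 180_001))        # insert-only vacuum
print(autovacuum_kind(1_000_000, 150_000, 60_000))   # normal vacuum
print(autovacuum_kind(1_000_000, 10_000, 50_000))    # no vacuum
```

Setting freeze_insert_ratio = 1.0 makes the aggressive-freeze path reachable only when every modified tuple is an insert, i.e. on genuinely insert-only tables.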
{
"msg_contents": "On Fri, Mar 13, 2020 at 10:48:27PM +0100, Laurenz Albe wrote:\n> On Fri, 2020-03-13 at 13:44 -0500, Justin Pryzby wrote:\n> > Possible it would be better to run VACUUM *without* freeze_min_age=0 ? (I get\n> > confused and have to spend 20min re-reading the vacuum GUC docs every time I\n> > deal with this stuff, so maybe I'm off).\n> > \n> > As I understand, the initial motivation of this patch was to avoid disruptive\n> > anti-wraparound vacuums on insert-only table. But if vacuum were triggered at\n> > all, it would freeze the oldest tuples, which is all that's needed; especially\n> > since fd31cd2651 \"Don't vacuum all-frozen pages.\", those pages would never need\n> > to be vacuumed again. Recently written tuples wouldn't be frozen, which is ok,\n> > they're handled next time.\n> \n> Freezing tuples too early is wasteful if the tuples get updated or deleted\n> soon after, but based on the assumption that an autovacuum triggered by insert\n> is dealing with an insert-mostly table, it is not that wasteful.\n\nYou're right that it's not *that* wasteful. If it's a table that gets 90%\ninserts/10% updates, then only 10% of its tuples will be frozen. In the worst\ncase, it's the same tuples every time, and that's somewhat wasteful. In the\nbest case, those tuples are clustered on a small number of pages.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Mar 2020 23:34:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-16 at 12:53 +0900, Masahiko Sawada wrote:\n> There is already a consensus on introducing new 2 parameters, but as\n> the second idea I'd like to add one (or two) GUC(s) to my suggestion,\n> say autovacuum_vacuum_freeze_insert_ratio; this parameter is the ratio\n> of the number of inserted tuples for total number of tuples modified\n> and inserted, in order to trigger insert-only vacuum. For example,\n> suppose the table has 1,000,000 tuples and we set threshold = 0,\n> scale_factor = 0.2 and freeze_insert_ratio = 0.9, we will trigger\n> normal autovacuum when n_dead_tup + n_ins_since_vacuum > 200,000, but\n> we will instead trigger insert-only autovacuum, which is a vacuum with\n> vacuum_freeze_min_age = 0, when n_ins_since_vacuum > 180,000 (=200,000\n> * 0.9). IOW if 90% of modified tuples are insertions, we freeze tuples\n> aggressively. If we want to trigger insert-only vacuum only on\n> insert-only table we can set freeze_insert_ratio = 1.0. The down side\n> of this idea is that we cannot disable autovacuum triggered by the\n> number of inserted, although we might be able to introduce more one\n> GUC that controls whether to include the number of inserted tuples for\n> triggering autovacuum (say, autovacuum_vacuum_triggered_by_insert =\n> on|off). The pros of this idea would be that we can ensure that\n> insert-only vacuum will run only in the case where the ratio of\n> insertion is large enough.\n\nTwo more parameters :^( But your reasoning is good.\n\nHow about we go with what we have now and leave that for future\ndiscussion and patches?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 16 Mar 2020 08:54:46 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 12:53:43PM +0900, Masahiko Sawada wrote:\n\n> There is already a consensus on introducing new 2 parameters, but as\n> the second idea I'd like to add one (or two) GUC(s) to my suggestion,\n> say autovacuum_vacuum_freeze_insert_ratio; this parameter is the ratio\n> of the number of inserted tuples for total number of tuples modified\n> and inserted, in order to trigger insert-only vacuum. For example,\n> suppose the table has 1,000,000 tuples and we set threshold = 0,\n> scale_factor = 0.2 and freeze_insert_ratio = 0.9, we will trigger\n> normal autovacuum when n_dead_tup + n_ins_since_vacuum > 200,000, but\n> we will instead trigger insert-only autovacuum, which is a vacuum with\n> vacuum_freeze_min_age = 0, when n_ins_since_vacuum > 180,000 (=200,000\n> * 0.9). IOW if 90% of modified tuples are insertions, we freeze tuples\n> aggressively. If we want to trigger insert-only vacuum only on\n> insert-only table we can set freeze_insert_ratio = 1.0. The down side\n> of this idea is that we cannot disable autovacuum triggered by the\n> number of inserted, although we might be able to introduce more one\n> GUC that controls whether to include the number of inserted tuples for\n> triggering autovacuum (say, autovacuum_vacuum_triggered_by_insert =\n> on|off). The pros of this idea would be that we can ensure that\n> insert-only vacuum will run only in the case where the ratio of\n> insertion is large enough.\n\nI was thinking about something like this myself. I would appreciate keeping\nseparate the thresholds for 1) triggering vacuum; and, 2) the options\nautovacuum uses when it runs (in this case, FREEZE). 
Someone might want\nautovacuum to run with FREEZE on a table vacuumed due to dead tuples (say, on a\npartitioned table), or might *not* want to run FREEZE on a table vacuumed due\nto insertions (maybe because index scans are too expensive or FREEZE makes it\ntoo slow).\n\nNormally, when someone complains about a bad plan related to no index-only scan,\nwe tell them to run vacuum, and if that helps, then ALTER TABLE .. SET\n(autovacuum_vacuum_scale_factor=0.005).\n\nIf there are two thresholds (4 GUCs and 4 relopts) for autovacuum, then do we\nhave to help determine which one was being hit, and which relopt to set?\n\nI wonder if the new insert GUCs should default to -1 (disabled)? And the\ninsert thresholds should be set by the new insert relopt (if set), or by the new insert\nGUC (default -1), else the normal relopt, or the normal GUC. The defaults would give\n50 + 0.20*n. When someone asks about IOS, we'd tell them to set\nautovacuum_vacuum_scale_factor=0.005, same as now.\n\nvac_ins_scale_factor =\n\t(relopts && relopts->vacuum_ins_scale_factor >= 0) ? relopts->vacuum_ins_scale_factor :\n\tautovacuum_vac_ins_scale >= 0 ? autovacuum_vac_ins_scale : \n\t(relopts && relopts->vacuum_scale_factor >= 0) ? relopts->vacuum_scale_factor :\n\tautovacuum_vac_scale;\n\nOne would disable autovacuum triggered by insertions by setting\nautovacuum_vacuum_insert_scale_factor=1e10 (which I think should also be the\nmax for this patch).\n\nIt seems to me that the easy thing to do is to implement this initially without\nFREEZE (which is controlled by vacuum_freeze_table_age), and defer until\nJuly/v14 further discussion and implementation of another GUC/relopt for\nautovacuum freezing to be controlled by insert thresholds (or ratio).\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 Mar 2020 07:47:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-13 19:10:00 -0500, Justin Pryzby wrote:\n> On Fri, Mar 13, 2020 at 02:38:51PM -0700, Andres Freund wrote:\n> > > |One disadvantage of decreasing vacuum_freeze_min_age is that it might cause\n> > > |VACUUM to do useless work: freezing a row version is a waste of time if the row\n> > > |is modified soon thereafter (causing it to acquire a new XID). So the setting\n> > > |should be large enough that rows are not frozen until they are unlikely to\n> > > |change any more.\n> > \n> > I think the overhead here might be a bit overstated. Once a page is\n> \n> Could you clarify if you mean the language in docs in general or specifically\n> in the context of this patch ?\n\nIn the docs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Mar 2020 11:57:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-16 at 07:47 -0500, Justin Pryzby wrote:\n> It seems to me that the easy thing to do is to implement this initially without\n> FREEZE (which is controlled by vacuum_freeze_table_age), and defer until\n> July/v14 further discussion and implementation of another GUC/relopt for\n> autovacuum freezing to be controlled by insert thresholds (or ratio).\n\nFreezing tuples is the point of this patch.\nAs I have said, if you have a table where you insert many rows in few\ntransactions, you would trigger an autovacuum that then ends up doing nothing\nbecause none of the rows have reached vacuum_freeze_table_age yet.\n\nThen some time later you will get a really large vacuum run.\n\nIt seems to me that if we keep trying to find the formula that will vacuum\nevery table just right and never do the wrong thing, we will never get anywhere.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 16 Mar 2020 20:49:43 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-16 20:49:43 +0100, Laurenz Albe wrote:\n> On Mon, 2020-03-16 at 07:47 -0500, Justin Pryzby wrote:\n> > It seems to me that the easy thing to do is to implement this initially without\n> > FREEZE (which is controlled by vacuum_freeze_table_age), and defer until\n> > July/v14 further discussion and implementation of another GUC/relopt for\n> > autovacuum freezing to be controlled by insert thresholds (or ratio).\n> \n> Freezing tuples is the point of this patch.\n\nSure. But not hurting existing installations is also a goal of the\npatch. Since this is introducing potentially significant performance\ndownsides, I think it's good to be a bit conservative with the default\nconfiguration.\n\nI'm getting a bit more bullish on implementing some of what I\ndiscussed in\nhttps://www.postgresql.org/message-id/20200313213851.ejrk5gptnmp65uoo%40alap3.anarazel.de\nat the same time as this patch.\n\nIn particular, I think it'd make sense to *not* have a lower freezing\nhorizon for insert vacuums (because it *will* cause problems), but if\nthe page is dirty anyway, then do the freezing even if freeze_min_age\netc. would otherwise prevent us from doing so?\n\nIt'd probably be ok to incur the WAL logging overhead unconditionally,\nbut I'm not sure about it.\n\n\n> As I have said, if you have a table where you insert many rows in few\n> transactions, you would trigger an autovacuum that then ends up doing nothing\n> because none of the rows have reached vacuum_freeze_table_age yet.\n\n> Then some time later you will get a really large vacuum run.\n\nWell, only if you don't further insert into the table. Which isn't that\ncommon a case for a table having a \"really large vacuum run\".\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Mar 2020 13:13:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 08:49:43PM +0100, Laurenz Albe wrote:\n> On Mon, 2020-03-16 at 07:47 -0500, Justin Pryzby wrote:\n> > It seems to me that the easy thing to do is to implement this initially without\n> > FREEZE (which is controlled by vacuum_freeze_table_age), and defer until\n> > July/v14 further discussion and implementation of another GUC/relopt for\n> > autovacuum freezing to be controlled by insert thresholds (or ratio).\n> \n> Freezing tuples is the point of this patch.\n> As I have said, if you have a table where you insert many rows in few\n> transactions, you would trigger an autovacuum that then ends up doing nothing\n> because none of the rows have reached vacuum_freeze_table_age yet.\n> \n> Then some time later you will get a really large vacuum run.\n\nBest practice is to vacuum following bulk load. I don't think this patch is\ngoing to change that. Bulk-loaded tuples will be autovacuumed, which is nice,\nbut I don't think it'll be ideal if large bulk loads trigger an autovacuum with\ncost delays which ISTM if it runs with FREEZE will take even longer.\n\nIf it's a bulk load, then I think it's okay to assume it was vacuumed, or\notherwise that it'll eventually be hit by autovac at some later date.\n\nIf it's not a \"bulk load\" but a normal runtime, and the table continues to\nreceive inserts/deletes, then eventually it'll hit a vacuum threshold and\ntuples can be frozen.\n\nIf it receives a bunch of activity, which then stops (like a partition of a\ntable of timeseries data), then maybe it doesn't hit a vacuum threshold, until\nwraparound vacuum. I think in that case it's not catastrophic, since then it\nwasn't big enough to hit any threshold (it's partitioned). 
If every day,\nautovacuum kicks in and does a wraparound vacuum on a table with data from (say)\n100 days ago, I think that's reasonable.\n\nOne case which would suck is if the insert_threshold were 1e6, and you restore\na DB with 1000 tables of historic data (which are no longer being inserted\ninto) which have 9e5 rows each (just below the threshold). Then autovacuum\nwill hit them all at once. The solution to that is to manually vacuum after a bulk\nload, same as today. As a practical matter, some of the tables are likely to\nhit the autovacuum insert threshold, and some are likely to be pruned (or\nupdated) before wraparound vacuum, so the patch usually does improve that case.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 Mar 2020 16:07:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-16 at 13:13 -0700, Andres Freund wrote:\n> > Freezing tuples is the point of this patch.\n> \n> Sure. But not hurting existing installation is also a goal of the\n> patch. Since this is introducing potentially significant performance\n> downsides, I think it's good to be a bit conservative with the default\n> configuration.\n> \n> I'm gettin a bit more bullish on implementing some of what what I\n> discussed in\n> https://www.postgresql.org/message-id/20200313213851.ejrk5gptnmp65uoo%40alap3.anarazel.de\n> at the same time as this patch.\n>\n> In particularl, I think it'd make sense to *not* have a lower freezing\n> horizon for insert vacuums (because it *will* cause problems), but if\n> the page is dirty anyway, then do the freezing even if freeze_min_age\n> etc would otherwise prevent us from doing so?\n\nI don't quite see why freezing tuples in insert-only tables will cause\nproblems - are you saying that more WAL will be written compared to\nfreezing with a higher freeze_min_age?\n\n> > As I have said, if you have a table where you insert many rows in few\n> > transactions, you would trigger an autovacuum that then ends up doing nothing\n> > because none of the rows have reached vacuum_freeze_table_age yet.\n> > Then some time later you will get a really large vacuum run.\n> \n> Well, only if you don't further insert into the table. Which isn't that\n> common a case for a table having a \"really large vacuum run\".\n\nAh, yes, you are right.\nSo it actually would not be worse if we use the normal freeze_min_age\nfor insert-only vacuums.\n\nSo do you think the patch would be ok as it is if we change only that?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 16 Mar 2020 22:25:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-16 at 16:07 -0500, Justin Pryzby wrote:\n> Best practice is to vacuum following bulk load.\n\nYes.\n\n> If it's a bulk load, then I think it's okay to assume it was vacuumed,\n\nNo. This patch is there precisely because too many people don't know\nthat they should vacuum their table after a bulk insert.\nThe idea of autovacuum is to do these things for you automatically.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 16 Mar 2020 22:30:01 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-16 22:25:11 +0100, Laurenz Albe wrote:\n> On Mon, 2020-03-16 at 13:13 -0700, Andres Freund wrote:\n> > > Freezing tuples is the point of this patch.\n> > \n> > Sure. But not hurting existing installation is also a goal of the\n> > patch. Since this is introducing potentially significant performance\n> > downsides, I think it's good to be a bit conservative with the default\n> > configuration.\n> > \n> > I'm gettin a bit more bullish on implementing some of what what I\n> > discussed in\n> > https://www.postgresql.org/message-id/20200313213851.ejrk5gptnmp65uoo%40alap3.anarazel.de\n> > at the same time as this patch.\n> >\n> > In particularl, I think it'd make sense to *not* have a lower freezing\n> > horizon for insert vacuums (because it *will* cause problems), but if\n> > the page is dirty anyway, then do the freezing even if freeze_min_age\n> > etc would otherwise prevent us from doing so?\n> \n> I don't quite see why freezing tuples in insert-only tables will cause\n> problems - are you saying that more WAL will be written compared to\n> freezing with a higher freeze_min_age?\n\nAs far as I understand the patch may trigger additional vacuums e.g. for\ntables that have some heavily updated parts / key ranges, and otherwise\nare largely insert only (as long as there are in total considerably more\ninserts than updates). That's not at all uncommon.\n\nAnd for the heavily updated regions the additional vacuums with a 0 min\nage could prove to be costly. 
I've not looked at the new code, but it'd\nbe particularly bad if the changes were to trigger the\nlazy_check_needs_freeze() check in lazy_scan_heap() - it'd have the\npotential for a lot more contention.\n\n\n> > > As I have said, if you have a table where you insert many rows in few\n> > > transactions, you would trigger an autovacuum that then ends up doing nothing\n> > > because none of the rows have reached vacuum_freeze_table_age yet.\n> > > Then some time later you will get a really large vacuum run.\n> > \n> > Well, only if you don't further insert into the table. Which isn't that\n> > common a case for a table having a \"really large vacuum run\".\n> \n> Ah, yes, you are right.\n> So it actually would not be worse if we use the normal freeze_min_age\n> for insert-only vacuums.\n\nWell, it'd still be worse, because it'd likely trigger more writes of\nthe same pages. Once for setting hint bits during the first vacuum, and\nthen later a second time for freezing. Which is why I was pondering using the\nlogic\n\n\n> So do you think the patch would be ok as it is if we change only that?\n\nI've not looked at it in enough detail so far to say either way, sorry.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Mar 2020 14:34:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-16 at 14:34 -0700, Andres Freund wrote:\n> > > In particularl, I think it'd make sense to *not* have a lower freezing\n> > > horizon for insert vacuums (because it *will* cause problems), but if\n> > > the page is dirty anyway, then do the freezing even if freeze_min_age\n> > > etc would otherwise prevent us from doing so?\n> > \n> > I don't quite see why freezing tuples in insert-only tables will cause\n> > problems - are you saying that more WAL will be written compared to\n> > freezing with a higher freeze_min_age?\n> \n> As far as I understand the patch may trigger additional vacuums e.g. for\n> tables that have some heavily updated parts / key ranges, and otherwise\n> are largely insert only (as long as there are in total considerably more\n> inserts than updates). That's not at all uncommon.\n> \n> And for the heavily updated regions the additional vacuums with a 0 min\n> age could prove to be costly. I've not looked at the new code, but it'd\n> be particularly bad if the changes were to trigger the\n> lazy_check_needs_freeze() check in lazy_scan_heap() - it'd have the\n> potential for a lot more contention.\n\nI think I got it.\n\nHere is a version of the patch that does *not* freeze more tuples than\nnormal, except if a prior tuple on the same page is already eligible for freezing.\n\nlazy_check_needs_freeze() is only called for an aggressive vacuum, which\nthis isn't.\n\nDoes that look sane?\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 17 Mar 2020 01:14:02 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 01:14:02AM +0100, Laurenz Albe wrote:\n> lazy_check_needs_freeze() is only called for an aggressive vacuum, which\n> this isn't.\n\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1388,17 +1388,26 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\tbool\t\ttuple_totally_frozen;\n> +\t\t\t\tbool\t\tfreeze_all;\n> \n> \t\t\t\tnum_tuples += 1;\n> \t\t\t\thastup = true;\n> \n> +\t\t\t\t/*\n> +\t\t\t\t * If any tuple was already frozen in the block and this is\n> +\t\t\t\t * an insert-only vacuum, we might as well freeze all other\n> +\t\t\t\t * tuples in that block.\n> +\t\t\t\t */\n> +\t\t\t\tfreeze_all = params->is_insert_only && has_dead_tuples;\n> +\n\nYou're checking if any (previously-scanned) tuple was *dead*, but I think you\nneed to check nfrozen>=0.\n\nAlso, this will fail to freeze tuples on a page which *could* be\nopportunistically-frozen, but *follow* the first tuple which *needs* to be\nfrozen.\n\nI think Andres was thinking this would maybe be an optimization independent of\nis_insert_only (?)\n\n> \t\t\t\t/*\n> \t\t\t\t * Each non-removable tuple must be checked to see if it needs\n> \t\t\t\t * freezing. Note we already have exclusive buffer lock.\n> \t\t\t\t */\n> \t\t\t\tif (heap_prepare_freeze_tuple(tuple.t_data,\n> \t\t\t\t\t\t\t\t\t\t\t relfrozenxid, relminmxid,\n> -\t\t\t\t\t\t\t\t\t\t\t FreezeLimit, MultiXactCutoff,\n> +\t\t\t\t\t\t\t\t\t\t\t freeze_all ? 0 : FreezeLimit,\n> +\t\t\t\t\t\t\t\t\t\t\t freeze_all ? 0 : MultiXactCutoff,\n> \t\t\t\t\t\t\t\t\t\t\t &frozen[nfrozen],\n> \t\t\t\t\t\t\t\t\t\t\t &tuple_totally_frozen))\n\n> +\t/* normal autovacuum shouldn't freeze aggressively */\n> +\t*insert_only = false;\n\nAggressively is a bad choice of words. In the context of vacuum, it usually\nmeans \"visit all pages, even those which are allvisible\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Mar 2020 10:24:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-17 at 10:24 -0500, Justin Pryzby wrote:\n> > --- a/src/backend/access/heap/vacuumlazy.c\n> > +++ b/src/backend/access/heap/vacuumlazy.c\n> > @@ -1388,17 +1388,26 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> > else\n> > {\n> > bool tuple_totally_frozen;\n> > + bool freeze_all;\n> > \n> > num_tuples += 1;\n> > hastup = true;\n> > \n> > + /*\n> > + * If any tuple was already frozen in the block and this is\n> > + * an insert-only vacuum, we might as well freeze all other\n> > + * tuples in that block.\n> > + */\n> > + freeze_all = params->is_insert_only && has_dead_tuples;\n> > +\n> \n> You're checking if any (previously-scanned) tuple was *dead*, but I think you\n> need to check nfrozen>=0.\n\nYes, that was a silly typo.\n\n> Also, this will fail to freeze tuples on a page which *could* be\n> oppotunistically-frozen, but *follow* the first tuple which *needs* to be\n> frozen.\n\nI am aware of that. I was trying to see if that went in the direction that\nAndres intends before trying more invasive modifications.\n\n> I think Andres was thinking this would maybe be an optimization independent of\n> is_insert_only (?)\n\nI wasn't sure.\n\nIn the light of that, I have ripped out that code again.\n\nAlso, since aggressive^H^H^H^H^H^H^H^H^H^Hproactive freezing seems to be a\nperformance problem in some cases (pages with UPDATEs and DELETEs in otherwise\nINSERT-mostly tables), I have done away with the whole freezing thing,\nwhich made the whole patch much smaller and simpler.\n\nNow all that is introduced are the threshold and scale factor and\nthe new statistics counter to track the number of inserts since the last\nVACUUM.\n\n> > + /* normal autovacuum shouldn't freeze aggressively */\n> > + *insert_only = false;\n> \n> Aggressively is a bad choice of words. 
In the context of vacuum, it usually\n> means \"visit all pages, even those which are allvisible\".\n\nThis is gone in the latest patch.\n\nUpdated patch attached.\n\nPerhaps we can reach a consensus on this reduced functionality.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 17 Mar 2020 20:42:07 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 08:42:07PM +0100, Laurenz Albe wrote:\n> Also, since aggressive^H^H^H^H^H^H^H^H^H^Hproactive freezing seems to be a\n> performance problem in some cases (pages with UPDATEs and DELETEs in otherwise\n> INSERT-mostly tables), I have done away with the whole freezing thing,\n> which made the whole patch much smaller and simpler.\n> \n> Now all that is introduced are the threshold and scale factor and\n> the new statistics counter to track the number of inserts since the last\n> VACUUM.\n> \n> Updated patch attached.\n> \n> Perhaps we can reach a consensus on this reduced functionality.\n\n+1\n\nI still suggest a scale_factor maximum of 1e10, like\n4d54543efa5eb074ead4d0fadb2af4161c943044\n\nWhich allows disabling it more effectively than a factor of 100, which would\nprogress like: ~1, 1e2, 1e4, 1e6, 1e8, 1e10, ..\n\nI don't think that 1e4 would be a problem, but 1e6 and 1e8 could be. With\n1e10, it's first vacuumed when there's 10 billion inserts, if we didn't previously\nhit the n_dead threshold.\n\nI think that's ok? If one wanted to disable it up to 1e11 tuples, I think\nthey'd disable autovacuum, or preferably just implement a vacuum job.\n\nThe commit message says:\n|The scale factor defaults to 0, which means that it is\n|effectively disabled, but it offers some flexibility\n..but \"it\" is ambiguous, so it should say something like: \"the table size does not\ncontribute to the autovacuum threshold\".\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Mar 2020 14:56:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-17 at 14:56 -0500, Justin Pryzby wrote:\n> I still suggest scale_factor maximum of 1e10, like\n> 4d54543efa5eb074ead4d0fadb2af4161c943044\n> \n> Which alows more effectively disabling it than a factor of 100, which would\n> progress like: ~1, 1e2, 1e4, 1e6, 1e8, 1e10, ..\n> \n> I don't think that 1e4 would be a problem, but 1e6 and 1e8 could be. With\n> 1e10, it's first vacuumed when there's 10billion inserts, if we didn't previous\n> hit the n_dead threshold.\n> \n> I think that's ok? If one wanted to disable it up to 1e11 tuples, I think\n> they'd disable autovacuum, or preferably just implement an vacuum job.\n\nAssume a scale factor >= 1, for example 2, and n live tuples.\nThe table has just been vacuumed.\n\nNow we insert m number tuples (which are live).\n\nThen the condition\n\n threshold + scale_factor * live_tuples < newly_inserted_tuples\n\nbecomes\n\n 10000000 + 2 * (n + m) < m\n\nwhich can never be true for non-negative n and m.\n\nSo a scale factor >= 1 disables the feature.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 17 Mar 2020 22:01:15 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 10:01:15PM +0100, Laurenz Albe wrote:\n> On Tue, 2020-03-17 at 14:56 -0500, Justin Pryzby wrote:\n> > I still suggest scale_factor maximum of 1e10, like\n> > 4d54543efa5eb074ead4d0fadb2af4161c943044\n> > \n> > Which alows more effectively disabling it than a factor of 100, which would\n> > progress like: ~1, 1e2, 1e4, 1e6, 1e8, 1e10, ..\n> > \n> > I don't think that 1e4 would be a problem, but 1e6 and 1e8 could be. With\n> > 1e10, it's first vacuumed when there's 10billion inserts, if we didn't previous\n> > hit the n_dead threshold.\n> > \n> > I think that's ok? If one wanted to disable it up to 1e11 tuples, I think\n> > they'd disable autovacuum, or preferably just implement an vacuum job.\n> \n> Assume a scale factor >= 1, for example 2, and n live tuples.\n> The table has just been vacuumed.\n> \n> Now we insert m number tuples (which are live).\n> \n> Then the condition\n> \n> threshold + scale_factor * live_tuples < newly_inserted_tuples\n> \n> becomes\n> \n> 10000000 + 2 * (n + m) < m\n> \n> which can never be true for non-negative n and m.\n> \n> So a scale factor >= 1 disables the feature.\n\nNo, this is what we mailed about privately yesterday, and I demonstrated that\nautovac can still run with factor=100. I said:\n\n|It's a multiplier, not a percent out of 100 (fraction is not a great choice of\n|words).\n|\n| &autovacuum_vac_scale,\n| 0.2, 0.0, 100.0,\n|\n|The default is 0.2 (20%), so 100 means after updating/deleting 100*reltuples.\n\nlive tuples is an estimate, from the most recent vacuum OR analyze.\n\nIf 1.0 disabled the feature, it wouldn't make much sense to allow factor up to\n100.\n\n+ {\n+ {\"autovacuum_vacuum_insert_scale_factor\", PGC_SIGHUP, AUTOVACUUM,\n+ gettext_noop(\"Number of tuple inserts prior to vacuum as a fraction of reltuples.\"),\n+ NULL\n+ },\n+ &autovacuum_vac_ins_scale,\n+ 0.0, 0.0, 100.0,\n+ NULL, NULL, NULL\n+ },\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Mar 2020 16:07:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-17 at 16:07 -0500, Justin Pryzby wrote:\n> > Assume a scale factor >= 1, for example 2, and n live tuples.\n> > The table has just been vacuumed.\n> > \n> > Now we insert m number tuples (which are live).\n> > \n> > Then the condition\n> > \n> > threshold + scale_factor * live_tuples < newly_inserted_tuples\n> > \n> > becomes\n> > \n> > 10000000 + 2 * (n + m) < m\n> > \n> > which can never be true for non-negative n and m.\n> > \n> > So a scale factor >= 1 disables the feature.\n> \n> No, this is what we mailed about privately yesterday, and I demonstrated that\n> autovac can still run with factor=100. I said:\n\nI remember.\nCan you point out where exactly the flaw in my reasoning is?\n\n> > It's a multiplier, not a percent out of 100 (fraction is not a great choice of\n> > words).\n> > \n> > &autovacuum_vac_scale,\n> > 0.2, 0.0, 100.0,\n> > \n> > The default is 0.2 (20%), so 100 means after updating/deleting 100*reltuples.\n\nYes, exactly.\n\n> If 1.0 disabled the feature, it wouldn't make much sense to allow factor up to\n> 100.\n\nTrue, we could set the upper limit to 2, but it doesn't matter much.\n\nNote that this is different from autovacuum_vacuum_scale_factor,\nbecause inserted tuples are live, while dead tuples are not.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 17 Mar 2020 22:22:44 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 10:22:44PM +0100, Laurenz Albe wrote:\n> On Tue, 2020-03-17 at 16:07 -0500, Justin Pryzby wrote:\n> > > Assume a scale factor >= 1, for example 2, and n live tuples.\n> > > The table has just been vacuumed.\n> > > \n> > > Now we insert m number tuples (which are live).\n\n.. but not yet counted in reltuples.\n\nOn Tue, Mar 17, 2020 at 10:22:44PM +0100, Laurenz Albe wrote:\n> Note that this is different from autovacuum_vacuum_scale_factor,\n> because inserted tuples are live, while dead tuples are not.\n\nBut they're not counted in reltuples until after the next vacuum (or analyze),\nwhich is circular, since it's exactly what we're trying to schedule.\n\n reltuples = classForm->reltuples;\n vactuples = tabentry->n_dead_tuples;\n+ instuples = tabentry->inserts_since_vacuum;\n anltuples = tabentry->changes_since_analyze;\n \n vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;\n+ vacinsthresh = (float4) vac_ins_base_thresh + vac_ins_scale_factor * reltuples;\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Mar 2020 16:34:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-17 at 16:34 -0500, Justin Pryzby wrote:\n> > > > Now we insert m number tuples (which are live).\n> \n> .. but not yet counted in reltuples.\n\nThanks for pointing out my mistake.\n\nHere is another patch, no changes except setting the upper limit\nfor autovacuum_vacuum_insert_scale_factor to 1e10.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 17 Mar 2020 22:55:32 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 07:47:13AM -0500, Justin Pryzby wrote:\n> Normally, when someone complains about bad plan related to no index-onlyscan,\n> we tell them to run vacuum, and if that helps, then ALTER TABLE .. SET\n> (autovacuum_vacuum_scale_factor=0.005).\n> \n> If there's two thresholds (4 GUCs and 4 relopts) for autovacuum, then do we\n> have to help determine which one was being hit, and which relopt to set?\n\nI don't think we came to any resolution on this.\n\nRight now, to encourage IOS, we'd tell someone to set\nautovacuum_vacuum_scale_factor=0.005. That wouldn't work for an insert-only\ntable, but I've never heard back from someone that it didn't work.\n\nSo with this patch, we'd maybe tell them to do this, to also get IOS on\ninsert-only tables ?\n|ALTER TABLE .. SET (autovacuum_vacuum_scale_factor=0.005, autovacuum_vacuum_insert_threshold=50000);\n\n> I wonder if the new insert GUCs should default to -1 (disabled)? And the\n> insert thresholds should be set by new insert relopt (if set), or by new insert\n> GUC (default -1), else normal relopt, or normal GUC. The defaults would give\n> 50 + 0.20*n. When someone asks about IOS, we'd tell them to set\n> autovacuum_vacuum_scale_factor=0.005, same as now.\n> \n> vac_ins_scale_factor =\n> \t(relopts && relopts->vacuum_ins_scale_factor >= 0) ? relopts->vacuum_ins_scale_factor :\n> \tautovacuum_vac_ins_scale >= 0 ? autovacuum_vac_ins_scale : \n> \t(relopts && relopts->vacuum_scale_factor >= 0) ? relopts->vacuum_scale_factor :\n> \tautovacuum_vac_scale;\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Mar 2020 18:32:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
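The C ternary chain Justin sketches above resolves the insert scale factor by falling through unset (-1) values. A Python rendering of that precedence (a proposal from the email, not committed behavior; names are illustrative):

```python
# Sketch of the proposed fallback chain: insert-specific relopt ->
# insert-specific GUC -> plain relopt -> plain GUC. A value below 0
# (or a missing relopt, modeled as None) means "unset".

def resolve_ins_scale_factor(relopt_ins, guc_ins, relopt_plain, guc_plain):
    if relopt_ins is not None and relopt_ins >= 0:
        return relopt_ins
    if guc_ins >= 0:
        return guc_ins
    if relopt_plain is not None and relopt_plain >= 0:
        return relopt_plain
    return guc_plain

# With both insert-specific settings unset, the familiar per-table
# autovacuum_vacuum_scale_factor=0.005 advice keeps working unchanged:
print(resolve_ins_scale_factor(None, -1, 0.005, 0.2))  # 0.005
```

The point of the -1 defaults is exactly this fall-through: admins who only ever tuned the plain scale factor would not need a second relopt.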
{
"msg_contents": "Hi,\n\nOn 2020-03-17 01:14:02 +0100, Laurenz Albe wrote:\n> lazy_check_needs_freeze() is only called for an aggressive vacuum, which\n> this isn't.\n\nHm? I mean some of these will be aggressive vacuums, because it's older\nthan vacuum_freeze_table_age? And the lower age limit would make that\npotentially more painful, no?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Mar 2020 17:26:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-17 20:42:07 +0100, Laurenz Albe wrote:\n> > I think Andres was thinking this would maybe be an optimization independent of\n> > is_insert_only (?)\n>\n> I wasn't sure.\n\nI'm not sure myself - but I'm doubtful that using a 0 min age by default\nwill be ok.\n\nI was trying to say (in a later email) that I think it might be a good\ncompromise to opportunistically freeze if we're dirtying the page\nanyway, but not optimize WAL emission etc. That's a pretty simple\nchange, and it'd address a lot of the potential performance regressions,\nwhile still freezing for the \"first\" vacuum in insert only workloads.\n\n\n> Add \"autovacuum_vacuum_insert_threshold\" and\n> \"autovacuum_vacuum_insert_scale_factor\" GUC and reloption.\n> The default value for the threshold is 10000000.\n> The scale factor defaults to 0, which means that it is\n> effectively disabled, but it offers some flexibility\n> to tune the feature similar to other autovacuum knobs.\n\nI don't think a default scale factor of 0 is going to be ok. For\nlarge-ish tables this will basically cause permanent vacuums. And it'll\nsometimes trigger for tables that actually coped well so far. 10 million\nrows could be a few seconds, not more.\n\nI don't think that the argument that otherwise a table might not get\nvacuumed before autovacuum_freeze_max_age is convincing enough.\n\na) if that's indeed the argument, we should increase the default\n autovacuum_freeze_max_age - now that there's insert triggered vacuums,\n the main argument against that from before isn't valid anymore.\n\nb) there's not really a good argument for vacuuming more often than\n autovacuum_freeze_max_age for such tables. It'll not be frequent\n enough to allow IOS for new data, and you're not preventing\n anti-wraparound vacuums from happening.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Mar 2020 18:02:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 9:03 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-17 20:42:07 +0100, Laurenz Albe wrote:\n> > > I think Andres was thinking this would maybe be an optimization independent of\n> > > is_insert_only (?)\n> >\n> > I wasn't sure.\n>\n> I'm not sure myself - but I'm doubtful that using a 0 min age by default\n> will be ok.\n>\n> I was trying to say (in a later email) that I think it might be a good\n> compromise to opportunistically freeze if we're dirtying the page\n> anyway, but not optimize WAL emission etc. That's a pretty simple\n> change, and it'd address a lot of the potential performance regressions,\n> while still freezing for the \"first\" vacuum in insert only workloads.\n\nIf we have truly insert-only tables, then doesn't vacuuming with\nfreezing every tuple actually decrease total vacuum cost (perhaps\nsignificantly) since otherwise every vacuum keeps having to scan the\nheap for dead tuples on pages where we know there are none? Those\npages could conceptually be frozen and ignored, but are not frozen\nbecause of the default behavior, correct?\n\nWe have tables that log each change to a business object (as I suspect\nmany transactional workloads do), and I've often thought that\nimmediately freeze every page as soon as it fills up would be a real\nwin for us.\n\nIf that's all true, it seems to me that removing that part of the\npatch significantly lowers its value.\n\nIf we opportunistically freeze only if we're already dirtying a page,\nwould that help a truly insert-only workload? E.g., are there hint\nbits on the page that would need to change the first time we vacuum a\nfull page with no dead tuples? I would have assumed the answer was\n\"no\" (since if so I think it would follow that _all_ pages need\nupdated the first time they're vacuumed?). But if that's the case,\nthen this kind of opportunistic freezing wouldn't help this kind of\nworkload. 
Maybe there's something I'm misunderstanding about how\nvacuum works though.\n\nThanks,\nJames\n\n\n",
"msg_date": "Tue, 17 Mar 2020 21:58:53 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 09:58:53PM -0400, James Coleman wrote:\n> On Tue, Mar 17, 2020 at 9:03 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2020-03-17 20:42:07 +0100, Laurenz Albe wrote:\n> > > > I think Andres was thinking this would maybe be an optimization independent of\n> > > > is_insert_only (?)\n> > >\n> > > I wasn't sure.\n> >\n> > I'm not sure myself - but I'm doubtful that using a 0 min age by default\n> > will be ok.\n> >\n> > I was trying to say (in a later email) that I think it might be a good\n> > compromise to opportunistically freeze if we're dirtying the page\n> > anyway, but not optimize WAL emission etc. That's a pretty simple\n> > change, and it'd address a lot of the potential performance regressions,\n> > while still freezing for the \"first\" vacuum in insert only workloads.\n> \n> If we have truly insert-only tables, then doesn't vacuuming with\n> freezing every tuple actually decrease total vacuum cost (perhaps\n> significantly) since otherwise every vacuum keeps having to scan the\n> heap for dead tuples on pages where we know there are none? Those\n> pages could conceptually be frozen and ignored, but are not frozen\n> because of the default behavior, correct?\n\nThe essential part of this patch is to trigger vacuum *at all* on an\ninsert-only table. Before today's updated patch, it also used FREEZE on any\ntable which hit the new insert threshold. The concern I raised is for\ninsert-MOSTLY tables. I thought it might be an issue if repeatedly freezing\nupdated tuples caused vacuum to be too slow, especially if they're distributed\nin pages all across the table rather than clustered.\n\nAnd I asked that the behavior (FREEZE) be configurable by a separate setting\nthan the one that triggers autovacuum to run. 
FREEZE is already controlled by\nthe vacuum_freeze_table_age param.\n\nI think you're right that VACUUM FREEZE on an insert-only table would be less\nexpensive than vacuum once without freeze and vacuum again later, which uses\nfreeze. To me, that suggests setting vacuum_freeze_table_age to a low value on\nthose tables.\n\nRegular vacuum avoids scanning all-visible pages, so for an insert-only table\npages should only be vacuumed once (if frozen the 1st time) or twice (if not).\n\n * Except when aggressive is set, we want to skip pages that are\n * all-visible according to the visibility map, but only when we can skip\n\npostgres=# CREATE TABLE t (i int) ; INSERT INTO t SELECT generate_series(1,999999); VACUUM VERBOSE t; VACUUM VERBOSE t;\n...\nINFO: \"t\": found 0 removable, 999999 nonremovable row versions in 4425 out of 4425 pages\n...\nVACUUM\nTime: 106.038 ms\nINFO: \"t\": found 0 removable, 175 nonremovable row versions in 1 out of 4425 pages\nVACUUM\nTime: 1.828 ms\n\n=> That's its not very clear way of saying that it only scanned 1 page the 2nd\ntime around.\n\n> We have tables that log each change to a business object (as I suspect\n> many transactional workloads do), and I've often thought that\n> immediately freeze every page as soon as it fills up would be a real\n> win for us.\n> \n> If that's all true, it seems to me that removing that part of the\n> patch significantly lowers its value.\n\n> If we opportunistically freeze only if we're already dirtying a page,\n> would that help a truly insert-only workload? E.g., are there hint\n> bits on the page that would need to change the first time we vacuum a\n> full page with no dead tuples? I would have assumed the answer was\n> \"no\" (since if so I think it would follow that _all_ pages need\n> updated the first time they're vacuumed?).\n\nYou probably know that hint bits are written by the first process to access the\ntuple after it was written. 
I think you're asking if the first *vacuum*\nrequires additional writes beyond that. And I think vacuum wouldn't touch the\npage until it decides to freeze tuples.\n\nI do have a patch to display the number of hint bits written and pages frozen.\nhttps://www.postgresql.org/message-id/flat/20200126141328.GP13621%40telsasoft.com\n\n> But if that's the case, then this kind of opportunistic freezing wouldn't\n> help this kind of workload. Maybe there's something I'm misunderstanding\n> about how vacuum works though.\n\nI am reminding myself about vacuum with increasing frequency and usually still\nlearn something new.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Mar 2020 22:37:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-17 21:58:53 -0400, James Coleman wrote:\n> On Tue, Mar 17, 2020 at 9:03 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-03-17 20:42:07 +0100, Laurenz Albe wrote:\n> > > > I think Andres was thinking this would maybe be an optimization independent of\n> > > > is_insert_only (?)\n> > >\n> > > I wasn't sure.\n> >\n> > I'm not sure myself - but I'm doubtful that using a 0 min age by default\n> > will be ok.\n> >\n> > I was trying to say (in a later email) that I think it might be a good\n> > compromise to opportunistically freeze if we're dirtying the page\n> > anyway, but not optimize WAL emission etc. That's a pretty simple\n> > change, and it'd address a lot of the potential performance regressions,\n> > while still freezing for the \"first\" vacuum in insert only workloads.\n> \n> If we have truly insert-only tables, then doesn't vacuuming with\n> freezing every tuple actually decrease total vacuum cost (perhaps\n> significantly) since otherwise every vacuum keeps having to scan the\n> heap for dead tuples on pages where we know there are none? Those\n> pages could conceptually be frozen and ignored, but are not frozen\n> because of the default behavior, correct?\n\nYes.\n\n\n> If that's all true, it seems to me that removing that part of the\n> patch significantly lowers its value.\n\nWell, perfect sometimes is the enemy of the good. We gotta get something\nin, and having some automated vacuuming for insert mostly/only tables is\na huge step forward. And avoiding regressions is an important part of\ndoing so.\n\nI outlined the steps we could take to allow for more aggressive\nvacuuming upthread.\n\n\n> If we opportunistically freeze only if we're already dirtying a page,\n> would that help a truly insert-only workload?\n\nYes.\n\n\n> E.g., are there hint bits on the page that would need to change the\n> first time we vacuum a full page with no dead tuples?\n\nYes. 
HEAP_XMIN_COMMITTED.\n\n\n> I would have assumed the answer was \"no\" (since if so I think it would\n> follow that _all_ pages need updated the first time they're\n> vacuumed?).\n\nThat is the case. Although they might already be set when the tuples are\naccessed for other reasons.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Mar 2020 10:08:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 11:37 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Mar 17, 2020 at 09:58:53PM -0400, James Coleman wrote:\n> > On Tue, Mar 17, 2020 at 9:03 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2020-03-17 20:42:07 +0100, Laurenz Albe wrote:\n> > > > > I think Andres was thinking this would maybe be an optimization independent of\n> > > > > is_insert_only (?)\n> > > >\n> > > > I wasn't sure.\n> > >\n> > > I'm not sure myself - but I'm doubtful that using a 0 min age by default\n> > > will be ok.\n> > >\n> > > I was trying to say (in a later email) that I think it might be a good\n> > > compromise to opportunistically freeze if we're dirtying the page\n> > > anyway, but not optimize WAL emission etc. That's a pretty simple\n> > > change, and it'd address a lot of the potential performance regressions,\n> > > while still freezing for the \"first\" vacuum in insert only workloads.\n> >\n> > If we have truly insert-only tables, then doesn't vacuuming with\n> > freezing every tuple actually decrease total vacuum cost (perhaps\n> > significantly) since otherwise every vacuum keeps having to scan the\n> > heap for dead tuples on pages where we know there are none? Those\n> > pages could conceptually be frozen and ignored, but are not frozen\n> > because of the default behavior, correct?\n>\n> The essential part of this patch is to trigger vacuum *at all* on an\n> insert-only table. Before today's updated patch, it also used FREEZE on any\n> table which hit the new insert threshold. The concern I raised is for\n> insert-MOSTLY tables. 
I thought it might be an issue if repeatedly freezing\n> updated tuples caused vacuum to be too slow, especially if they're distributed\n> in pages all across the table rather than clustered.\n\nYeah, for some reason I'd completely forgotten (caught up in thinking\nabout the best possible outcome re: freezing insert only tables) that\nthe bigger problem was just triggering vacuum at all on those tables.\n\n> And I asked that the behavior (FREEZE) be configurable by a separate setting\n> than the one that triggers autovacuum to run. FREEZE is already controlled by\n> the vacuum_freeze_table_age param.\n>\n> I think you're right that VACUUM FREEZE on an insert-only table would be less\n> expensive than vacuum once without freeze and vacuum again later, which uses\n> freeze. To me, that suggests setting vacuum_freeze_table_age to a low value on\n> those tables.\n>\n> Regular vacuum avoids scanning all-visible pages, so for an insert-only table\n> pages should only be vacuumed once (if frozen the 1st time) or twice (if not).\n>\n> * Except when aggressive is set, we want to skip pages that are\n> * all-visible according to the visibility map, but only when we can skip\n>\n> postgres=# CREATE TABLE t (i int) ; INSERT INTO t SELECT generate_series(1,999999); VACUUM VERBOSE t; VACUUM VERBOSE t;\n> ...\n> INFO: \"t\": found 0 removable, 999999 nonremovable row versions in 4425 out of 4425 pages\n> ...\n> VACUUM\n> Time: 106.038 ms\n> INFO: \"t\": found 0 removable, 175 nonremovable row versions in 1 out of 4425 pages\n> VACUUM\n> Time: 1.828 ms\n>\n> => That's its not very clear way of saying that it only scanned 1 page the 2nd\n> time around.\n\nI didn't realize that about the visibility map being taken into account.\n\n> > We have tables that log each change to a business object (as I suspect\n> > many transactional workloads do), and I've often thought that\n> > immediately freeze every page as soon as it fills up would be a real\n> > win for us.\n> >\n> > If that's all 
true, it seems to me that removing that part of the\n> > patch significantly lowers its value.\n>\n> > If we opportunistically freeze only if we're already dirtying a page,\n> > would that help a truly insert-only workload? E.g., are there hint\n> > bits on the page that would need to change the first time we vacuum a\n> > full page with no dead tuples? I would have assumed the answer was\n> > \"no\" (since if so I think it would follow that _all_ pages need\n> > updated the first time they're vacuumed?).\n>\n> You probably know that hint bits are written by the first process to access the\n> tuple after it was written. I think you're asking if the first *vacuum*\n> requires additional writes beyond that. And I think vacuum wouldn't touch the\n> page until it decides to freeze tuples.\n\nI think my assumption is that (at least in our case), the first\nprocess to access will definitely not be vacuum on any regular basis.\n\n> I do have a patch to display the number of hint bits written and pages frozen.\n> https://www.postgresql.org/message-id/flat/20200126141328.GP13621%40telsasoft.com\n\nI'll take a look at that too.\n\n> > But if that's the case, then this kind of opportunistic freezing wouldn't\n> > help this kind of workload. Maybe there's something I'm misunderstanding\n> > about how vacuum works though.\n>\n> I am reminding myself about vacuum with increasing frequency and usually still\n> learn something new.\n\nFor sure.\n\nThanks,\nJames\n\n\n",
"msg_date": "Wed, 18 Mar 2020 13:33:07 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 1:08 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-17 21:58:53 -0400, James Coleman wrote:\n> > On Tue, Mar 17, 2020 at 9:03 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2020-03-17 20:42:07 +0100, Laurenz Albe wrote:\n> > > > > I think Andres was thinking this would maybe be an optimization independent of\n> > > > > is_insert_only (?)\n> > > >\n> > > > I wasn't sure.\n> > >\n> > > I'm not sure myself - but I'm doubtful that using a 0 min age by default\n> > > will be ok.\n> > >\n> > > I was trying to say (in a later email) that I think it might be a good\n> > > compromise to opportunistically freeze if we're dirtying the page\n> > > anyway, but not optimize WAL emission etc. That's a pretty simple\n> > > change, and it'd address a lot of the potential performance regressions,\n> > > while still freezing for the \"first\" vacuum in insert only workloads.\n> >\n> > If we have truly insert-only tables, then doesn't vacuuming with\n> > freezing every tuple actually decrease total vacuum cost (perhaps\n> > significantly) since otherwise every vacuum keeps having to scan the\n> > heap for dead tuples on pages where we know there are none? Those\n> > pages could conceptually be frozen and ignored, but are not frozen\n> > because of the default behavior, correct?\n>\n> Yes.\n>\n>\n> > If that's all true, it seems to me that removing that part of the\n> > patch significantly lowers its value.\n>\n> Well, perfect sometimes is the enemy of the good. We gotta get something\n> in, and having some automated vacuuming for insert mostly/only tables is\n> a huge step forward. 
And avoiding regressions is an important part of\n> doing so.\n\nYep, as I responded to Justin, in thinking about the details I'd lost\nsight of the biggest issue.\n\nSo I withdraw that concern in favor of getting something out that\nimproves things now.\n\n...\n\n> > If we opportunistically freeze only if we're already dirtying a page,\n> > would that help a truly insert-only workload?\n>\n> Yes.\n\nOnly if some other process hasn't already read and caused hint bits to\nbe written, correct? Or am I missing something there too?\n\n> > E.g., are there hint bits on the page that would need to change the\n> > first time we vacuum a full page with no dead tuples?\n>\n> Yes. HEAP_XMIN_COMMITTED.\n\nThis can be set opportunistically by other non-vacuum processes though?\n\n> > I would have assumed the answer was \"no\" (since if so I think it would\n> > follow that _all_ pages need updated the first time they're\n> > vacuumed?).\n>\n> That is the case. Although they might already be set when the tuples are\n> accessed for other reasons.\n\nAh, I think this is answering what I'd asked above.\n\nI'm very excited to see improvements in flight on this use case.\n\nThanks,\nJames\n\n\n",
"msg_date": "Wed, 18 Mar 2020 13:37:13 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-17 at 17:26 -0700, Andres Freund wrote:\n> On 2020-03-17 01:14:02 +0100, Laurenz Albe wrote:\n> > lazy_check_needs_freeze() is only called for an aggressive vacuum, which\n> > this isn't.\n> \n> Hm? I mean some of these will be aggressive vacuums, because it's older\n> than vacuum_freeze_table_age? And the lower age limit would make that\n> potentially more painful, no?\n\nYou are right. I thought of autovacuum_freeze_max_age, but not of\nvacuum_freeze_table_age.\n\nAutovacuum configuration is so woefully complicated that it makes me\nfeel bad to propose two more parameters :^(\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 18 Mar 2020 20:55:30 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 2020-03-17 at 18:02 -0700, Andres Freund wrote:\n> I don't think a default scale factor of 0 is going to be ok. For\n> large-ish tables this will basically cause permanent vacuums. And it'll\n> sometimes trigger for tables that actually coped well so far. 10 million\n> rows could be a few seconds, not more.\n> \n> I don't think that the argument that otherwise a table might not get\n> vacuumed before autovacuum_freeze_max_age is convincing enough.\n> \n> a) if that's indeed the argument, we should increase the default\n> autovacuum_freeze_max_age - now that there's insert triggered vacuums,\n> the main argument against that from before isn't valid anymore.\n> \n> b) there's not really a good arguments for vacuuming more often than\n> autovacuum_freeze_max_age for such tables. It'll not be not frequent\n> enough to allow IOS for new data, and you're not preventing\n> anti-wraparound vacuums from happening.\n\nAccording to my reckoning, that is the remaining objection to the patch\nas it is (with ordinary freezing behavior).\n\nHow about a scale_factor of 0.005? That will be high enough for large\ntables, which seem to be the main concern here.\n\nI fully agree with your point a) - should that be part of the patch?\n\nI am not sure about b). In my mind, the objective is not to prevent\nanti-wraparound vacuums, but to see that they have less work to do,\nbecause previous autovacuum runs already have frozen anything older than\nvacuum_freeze_min_age. So, assuming linear growth, the number of tuples\nto freeze during any run would be at most one fourth of today's number\nwhen we hit autovacuum_freeze_max_age.\n\nI am still sorry to see more proactive freezing go, which would\nreduce the impact for truly insert-only tables.\nAfter sleeping on it, here is one last idea.\n\nGranted, freezing with vacuum_freeze_min_age = 0 poses a problem\nfor those parts of the table that will receive updates or deletes.\nBut what if insert-triggered vacuum operates with - say -\none tenth of vacuum_freeze_min_age (unless explicitly overridden\nfor the table)? That might still be high enough not to needlessly\nfreeze too many tuples that will still be modified, but it will\nreduce the impact on insert-only tables.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 19 Mar 2020 06:45:48 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
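Laurenz's closing idea above can be sketched as follows (a proposal only; the one-tenth divisor and the override behavior are taken from the email, not from any committed code):

```python
# Sketch: an insert-triggered vacuum would use a reduced freeze horizon
# unless the table sets vacuum_freeze_min_age explicitly. 50 million is
# the stock default for vacuum_freeze_min_age.

def freeze_min_age_for_run(freeze_min_age=50_000_000,
                           insert_triggered=False,
                           table_override=None):
    if table_override is not None:
        return table_override          # explicit per-table setting wins
    if insert_triggered:
        return freeze_min_age // 10    # the proposed "one tenth" rule
    return freeze_min_age

print(freeze_min_age_for_run(insert_triggered=True))  # 5000000
```

The intent is a middle ground: young enough to freeze most of an insert-only table early, old enough to skip tuples that will soon be updated anyway.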
{
"msg_contents": "On Fri, Mar 13, 2020 at 02:38:51PM -0700, Andres Freund wrote:\n> On 2020-03-13 13:44:42 -0500, Justin Pryzby wrote:\n> > Having now played with the patch, I'll suggest that 10000000 is too high a\n> > threshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\n> > much lower (100000?) or use (0.2 * n_ins + 50) like the other autovacuum GUC.\n> \n> ISTM that the danger of regressing workloads due to suddenly repeatedly\n> scanning huge indexes that previously were never / rarely scanned is\n> significant (if there's a few dead tuples, otherwise most indexes will\n> be able to skip the scan since the vacuum_cleanup_index_scale_factor\n> introduction)).\n\nWe could try to avoid that issue here:\n\n| /* If any tuples need to be deleted, perform final vacuum cycle */\n| /* XXX put a threshold on min number of tuples here? */\n| if (dead_tuples->num_tuples > 0)\n| {\n| /* Work on all the indexes, and then the heap */\n| lazy_vacuum_all_indexes(onerel, Irel, indstats, vacrelstats,\n| lps, nindexes);\n|\n| /* Remove tuples from heap */\n| lazy_vacuum_heap(onerel, vacrelstats);\n| }\n\nAs you said, an insert-only table can skip scanning indexes, but an\ninsert-mostly table currently cannot.\n\nMaybe we could skip the final index scan if we hit the autovacuum insert\nthreshold?\n\nI still don't like mixing the thresholds with the behavior they imply, but\nmaybe what's needed is better docs describing all of vacuum's roles and its\nprocedure and priority in executing them.\n\nThe dead tuples would just be cleaned up during a future vacuum, right ? So\nthat would be less efficient, but (no surprise) there's a balance to strike and\nthat can be tuned. I think that wouldn't be an issue for most people; the\nworst case would be if you set high maint_work_mem, and low insert threshold,\nand you got increased bloat. 
But faster vacuum if we avoided idx scans.\n\nThat might allow more flexibility in our discussion around default values for\nthresholds for insert-triggered vacuum.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 19 Mar 2020 01:06:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
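Justin's suggestion above (explicitly a what-if, and not what the patch does) would gate the final index-cleanup pass roughly like this sketch; the threshold name and value are made up for illustration:

```python
# Sketch: skip the final index-scan cycle when a vacuum was triggered
# only by the insert threshold and found few dead tuples, deferring
# that work to a later vacuum at the cost of some temporary bloat.

def should_scan_indexes(num_dead_tuples, insert_triggered,
                        min_dead_for_cleanup=1000):
    if num_dead_tuples == 0:
        return False   # current behavior: nothing to remove from indexes
    if insert_triggered and num_dead_tuples < min_dead_for_cleanup:
        return False   # proposed: defer cleanup to a future vacuum
    return True

print(should_scan_indexes(5, insert_triggered=True))   # False
print(should_scan_indexes(5, insert_triggered=False))  # True
```

As David notes in his reply, the risk is that an insert-mostly table might *always* take the skip path, so any such threshold would need separate discussion.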
{
"msg_contents": "On Thu, 19 Mar 2020 at 18:45, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2020-03-17 at 18:02 -0700, Andres Freund wrote:\n> > I don't think a default scale factor of 0 is going to be ok. For\n> > large-ish tables this will basically cause permanent vacuums. And it'll\n> > sometimes trigger for tables that actually coped well so far. 10 million\n> > rows could be a few seconds, not more.\n> >\n> > I don't think that the argument that otherwise a table might not get\n> > vacuumed before autovacuum_freeze_max_age is convincing enough.\n> >\n> > a) if that's indeed the argument, we should increase the default\n> > autovacuum_freeze_max_age - now that there's insert triggered vacuums,\n> > the main argument against that from before isn't valid anymore.\n> >\n> > b) there's not really a good arguments for vacuuming more often than\n> > autovacuum_freeze_max_age for such tables. It'll not be not frequent\n> > enough to allow IOS for new data, and you're not preventing\n> > anti-wraparound vacuums from happening.\n>\n> According to my reckoning, that is the remaining objection to the patch\n> as it is (with ordinary freezing behavior).\n>\n> How about a scale_factor od 0.005? That will be high enough for large\n> tables, which seem to be the main concern here.\n\nI agree with that, however, I'd thought 0.01, just so we're still\nclose to having about 100 times less work to do for huge insert-only\ntables when it comes to having to perform an anti-wraparound vacuum.\n\n> I fully agree with your point a) - should that be part of the patch?\n\nI think it will be a good idea to increase this, but I really don't\nthink this patch should be touching it. It's something to put on the\nissues list for after the CF so more people have the bandwidth to chip\nin their thoughts.\n\n> I am not sure about b). 
In my mind, the objective is not to prevent\n> anti-wraparound vacuums, but to see that they have less work to do,\n> because previous autovacuum runs already have frozen anything older than\n> vacuum_freeze_min_age. So, assuming linear growth, the number of tuples\n> to freeze during any run would be at most one fourth of today's number\n> when we hit autovacuum_freeze_max_age.\n\nI hear what Andres is saying about proactive freezing for already\ndirty pages. I think that's worth looking into, but don't feel like\nwe need to do it for this patch. The patch is worthy without it and\nsuch a change affects more than insert-vacuums, so should be a\nseparate commit.\n\nIf people really do have an insert-only table then we can recommend\nthat they set the table's autovacuum_freeze_min_age to 0.\n\n> I am still sorry to see more proactive freezing go, which would\n> reduce the impact for truly insert-only tables.\n> After sleeping on it, here is one last idea.\n>\n> Granted, freezing with vacuum_freeze_min_age = 0 poses a problem\n> for those parts of the table that will receive updates or deletes.\n> But what if insert-triggered vacuum operates with - say -\n> one tenth of vacuum_freeze_min_age (unless explicitly overridden\n> for the table)? That might still be high enough not to needlessly\n> freeze too many tuples that will still be modified, but it will\n> reduce the impact on insert-only tables.\n\nI think that might be a bit too magical and may not be what some\npeople want. I know that most people won't set\nautovacuum_freeze_min_age to 0 for insert-only tables, but we can at\nleast throw something in the documents to mention it's a good idea,\nhowever, looking over the docs I'm not too sure the best place to note\nthat down.\n\nI've attached a small fix which I'd like to apply to your v8 patch.\nWith that, and pending one final look, I'd like to push this during my\nMonday (New Zealand time). 
So if anyone strongly objects to that,\nplease state their case before then.\n\nDavid",
"msg_date": "Thu, 19 Mar 2020 21:39:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 19 Mar 2020 at 19:07, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Mar 13, 2020 at 02:38:51PM -0700, Andres Freund wrote:\n> > On 2020-03-13 13:44:42 -0500, Justin Pryzby wrote:\n> > > Having now played with the patch, I'll suggest that 10000000 is too high a\n> > > threshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\n> > > much lower (100000?) or use (0.2 * n_ins + 50) like the other autovacuum GUC.\n> >\n> > ISTM that the danger of regressing workloads due to suddenly repeatedly\n> > scanning huge indexes that previously were never / rarely scanned is\n> > significant (if there's a few dead tuples, otherwise most indexes will\n> > be able to skip the scan since the vacuum_cleanup_index_scale_factor\n> > introduction)).\n>\n> We could try to avoid that issue here:\n>\n> | /* If any tuples need to be deleted, perform final vacuum cycle */\n> | /* XXX put a threshold on min number of tuples here? */\n> | if (dead_tuples->num_tuples > 0)\n> | {\n> | /* Work on all the indexes, and then the heap */\n> | lazy_vacuum_all_indexes(onerel, Irel, indstats, vacrelstats,\n> | lps, nindexes);\n> |\n> | /* Remove tuples from heap */\n> | lazy_vacuum_heap(onerel, vacrelstats);\n> | }\n>\n> As you said, an insert-only table can skip scanning indexes, but an\n> insert-mostly table currently cannot.\n>\n> Maybe we could skip the final index scan if we hit the autovacuum insert\n> threshold?\n>\n> I still don't like mixing the thresholds with the behavior they imply, but\n> maybe what's needed is better docs describing all of vacuum's roles and its\n> procedure and priority in executing them.\n>\n> The dead tuples would just be cleaned up during a future vacuum, right ? So\n> that would be less efficient, but (no surprise) there's a balance to strike and\n> that can be tuned. I think that wouldn't be an issue for most people; the\n> worst case would be if you set high maint_work_mem, and low insert threshold,\n> and you got increased bloat. But faster vacuum if we avoided idx scans.\n>\n> That might allow more flexibility in our discussion around default values for\n> thresholds for insert-triggered vacuum.\n\nWe went over this a bit already. The risk is that if you have an\ninsert-mostly table and always trigger an auto-vacuum for inserts and\nnever due to dead tuples, then you'll forego the index cleanup every\ntime causing the indexes to bloat over time.\n\nI think any considerations to add some sort of threshold on dead\ntuples before cleaning the index should be considered independently.\nTrying to get everyone to agree to what's happening here is hard\nenough without adding more options to the list. I understand that\nthere may be small issues with insert-only tables with a tiny number\nof dead tuples, perhaps due to aborts, which could cause some issues while\nscanning the index, but that's really one of the big reasons why the\n10 million insert threshold has been added. Just in the past few hours\nwe've talked about having a very small scale factor to protect from\nover-vacuum on huge tables that see 10 million tuples inserted in\nshort spaces of time. I think that's a good compromise, however,\ncertainly not perfect.\n\nDavid\n\n\n",
"msg_date": "Thu, 19 Mar 2020 21:52:11 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 09:52:11PM +1300, David Rowley wrote:\n> On Thu, 19 Mar 2020 at 19:07, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Fri, Mar 13, 2020 at 02:38:51PM -0700, Andres Freund wrote:\n> > > On 2020-03-13 13:44:42 -0500, Justin Pryzby wrote:\n> > > > Having now played with the patch, I'll suggest that 10000000 is too high a\n> > > > threshold. If autovacuum runs without FREEZE, I don't see why it couldn't be\n> > > > much lower (100000?) or use (0.2 * n_ins + 50) like the other autovacuum GUC.\n> > >\n> > > ISTM that the danger of regressing workloads due to suddenly repeatedly\n> > > scanning huge indexes that previously were never / rarely scanned is\n> > > significant (if there's a few dead tuples, otherwise most indexes will\n> > > be able to skip the scan since the vacuum_cleanup_index_scale_factor\n> > > introduction)).\n> >\n> > We could try to avoid that issue here:\n> >\n> > | /* If any tuples need to be deleted, perform final vacuum cycle */\n> > | /* XXX put a threshold on min number of tuples here? */\n> > | if (dead_tuples->num_tuples > 0)\n> > | {\n> > | /* Work on all the indexes, and then the heap */\n> > | lazy_vacuum_all_indexes(onerel, Irel, indstats, vacrelstats,\n> > | lps, nindexes);\n> > |\n> > | /* Remove tuples from heap */\n> > | lazy_vacuum_heap(onerel, vacrelstats);\n> > | }\n> >\n> > As you said, an insert-only table can skip scanning indexes, but an\n> > insert-mostly table currently cannot.\n> >\n> > Maybe we could skip the final index scan if we hit the autovacuum insert\n> > threshold?\n> >\n> > I still don't like mixing the thresholds with the behavior they imply, but\n> > maybe what's needed is better docs describing all of vacuum's roles and its\n> > procedure and priority in executing them.\n> >\n> > The dead tuples would just be cleaned up during a future vacuum, right ? So\n> > that would be less efficient, but (no surprise) there's a balance to strike and\n> > that can be tuned. I think that wouldn't be an issue for most people; the\n> > worst case would be if you set high maint_work_mem, and low insert threshold,\n> > and you got increased bloat. But faster vacuum if we avoided idx scans.\n> >\n> > That might allow more flexibility in our discussion around default values for\n> > thresholds for insert-triggered vacuum.\n> \n> We went over this a bit already. The risk is that if you have an\n> insert-mostly table and always trigger an auto-vacuum for inserts and\n> never due to dead tuples, then you'll forego the index cleanup every\n> time causing the indexes to bloat over time.\n\nAt the time, we were talking about skipping index *cleanup* phase.\nWhich also incurs an index scan.\n>+\t\ttab->at_params.index_cleanup = insert_only ? VACOPT_TERNARY_DISABLED : VACOPT_TERNARY_DEFAULT;\nWe decided not to skip this, since it would allow index bloat, if vacuum were\nonly ever run due to inserts, and cleanup never happened.\n\nI'm suggesting the possibility of skipping not index *cleanup* but index (and\nheap) *vacuum*. So that saves an index scan itself, and I think implies later\nskipping cleanup (since no index tuples were removed). But more importantly, I\nthink if we skip that during an insert-triggered vacuum, the dead heap tuples\nare still there during the next vacuum instance. So unlike index cleanup\n(where we don't keep track of the total number of dead index tuples), this can\naccumulate over time, and eventually trigger index+heap vacuum, and cleanup.\n\n> I think any considerations to add some sort of threshold on dead\n> tuples before cleaning the index should be considered independently.\n\n+1, yes. I'm hoping to anticipate and mitigate any objections and regressions\nmore than raise them.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 19 Mar 2020 09:09:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 2020-03-19 at 21:39 +1300, David Rowley wrote:\n> > According to my reckoning, that is the remaining objection to the patch\n> > as it is (with ordinary freezing behavior).\n> >\n> > How about a scale_factor of 0.005? That will be high enough for large\n> > tables, which seem to be the main concern here.\n> \n> I agree with that, however, I'd thought 0.01, just so we're still\n> close to having about 100 times less work to do for huge insert-only\n> tables when it comes to having to perform an anti-wraparound vacuum.\n\nFine with me.\n\n> > I am still sorry to see more proactive freezing go, which would\n> > reduce the impact for truly insert-only tables.\n> > After sleeping on it, here is one last idea.\n> >\n> > Granted, freezing with vacuum_freeze_min_age = 0 poses a problem\n> > for those parts of the table that will receive updates or deletes.\n> > But what if insert-triggered vacuum operates with - say -\n> > one tenth of vacuum_freeze_min_age (unless explicitly overridden\n> > for the table)? That might still be high enough not to needlessly\n> > freeze too many tuples that will still be modified, but it will\n> > reduce the impact on insert-only tables.\n> \n> I think that might be a bit too magical and may not be what some\n> people want. I know that most people won't set\n> autovacuum_freeze_min_age to 0 for insert-only tables, but we can at\n> least throw something in the documents to mention it's a good idea,\n> however, looking over the docs I'm not too sure the best place to note\n> that down.\n\nI was afraid that idea would be too cute to appeal.\n\n> I've attached a small fix which I'd like to apply to your v8 patch.\n> With that, and pending one final look, I'd like to push this during my\n> Monday (New Zealand time). So if anyone strongly objects to that,\n> please state their case before then.\n\nThanks!\n\nI have rolled your edits into the attached patch v9, rebased against\ncurrent master.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 19 Mar 2020 20:47:40 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-19 06:45:48 +0100, Laurenz Albe wrote:\n> On Tue, 2020-03-17 at 18:02 -0700, Andres Freund wrote:\n> > I don't think a default scale factor of 0 is going to be ok. For\n> > large-ish tables this will basically cause permanent vacuums. And it'll\n> > sometimes trigger for tables that actually coped well so far. 10 million\n> > rows could be a few seconds, not more.\n> > \n> > I don't think that the argument that otherwise a table might not get\n> > vacuumed before autovacuum_freeze_max_age is convincing enough.\n> > \n> > a) if that's indeed the argument, we should increase the default\n> > autovacuum_freeze_max_age - now that there's insert triggered vacuums,\n> > the main argument against that from before isn't valid anymore.\n> > \n> > b) there's not really a good argument for vacuuming more often than\n> > autovacuum_freeze_max_age for such tables. It'll not be frequent\n> > enough to allow IOS for new data, and you're not preventing\n> > anti-wraparound vacuums from happening.\n> \n> According to my reckoning, that is the remaining objection to the patch\n> as it is (with ordinary freezing behavior).\n> \n> How about a scale_factor of 0.005? That will be high enough for large\n> tables, which seem to be the main concern here.\n\nSeems low on a first blush. On a large-ish table with 1 billion tuples,\nwe'd vacuum every 5 million inserts. For many ETL workloads this will\nresult in a vacuum after every bulk operation. Potentially with an index\nscan associated (even if there's no errors, a lot of bulk loads use ON\nCONFLICT INSERT leading to the occasional update).\n\nPersonally I think we should be considerably more conservative in the\nfirst release or two. Exposing a lot of people that previously didn't\nhave a lot of problems to vacuuming being *massively* more aggressive,\nbasically permanently running on an insert only table, will be bad.\n\n\n> I fully agree with your point a) - should that be part of the patch?\n\nWe can just make it a separate patch committed shortly afterwards.\n\n\n> I am not sure about b). In my mind, the objective is not to prevent\n> anti-wraparound vacuums, but to see that they have less work to do,\n> because previous autovacuum runs already have frozen anything older than\n> vacuum_freeze_min_age. So, assuming linear growth, the number of tuples\n> to freeze during any run would be at most one fourth of today's number\n> when we hit autovacuum_freeze_max_age.\n\nThis whole chain of arguments seems like it actually has little to do\nwith vacuuming insert only/mostly tables. The same problem exists for\ntables that aren't insert only/mostly. Instead it IMO is an argument for\na general change in logic about when to freeze.\n\nWhat exactly is it that you want to achieve by having anti-wrap vacuums\nbe quicker? If the goal is to reduce the window in which autovacuums\naren't automatically cancelled when there's a conflicting lock request,\nor in which autovacuum just schedules based on xid age, then you can't\nhave wraparound vacuums needing to do substantial amount of work.\n\nExcept for not auto-cancelling, and the autovac scheduling issue,\nthere's really nothing magic about anti-wrap vacuums.\n\n\nIf the goal is to avoid redundant writes, then it's largely unrelated to\nanti-wrap vacuums, and can to a large degree be addressed by\nopportunistically freezing (best even during hot pruning!).\n\n\nI am more and more convinced that it's a seriously bad idea to tie\ncommitting \"autovacuum after inserts\" to also committing a change in\nlogic around freezing. That's not to say we shouldn't try to address\nboth this cycle, but discussing them as if they really are one item\nmakes it both more likely that we get nothing in, and more likely that\nwe miss the larger picture.\n\n\n> I am still sorry to see more proactive freezing go, which would\n> reduce the impact for truly insert-only tables.\n> After sleeping on it, here is one last idea.\n> \n> Granted, freezing with vacuum_freeze_min_age = 0 poses a problem\n> for those parts of the table that will receive updates or deletes.\n\nIMO it's not at all just those regions that are potentially negatively\naffected:\nIf there are no other modifications to the page, more aggressively\nfreezing can lead to seriously increased write volume. It's quite normal\nto have databases where data in insert only tables *never* gets old\nenough to need to be frozen (either because xid usage is low, or because\nolder partitions are dropped). If data in an insert-only table isn't\nwrite-only, the hint bits are likely to already be set, which means that\nvacuum will just cause the entire table to be written another time,\nwithout a reason.\n\n\nI don't see how it's ok to substantially regress this very common\nworkload. IMO this basically means that more aggressively and\nnon-opportunistically freezing simply is a no-go (be it for insert or\nother causes for vacuuming).\n\nWhat am I missing?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 14:38:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "> > According to my reckoning, that is the remaining objection to the patch\n> > as it is (with ordinary freezing behavior).\n> >\n> > How about a scale_factor of 0.005? That will be high enough for large\n> > tables, which seem to be the main concern here.\n>\n> Seems low on a first blush. On a large-ish table with 1 billion tuples,\n> we'd vacuum every 5 million inserts. For many ETL workloads this will\n> result in a vacuum after every bulk operation. Potentially with an index\n> scan associated (even if there's no errors, a lot of bulk loads use ON\n> CONFLICT INSERT leading to the occasional update).\n\nThis is a good and wanted thing. Upthread it was already suggested\nthat \"everyone knows to vacuum after bulk operations\". This will go and vacuum\nthe data while it's hot and in caches, not afterwards, reading from disk.\n\n\n> > I am not sure about b). In my mind, the objective is not to prevent\n> > anti-wraparound vacuums, but to see that they have less work to do,\n> > because previous autovacuum runs already have frozen anything older than\n> > vacuum_freeze_min_age. So, assuming linear growth, the number of tuples\n> > to freeze during any run would be at most one fourth of today's number\n> > when we hit autovacuum_freeze_max_age.\n>\n> This whole chain of arguments seems like it actually has little to do\n> with vacuuming insert only/mostly tables. The same problem exists for\n> tables that aren't insert only/mostly. Instead it IMO is an argument for\n> a general change in logic about when to freeze.\n>\n> What exactly is it that you want to achieve by having anti-wrap vacuums\n> be quicker? If the goal is to reduce the window in which autovacuums\n> aren't automatically cancelled when there's a conflicting lock request,\n> or in which autovacuum just schedules based on xid age, then you can't\n> have wraparound vacuums needing to do substantial amount of work.\n\nThe problem hit by Mandrill is simple: in modern cloud environments\nit's sometimes simply impossible to read all the data on disk because\nof different kinds of throttling.\nAt some point your production database just shuts down and asks to\nVACUUM in single user mode for 40 days.\n\nYou want vacuum to happen long before that, preferably when the data\nis still in RAM, or, at least, fits your cloud provider's disk burst\nperformance budget, where performance of block device resembles that\nof an SSD and not of a Floppy Disk.\n\nSome more reading on how that works:\nhttps://aws.amazon.com/ru/blogs/database/understanding-burst-vs-baseline-performance-with-amazon-rds-and-gp2/\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\n\n",
"msg_date": "Fri, 20 Mar 2020 01:11:23 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-19 20:47:40 +0100, Laurenz Albe wrote:\n> On Thu, 2020-03-19 at 21:39 +1300, David Rowley wrote:\n> > I've attached a small fix which I'd like to apply to your v8 patch.\n> > With that, and pending one final look, I'd like to push this during my\n> > Monday (New Zealand time). So if anyone strongly objects to that,\n> > please state their case before then.\n\nI am doubtful it should be committed with the current settings. See below.\n\n\n> From 3ba4b572d82969bbb2af787d1bccc72f417ad3a0 Mon Sep 17 00:00:00 2001\n> From: Laurenz Albe <laurenz.albe@cybertec.at>\n> Date: Thu, 19 Mar 2020 20:26:43 +0100\n> Subject: [PATCH] Autovacuum tables that have received only inserts\n>\n> Add \"autovacuum_vacuum_insert_threshold\" and\n> \"autovacuum_vacuum_insert_scale_factor\" GUC and reloption.\n> The default value for the threshold is 10000000;\n> the scale factor defaults to 0.01.\n>\n> Any table that has received more inserts since it was\n> last vacuumed (and that is not vacuumed for another\n> reason) will be autovacuumed.\n>\n> This avoids the known problem that insert-only tables\n> are never autovacuumed until they need to have their\n> anti-wraparound autovacuum, which then can be massive\n> and disruptive.\n\nShouldn't this also mention index only scans? IMO that's at least as big\na problem as the \"large vacuum\" problem.\n\n\n> + <varlistentry id=\"guc-autovacuum-vacuum-insert-threshold\" xreflabel=\"autovacuum_vacuum_insert_threshold\">\n> + <term><varname>autovacuum_vacuum_insert_threshold</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>autovacuum_vacuum_insert_threshold</varname></primary>\n> + <secondary>configuration parameter</secondary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies the number of inserted tuples needed to trigger a\n> + <command>VACUUM</command> in any one table.\n> + The default is 10000000 tuples.\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line;\n> + but the setting can be overridden for individual tables by\n> + changing table storage parameters.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n> <varlistentry id=\"guc-autovacuum-analyze-threshold\" xreflabel=\"autovacuum_analyze_threshold\">\n> <term><varname>autovacuum_analyze_threshold</varname> (<type>integer</type>)\n> <indexterm>\n> @@ -7342,6 +7362,27 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n> </listitem>\n> </varlistentry>\n>\n> + <varlistentry id=\"guc-autovacuum-vacuum-insert-scale-factor\" xreflabel=\"autovacuum_vacuum_insert_scale_factor\">\n> + <term><varname>autovacuum_vacuum_insert_scale_factor</varname> (<type>floating point</type>)\n> + <indexterm>\n> + <primary><varname>autovacuum_vacuum_insert_scale_factor</varname></primary>\n> + <secondary>configuration parameter</secondary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies a fraction of the table size to add to\n> + <varname>autovacuum_vacuum_insert_threshold</varname>\n> + when deciding whether to trigger a <command>VACUUM</command>.\n> + The default is 0.01 (1% of table size).\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line;\n> + but the setting can be overridden for individual tables by\n> + changing table storage parameters.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> +\n\nI am *VERY* doubtful that the attempt of using a large threshold, and a\ntiny scale factor, is going to work out well. I'm not confident enough\nin my gut feeling to full throatedly object, but confident enough that\nI'd immediately change it on any important database I operated.\n\nIndependent of how large a constant you set the threshold to, for\ndatabases with substantially bigger tables this will lead to [near]\nconstant vacuuming. As soon as you hit 1 billion rows - which isn't\nactually that much - this is equivalent to setting\nautovacuum_{vacuum,analyze}_scale_factor to 0.01. There's cases where\nthat can be a sensible setting, but I don't think anybody would suggest\nit as a default.\n\n\nAfter thinking about it for a while, I think it's fundamentally flawed\nto use large constant thresholds to avoid unnecessary vacuums. It's easy\nto see cases where it's bad for common databases of today, but it'll be\nmuch worse a few years down the line where common table sizes have grown\nby a magnitude or two. Nor do they address the difference between tables\nof a certain size with e.g. 2kb wide rows, and a same sized table with\n28 byte wide rows. The point of constant thresholds imo can only be to\navoid unnecessary work at the *small* (even tiny) end, not the opposite.\n\n\nI think there's too much \"reinventing\" autovacuum scheduling in a\n\"local\" insert-only manner happening in this thread. And as far as I can\ntell additionally only looking at a somewhat narrow slice of insert only\nworkloads.\n\n\nI, again, strongly suggest using much more conservative values here. And\nthen try to address the shortcomings - like not freezing aggressively\nenough - in separate patches (and by now separate releases, in all\nlikelihood).\n\n\nThis will have a huge impact on a lot of postgres\ninstallations. Autovacuum already is perceived as one of the biggest\nissues around postgres. If the ratio of cases where these changes\nimprove things to the cases it regresses isn't huge, it'll be painful\n(silent improvements are obviously less noticed than breakages).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 15:17:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-20 01:11:23 +0300, Darafei \"Komяpa\" Praliaskouski wrote:\n> > > According to my reckoning, that is the remaining objection to the patch\n> > > as it is (with ordinary freezing behavior).\n> > >\n> > > How about a scale_factor of 0.005? That will be high enough for large\n> > > tables, which seem to be the main concern here.\n> >\n> > Seems low on a first blush. On a large-ish table with 1 billion tuples,\n> > we'd vacuum every 5 million inserts. For many ETL workloads this will\n> > result in a vacuum after every bulk operation. Potentially with an index\n> > scan associated (even if there's no errors, a lot of bulk loads use ON\n> > CONFLICT INSERT leading to the occasional update).\n> \n> This is a good and wanted thing.\n\nI don't think that's true in general. As proposed this can increase the\noverall amount of IO (both reads and writes) due to vacuum by a *LOT*.\n\n\n> Upthread it was already suggested that \"everyone knows to vacuum after\n> bulk operations\". This will go and vacuum the data while it's hot and\n> in caches, not afterwards, reading from disk.\n\nFor many bulk load cases the data will not be in cache, in particular not\nwhen individual bulk inserts are more than a few gigabytes.\n\n\n> The problem hit by Mandrill is simple: in modern cloud environments\n> it's sometimes simply impossible to read all the data on disk because\n> of different kinds of throttling.\n\nYes. Which is one of the reasons why this has the potential to cause\nserious issues. The proposed changes very often will *increase* the\ntotal amount of IO. A good fraction of the time that will be \"hidden\" by\ncaching, but it'll be far from all the time.\n\n\n> At some point your production database just shuts down and asks to\n> VACUUM in single user mode for 40 days.\n\nThat basically has nothing to do with what we're talking about here. The\ndefault wraparound trigger is 200 million xids, and shutdowns start at\nmore than 2 billion xids. If an anti-wrap autovacuum can't finish within\n2 billion rows, then this won't be addressed by vacuuming more\nfrequently (including more frequent index scans, causing a lot more\nIO!).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 15:27:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 20 Mar 2020 at 11:17, Andres Freund <andres@anarazel.de> wrote:\n> I think there's too much \"reinventing\" autovacuum scheduling in a\n> \"local\" insert-only manner happening in this thread. And as far as I can\n> tell additionally only looking at a somewhat narrow slice of insert only\n> workloads.\n\nI understand your concern and you might be right. However, I think the\nmain reason that the default settings for the new threshold and scale\nfactor have deviated this far from the existing settings is regarding\nthe example of a large insert-only table that receives inserts of 1\nrow per xact. If we were to copy the existing settings then when that\ntable gets to 1 billion rows, it would be eligible for an\ninsert-vacuum after 200 million tuples/xacts, which does not help the\nsituation since an anti-wraparound vacuum would be triggering then\nanyway.\n\nI'm unsure if it will help with the discussion, but I put together a\nquick and dirty C program to show when a table will be eligible for an\nauto-vacuum with the given scale_factor and threshold\n\n$ gcc -O2 vacuum.c -o vacuum\n$ ./vacuum\nSyntax ./vacuum <scale_factor> <threshold> <maximum table size in rows>\n$ ./vacuum 0.01 10000000 100000000000 | tail -n 1\nVacuum 463 at 99183465731 reltuples, 991915456 inserts\n$ ./vacuum 0.2 50 100000000000 | tail -n 1\nVacuum 108 at 90395206733 reltuples, 15065868288 inserts\n\nSo, yeah, certainly, there are more than four times as many vacuums\nwith an insert-only table of 100 billion rows using the proposed\nsettings vs the defaults for the existing scale_factor and threshold.\nHowever, at the tail end of the first run there, we were close to a\nbillion rows (991,915,456) between vacuums. Is that too excessive?\n\nI'm sharing this in the hope that it'll make it easy to experiment\nwith settings which we can all agree on.\n\nFor a 1 billion row table, the proposed settings give us 69 vacuums\nand the standard settings 83.",
"msg_date": "Fri, 20 Mar 2020 15:05:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-20 15:05:03 +1300, David Rowley wrote:\n> On Fri, 20 Mar 2020 at 11:17, Andres Freund <andres@anarazel.de> wrote:\n> > I think there's too much \"reinventing\" autovacuum scheduling in a\n> > \"local\" insert-only manner happening in this thread. And as far as I can\n> > tell additionally only looking at a somewhat narrow slice of insert only\n> > workloads.\n> \n> I understand your concern and you might be right. However, I think the\n> main reason that the default settings for the new threshold and scale\n> factor have deviated this far from the existing settings is regarding\n> the example of a large insert-only table that receives inserts of 1\n> row per xact. If we were to copy the existing settings then when that\n> table gets to 1 billion rows, it would be eligible for an\n> insert-vacuum after 200 million tuples/xacts, which does not help the\n> situation since an anti-wraparound vacuum would be triggering then\n> anyway.\n\nSure, that'd happen for inserts that happen after that threshold. I'm\njust not convinced that this is as huge a problem as presented in this\nthread. And I'm fairly convinced the proposed solution is the wrong\ndirection to go into.\n\nIt's not like that's not an issue for updates? If you update one row per\ntransaction, then you run into exactly the same issue for a table of the\nsame size? You maybe could argue that it's more common to insert 1\nbillion tuples in individual transactions, than it is to update 1 billion\ntuples in individual transactions, but I don't think it's a huge\ndifference if it even exists.\n\nIn fact the problem is worse for the update case, because that tends to\ngenerate a lot more random looking IO during vacuum (both because only\nparts of the table are updated causing small block reads/writes, and\nbecause it will need [multiple] index scans/vacuum, and because the\nvacuum is a lot more expensive CPU time wise).\n\nImo this line of reasoning is about adding autovacuum scheduling based\non xid age, not about insert only workloads.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 19:23:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 2020-03-19 at 15:17 -0700, Andres Freund wrote:\n> I am doubtful it should be committed with the current settings. See below.\n> \n> > From 3ba4b572d82969bbb2af787d1bccc72f417ad3a0 Mon Sep 17 00:00:00 2001\n> > From: Laurenz Albe <laurenz.albe@cybertec.at>\n> > Date: Thu, 19 Mar 2020 20:26:43 +0100\n> > Subject: [PATCH] Autovacuum tables that have received only inserts\n> > \n> > This avoids the known problem that insert-only tables\n> > are never autovacuumed until they need to have their\n> > anti-wraparound autovacuum, which then can be massive\n> > and disruptive.\n> \n> Shouldn't this also mention index only scans? IMO that's at least as big\n> a problem as the \"large vacuum\" problem.\n\nYes, that would be good.\n\n> I am *VERY* doubtful that the attempt of using a large threshold, and a\n> tiny scale factor, is going to work out well. I'm not confident enough\n> in my gut feeling to full throatedly object, but confident enough that\n> I'd immediately change it on any important database I operated.\n> \n> Independent of how large a constant you set the threshold to, for\n> databases with substantially bigger tables this will lead to [near]\n> constant vacuuming. As soon as you hit 1 billion rows - which isn't\n> actually that much - this is equivalent to setting\n> autovacuum_{vacuum,analyze}_scale_factor to 0.01. There's cases where\n> that can be a sensible setting, but I don't think anybody would suggest\n> it as a default.\n\nIn that, you are assuming that the bigger a table is, the more data\nmodifications it will get, so that making the scale factor the dominant\nelement will work out better.\n\nMy experience is that it is more likely for the change rate (inserts,\nI am less certain about updates and deletes) to be independent of the\ntable size. 
(Too) many large databases are so large not because the\ndata influx grows linearly over time, but because people don't want to\nget rid of old data (or would very much like to do so, but never planned\nfor it).\n\nThis second scenario would be much better served by a high threshold and\na low scale factor.\n\n> After thinking about it for a while, I think it's fundamentally flawed\n> to use large constant thresholds to avoid unnecessary vacuums. It's easy\n> to see cases where it's bad for common databases of today, but it'll be\n> much worse a few years down the line where common table sizes have grown\n> by a magnitude or two. Nor do they address the difference between tables\n> of a certain size with e.g. 2kb wide rows, and a same sized table with\n> 28 byte wide rows. The point of constant thresholds imo can only be to\n> avoid unnecessary work at the *small* (even tiny) end, not the opposite.\n> \n> \n> I think there's too much \"reinventing\" autovacuum scheduling in a\n> \"local\" insert-only manner happening in this thread. And as far as I can\n> tell additionally only looking at a somewhat narrow slice of insert only\n> workloads.\n\nPerhaps. The traditional \"high scale factor, low threshold\" system\nis (in my perception) mostly based on the objective of cleaning up\ndead tuples. When autovacuum was introduced, index only scans were\nonly a dream.\n\nWith the objective of getting rid of dead tuples, having the scale factor\nbe the dominant part makes sense: it is OK for bloat to be a certain\npercentage of the table size.\n\nAlso, as you say, tables were much smaller then, and they will only\nbecome bigger in the future. 
But I find that to be an argument *for*\nmaking the threshold the dominant element: otherwise, you vacuum less\nand less often, and the individual runs become larger and larger.\nNow that vacuum skips pages where it knows it has nothing to do,\ndoesn't that take away much of the pain of vacuuming large tables where\nnothing much has changed?\n\n> I, again, strongly suggest using much more conservative values here. And\n> then try to address the shortcomings - like not freezing aggressively\n> enough - in separate patches (and by now separate releases, in all\n> likelihood).\n\nThere is much to say for that, I agree.\n\n> This will have a huge impact on a lot of postgres\n> installations. Autovacuum already is perceived as one of the biggest\n> issues around postgres. If the ratio of cases where these changes\n> improve things to the cases it regresses isn't huge, it'll be painful\n> (silent improvements are obviously less noticed than breakages).\n\nYes, that makes it scary to mess with autovacuum.\n\nOne of the problems I see in the course of this discussion is that one\ncan always come up with examples that make any choice look bad.\nIt is impossible to do it right for everybody.\n\nIn the light of that, I won't object to a more conservative default\nvalue for the parameters, even though my considerations above suggest\nto me the opposite. But perhaps my conclusions are based on flawed\npremises.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 20 Mar 2020 06:59:57 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 2020-03-19 at 14:38 -0700, Andres Freund wrote:\n> > I am not sure about b). In my mind, the objective is not to prevent\n> > anti-wraparound vacuums, but to see that they have less work to do,\n> > because previous autovacuum runs already have frozen anything older than\n> > vacuum_freeze_min_age. So, assuming linear growth, the number of tuples\n> > to freeze during any run would be at most one fourth of today's number\n> > when we hit autovacuum_freeze_max_age.\n> \n> This whole chain of arguments seems like it actually has little to do\n> with vacuuming insert only/mostly tables. The same problem exists for\n> tables that aren't insert only/mostly. Instead it IMO is an argument for\n> a general change in logic about when to freeze.\n\nMy goal was to keep individual vacuum runs from having too much\nwork to do. The freezing was an afterthought.\n\nThe difference (for me) is that I am more convinced that the insert\nrate for insert-only table is constant over time than I am of the\nupdate rate to be constant.\n\n> What exactly is it that you want to achieve by having anti-wrap vacuums\n> be quicker? If the goal is to reduce the window in which autovacuums\n> aren't automatically cancelled when there's a conflicting lock request,\n> or in which autovacuum just schedules based on xid age, then you can't\n> have wraparound vacuums needing to do substantial amount of work.\n> \n> Except for not auto-cancelling, and the autovac scheduling issue,\n> there's really nothing magic about anti-wrap vacuums.\n\nYes. I am under the impression that it is the duration and amount\nof work per vacuum run that is the problem here, not the aggressiveness\nas such.\n\nIf you are in the habit of frequently locking tables with high\nlock modes (and I have seen people do that), you are lost anyway:\nnormal autovacuum runs will always die, and anti-wraparound vacuum\nwill kill you. 
There is nothing we can do about that, except perhaps\nput a fat warning in the documentation of LOCK.\n\n> If the goal is to avoid redundant writes, then it's largely unrelated to\n> anti-wrap vacuums, and can to a large degree addressed by\n> opportunistically freezing (best even during hot pruning!).\n> \n> \n> I am more and more convinced that it's a seriously bad idea to tie\n> committing \"autovacuum after inserts\" to also committing a change in\n> logic around freezing. That's not to say we shouldn't try to address\n> both this cycle, but discussing them as if they really are one item\n> makes it both more likely that we get nothing in, and more likely that\n> we miss the larger picture.\n\nI hear you, and I agree that we shouldn't do it with this patch.\n\n> If there are no other modifications to the page, more aggressively\n> freezing can lead to seriously increased write volume. Its quite normal\n> to have databases where data in insert only tables *never* gets old\n> enough to need to be frozen (either because xid usage is low, or because\n> older partitions are dropped). If data in an insert-only table isn't\n> write-only, the hint bits are likely to already be set, which means that\n> vacuum will just cause the entire table to be written another time,\n> without a reason.\n> \n> \n> I don't see how it's ok to substantially regress this very common\n> workload. IMO this basically means that more aggressively and\n> non-opportunistically freezing simply is a no-go (be it for insert or\n> other causes for vacuuming).\n> \n> What am I missing?\n\nNothing that I can see, and these are good examples why eager freezing\nmay not be such a smart idea after all.\n\nI think your idea of freezing everything on a page when we know it is\ngoing to be dirtied anyway is the smartest way of going about that.\n\nMy only remaining quibbles are about scale factor and threshold, see\nmy other mail.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 20 Mar 2020 07:17:40 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-19 06:45:48 +0100, Laurenz Albe wrote:\n> On Tue, 2020-03-17 at 18:02 -0700, Andres Freund wrote:\n> > I don't think a default scale factor of 0 is going to be ok. For\n> > large-ish tables this will basically cause permanent vacuums. And it'll\n> > sometimes trigger for tables that actually coped well so far. 10 million\n> > rows could be a few seconds, not more.\n> > \n> > I don't think that the argument that otherwise a table might not get\n> > vacuumed before autovacuum_freeze_max_age is convincing enough.\n> > \n> > a) if that's indeed the argument, we should increase the default\n> > autovacuum_freeze_max_age - now that there's insert triggered vacuums,\n> > the main argument against that from before isn't valid anymore.\n> > \n> > b) there's not really a good arguments for vacuuming more often than\n> > autovacuum_freeze_max_age for such tables. It'll not be not frequent\n> > enough to allow IOS for new data, and you're not preventing\n> > anti-wraparound vacuums from happening.\n> \n> According to my reckoning, that is the remaining objection to the patch\n> as it is (with ordinary freezing behavior).\n> \n> How about a scale_factor od 0.005? That will be high enough for large\n> tables, which seem to be the main concern here.\n> \n> I fully agree with your point a) - should that be part of the patch?\n> \n> I am not sure about b). In my mind, the objective is not to prevent\n> anti-wraparound vacuums, but to see that they have less work to do,\n> because previous autovacuum runs already have frozen anything older than\n> vacuum_freeze_min_age. 
So, assuming linear growth, the number of tuples\n> to freeze during any run would be at most one fourth of today's number\n> when we hit autovacuum_freeze_max_age.\n\nBased on two IM conversations I think it might be worth emphasizing how\nvacuum_cleanup_index_scale_factor works:\n\nFor btree, even if there is not a single deleted tuple, we can *still*\nend up doing a full index scan at the end of vacuum. As the docs describe\nvacuum_cleanup_index_scale_factor:\n\n <para>\n Specifies the fraction of the total number of heap tuples counted in\n the previous statistics collection that can be inserted without\n incurring an index scan at the <command>VACUUM</command> cleanup stage.\n This setting currently applies to B-tree indexes only.\n </para>\n\nI.e. with the default settings we will perform a whole-index scan\n(without visibility map or such) after every 10% growth of the\ntable. Which means that, even if the visibility map prevents repeated\ntable accesses, increasing the rate of vacuuming for insert-only tables\ncan cause a lot more whole index scans. Which means that vacuuming an\ninsert-only workload frequently *will* increase the total amount of IO,\neven if there is not a single dead tuple. Rather than just spreading the\nsame amount of IO over more vacuums.\n\nAnd both gin and gist just always do a full index scan, regardless of\nvacuum_cleanup_index_scale_factor (either during a bulk delete, or\nduring the cleanup). Thus more frequent vacuuming for insert-only\ntables can cause a *lot* of pain (even an approx quadratic increase of\nIO? O(increased_frequency * peak_index_size)?) if you have large\nindexes - which is very common for gin/gist.\n\n\nIs there something missing in the above description?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 23:20:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-20 06:59:57 +0100, Laurenz Albe wrote:\n> On Thu, 2020-03-19 at 15:17 -0700, Andres Freund wrote:\n> > I am *VERY* doubtful that the attempt of using a large threshold, and a\n> > tiny scale factor, is going to work out well. I'm not confident enough\n> > in my gut feeling to full throatedly object, but confident enough that\n> > I'd immediately change it on any important database I operated.\n> > \n> > Independent of how large a constant you set the threshold to, for\n> > databases with substantially bigger tables this will lead to [near]\n> > constant vacuuming. As soon as you hit 1 billion rows - which isn't\n> > actually that much - this is equivalent to setting\n> > autovacuum_{vacuum,analyze}_scale_factor to 0.01. There's cases where\n> > that can be a sensible setting, but I don't think anybody would suggest\n> > it as a default.\n> \n> In that, you are assuming that the bigger a table is, the more data\n> modifications it will get, so that making the scale factor the dominant\n> element will work out better.\n\n> My experience is that it is more likely for the change rate (inserts,\n> I am less certain about updates and deletes) to be independent of the\n> table size. (Too) many large databases are so large not because the\n> data influx grows linearly over time, but because people don't want to\n> get rid of old data (or would very much like to do so, but never planned\n> for it).\n\nI don't think growing ingest rate into insert only tables is exactly\nrare. Maybe I've been too long in the Bay Area though.\n\n\n> This second scenario would be much better served by a high threshold and\n> a low scale factor.\n\nI don't think that's really true. As soon as there's any gin/gist\nindexes, a single non-HOT dead tuple, or a btree index grew by more\nthan vacuum_cleanup_index_scale_factor, indexes are scanned as a\nwhole. 
See the email I just concurrently happened to write:\nhttps://postgr.es/m/20200320062031.uwagypenawujwajx%40alap3.anarazel.de\n\nWhich means that often each additional vacuum causes IO that's\nproportional to the *total* index size, *not* the table size\ndelta. Which means that the difference in total IO basically is\nO(increased_frequency * peak_table_size) in the worst case.\n\n\n\n\n> > After thinking about it for a while, I think it's fundamentally flawed\n> > to use large constant thresholds to avoid unnecessary vacuums. It's easy\n> > to see cases where it's bad for common databases of today, but it'll be\n> > much worse a few years down the line where common table sizes have grown\n> > by a magnitude or two. Nor do they address the difference between tables\n> > of a certain size with e.g. 2kb wide rows, and a same sized table with\n> > 28 byte wide rows. The point of constant thresholds imo can only be to\n> > avoid unnecessary work at the *small* (even tiny) end, not the opposite.\n> > \n> > \n> > I think there's too much \"reinventing\" autovacuum scheduling in a\n> > \"local\" insert-only manner happening in this thread. And as far as I can\n> > tell additionally only looking at a somewhat narrow slice of insert only\n> > workloads.\n> \n> Perhaps. The traditional \"high scale factor, low threshold\" system\n> is (in my perception) mostly based on the objective of cleaning up\n> dead tuples. When autovacuum was introduced, index only scans were\n> only a dream.\n> \n> With the objective of getting rid of dead tuples, having the scale factor\n> be the dominant part makes sense: it is OK for bloat to be a certain\n> percentage of the table size.\n> \n\nAs far as I can tell this argument doesn't make sense in light of the obvious\nfact that many vacuums trigger whole index scans, even if there are no\ndeleted tuples, as described above?\n\n\nEven disregarding the index issue, I still don't think your argument is\nvery convincing. 
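For scale, the 2kb-wide vs 28-byte-wide rows from the quote above, in plain arithmetic (a throwaway sketch of mine, not from the patch):

```python
# The same 10,000,000-row threshold covers wildly different data volumes
# depending on row width; the widths are taken from the quoted example.
rows = 10_000_000
for width_bytes in (28, 2048):
    gib = rows * width_bytes / 2**30
    print(width_bytes, round(gib, 2))
```

Ten million 28-byte rows are about a quarter of a GiB, while ten million 2 kB rows are around 19 GiB, so a pure row-count threshold treats very different amounts of work identically.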
For one, as I mentioned in another recent email, 10\nmillion rows in a narrow table is something entirely different than 10\nmillion rows in a very wide table. scale_factor doesn't have that\nproblem to the same degree. Also, it's fairly obvious that this\nargument doesn't hold in the general sense, otherwise we could just set\na threshold of, say, 10000.\n\nThere's also the issue that frequent vacuums will often not be able to\nmark most of the new data all-visible, due to concurrent\nsessions. E.g. concurrent bulk loading sessions, analytics queries\nactually looking at the data, replicas all can easily prevent data that\nwas just inserted from being marked 'all-visible' (not to speak of\nfrozen). That's not likely to be a problem in a purely oltp system that\ninserts only single rows per xact, and has no longlived readers (nor\nreplicas with hs_feedback = on), but outside of that...\n\n\n> Also, as you say, tables were much smaller then, and they will only\n> become bigger in the future. But I find that to be an argument *for*\n> making the threshold the dominant element: otherwise, you vacuum less\n> and less often, and the individual runs become larger and larger.\n\nWhich mostly is ok, because there are significant costs that scale with\nthe table size. And in a lot (but far from all!) of cases the benefits\nof vacuuming scale more with the overall table size than with the delta\nof the size.\n\n\n> Now that vacuum skips pages where it knows it has nothing to do,\n> doesn't take away much of the pain of vacuuming large tables where\n> nothing much has changed?\n\nUnfortunately not really.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Mar 2020 23:44:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 20 Mar 2020 at 15:20, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-19 06:45:48 +0100, Laurenz Albe wrote:\n> > On Tue, 2020-03-17 at 18:02 -0700, Andres Freund wrote:\n> > > I don't think a default scale factor of 0 is going to be ok. For\n> > > large-ish tables this will basically cause permanent vacuums. And it'll\n> > > sometimes trigger for tables that actually coped well so far. 10 million\n> > > rows could be a few seconds, not more.\n> > >\n> > > I don't think that the argument that otherwise a table might not get\n> > > vacuumed before autovacuum_freeze_max_age is convincing enough.\n> > >\n> > > a) if that's indeed the argument, we should increase the default\n> > > autovacuum_freeze_max_age - now that there's insert triggered vacuums,\n> > > the main argument against that from before isn't valid anymore.\n> > >\n> > > b) there's not really a good arguments for vacuuming more often than\n> > > autovacuum_freeze_max_age for such tables. It'll not be not frequent\n> > > enough to allow IOS for new data, and you're not preventing\n> > > anti-wraparound vacuums from happening.\n> >\n> > According to my reckoning, that is the remaining objection to the patch\n> > as it is (with ordinary freezing behavior).\n> >\n> > How about a scale_factor od 0.005? That will be high enough for large\n> > tables, which seem to be the main concern here.\n> >\n> > I fully agree with your point a) - should that be part of the patch?\n> >\n> > I am not sure about b). In my mind, the objective is not to prevent\n> > anti-wraparound vacuums, but to see that they have less work to do,\n> > because previous autovacuum runs already have frozen anything older than\n> > vacuum_freeze_min_age. 
So, assuming linear growth, the number of tuples\n> > to freeze during any run would be at most one fourth of today's number\n> > when we hit autovacuum_freeze_max_age.\n>\n> Based on two IM conversations I think it might be worth emphasizing how\n> vacuum_cleanup_index_scale_factor works:\n>\n> For btree, even if there is not a single deleted tuple, we can *still*\n> end up doing a full index scans at the end of vacuum. As the docs describe\n> vacuum_cleanup_index_scale_factor:\n>\n> <para>\n> Specifies the fraction of the total number of heap tuples counted in\n> the previous statistics collection that can be inserted without\n> incurring an index scan at the <command>VACUUM</command> cleanup stage.\n> This setting currently applies to B-tree indexes only.\n> </para>\n>\n> I.e. with the default settings we will perform a whole-index scan\n> (without visibility map or such) after every 10% growth of the\n> table. Which means that, even if the visibility map prevents repeated\n> tables accesses, increasing the rate of vacuuming for insert-only tables\n> can cause a lot more whole index scans. Which means that vacuuming an\n> insert-only workload frequently *will* increase the total amount of IO,\n> even if there is not a single dead tuple. Rather than just spreading the\n> same amount of IO over more vacuums.\n\nRight.\n\n>\n> And both gin and gist just always do a full index scan, regardless of\n> vacuum_cleanup_index_scale_factor (either during a bulk delete, or\n> during the cleanup). Thus more frequent vacuuming for insert-only\n> tables can cause a *lot* of pain (even an approx quadratic increase of\n> IO? O(increased_frequency * peak_index_size)?) 
if you have large\n> indexes - which is very common for gin/gist.\n\nThat's right, but for gin, more frequent vacuuming for insert-only\ntables can help to clean up the pending list, which increases search\nspeed and is better than doing it in a backend process.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Mar 2020 15:59:01 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 2020-03-19 at 23:20 -0700, Andres Freund wrote:\n> I am not sure about b). In my mind, the objective is not to prevent\n> > anti-wraparound vacuums, but to see that they have less work to do,\n> > because previous autovacuum runs already have frozen anything older than\n> > vacuum_freeze_min_age. So, assuming linear growth, the number of tuples\n> > to freeze during any run would be at most one fourth of today's number\n> > when we hit autovacuum_freeze_max_age.\n> \n> Based on two IM conversations I think it might be worth emphasizing how\n> vacuum_cleanup_index_scale_factor works:\n> \n> For btree, even if there is not a single deleted tuple, we can *still*\n> end up doing a full index scans at the end of vacuum. As the docs describe\n> vacuum_cleanup_index_scale_factor:\n> \n> <para>\n> Specifies the fraction of the total number of heap tuples counted in\n> the previous statistics collection that can be inserted without\n> incurring an index scan at the <command>VACUUM</command> cleanup stage.\n> This setting currently applies to B-tree indexes only.\n> </para>\n> \n> I.e. with the default settings we will perform a whole-index scan\n> (without visibility map or such) after every 10% growth of the\n> table. Which means that, even if the visibility map prevents repeated\n> tables accesses, increasing the rate of vacuuming for insert-only tables\n> can cause a lot more whole index scans. Which means that vacuuming an\n> insert-only workload frequently *will* increase the total amount of IO,\n> even if there is not a single dead tuple. Rather than just spreading the\n> same amount of IO over more vacuums.\n> \n> And both gin and gist just always do a full index scan, regardless of\n> vacuum_cleanup_index_scale_factor (either during a bulk delete, or\n> during the cleanup). Thus more frequent vacuuming for insert-only\n> tables can cause a *lot* of pain (even an approx quadratic increase of\n> IO? O(increased_frequency * peak_index_size)?) 
if you have large\n> indexes - which is very common for gin/gist.\n\nOk, ok. Thanks for the explanation.\n\nIn the light of that, I agree that we should increase the scale_factor.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 20 Mar 2020 14:43:20 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 2020-03-20 at 14:43 +0100, Laurenz Albe wrote:\n> I.e. with the default settings we will perform a whole-index scan\n> > (without visibility map or such) after every 10% growth of the\n> > table. Which means that, even if the visibility map prevents repeated\n> > tables accesses, increasing the rate of vacuuming for insert-only tables\n> > can cause a lot more whole index scans. Which means that vacuuming an\n> > insert-only workload frequently *will* increase the total amount of IO,\n> > even if there is not a single dead tuple. Rather than just spreading the\n> > same amount of IO over more vacuums.\n> > \n> > And both gin and gist just always do a full index scan, regardless of\n> > vacuum_cleanup_index_scale_factor (either during a bulk delete, or\n> > during the cleanup). Thus more frequent vacuuming for insert-only\n> > tables can cause a *lot* of pain (even an approx quadratic increase of\n> > IO? O(increased_frequency * peak_index_size)?) if you have large\n> > indexes - which is very common for gin/gist.\n> \n> In the light of that, I agree that we should increase the scale_factor.\n\nHere is version 10 of the patch, which uses a scale factor of 0.2.\n\nYours,\nLaurenz Albe",
"msg_date": "Mon, 23 Mar 2020 14:27:29 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 2020-03-23 at 14:27 +0100, Laurenz Albe wrote:\n> Here is version 10 of the patch, which uses a scale factor of 0.2.\n\nThis patch should be what everybody can live with.\n\nIt would be good if we can get at least that committed before feature freeze.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 25 Mar 2020 15:38:03 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 02:27:29PM +0100, Laurenz Albe wrote:\n> Here is version 10 of the patch, which uses a scale factor of 0.2.\n\nThanks\n\n> Any table that has received more inserts since it was\n> last vacuumed (and that is not vacuumed for another\n> reason) will be autovacuumed.\n\nSince this vacuum doesn't trigger any special behavior (freeze), you can remove\nthe parenthesized part: \"(and that is not vacuumed for another reason)\".\n\nMaybe in the docs you can write this with thousands separators: 10,000,000\n\nIt looks like the GUC uses scale factor max=1e10, but the relopt is still\nmax=100, which means it's less possible to disable for a single rel.\n\n> +++ b/src/backend/access/common/reloptions.c\n> @@ -398,6 +407,15 @@ static relopt_real realRelOpts[] =\n> \t\t},\n> \t\t-1, 0.0, 100.0\n> \t},\n> +\t{\n> +\t\t{\n> +\t\t\t\"autovacuum_vacuum_insert_scale_factor\",\n> +\t\t\t\"Number of tuple inserts prior to vacuum as a fraction of reltuples\",\n> +\t\t\tRELOPT_KIND_HEAP | RELOPT_KIND_TOAST,\n> +\t\t\tShareUpdateExclusiveLock\n> +\t\t},\n> +\t\t-1, 0.0, 100.0\n> +\t},\n> \t{\n> \t\t{\n> \t\t\t\"autovacuum_analyze_scale_factor\",\n\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -3549,6 +3558,17 @@ static struct config_real ConfigureNamesReal[] =\n> \t\t0.2, 0.0, 100.0,\n> \t\tNULL, NULL, NULL\n> \t},\n> +\n> +\t{\n> +\t\t{\"autovacuum_vacuum_insert_scale_factor\", PGC_SIGHUP, AUTOVACUUM,\n> +\t\t\tgettext_noop(\"Number of tuple inserts prior to vacuum as a fraction of reltuples.\"),\n> +\t\t\tNULL\n> +\t\t},\n> +\t\t&autovacuum_vac_ins_scale,\n> +\t\t0.2, 0.0, 1e10,\n> +\t\tNULL, NULL, NULL\n> +\t},\n> +\n> \t{\n> \t\t{\"autovacuum_analyze_scale_factor\", PGC_SIGHUP, AUTOVACUUM,\n> \t\t\tgettext_noop(\"Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples.\"),\n\n\n",
"msg_date": "Wed, 25 Mar 2020 10:06:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On 2020-Mar-25, Justin Pryzby wrote:\n\n> Maybe in the docs you can write this with thousands separators: 10,000,000\n> \n> It looks like the GUC uses scale factor max=1e10, but the relopt is still\n> max=100, which means it's less possible to disable for a single rel.\n\nI have paid no attention to this thread, but how does it make sense to\nhave a scale factor to be higher than 100? Surely you mean the\nthreshold value that should be set to ten million, not the scale factor?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Mar 2020 12:46:52 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 12:46:52PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-25, Justin Pryzby wrote:\n> \n> > Maybe in the docs you can write this with thousands separators: 10,000,000\n> > \n> > It looks like the GUC uses scale factor max=1e10, but the relopt is still\n> > max=100, which means it's less possible to disable for a single rel.\n> \n> I have paid no attention to this thread, but how does it make sense to\n> have a scale factor to be higher than 100? Surely you mean the\n> threshold value that should be set to ten million, not the scale factor?\n\nWe went over this here:\nhttps://www.postgresql.org/message-id/20200317195616.GZ26184%40telsasoft.com\n...\nhttps://www.postgresql.org/message-id/20200317213426.GB26184%40telsasoft.com\n\nThe scale factor is relative to the reltuples estimate, which comes from vacuum\n(which presently doesn't run against insert-only tables, and what we're trying\nto schedule), or analyze, which probably runs adequately, but might be disabled\nor run too infrequently.\n\nSince we talked about how scale_factor can be used to effectively disable this\nnew feature, I thought that scale=100 was too small and suggesed 1e10 (same as\nmax for vacuum_cleanup_index_scale_factor since 4d54543ef). That should allow\nhandling the case that analyze is disabled, or its threshold is high, or it\nhasn't run yet, or it's running but hasn't finished, or analyze is triggered as\nsame time as vacuum.\n\nA table with 1e7 tuples (threshold) into which one inserts 1e9 tuples would hit\nscale_factor=100 threshold, which means scale_factor failed to \"disable\" the\nfeature, as claimed. If anything, I think it may need to be larger...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 Mar 2020 11:05:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-25 11:05:21 -0500, Justin Pryzby wrote:\n> Since we talked about how scale_factor can be used to effectively disable this\n> new feature, I thought that scale=100 was too small and suggesed 1e10 (same as\n> max for vacuum_cleanup_index_scale_factor since 4d54543ef). That should allow\n> handling the case that analyze is disabled, or its threshold is high, or it\n> hasn't run yet, or it's running but hasn't finished, or analyze is triggered as\n> same time as vacuum.\n\nFor disabling we instead should allow -1, and disable the feature if set\nto < 0.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Mar 2020 12:26:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 10:26 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-03-25 11:05:21 -0500, Justin Pryzby wrote:\n> > Since we talked about how scale_factor can be used to effectively disable this\n> > new feature, I thought that scale=100 was too small and suggesed 1e10 (same as\n> > max for vacuum_cleanup_index_scale_factor since 4d54543ef). That should allow\n> > handling the case that analyze is disabled, or its threshold is high, or it\n> > hasn't run yet, or it's running but hasn't finished, or analyze is triggered as\n> > same time as vacuum.\n>\n> For disabling we instead should allow -1, and disable the feature if set\n> to < 0.\n\nThis patch introduces both GUC and reloption. In reloptions we\ntypically use -1 for \"disable reloption, use GUC value instead\"\nsemantics. So it's unclear how should we allow reloption to both\ndisable feature and disable reloption. I think we don't have a\nprecedent in the codebase yet. We could allow -2 (disable reloption)\nand -1 (disable feature) for reloption. Opinions?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 25 Mar 2020 23:19:23 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, 2020-03-25 at 23:19 +0300, Alexander Korotkov wrote:\n> On Wed, Mar 25, 2020 at 10:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2020-03-25 11:05:21 -0500, Justin Pryzby wrote:\n> > > Since we talked about how scale_factor can be used to effectively disable this\n> > > new feature, I thought that scale=100 was too small and suggesed 1e10 (same as\n> > > max for vacuum_cleanup_index_scale_factor since 4d54543ef). That should allow\n> > > handling the case that analyze is disabled, or its threshold is high, or it\n> > > hasn't run yet, or it's running but hasn't finished, or analyze is triggered as\n> > > same time as vacuum.\n> > \n> > For disabling we instead should allow -1, and disable the feature if set\n> > to < 0.\n> \n> This patch introduces both GUC and reloption. In reloptions we\n> typically use -1 for \"disable reloption, use GUC value instead\"\n> semantics. So it's unclear how should we allow reloption to both\n> disable feature and disable reloption. I think we don't have a\n> precedent in the codebase yet. We could allow -2 (disable reloption)\n> and -1 (disable feature) for reloption. Opinions?\n\nHere is patch v11, where the reloption has the same upper limit 1e10\nas the GUC. There is no good reason to have them different.\n\nI am reluctant to introduce new semantics like a reloption value of -2\nto disable a feature in this patch right before feature freeze.\n\nI believe there are enough options to disable insert-only vacuuming for\nan individual table:\n\n- Set the threshold to 2147483647. True, that will not work for very\n large tables, but I think that there are few tables that insert that\n many rows before they hit autovacuum_freeze_max_age anyway.\n\n- Set the scale factor to some astronomical value.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Thu, 26 Mar 2020 10:12:39 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-26 10:12:39 +0100, Laurenz Albe wrote:\n> On Wed, 2020-03-25 at 23:19 +0300, Alexander Korotkov wrote:\n> I am reluctant to introduce new semantics like a reloption value of -2\n> to disable a feature in this patch right before feature freeze.\n> \n> I believe there are enough options to disable insert-only vacuuming for\n> an individual table:\n\n> - Set the threshold to 2147483647. True, that will not work for very\n> large tables, but I think that there are few tables that insert that\n> many rows before they hit autovacuum_freeze_max_age anyway.\n> \n> - Set the scale factor to some astronomical value.\n\nMeh. You *are* adding new semantics with these. And they're terrible.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Mar 2020 11:50:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 27 Mar 2020 at 07:51, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-26 10:12:39 +0100, Laurenz Albe wrote:\n> > On Wed, 2020-03-25 at 23:19 +0300, Alexander Korotkov wrote:\n> > I am reluctant to introduce new semantics like a reloption value of -2\n> > to disable a feature in this patch right before feature freeze.\n> >\n> > I believe there are enough options to disable insert-only vacuuming for\n> > an individual table:\n>\n> > - Set the threshold to 2147483647. True, that will not work for very\n> > large tables, but I think that there are few tables that insert that\n> > many rows before they hit autovacuum_freeze_max_age anyway.\n> >\n> > - Set the scale factor to some astronomical value.\n>\n> Meh. You *are* adding new semantics with these. And they're terrible.\n\nI've modified this to allow a proper way to disable the entire feature\nby allowing the setting to be set to -1 to disable the feature. I feel\npeople are fairly used to using -1 to disable various features (e.g.\nlog_autovacuum_min_duration). I've used the special value of -2 for\nthe reloption to have that cascade to using the GUC instead. The\nautovacuum_vacuum_insert_threshold reloption may be explicitly set to\n-1 to disable autovacuums for inserts for the relation.\n\nI've also knocked the default threshold down to 1000. I feel this is a\nbetter value given that the scale factor is now 0.2. There seemed to\nbe no need to exclude smaller tables from seeing gains such as\nindex-only scans.\n\nIf nobody objects, I plan to push this one shortly.\n\nDavid",
"msg_date": "Fri, 27 Mar 2020 10:18:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 2020-03-27 at 10:18 +1300, David Rowley wrote:\n> > > I believe there are enough options to disable insert-only vacuuming for\n> > > an individual table:\n> >\n> > > - Set the threshold to 2147483647. True, that will not work for very\n> > > large tables, but I think that there are few tables that insert that\n> > > many rows before they hit autovacuum_freeze_max_age anyway.\n> > >\n> > > - Set the scale factor to some astronomical value.\n> >\n> > Meh. You *are* adding new semantics with these. And they're terrible.\n> \n> I've modified this to allow a proper way to disable the entire feature\n> by allowing the setting to be set to -1 to disable the feature. I feel\n> people are fairly used to using -1 to disable various features (e.g.\n> log_autovacuum_min_duration). I've used the special value of -2 for\n> the reloption to have that cascade to using the GUC instead. The\n> autovacuum_vacuum_insert_threshold reloption may be explicitly set to\n> -1 to disable autovacuums for inserts for the relation.\n> \n> I've also knocked the default threshold down to 1000. I feel this is a\n> better value given that the scale factor is now 0.2. There seemed to\n> be no need to exclude smaller tables from seeing gains such as\n> index-only scans.\n> \n> If nobody objects, I plan to push this one shortly.\n\nThanks for the help!\n\nThe new meaning of -2 should be documented, other than that it looks\ngood to me.\n\nI'll accept the new semantics, but they don't make me happy. People are\nused to -1 meaning \"use the GUC value instead\".\n\nI still don't see why we need that. Contrary to Andres' opinion, I don't\nthink that disabling a parameter by setting it to a value high enough that\nit does not take effect is a bad thing.\n\nI won't put up a fight though.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 27 Mar 2020 10:40:00 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 27 Mar 2020 at 22:40, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> The new meaning of -2 should be documented, other than that it looks\n> good to me.\n\nBut the users don't need to know anything about -2. It's not possible\nto explicitly set the value to -2. This is just the reset value of the\nreloption which means \"use the GUC\".\n\n> I'll accept the new semantics, but they don't make me happy. People are\n> used to -1 meaning \"use the GUC value instead\".\n\nThe problem with having -1 on the reloption meaning use the GUC, in\nthis case, is that it means the reset value of the reloption must be\n-1 and we need to allow them to set -2 explicitly, and if we do that,\nthen -1 also becomes a valid value that users can set. Maybe that's\nnot the end of the world, but I'd rather have the reset value be\nunsettable by users. To me, that's less confusing as there are fewer\nspecial values to remember the meaning of.\n\nThe reason I want a method to explicitly disable the feature is the\nfact that it's easy to document and it should reduce the number of\npeople who are confused about the best method to disable the feature.\nI know there's going to be a non-zero number of people who'll want to\ndo that.\n\n\n",
"msg_date": "Sat, 28 Mar 2020 11:59:07 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, 2020-03-28 at 11:59 +1300, David Rowley wrote:\n> On Fri, 27 Mar 2020 at 22:40, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > The new meaning of -2 should be documented, other than that it looks\n> > good to me.\n> \n> But the users don't need to know anything about -2. It's not possible\n> to explicitly set the value to -2. This is just the reset value of the\n> reloption which means \"use the GUC\".\n\nI see.\n\n> > I'll accept the new semantics, but they don't make me happy. People are\n> > used to -1 meaning \"use the GUC value instead\".\n> \n> The problem with having -1 on the reloption meaning use the GUC, in\n> this case, is that it means the reset value of the reloption must be\n> -1 and we need to allow them to set -2 explicitly, and if we do that,\n> then -1 also becomes a valid value that users can set. Maybe that's\n> not the end of the world, but I'd rather have the reset value be\n> unsettable by users. To me, that's less confusing as there are fewer\n> special values to remember the meaning of.\n> \n> The reason I want a method to explicitly disable the feature is the\n> fact that it's easy to document and it should reduce the number of\n> people who are confused about the best method to disable the feature.\n> I know there's going to be a non-zero number of people who'll want to\n> do that.\n\nIn the light of that, I have no objections.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Sat, 28 Mar 2020 05:12:11 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, 28 Mar 2020 at 17:12, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> In the light of that, I have no objections.\n\nThank you. Pushed.\n\nDavid\n\n\n",
"msg_date": "Sat, 28 Mar 2020 19:21:33 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, 28 Mar 2020 at 19:21, David Rowley <dgrowleyml@gmail.com> wrote:\n> Thank you. Pushed.\n\nI'm unsure yet if this has caused an instability on lousyjack's run in\n[1]. I see that table does have 30,000 rows inserted, so it does seem\nprobable that it may receive an autovacuum now when didn't before. I\ndid a quick local test to see if swapping the \"ANALYZE pagg_tab_ml;\"\nto \"VACUUM ANALYZE pagg_tab_ml;\" would do the same on my local\nmachine, but it didn't.\n\nI'll keep an eye on lousyjack's next run. If it passes next run, I\nmay add some SQL to determine if pg_stat_all_tables.autovacuum_count\nfor those tables are varying between passing and failing runs.\n\nDavid\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2006%3A33%3A02\n\n\n",
"msg_date": "Sat, 28 Mar 2020 22:22:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Sat, 28 Mar 2020 at 17:12, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>> In the light of that, I have no objections.\n\n> Thank you. Pushed.\n\nIt seems like this commit has resulted in some buildfarm instability:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2006%3A33%3A02\n\napparent change of plan\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2009%3A20%3A05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2013%3A20%3A05\n\nunstable results in stats_ext test\n\nI initially thought that Dean's functional-stats adjustment might be\nthe culprit, but the timestamps on these failures disprove that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Mar 2020 13:26:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sun, 29 Mar 2020 at 06:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Sat, 28 Mar 2020 at 17:12, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> >> In the light of that, I have no objections.\n>\n> > Thank you. Pushed.\n>\n> It seems like this commit has resulted in some buildfarm instability:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2006%3A33%3A02\n>\n> apparent change of plan\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2009%3A20%3A05\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2013%3A20%3A05\n>\n> unstable results in stats_ext test\n\nYeah, thanks for pointing that out. I'm just doing some tests locally\nto see if I can recreate those results after vacuuming the mcv_list\ntable, so far I'm unable to.\n\n\n",
"msg_date": "Sun, 29 Mar 2020 10:30:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sun, 29 Mar 2020 at 10:30, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 29 Mar 2020 at 06:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > On Sat, 28 Mar 2020 at 17:12, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > >> In the light of that, I have no objections.\n> >\n> > > Thank you. Pushed.\n> >\n> > It seems like this commit has resulted in some buildfarm instability:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2006%3A33%3A02\n> >\n> > apparent change of plan\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2009%3A20%3A05\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2013%3A20%3A05\n> >\n> > unstable results in stats_ext test\n>\n> Yeah, thanks for pointing that out. I'm just doing some tests locally\n> to see if I can recreate those results after vacuuming the mcv_list\n> table, so far I'm unable to.\n\nI'm considering pushing the attached to try to get some confirmation\nthat additional autovacuums are the issue. However, I'm not too sure\nit's a wise idea to as I can trigger an additional auto-vacuum and\nhave these new tests fail with make installcheck after setting\nautovacuum_naptime to 1s, but I'm not getting the other diffs\nexperienced by lousyjack and petalura. The patch may just cause more\nfailures without proving much, especially so with slower machines.\n\nThe other idea I had was just to change the\nautovacuum_vacuum_insert_threshold relopt to -1 for the problem tables\nand see if that stabilises things.\n\nYet another option would be to see if reltuples varies between runs\nand ditch the autovacuum_count column from the attached. There does\nnot appear to be any part of the tests which would cause any dead\ntuples in any of the affected relations, so I'm unsure why reltuples\nwould vary between what ANALYZE and VACUUM would set it to.\n\nI'm still thinking. Input welcome.\n\nDavid",
"msg_date": "Sun, 29 Mar 2020 15:29:51 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sun, 29 Mar 2020 at 15:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm considering pushing the attached to try to get some confirmation\n> that additional autovacuums are the issue. However, I'm not too sure\n> it's a wise idea to as I can trigger an additional auto-vacuum and\n> have these new tests fail with make installcheck after setting\n> autovacuum_naptime to 1s, but I'm not getting the other diffs\n> experienced by lousyjack and petalura. The patch may just cause more\n> failures without proving much, especially so with slower machines.\n\nInstead of the above, I ended up modifying the two intermittently\nfailing tests to change the ANALYZE into a VACUUM ANALYZE. This\nshould prevent autovacuum sneaking in a vacuum at some point in time\nafter the ANALYZE has taken place.\n\nI don't believe any of the current buildfarm failures can be\nattributed to any of the recent changes to autovacuum, but I'll\ncontinue to monitor the farm to see if anything is suspect.\n\nDavid\n\n\n",
"msg_date": "Mon, 30 Mar 2020 15:04:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I don't believe any of the current buildfarm failures can be\n> attributed to any of the recent changes to autovacuum, but I'll\n> continue to monitor the farm to see if anything is suspect.\n\nI agree none of the failures I see right now are related to that\n(there's some \"No space left on device\" failures, Windows randomicity,\nsnapper's compiler bug, and don't-know-what on hyrax).\n\nBut the ones that were seemingly due to that were intermittent,\nso we'll have to watch for awhile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Mar 2020 22:17:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 7:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I don't believe any of the current buildfarm failures can be\n> > attributed to any of the recent changes to autovacuum, but I'll\n> > continue to monitor the farm to see if anything is suspect.\n>\n> I agree none of the failures I see right now are related to that\n> (there's some \"No space left on device\" failures, Windows randomicity,\n> snapper's compiler bug, and don't-know-what on hyrax).\n>\n> But the ones that were seemingly due to that were intermittent,\n> so we'll have to watch for awhile.\n>\n\nToday, stats_ext failed on petalura [1]. Can it be due to this? I\nhave also committed a patch but immediately I don't see it to be\nrelated to my commit.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-30%2002%3A20%3A03\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Mar 2020 08:37:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Mon, Mar 30, 2020 at 7:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> But the ones that were seemingly due to that were intermittent,\n>> so we'll have to watch for awhile.\n\n> Today, stats_ext failed on petalura [1]. Can it be due to this? I\n> have also committed a patch but immediately I don't see it to be\n> related to my commit.\n\nYeah, this looks just like petalura's previous failures, so the\nproblem is still there :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Mar 2020 23:33:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, 2020-03-28 at 19:21 +1300, David Rowley wrote:\n> Thank you. Pushed.\n\nThanks for your efforts on this, and thanks for working on the fallout.\n\nHow can it be that even after an explicit VACUUM, this patch can cause\nunstable regression test results?\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 30 Mar 2020 06:57:08 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 30 Mar 2020 at 17:57, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> How can it be that even after an explicit VACUUM, this patch can cause\n> unstable regression test results?\n\nI only added vacuums for mcv_lists. The problem with petalura [1] is\nwith the functional_dependencies table.\n\nI'll see if I can come up with some way to do this in a more\ndeterministic way to determine which tables to add vacuums for, rather\nthan waiting for and reacting post-failure.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-30%2002%3A20%3A03\n\n\n",
"msg_date": "Mon, 30 Mar 2020 19:49:35 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Mon, 30 Mar 2020 at 19:49, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll see if I can come up with some way to do this in a more\n> deterministic way to determine which tables to add vacuums for, rather\n> than waiting for and reacting post-failure.\n\nI ended up running make installcheck on an instance with\nautovacuum_naptime set to 1s with a small additional debug line in\nautovacuum.c, namely:\n\ndiff --git a/src/backend/postmaster/autovacuum.c\nb/src/backend/postmaster/autovacuum.c\nindex 7e97ffab27..ad81e321dc 100644\n--- a/src/backend/postmaster/autovacuum.c\n+++ b/src/backend/postmaster/autovacuum.c\n@@ -3099,6 +3099,9 @@ relation_needs_vacanalyze(Oid relid,\n *dovacuum = force_vacuum || (vactuples > vacthresh) ||\n (vac_ins_base_thresh >= 0 &&\ninstuples > vacinsthresh);\n *doanalyze = (anltuples > anlthresh);\n+\n+ if (vac_ins_base_thresh >= 0 && instuples > vacinsthresh)\n+ elog(LOG, \"******** %s\", NameStr(classForm->relname));\n }\n else\n {\n\nI grepped the log after the installcheck to grab the table names that\nsaw an insert vacuum during the test then grepped the test output to\nsee if the table appears to pose a risk of test instability.\n\nI've classed each table with a risk factor. \"VeryLow\" seems like\nthere's almost no risk because we don't ever look at EXPLAIN. Low\nrisk tables look at EXPLAIN, but I feel are not quite looking in\nenough detail to cause issues. Medium risk look at EXPLAIN and I feel\nthere's a risk of some change, I think these are all Append nodes\nwhich do order subnodes based on their cost. High risk.... those are\nthe ones I'm about to look into changing.\n\nThe full results of my analysis are:\n\nTable: agg_group_1 aggregates.out. Nothing looks at EXPLAIN. Risk:VeryLow\nTable: agg_hash_1 aggregates.out. Nothing looks at EXPLAIN. Risk:VeryLow\nTable: atest12 privileges.out. Lots of looking at EXPLAIN, but nothing\nappears to look into row estimates in detail. Risk:Low\nTable: brin_test brin.out. Test already does VACUUM ANALYZE. Risk:VeryLow\nTable: bt_f8_heap btree_index.out, create_index.out. Rows loaded in\ncopy.source. Nothing appears to look at EXPLAIN. Risk:VeryLow\nTable: bt_i4_heap btree_index.out, create_index.out. Rows loaded in\ncopy.source. Nothing appears to look at EXPLAIN. Risk:VeryLow\nTable: bt_name_heap btree_index.out, create_index.out. Rows loaded in\ncopy.source. Nothing appears to look at EXPLAIN. Risk:VeryLow\nTable: bt_txt_heap btree_index.out, create_index.out. Rows loaded in\ncopy.source. Nothing appears to look at EXPLAIN. Risk:VeryLow\nTable: dupindexcols create_index.out. Some looking at EXPLAIN plans,\nbut nothing appears to look into row estimates in detail. Risk:Low\nTable: fast_emp4000 create_am.out, create_index.out, create_misc.out.\nLots of looking at EXPLAIN, but nothing appears to look into row\nestimates in detail. Risk:Low\nTable: functional_dependencies stats_ext.out. Lots of looking at\nEXPLAIN output. Test looks at row estimates. Risk:High\nTable: gist_tbl gist.out. Lots of looking at EXPLAIN, but nothing\nappears to look into row estimates in detail. Risk:Low\nTable: hash_f8_heap hash_index.out. Rows loaded in copy.source.\nNothing appears to look at EXPLAIN. Risk:VeryLow\nTable: hash_i4_heap hash_index.out. Rows loaded in copy.source.\nNothing appears to look at EXPLAIN. Risk:VeryLow\nTable: hash_name_heap hash_index.out. Rows loaded in copy.source.\nNothing appears to look at EXPLAIN. Risk:VeryLow\nTable: hash_txt_heap hash_index.out. Rows loaded in copy.source.\nNothing appears to look at EXPLAIN. Risk:VeryLow\nTable: kd_point_tbl create_index_spgist.out. Lots of looking at\nEXPLAIN, but nothing appears to look into row estimates in detail.\nRisk:Low\nTable: mcv_lists stats_ext.out. Lots of looking at EXPLAIN, but tests\nappear to VACUUM after loading rows. Risk:Low\nTable: mcv_lists_arrays stats_ext.out. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\nTable: mcv_lists_bool stats_ext.out. Lots of looking at EXPLAIN\noutput. Test looks at row estimates. Risk:High\nTable: ndistinct stats_ext.out. Lots of looking at EXPLAIN output.\nTest looks at row estimates. Only 1000 rows are loaded initially and\nthen 5000 after a truncate. 1000 rows won't trigger the auto-vacuum.\nRisk:High\nTable: onek Lots of files. Sees a VACUUM in sanity_check test,\nhowever, some tests run before sanity_check, e.g. create_index,\nselect, copy, none of which appear to pay particular attention to\nanything vacuum might change. Risk:Low\nTable: pagg_tab_ml_p2_s1 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pagg_tab_ml_p2_s2 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pagg_tab_ml_p3_s1 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pagg_tab_ml_p3_s2 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pagg_tab_para_p1 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pagg_tab_para_p2 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pagg_tab_para_p3 partition_aggregate.out Appears to be some\nrisk of Append reordering partitions based on cost. Risk:Medium\nTable: pg_attribute Seen in several tests. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\nTable: pg_depend Seen in several tests. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\nTable: pg_largeobject Seen in several tests. Nothing appears to look\nat EXPLAIN. Risk:VeryLow\nTable: quad_box_tbl box.out. Sees some use of EXPLAIN, but nothing\nlooks critical. Risk:Low\nTable: quad_box_tbl_ord_seq1 box.out. No EXPLAIN usage. Risk:VeryLow\nTable: quad_box_tbl_ord_seq2 box.out. No EXPLAIN usage. Risk:VeryLow\nTable: quad_point_tbl create_index_spgist.out Sees some use of\nEXPLAIN. Index Only Scans are already being used. Risk:Low\nTable: quad_poly_tbl polygon.out Some usages of EXPLAIN. Risk:Low\nTable: radix_text_tbl create_index_spgist.out Some usages of EXPLAIN. Risk:Low\nTable: road various tests. Nothing appears to look at EXPLAIN. Risk:VeryLow\nTable: slow_emp4000 various tests. Nothing appears to look at EXPLAIN.\nRisk:VeryLow\nTable: spgist_box_tbl spgist.out. Nothing appears to look at EXPLAIN.\nRisk:VeryLow\nTable: spgist_point_tbl spgist.out. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\nTable: spgist_text_tbl spgist.out. Nothing appears to look at EXPLAIN.\nRisk:VeryLow\nTable: tenk1 aggregates.out, groupingsets.out, join.out, limit.out,\nmisc_functions.out, rowtypes.out,select_distinct.out,\nselect_parallel.out, subselect.out, tablesample.out, tidscan.out,\nunion.out, window.out and write_parallel.out are after vacuum in\nsanity_check. EXPLAIN used in create_index.out and inherit.out, which\nare all run before sanity_check does the vacuum. Risk:Medium\nTable: tenk2 Only sees EXPLAIN usages in select_parallel.out, which is\nafter the table is vacuumed in sanity_check. Risk:Low\nTable: test_range_gist rangetypes.out. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\nTable: test_range_spgist rangetypes.out. Some EXPLAIN usage. Risk:Low\nTable: testjsonb jsonb.out. Some EXPLAIN usage. Risk:Low\nTable: transition_table_level2 plpgsql.out. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\nTable: transition_table_status plpgsql.out. Nothing appears to look at\nEXPLAIN. Risk:VeryLow\n\nI'd like to wait to see if we get failures for the ones I've classed\nas medium risk.\n\nDavid\n\n\n",
"msg_date": "Mon, 30 Mar 2020 22:49:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Sat, 28 Mar 2020 at 22:22, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm unsure yet if this has caused an instability on lousyjack's run in\n> [1].\n\npogona has just joined in on the fun [1], so, we're not out the woods\non this yet. I'll start having a look at this in more detail.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-03-30%2023%3A10%3A03\n\n\n",
"msg_date": "Tue, 31 Mar 2020 16:38:24 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 31 Mar 2020 at 04:39, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 28 Mar 2020 at 22:22, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'm unsure yet if this has caused an instability on lousyjack's run in\n> > [1].\n>\n> pogona has just joined in on the fun [1], so, we're not out the woods\n> on this yet. I'll start having a look at this in more detail.\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-03-30%2023%3A10%3A03\n>\n\nI had a go at reproducing this. I wasn't able to produce the reported\nfailure, but I can reliably produce an Assert failure that may be\nrelated by doing a VACUUM FULL simultaneously with an ANALYZE that is\ngenerating extended stats, which produces:\n\n#0 0x00007f28081c9520 in raise () from /lib64/libc.so.6\n#1 0x00007f28081cab01 in abort () from /lib64/libc.so.6\n#2 0x0000000000aad1ad in ExceptionalCondition (conditionName=0xb2f1a1\n\"ItemIdIsNormal(lp)\", errorType=0xb2e7c9 \"FailedAssertion\",\nfileName=0xb2e848 \"heapam.c\", lineNumber=3016) at assert.c:67\n#3 0x00000000004fb79e in heap_update (relation=0x7f27feebeda8,\notid=0x2d881fc, newtup=0x2d881f8, cid=0, crosscheck=0x0, wait=true,\ntmfd=0x7ffc568a5900, lockmode=0x7ffc568a58fc) at heapam.c:3016\n#4 0x00000000004fdead in simple_heap_update (relation=0x7f27feebeda8,\notid=0x2d881fc, tup=0x2d881f8) at heapam.c:3902\n#5 0x00000000005be860 in CatalogTupleUpdate (heapRel=0x7f27feebeda8,\notid=0x2d881fc, tup=0x2d881f8) at indexing.c:230\n#6 0x00000000008df898 in statext_store (statOid=18964, ndistinct=0x0,\ndependencies=0x2a85fe0, mcv=0x0, stats=0x2a86570) at\nextended_stats.c:553\n#7 0x00000000008deec0 in BuildRelationExtStatistics\n(onerel=0x7f27feed9008, totalrows=5000, numrows=5000, rows=0x2ad5a30,\nnatts=7, vacattrstats=0x2a75f40) at extended_stats.c:187\n#8 0x000000000065c1b7 in do_analyze_rel (onerel=0x7f27feed9008,\nparams=0x7ffc568a5fc0, va_cols=0x0, acquirefunc=0x65ce37\n<acquire_sample_rows>, relpages=31, inh=false, in_outer_xact=false,\nelevel=13) at analyze.c:606\n#9 0x000000000065b532 in analyze_rel (relid=18956,\nrelation=0x29b0bc0, params=0x7ffc568a5fc0, va_cols=0x0,\nin_outer_xact=false, bstrategy=0x2a7dfa0) at analyze.c:263\n#10 0x00000000006fd768 in vacuum (relations=0x2a7e148,\nparams=0x7ffc568a5fc0, bstrategy=0x2a7dfa0, isTopLevel=true) at\nvacuum.c:468\n#11 0x00000000006fd22c in ExecVacuum (pstate=0x2a57a00,\nvacstmt=0x29b0ca8, isTopLevel=true) at vacuum.c:251\n\nIt looks to me as though the problem is that statext_store() needs to\ntake its lock on pg_statistic_ext_data *before* searching for the\nstats tuple to update.\n\nIt's late here, so I haven't worked up a patch yet, but it looks\npretty straightforward.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 31 Mar 2020 21:23:35 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I had a go at reproducing this. I wasn't able to produce the reported\n> failure, but I can reliably produce an Assert failure that may be\n> related by doing a VACUUM FULL simultaneously with an ANALYZE that is\n> generating extended stats, which produces:\n> ...\n> It looks to me as though the problem is that statext_store() needs to\n> take its lock on pg_statistic_ext_data *before* searching for the\n> stats tuple to update.\n\nHmm, yeah, that seems like clearly a bad idea.\n\n> It's late here, so I haven't worked up a patch yet, but it looks\n> pretty straightforward.\n\nI can take care of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Mar 2020 16:48:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "I wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> I had a go at reproducing this. I wasn't able to produce the reported\n>> failure, but I can reliably produce an Assert failure that may be\n>> related by doing a VACUUM FULL simultaneously with an ANALYZE that is\n>> generating extended stats, which produces:\n>> ...\n>> It looks to me as though the problem is that statext_store() needs to\n>> take its lock on pg_statistic_ext_data *before* searching for the\n>> stats tuple to update.\n\n> Hmm, yeah, that seems like clearly a bad idea.\n\nI pushed a fix for that, but I think it must be unrelated to the\nbuildfarm failures we're seeing. For that coding to be a problem,\nit would have to run concurrently with a VACUUM FULL or CLUSTER\non pg_statistic_ext_data (which would give all the tuples new TIDs).\nAFAICS that won't happen with the tests that are giving trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Mar 2020 17:16:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Tue, 31 Mar 2020 at 22:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> >> ...\n> >> It looks to me as though the problem is that statext_store() needs to\n> >> take its lock on pg_statistic_ext_data *before* searching for the\n> >> stats tuple to update.\n>\n> > Hmm, yeah, that seems like clearly a bad idea.\n>\n> I pushed a fix for that\n\nThanks for doing that (looks like it was my mistake originally).\n\n> but I think it must be unrelated to the\n> buildfarm failures we're seeing. For that coding to be a problem,\n> it would have to run concurrently with a VACUUM FULL or CLUSTER\n> on pg_statistic_ext_data (which would give all the tuples new TIDs).\n> AFAICS that won't happen with the tests that are giving trouble.\n>\n\nYeah, that makes sense. I still can't see what might be causing those\nfailures. The tests that were doing an ALTER COLUMN and then expecting\nto see the results of a non-analysed table ought to be fixed by\n0936d1b6f, but that doesn't match the buildfarm failures. Possibly\n0936d1b6f will help with those anyway, but if so, it'll be annoying\nnot understanding why.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 1 Apr 2020 09:16:23 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Yeah, that makes sense. I still can't see what might be causing those\n> failures. The tests that were doing an ALTER COLUMN and then expecting\n> to see the results of a non-analysed table ought to be fixed by\n> 0936d1b6f, but that doesn't match the buildfarm failures. Possibly\n> 0936d1b6f will help with those anyway, but if so, it'll be annoying\n> not understanding why.\n\nQuite :-(. While it's too early to declare victory, we've seen no\nmore failures of this ilk since 0936d1b6f, so it's sure looking like\nautovacuum did have something to do with it.\n\nJust to save people repeating the search I did, these are the buildfarm\nfailures of interest so far:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2006%3A33%3A02\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2009%3A20%3A05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2013%3A20%3A05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2020%3A03%3A03\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-03-28%2022%3A00%3A19\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-03-29%2006%3A45%3A02\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-30%2002%3A20%3A03\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2020-03-30%2006%3A00%3A06\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-03-30%2006%3A10%3A05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-03-30%2023%3A10%3A03\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2020-03-31%2005%3A00%3A35\n\nThe first of those is unlike the rest, and I'm not trying to account for\nit here. 
In the rest, what we see is that sometimes the estimates are off\nby a little bit from what's expected, up or down just a percent or two.\nAnd those deltas kick at inconsistent spots partway through a series of\nsimilar tests, so it's hard to deny that *something* asynchronous to the\ntest script is causing it.\n\nAfter contemplating the failures for awhile, I have a theory that\nat least partially matches the data. What I think is happening is\nthat autovacuum (NOT auto-analyze) launches on the table, and since\nit is running concurrently with the foreground test script, it fails\nto immediately acquire buffer lock on one or more of the table pages.\nSince this isn't an aggressive vacuum scan, it just politely backs\noff and doesn't scan those pages. And that translates to not getting\na perfectly accurate reltuples estimate at the end of the vacuum.\nOn my x86_64 machine, which matches the buildfarm critters having\ntrouble, the actual contents of both of the troublesome tables will\nbe 5000 tuples in 31 pages --- which comes out to be 30 pages with\n162 tuples each and then 140 tuples in the last page. Working through\nthe math in vac_estimate_reltuples (and assuming that the \"old\" values\nwere accurate numbers from the test script's own ANALYZE), what I find\nis that autovacuum will conclude there are 4999 tuples if it misses\nscanning one of the first 30 pages, or 5021 tuples if it misses scanning\nthe last page, because its interpolation from the old tuple density\nfigure will underestimate or overestimate the number of missed tuples\naccordingly. 
Once that slightly-off number gets pushed into pg_class,\nwe start to get slightly-off rowcount estimates in the test cases.\n\nSo what I'm hypothesizing is that the pg_statistic data is perfectly\nfine but pg_class.reltuples goes off a little bit after autovacuum.\nThe percentage changes in reltuples that I predict this way don't\nquite square with the percentage changes we see in the overall\nrowcount estimates, which is a problem for this theory. But the test\ncases are exercising some fairly complex estimation logic, and it\nwouldn't surprise me much if the estimates aren't linearly affected by\nreltuples. (Tomas, do you want to comment further on that point?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Apr 2020 23:13:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Thu, 2 Apr 2020 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > Yeah, that makes sense. I still can't see what might be causing those\n> > failures. The tests that were doing an ALTER COLUMN and then expecting\n> > to see the results of a non-analysed table ought to be fixed by\n> > 0936d1b6f, but that doesn't match the buildfarm failures. Possibly\n> > 0936d1b6f will help with those anyway, but if so, it'll be annoying\n> > not understanding why.\n>\n> Quite :-(. While it's too early to declare victory, we've seen no\n> more failures of this ilk since 0936d1b6f, so it's sure looking like\n> autovacuum did have something to do with it.\n\nHow about [1]? It seems related to me and also post 0936d1b6f.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-04-01%2017%3A03%3A05\n\n\n",
"msg_date": "Thu, 2 Apr 2020 19:11:48 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Wed, Apr 01, 2020 at 11:13:12PM -0400, Tom Lane wrote:\n>Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> Yeah, that makes sense. I still can't see what might be causing those\n>> failures. The tests that were doing an ALTER COLUMN and then expecting\n>> to see the results of a non-analysed table ought to be fixed by\n>> 0936d1b6f, but that doesn't match the buildfarm failures. Possibly\n>> 0936d1b6f will help with those anyway, but if so, it'll be annoying\n>> not understanding why.\n>\n>Quite :-(. While it's too early to declare victory, we've seen no\n>more failures of this ilk since 0936d1b6f, so it's sure looking like\n>autovacuum did have something to do with it.\n>\n>Just to save people repeating the search I did, these are the buildfarm\n>failures of interest so far:\n>\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2006%3A33%3A02\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2009%3A20%3A05\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-28%2013%3A20%3A05\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lousyjack&dt=2020-03-28%2020%3A03%3A03\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2020-03-28%2022%3A00%3A19\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-03-29%2006%3A45%3A02\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=petalura&dt=2020-03-30%2002%3A20%3A03\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2020-03-30%2006%3A00%3A06\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-03-30%2006%3A10%3A05\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=pogona&dt=2020-03-30%2023%3A10%3A03\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2020-03-31%2005%3A00%3A35\n>\n>The first of those is unlike the rest, and I'm not trying to account for\n>it here. 
In the rest, what we see is that sometimes the estimates are off\n>by a little bit from what's expected, up or down just a percent or two.\n>And those deltas kick at inconsistent spots partway through a series of\n>similar tests, so it's hard to deny that *something* asynchronous to the\n>test script is causing it.\n>\n>After contemplating the failures for awhile, I have a theory that\n>at least partially matches the data. What I think is happening is\n>that autovacuum (NOT auto-analyze) launches on the table, and since\n>it is running concurrently with the foreground test script, it fails\n>to immediately acquire buffer lock on one or more of the table pages.\n>Since this isn't an aggressive vacuum scan, it just politely backs\n>off and doesn't scan those pages. And that translates to not getting\n>a perfectly accurate reltuples estimate at the end of the vacuum.\n>On my x86_64 machine, which matches the buildfarm critters having\n>trouble, the actual contents of both of the troublesome tables will\n>be 5000 tuples in 31 pages --- which comes out to be 30 pages with\n>162 tuples each and then 140 tuples in the last page. Working through\n>the math in vac_estimate_reltuples (and assuming that the \"old\" values\n>were accurate numbers from the test script's own ANALYZE), what I find\n>is that autovacuum will conclude there are 4999 tuples if it misses\n>scanning one of the first 30 pages, or 5021 tuples if it misses scanning\n>the last page, because its interpolation from the old tuple density\n>figure will underestimate or overestimate the number of missed tuples\n>accordingly. 
Once that slightly-off number gets pushed into pg_class,\n>we start to get slightly-off rowcount estimates in the test cases.\n>\n>So what I'm hypothesizing is that the pg_statistic data is perfectly\n>fine but pg_class.reltuples goes off a little bit after autovacuum.\n>The percentage changes in reltuples that I predict this way don't\n>quite square with the percentage changes we see in the overall\n>rowcount estimates, which is a problem for this theory. But the test\n>cases are exercising some fairly complex estimation logic, and it\n>wouldn't surprise me much if the estimates aren't linearly affected by\n>reltuples. (Tomas, do you want to comment further on that point?)\n>\n\nI think this theory makes perfect sense. I think it's much less likely\nto see the last page skipped, so we're likely to end up with reltuples\nlower than 5000 (as opposed to seeing the 5021). That kinda matches the\nreports, where we generally see estimates reduced by 1 or 2. The -1\nchange could be explained by rounding errors, I guess - with 5000 we\nmight have produced 139.51, rounded up to 140, a slight drop may get us\n139. Not sure about the -2 changes, but I suppose it's possible we might\nactually skip multiple pages, reducing the reltuples estimate even more?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Apr 2020 14:57:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 2 Apr 2020 at 16:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Quite :-(. While it's too early to declare victory, we've seen no\n>> more failures of this ilk since 0936d1b6f, so it's sure looking like\n>> autovacuum did have something to do with it.\n\n> How about [1]? It seems related to me and also post 0936d1b6f.\n\nThat looks much like the first lousyjack failure, which as I said\nI wasn't trying to account for at that point.\n\nAfter looking at those failures, though, I believe that the root cause\nmay be the same, ie small changes in pg_class.reltuples due to\nautovacuum not seeing all pages of the tables. The test structure\nis a bit different, but it is accessing the tables in between EXPLAIN\nattempts, so it could be preventing a concurrent autovac from seeing\nall pages.\n\nI see your fix at cefb82d49, but it feels a bit brute-force. Unlike\nstats_ext.sql, we're not (supposed to be) dependent on exact planner\nestimates in this test. So I think the real problem here is crappy test\ncase design. Namely, that these various sub-tables are exactly the\nsame size, despite which the test is expecting that the planner will\norder them consistently --- with a planning algorithm that prefers\nto put larger tables first in parallel appends (cf. create_append_path).\nIt's not surprising that the result is unstable in the face of small\nvariations in the rowcount estimates.\n\nI'd be inclined to undo what you did in favor of initializing the\ntest tables to contain significantly different numbers of rows,\nbecause that would (a) achieve plan stability more directly,\nand (b) demonstrate that the planner is actually ordering the\ntables by cost correctly. Maybe somewhere else we have a test\nthat is verifying (b), but these test cases abysmally fail to\ncheck that point.\n\nI'm not really on board with disabling autovacuum in the regression\ntests anywhere we aren't absolutely forced to do so. 
It's not\nrepresentative of real world practice (or at least not real world\nbest practice ;-)) and it could help hide actual bugs. We don't seem\nto have much choice with the stats_ext tests as they are constituted,\nbut those tests look really fragile to me. Let's not adopt that\ntechnique where we have other possible ways to stabilize test results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Apr 2020 10:44:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "I wrote:\n> I'd be inclined to undo what you did in favor of initializing the\n> test tables to contain significantly different numbers of rows,\n> because that would (a) achieve plan stability more directly,\n> and (b) demonstrate that the planner is actually ordering the\n> tables by cost correctly. Maybe somewhere else we have a test\n> that is verifying (b), but these test cases abysmally fail to\n> check that point.\n\nConcretely, I suggest the attached, which replaces the autovac disables\nwith adjusting partition boundaries so that the partitions contain\ndifferent numbers of rows.\n\nI did not touch the partition boundaries for pagg_tab1 and pagg_tab2,\nbecause that would have required also changing the associated test\nqueries (which are designed to access only particular partitions).\nIt seemed like too much work to verify that the answers were still\nright, and it's not really necessary because those tables are so\nsmall as to fit in single pages. That means that autovac will either\nsee the whole table or none of it, and in either case it won't change\nreltuples.\n\nThis does cause the order of partitions to change in a couple of the\npagg_tab_ml EXPLAIN results, but I think that's fine; the ordering\ndoes now match the actual sizes of the partitions.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 02 Apr 2020 11:46:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "On Fri, 3 Apr 2020 at 04:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I'd be inclined to undo what you did in favor of initializing the\n> > test tables to contain significantly different numbers of rows,\n> > because that would (a) achieve plan stability more directly,\n> > and (b) demonstrate that the planner is actually ordering the\n> > tables by cost correctly. Maybe somewhere else we have a test\n> > that is verifying (b), but these test cases abysmally fail to\n> > check that point.\n>\n> Concretely, I suggest the attached, which replaces the autovac disables\n> with adjusting partition boundaries so that the partitions contain\n> different numbers of rows.\n\nI've looked over this and I agree that it's a better solution to the problem.\n\nI'm happy for you to go ahead on this.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Apr 2020 09:49:14 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 3 Apr 2020 at 04:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Concretely, I suggest the attached, which replaces the autovac disables\n>> with adjusting partition boundaries so that the partitions contain\n>> different numbers of rows.\n\n> I've looked over this and I agree that it's a better solution to the problem.\n> I'm happy for you to go ahead on this.\n\nPushed, thanks for looking at it!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Apr 2020 19:44:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Berserk Autovacuum (let's save next Mandrill)"
}
] |
[
{
"msg_contents": "doc: Add some images\n\nAdd infrastructure for having images in the documentation, in SVG\nformat. Add two images to start with. See the included README file\nfor instructions.\n\nAuthor: Jürgen Purtz <juergen@purtz.de>\nAuthor: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\nDiscussion: https://www.postgresql.org/message-id/flat/aaa54502-05c0-4ea5-9af8-770411a6bf4b@purtz.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ea55aec0a97d6cade0186df1913da2c8cb5c6f2c\n\nModified Files\n--------------\n.gitattributes | 1 +\ndoc/src/sgml/Makefile | 14 +-\ndoc/src/sgml/gin.sgml | 12 +-\ndoc/src/sgml/images/Makefile | 18 ++\ndoc/src/sgml/images/README | 61 ++++++\ndoc/src/sgml/images/gin.dot | 93 +++++++++\ndoc/src/sgml/images/gin.svg | 320 +++++++++++++++++++++++++++++++\ndoc/src/sgml/images/pagelayout.svg | 40 ++++\ndoc/src/sgml/images/pagelayout.txt | 11 ++\ndoc/src/sgml/storage.sgml | 14 ++\ndoc/src/sgml/stylesheet-hh.xsl | 6 +\ndoc/src/sgml/stylesheet-html-common.xsl | 1 +\ndoc/src/sgml/stylesheet-html-nochunk.xsl | 11 ++\ndoc/src/sgml/stylesheet.xsl | 6 +\n14 files changed, 602 insertions(+), 6 deletions(-)\n\n",
"msg_date": "Wed, 27 Mar 2019 22:15:06 +0000",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "pgsql: doc: Add some images"
},
{
"msg_contents": "On 2019-Mar-27, Peter Eisentraut wrote:\n\n> doc: Add some images\n> \n> Add infrastructure for having images in the documentation, in SVG\n> format. Add two images to start with. See the included README file\n> for instructions.\n\nI think you need something like the attached.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 27 Mar 2019 19:33:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: Add some images"
},
{
"msg_contents": "On 2019-Mar-27, Peter Eisentraut wrote:\n\n> doc: Add some images\n> \n> Add infrastructure for having images in the documentation, in SVG\n> format. Add two images to start with. See the included README file\n> for instructions.\n\n> Author: Jürgen Purtz <juergen@purtz.de>\n> Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\n\nNow when I test Jürgen's new proposed image genetic-algorithm I find\nthat this stuff doesn't work in VPATH builds, at least for PDF -- I\ndon't get a build failure, but instead I get just a section title that\ndoesn't precede any actual image. (There's a very small warning hidden\nin the tons of other fop output). If I edit the .fo file by hand to\nmake the path to .svg absolute, the image appears correctly.\n\nI don't see any way in the fop docs to specify the base path for images.\n\nI'm not sure what's a good way to fix this problem in a general way.\nWould some new rule in the xslt would work?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 16 Aug 2019 16:00:48 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: Add some images"
},
{
"msg_contents": "On 16.08.19 23:00, Alvaro Herrera wrote:\n> On 2019-Mar-27, Peter Eisentraut wrote:\n>\n>> doc: Add some images\n>>\n>> Add infrastructure for having images in the documentation, in SVG\n>> format. Add two images to start with. See the included README file\n>> for instructions.\n>> Author: Jürgen Purtz <juergen@purtz.de>\n>> Author: Peter Eisentraut <peter.eisentraut@2ndquadrant.com>\n> Now when I test Jürgen's new proposed image genetic-algorithm I find\n> that this stuff doesn't work in VPATH builds, at least for PDF -- I\n> don't get a build failure, but instead I get just a section title that\n> doesn't precede any actual image. (There's a very small warning hidden\n> in the tons of other fop output). If I edit the .fo file by hand to\n> make the path to .svg absolute, the image appears correctly.\n>\n> I don't see any way in the fop docs to specify the base path for images.\n>\n> I'm not sure what's a good way to fix this problem in a general way.\n> Would some new rule in the xslt would work?\n>\n\nHello Alvaro,\n\nit is be possible that you face the following situation: the image \nsubdirectory contains all ditaa and graphviz source files, but not all \ncorresponding svg files. Those svg files are created by the given \nMakefile of this subdirectory resp. should be included in git (and \npatches - what was not the case in one of my patches).\n\nCan you acknowledge, that this is your starting situation when you miss \nthe graphic in PDF? If no, please give us more information: operation \nsystem, ... . If yes, we have the following options:\n\na) Make sure that all svg files exists in addition to the original \nsource files (as it was originally planned)\n\nb) Run make in images subdirectory manually\n\nc) Append a line \" $(MAKE) -C images\" to Makefile of sgml directory \nfor PDF, HTML and EPUB targets to check the dependencies within the \nimages subdirectory.\n\nKind regards, Jürgen\n\n\n\n\n",
"msg_date": "Sun, 18 Aug 2019 18:29:30 +0300",
"msg_from": "Jürgen Purtz <juergen@purtz.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: Add some images"
},
{
"msg_contents": "Hi Jürgen,\n\nOn 2019-Aug-18, Jürgen Purtz wrote:\n\n> it is be possible that you face the following situation: the image\n> subdirectory contains all ditaa and graphviz source files, but not all\n> corresponding svg files. Those svg files are created by the given Makefile\n> of this subdirectory resp. should be included in git (and patches - what was\n> not the case in one of my patches).\n\nNot really ... I did create the .svg file by invoking the make rule for\nit manually.\n\nThe files do exist, but they are in the wrong directory: they're in\n/pgsql/source/master/doc/src/sgml/images\nand the \"make postgres-A4.pdf\" was invoked in\n/pgsql/build/master/doc/src/sgml/images\n ^^^^^\n\nAs I said, if I edit the .fo file to change the \"images/foobar.svg\"\nentry to read \"/pgsql/source/master/doc/src/sgml/images/foobar.svg\" then\nthe built docco correctly includes the image.\n\nA \"VPATH\" build is one where I invoke \"configure\" from a different\ndirectory.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 18 Aug 2019 20:18:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: Add some images"
},
{
"msg_contents": "On 2019-08-16 22:00, Alvaro Herrera wrote:\n> Now when I test Jürgen's new proposed image genetic-algorithm I find\n> that this stuff doesn't work in VPATH builds, at least for PDF -- I\n> don't get a build failure, but instead I get just a section title that\n> doesn't precede any actual image. (There's a very small warning hidden\n> in the tons of other fop output). If I edit the .fo file by hand to\n> make the path to .svg absolute, the image appears correctly.\n\nfixed\n\n(I'm puzzled that one can't tell FOP to abort on an error like this.\nThe ASF JIRA is down right now, so I can't research this, but I'm\ntempted to file a bug about this.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 19 Aug 2019 10:35:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: doc: Add some images"
}
] |
[
{
"msg_contents": "Hi hacker,\n\nPostgres is quite frequently used in different Internet services with \nmulti-tenant architecture.\nIt means that all object stored in the database have something like \n\"tenant_id\" foreign key.\nThis key is used in all queries, i.e.\n\n select * from Product where tenant_id=? and product_name=?;\n\nThe problem is that columns \"tenant_id\" and \"product_name\" are \nfrequently highly correlated (for example if this product is produced \njust by one company).\nAnd Postgres knows nothing about this correlation and so makes incorrect \nestimation of selectivity of this predicate.\n\nCertainly it is possible to create multicolumn statistics to notify \nPostgres about columns correlation.\nBut unfortunately it is not good and working solution.\n\nFirst of all we have to create multicolumn statistic for all possible \ncombinations of table's attributes including \"tenant_id\".\nIt is very inconvenient and inefficient.\n\nSecond - right now multicolumn statistic is not used for calculating \njoin selectivity. And for joins estimation errors are most critical,\ncausing Postgres to choose bad execution plans.\n\n From my point of view the best solution is to make Postgres take in \naccount possible statistics errors and choose \"stable\" plan which\ncost is not significantly increased in case of estimation errors. 
But it \nrequires huge refactoring of optimizer.\n\nRight now I have information that some of Postgres customer which faced \nwith such problem just hacked calc_joinrel_size_estimate function,\nchecking attribute name and if it is \"tenant_id\" then do not take its \nselectivity in account.\nIt leads to good query plans but certainly can not be considered as \nacceptable solution.\n\nI thought about more straightforward ways for reaching the same effect.\nRight now Postgres allows to explicitly specify number of distinct \nvalues for the attribute:\n\n alter table foo alter column x set (n_distinct=1);\n\nUnfortunately just setting it to 1 doesn't work. Postgres calculates \nselectivity based on MCV or histogram and not using n_distinct value.\nIt is also possible to disable collection of statistic for this columns:\n\n alter table foo alter column x set statistics 0;\n\nBut in this case Postgres is choosing DEFAULT_NUM_DISTINCT despite to \nn_distinct option specified for this attribute.\nI propose small patch which makes Postgres to use explicitly specified \nn_distinct attribute option value when no statistic is available.\n\nThis test illustrating how it works (without this patch estimation for \nthis query is 1 row):\n\npostgres=# create table foo(x integer, y integer);\nCREATE TABLE\npostgres=# insert into foo values (generate_series(1,100000)/10, \ngenerate_series(1,100000)/10);\nINSERT 0 100000\npostgres=# alter table foo alter column x set (n_distinct=1);\nALTER TABLE\npostgres=# alter table foo alter column x set statistics 0;\nALTER TABLE\npostgres=# analyze foo;\nANALYZE\npostgres=# explain select * from foo where x=100 and y=100;\n QUERY PLAN\n-------------------------------------------------------\n Seq Scan on foo (cost=0.00..1943.00 rows=10 width=8)\n Filter: ((x = 100) AND (y = 100))\n(2 rows)\n\n\nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 28 Mar 2019 15:40:47 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Multitenancy optimization"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 5:40 AM Konstantin Knizhnik <\nk.knizhnik@postgrespro.ru> wrote:\n\n> Certainly it is possible to create multicolumn statistics to notify\n> Postgres about columns correlation.\n> But unfortunately it is not good and working solution.\n>\n> First of all we have to create multicolumn statistic for all possible\n> combinations of table's attributes including \"tenant_id\".\n> It is very inconvenient and inefficient.\n>\n\nOn the inconvenient part: doesn't postgres itself automatically create\nfunctional dependencies on combinations? i.e. it seems to me if we create\nstatistics on (a, b, c), then we don't need to create statistics on (a, b)\nor (a, c) or (b, c), because the pg_statistic_ext entry for (a, b, c)\nalready includes enough information.\n\nOn the inefficient part, I think there's some areas of improvement here.\nFor example, if (product_id) -> seller_id correlation is 1.0, then\n(product_id, product_name) -> seller_id correlation is definitely 1.0 and\nwe don't need to store it. So we can reduce the amount of information\nstored in pg_statistic_ext -> stxdependencies, without losing any data\npoints.\n\nMore generally, if (a) -> b correlation is X, then (a, c) -> b correlation\nis >= X. Maybe we can have a threshold to reduce number of entries in\npg_statistic_ext -> stxdependencies.\n\n-- Hadi\n",
"msg_date": "Fri, 29 Mar 2019 01:06:31 -0700",
"msg_from": "Hadi Moshayedi <hadi@moshayedi.net>",
"msg_from_op": false,
"msg_subject": "Re: Multitenancy optimization"
},
{
"msg_contents": "On 29.03.2019 11:06, Hadi Moshayedi wrote:\n> On Thu, Mar 28, 2019 at 5:40 AM Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n>\n> Certainly it is possible to create multicolumn statistics to notify\n> Postgres about columns correlation.\n> But unfortunately it is not good and working solution.\n>\n> First of all we have to create multicolumn statistic for all possible\n> combinations of table's attributes including \"tenant_id\".\n> It is very inconvenient and inefficient.\n>\n> On the inconvenient part: doesn't postgres itself automatically create \n> functional dependencies on combinations? i.e. it seems to me if we \n> create statistics on (a, b, c), then we don't need to create \n> statistics on (a, b) or (a, c) or (b, c), because the pg_statistic_ext \n> entry for (a, b, c) already includes enough information.\n>\n> On the inefficient part, I think there's some areas of improvement \n> here. For example, if (product_id) -> seller_id correlation is 1.0, \n> then (product_id, product_name) -> seller_id correlation is definitely \n> 1.0 and we don't need to store it. So we can reduce the amount of \n> information stored in pg_statistic_ext -> stxdependencies, without \n> losing any data points.\n>\n> More generally, if (a) -> b correlation is X, then (a, c) -> b \n> correlation is >= X. Maybe we can have a threshold to reduce number of \n> entries in pg_statistic_ext -> stxdependencies.\n>\n> -- Hadi\n\nYes, Postgres automatically creates functional dependencies on combinations.\nBut actually you do not need ALL combinations. Table can contain \nhundreds of attributes: the number of combinations in this case will not fit \nin bigint.\nThis is why Postgres doesn't allow creating multicolumn statistics for \nmore than 8 columns.\nSo if you have a table with a hundred attributes and tenant_id, you will \nhave to manually create statistics for each <tenant_id,att-i> pair.\nAnd it is very inconvenient (and as I already mentioned doesn't \ncompletely solve the problem with join selectivity estimation).\n\nMaybe there are some other ways of addressing this problem (although I \ndo not know of them).\nBut I think that in any case, if the number of distinct values is \nexplicitly specified for the attribute, then this value should be used \nby the optimizer instead of the dummy DEFAULT_NUM_DISTINCT.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 29 Mar 2019 11:42:55 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Multitenancy optimization"
}
] |
[
{
"msg_contents": "Following the discussion here,\nhttps://www.postgresql.org/message-id/flat/CAD21AoB9%2By8N4%2BFan-ne-_7J5yTybPttxeVKfwUocKp4zT1vNQ%40mail.gmail.com#90a8316b1e643532e1cdb352c91c22a7\n\nI'm proposing these changes to clean up docs for a previous (more or less\nunrelated) commit.\n\n From 15d42c5a8f2f811a7add3e4179edcc1f7cd291f7 Mon Sep 17 00:00:00 2001\nFrom: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu, 28 Mar 2019 08:53:26 -0500\nSubject: [PATCH v1] Clean up docs for log_statement_sample_rate..\n\n..which was added at commit 88bdbd3f746049834ae3cc972e6e650586ec3c9d\n---\n doc/src/sgml/config.sgml | 18 +++++++++---------\n src/backend/utils/misc/guc.c | 4 ++--\n src/backend/utils/misc/postgresql.conf.sample | 6 +++---\n 3 files changed, 14 insertions(+), 14 deletions(-)\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex d383de2..4019a31 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -5786,9 +5786,9 @@ local0.* /var/log/postgresql\n Causes the duration of each completed statement to be logged\n if the statement ran for at least the specified number of\n milliseconds, modulated by <varname>log_statement_sample_rate</varname>.\n- Setting this to zero prints all statement durations. Minus-one (the default)\n- disables logging statement durations. For example, if you set it to\n- <literal>250ms</literal> then all SQL statements that run 250ms or longer\n+ Setting this to zero prints all statement durations. <literal>-1</literal> (the default)\n+ disables logging statements due to exceeding duration threshold. For example, if you set it to\n+ <literal>250ms</literal>, then all SQL statements that run 250ms or longer\n will be logged. 
Enabling this parameter can be helpful in tracking down\n unoptimized queries in your applications.\n Only superusers can change this setting.\n@@ -5824,13 +5824,13 @@ local0.* /var/log/postgresql\n </term>\n <listitem>\n <para>\n- Determines the fraction of the statements that exceed\n- <xref linkend=\"guc-log-min-duration-statement\"/> which to log.\n- The default is <literal>1</literal>, meaning log to all such\n+ Determines the fraction of statements that exceed\n+ <xref linkend=\"guc-log-min-duration-statement\"/> to be logged.\n+ The default is <literal>1</literal>, meaning log all such\n statements.\n- Setting this to zero disables logging, same as setting\n+ Setting this to zero disables logging by duration, same as setting\n <varname>log_min_duration_statement</varname>\n- to minus-one. <varname>log_statement_sample_rate</varname>\n+ to <literal>-1</literal>. <varname>log_statement_sample_rate</varname>\n is helpful when the traffic is too high to log all queries.\n </para>\n </listitem>\n@@ -6083,7 +6083,7 @@ local0.* /var/log/postgresql\n \n <note>\n <para>\n- The difference between setting this option and setting\n+ The difference between enabling <varname>log_duration</varname> and setting\n <xref linkend=\"guc-log-min-duration-statement\"/> to zero is that\n exceeding <varname>log_min_duration_statement</varname> forces the text of\n the query to be logged, but this option doesn't. 
Thus, if\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex aa564d1..415cd78 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -3357,8 +3357,8 @@ static struct config_real ConfigureNamesReal[] =\n \n \t{\n \t\t{\"log_statement_sample_rate\", PGC_SUSET, LOGGING_WHEN,\n-\t\t\tgettext_noop(\"Fraction of statements over log_min_duration_statement to log.\"),\n-\t\t\tgettext_noop(\"If you only want a sample, use a value between 0 (never \"\n+\t\t\tgettext_noop(\"Fraction of statements exceeding log_min_duration_statement to be logged.\"),\n+\t\t\tgettext_noop(\"If you only want a sample, use a value between 0.0 (never \"\n \t\t\t\t\t\t \"log) and 1.0 (always log).\")\n \t\t},\n \t\t&log_statement_sample_rate,\ndiff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\nindex cccb5f1..684f5e7 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -489,9 +489,9 @@\n \t\t\t\t\t# 0 logs all statement, > 0 logs only statements running at\n \t\t\t\t\t# least this number of milliseconds.\n \n-#log_statement_sample_rate = 1\t# Fraction of logged statements over\n-\t\t\t\t\t# log_min_duration_statement. 1.0 logs all statements,\n-\t\t\t\t\t# 0 never logs.\n+#log_statement_sample_rate = 1.0\t# Fraction of logged statements exceeding\n+\t\t\t\t\t# log_min_duration_statement to be logged\n+\t\t\t\t\t# 1.0 logs all statements, 0.0 never logs\n \n # - What to Log -",
"msg_date": "Thu, 28 Mar 2019 08:59:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "clean up docs for log_statement_sample_rate"
},
{
"msg_contents": "Re: Justin Pryzby 2019-03-28 <20190328135918.GA27808@telsasoft.com>\n> + Determines the fraction of statements that exceed\n> + <xref linkend=\"guc-log-min-duration-statement\"/> to be logged.\n> + The default is <literal>1</literal>, meaning log all such\n ^ 1.0\n\nThanks for taking care of this!\nChristoph\n\n\n",
"msg_date": "Thu, 28 Mar 2019 15:02:29 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for log_statement_sample_rate"
},
{
"msg_contents": "On 3/28/19 2:59 PM, Justin Pryzby wrote:\n> Following the dicussion here,\n> https://www.postgresql.org/message-id/flat/CAD21AoB9%2By8N4%2BFan-ne-_7J5yTybPttxeVKfwUocKp4zT1vNQ%40mail.gmail.com#90a8316b1e643532e1cdb352c91c22a7\n> \n> I'm proposing these changes to clean up docs for previous (more or less\n> unrelated) commit.\n\nI intended to fix the misuse of mixing 0 and 1.0. I will be more careful.\n\nThanks for taking the point, and for the other fixes!\n\n\n",
"msg_date": "Thu, 28 Mar 2019 15:56:11 +0100",
"msg_from": "Adrien NAYRAT <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for log_statement_sample_rate"
},
{
"msg_contents": "On 2019-Mar-28, Justin Pryzby wrote:\n\n> Following the dicussion here,\n> https://www.postgresql.org/message-id/flat/CAD21AoB9%2By8N4%2BFan-ne-_7J5yTybPttxeVKfwUocKp4zT1vNQ%40mail.gmail.com#90a8316b1e643532e1cdb352c91c22a7\n> \n> I'm proposing these changes to clean up docs for previous (more or less\n> unrelated) commit.\n\nThanks, pushed.\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 3 Apr 2019 18:59:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for log_statement_sample_rate"
}
] |
[
{
"msg_contents": "Hi, hackers.\n\nThe current Windows build system supports compiling with Windows SDK versions up to\nv8.1. When building with the latest Windows SDK v10, which is the default\nfor Visual Studio 2017, we will get the following error:\n\nerror MSB8036: The Windows SDK version 8.1 was not found.\n\nWhen the build system generates project files for MSBuild to consume, it\ndoesn't include an SDK version number. Then MSBuild will assume v8.1 as the\ndefault.\nBut if we only install the latest v10 but not v8.1, MSBuild will error out.\n\nIf we open the Visual Studio solution and manually choose the correct\nWindows SDK version in the project property dialog, it will compile without\nproblems.\nBy doing this, we actually add a \"WindowsTargetPlatformVersion\" element in\nthe vcxproj xml file, under the \"Global\" property group, like this:\n\n <PropertyGroup Label=\"Globals\">\n <ProjectGuid>{E0F9C6B0-1947-4EBE-9848-9AB367FFC49E}</ProjectGuid>\n\n<WindowsTargetPlatformVersion>10.0.17763.0</WindowsTargetPlatformVersion>\n </PropertyGroup>\n\nSo if we add WindowsTargetPlatformVersion to every project, the whole pgsql\nsolution will compile.\nThe SDK version number can be obtained from the \"WindowsSDKVersion\" environment\nvariable.\nThis is set up when you start with the Visual Studio Command Prompt.\nAttached is a patch to fix the build system.\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Fri, 29 Mar 2019 00:01:26 +0900",
"msg_from": "Peifeng Qiu <pqiu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "Hi Peifeng,\n\nOn Fri, Mar 29, 2019 at 12:01:26AM +0900, Peifeng Qiu wrote:\n> The current Windows build system supports compiling with Windows SDK up to\n> v8.1. When building with the latest Windows SDK v10 which is the default\n> for Visual Studio 2017, we will get the following error:\n> \n> error MSB8036: The Windows SDK version 8.1 was not found.\n\nActually up to 10, no? Sorry for the delay, I have just noticed this\npatch registered in the commit fest. And now is review time.\n\n> When the build system generates projects files for MSBuild to consume, it\n> doesn't include a SDK version number. Then MSBuild will assume v8.1 as\n> default.\n> But if we only install the latest v10 but not v8.1, MSBuild will error out.\n\nSo... This actually boils down to that behavior:\nhttps://developercommunity.visualstudio.com/content/problem/140294/windowstargetplatformversion-makes-it-impossible-t.html\n\nWhile WindowsSDKVersion seems to be present all the time. I think\nthat we should be more defensive if the variable is not defined, and\ninstead rely on the default provided by the system, whatever it may\nbe. In short it seems to me that the tag WindowsTargetPlatformVersion\nshould be added only if the variable exists, and your patch always\nsets it.\n\nFor anything with Postgres on Windows, I have been using Visual Studio\n2015 and 2019 lately to compile Postgres mainly with the Native Tools\ncommand prompt so I have never actually faced this failure even with\nthe most recent VS 2019. Using just a command prompt causes a failure\nwhen finding out nmake for example as that's not in the default PATH.\nOur buildfarm members don't complain either, and there are two animals\nusing VS 2017: hamerkop and bowerbird.\n--\nMichael",
"msg_date": "Thu, 18 Jul 2019 17:09:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "Hi Michael. Thanks for your review.\nI updated the patch to only include the WindowsTargetPlatformVersion node\nif WindowsSDKVersion is present.\nI can confirm that this issue no longer exists for VS2019. So only VS2017\nis problematic.\n\nI'm also very curious about how hamerkop and bowerbird build postgres with\nVS2017.\nLooks like hamerkop and bowerbird both existed before VS2017 and maybe they\nget SDK v8.1 from previous\nVS installations. I will contact the admins of hamerkop and bowerbird and see if\nthat's the case.\nAs of now I can still encounter the same issue with a freshly installed Windows\nServer 2016 and\nVS2017, on both azure and google cloud. So better to patch the build system\nanyway.\n\nBest regards,\nPeifeng Qiu\n\nOn Thu, Jul 18, 2019 at 4:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi Peifeng,\n>\n> On Fri, Mar 29, 2019 at 12:01:26AM +0900, Peifeng Qiu wrote:\n> > The current Windows build system supports compiling with Windows SDK up\n> to\n> > v8.1. When building with the latest Windows SDK v10 which is the default\n> > for Visual Studio 2017, we will get the following error:\n> >\n> > error MSB8036: The Windows SDK version 8.1 was not found.\n>\n> Actually up to 10, no? Sorry for the delay, I have just noticed this\n> patch registered in the commit fest. And now is review time.\n>\n> > When the build system generates projects files for MSBuild to consume, it\n> > doesn't include a SDK version number. Then MSBuild will assume v8.1 as\n> > default.\n> > But if we only install the latest v10 but not v8.1, MSBuild will error\n> out.\n>\n> So... This actually boils down to that behavior:\n>\n> https://developercommunity.visualstudio.com/content/problem/140294/windowstargetplatformversion-makes-it-impossible-t.html\n>\n> While WindowsSDKVersion seems to be present all the time. 
I think\n> that we should be more defensive if the variable is not defined, and\n> instead rely on the default provided by the system, whatever it may\n> be. In short it seems to me that the tag WindowsTargetPlatformVersion\n> should be added only if the variable exists, and your patch always\n> sets it.\n>\n> For anything with Postgres on Windows, I have been using Visual Studio\n> 2015 and 2019 lately to compile Postgres mainly with the Native Tools\n> command prompt so I have never actually faced this failure even with\n> the most recent VS 2019. Using just a command prompt causes a failure\n> when finding out nmake for example as that's not in the default PATH.\n> Our buildfarm members don't complain either, and there are two animals\n> using VS 2017: hamerkop and bowerbird.\n> --\n> Michael\n>",
"msg_date": "Fri, 19 Jul 2019 15:39:49 +0800",
"msg_from": "Peifeng Qiu <pqiu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 03:39:49PM +0800, Peifeng Qiu wrote:\n> I updated the patch to only include the WindowsTargetPlatformVersion node\n> if WindowsSDKVersion is present. I can confirm that this issue no\n> longer exists for VS2019. So only VS2017 is problematic.\n\n(Could you please avoid to top-post?)\nUgh. That's one I don't have at hand.\n\n> I'm also very curious on how hamerkop and bowerbird build postgres with\n> VS2017. Looks like hamerkop and bowerbird both exist before VS2017\n> and maybe they get SDK v8.1 from previous VS installations. I will\n> contact admin of hamerkop and bowerbird and see if that's the case.\n> As of now I can still encounter the same issue with fresh installed\n> Windows Server 2016 and VS2017, on both azure and google cloud. So\n> better to patch the build system anyway.\n\nI guess so but I cannot confirm myself. I am adding Andrew and Hari\nin CC to double-check if they have seen this problem or not. Hari has\nalso worked on porting VS 2017 and 2019 in the MSVC scripts in the\ntree.\n--\nMichael",
"msg_date": "Fri, 19 Jul 2019 18:51:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "\nOn 7/19/19 5:51 AM, Michael Paquier wrote:\n>\n>> I'm also very curious on how hamerkop and bowerbird build postgres with\n>> VS2017. Looks like hamerkop and bowerbird both exist before VS2017\n>> and maybe they get SDK v8.1 from previous VS installations. I will\n>> contact admin of hamerkop and bowerbird and see if that's the case.\n>> As of now I can still encounter the same issue with fresh installed\n>> Windows Server 2016 and VS2017, on both azure and google cloud. So\n>> better to patch the build system anyway.\n> I guess so but I cannot confirm myself. I am adding Andrew and Hari\n> in CC to double-check if they have seen this problem or not. Hari has\n> also worked on porting VS 2017 and 2019 in the MSVC scripts in the\n> tree.\n\n\n\nMy tests of the VS2017 stuff used this install mechanism on a fresh\nWindows instance:\n\n\nchoco install -y visualstudio2017-workload-vctools --package-parameters\n\"--includeOptional\"\n\n\nThis installed Windows Kits 8.1 and 10, among many other things.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 19 Jul 2019 08:30:38 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "On Fri, Jul 19, 2019 at 08:30:38AM -0400, Andrew Dunstan wrote:\n> My tests of the VS2017 stuff used this install mechanism on a fresh\n> Windows instance:\n> \n> choco install -y visualstudio2017-workload-vctools --package-parameters\n> \"--includeOptional\"\n> \n> This installed Windows Kits 8.1 and 10, among many other things.\n\nSo you have bypassed the problem by installing the v8.1 SDK. And if\nyou don't do that and only include the v10 SDK, then you see the\nproblem. Functionally, it also means that with a VS2017 compilation\nthe SDK version is forcibly downgraded, isn't that a bad idea anyway?\n--\nMichael",
"msg_date": "Sat, 20 Jul 2019 10:10:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "\nOn 7/19/19 9:10 PM, Michael Paquier wrote:\n> On Fri, Jul 19, 2019 at 08:30:38AM -0400, Andrew Dunstan wrote:\n>> My tests of the VS2017 stuff used this install mechanism on a fresh\n>> Windows instance:\n>>\n>> choco install -y visualstudio2017-workload-vctools --package-parameters\n>> \"--includeOptional\"\n>>\n>> This installed Windows Kits 8.1 and 10, among many other things.\n> So you have bypassed the problem by installing the v8.1 SDK. And if\n> you don't do that and only include the v10 SDK, then you see the\n> problem. Functionally, it also means that with a VS2017 compilation\n> the SDK version is forcibly downgraded, isn't that a bad idea anyway?\n\n\n\nFor VS2017, the 8.1 SDK is part of the optional package set (see\n<https://docs.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools?view=vs-2017#visual-c-build-tools>)\nbut for VS2019 it is not (see\n<https://docs.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools?view=vs-2019#visual-c-build-tools>)\nso yes, we need to deal with the issue, but it's really only a major\nissue for VS2019, ISTM. I guess we might need a test for what SDK is\navailable? That's going to be fun ...\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n",
"msg_date": "Sun, 21 Jul 2019 09:25:55 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "> For VS2017, the 8.1 SDK is part of the optional package set\nYeah, if you install the 8.1 SDK, VS2017 can compile. I installed VS2017 using the\nGUI installer.\nThe main page has big checkboxes for package sets like C++, .NET, Azure etc.\nChecking C++ will only install the IDE and 10 SDK. The 8.1 SDK is on the side\npanel's detailed list.\n\n>but it's really only a major issue for VS2019\nVS2019 will use the latest v10 SDK by default. So no need to install 8.1\nfor VS2019.\n\n> I guess we might need a test for what SDK is available?\nWe can just use the WindowsSDKVersion environment variable to determine the\nSDK for the\ncurrent cmd session. It's set when you start the Visual Studio Prompt or\ncall one bat script.\nDevelopers can choose the version that best suits their needs. Detecting all\ninstalled SDK\nversions can be done with some registry magic but I think that's not\nnecessary in this case.\n\nWe should change the title of the patch to \"compile from source with VS2017\nand SDK v10\",\nsince that's the only problematic combination. Our need is to compile our own\ntools that link to\nlibpq and the latest VC runtime. So libpq must also be linked with the same VC\nruntime, and\nthus use the same SDK version.\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Mon, 22 Jul 2019 16:01:46 +0800",
"msg_from": "Peifeng Qiu <pqiu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "On Mon, Jul 22, 2019 at 04:01:46PM +0800, Peifeng Qiu wrote:\n>> but it's really only a major issue for VS2019\n>\n> VS2019 will use the latest v10 SDK by default. So no need to install 8.1\n> for VS2019.\n\nYes, FWIW, I have tested with VS2019 when committing 2b1394f, and in\nthis case only the v10 SDK got installed, with no actual issues\nrelated to the dependency of the SDK reported. In this case I have\ninstalled VS using the community installer provided by Microsoft.\n\n>> I guess we might need a test for what SDK is available?\n> \n> We can just use the WindowsSDKVersion environment variable to\n> determine the SDK for current cmd session. It's set when you start\n> the Visual Studio Prompt or call one bat script. Developers can\n> choose the right version best suit their need. Detecting all\n> installed SDK version can be done with some registry magic but I\n> think that's not necessary in this case.\n\nThis looks more sensible to do if the environment variable is\navailable. Looking around this variable is available when using the\ncommand prompt for native tools. So using it sounds like a good idea\nto me if it exists.\n--\nMichael",
"msg_date": "Mon, 22 Jul 2019 17:23:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "\nOn 7/22/19 4:23 AM, Michael Paquier wrote:\n> On Mon, Jul 22, 2019 at 04:01:46PM +0800, Peifeng Qiu wrote:\n>>> but it's really only a major issue for VS2019\n>> VS2019 will use the latest v10 SDK by default. So no need to install 8.1\n>> for VS2019.\n> Yes, FWIW, I have tested with VS2019 when committing 2b1394f, and in\n> this case only the v10 SDK got installed, with no actual issues\n> related to the dependency of the SDK reported. In this case I have\n> installed VS using the community installer provided by Microsoft.\n>\n>>> I guess we might need a test for what SDK is available?\n>> We can just use the WindowsSDKVersion environment variable to\n>> determine the SDK for current cmd session. It's set when you start\n>> the Visual Studio Prompt or call one bat script. Developers can\n>> choose the right version best suit their need. Detecting all\n>> installed SDK version can be done with some registry magic but I\n>> think that's not necessary in this case.\n> This looks more sensible to do if the environment variable is\n> available. Looking around this variable is available when using the\n> command prompt for native tools. So using it sounds like a good idea\n> to me if it exists.\n\n\n\nYeah, on consideration I think Peifeng's patch upthread looks OK.\n(Incidentally, this variable is not set in the very old version of VC\nrunning on currawong).\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 24 Jul 2019 10:38:47 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 10:38:47AM -0400, Andrew Dunstan wrote:\n> Yeah, on consideration I think Peifeng's patch upthread looks OK.\n> (Incidentally, this variable is not set in the very old version of VC\n> running on currawong).\n\nInteresting. I am not actually sure in which version of VS this has\nbeen introduced. But it would be fine enough to do nothing if the\nvariable is not defined and rely on the default. Except for the\nformatting and indentation, the patch looks right. Andrew, perhaps\nyou would prefer doing the final touch on it and commit it?\n--\nMichael",
"msg_date": "Thu, 25 Jul 2019 09:02:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
},
{
"msg_contents": "On Thu, Jul 25, 2019 at 09:02:14AM +0900, Michael Paquier wrote:\n> Interesting. I am not actually sure in which version of VS this has\n> been introduced. But it would be fine enough to do nothing if the\n> variable is not defined and rely on the default. Except for the\n> formatting and indentation, the patch looks right. Andrew, perhaps\n> you would prefer doing the final touch on it and commit it?\n\nAndrew has just applied the patch as of 20e99cd (+ cb9bb15 for an\nextra fix by Tom).\n--\nMichael",
"msg_date": "Fri, 26 Jul 2019 08:52:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile from source using latest Microsoft Windows SDK"
}
] |
[
{
"msg_contents": "I am seeing psql crash and massive regression test failures in git head.\nThe psql crash happens if .psqlrc contains:\n\n\t\\set COMP_KEYWORD_CASE upper\n\nand the crash backtrace is:\n\n\tProgram received signal SIGSEGV, Segmentation fault.\n\t0x000055555557f350 in slash_yylex (yylval_param=yylval_param@entry=0x0, yyscanner=0x5555555fb6c0) at psqlscanslash.c:1325\n\t1325 *yy_cp = yyg->yy_hold_char;\n\t(gdb) bt\n\t#0 0x000055555557f350 in slash_yylex (yylval_param=yylval_param@entry=0x0, yyscanner=0x5555555fb6c0) at psqlscanslash.c:1325\n\t#1 0x00005555555806a2 in psql_scan_slash_command (state=state@entry=0x5555555f8b20) at psqlscanslash.l:510\n\t#2 0x00005555555689b0 in HandleSlashCmds (scan_state=scan_state@entry=0x5555555f8b20, cstack=cstack@entry=0x5555555fb760, query_buf=0x5555555fb780,\n\t previous_buf=0x5555555fb8b0) at command.c:212\n\t#3 0x000055555557e3c9 in MainLoop (source=source@entry=0x5555555fb490) at mainloop.c:486\n\t#4 0x00005555555670ed in process_file (filename=0x7fffffffda10 \"/var/lib/postgresql/.psqlrc\", use_relative_path=<optimized out>) at command.c:3594\n\t#5 0x0000555555584a1e in process_psqlrc_file (filename=0x7fffffffda10 \"/var/lib/postgresql/.psqlrc\") at startup.c:781\n\t#6 0x0000555555584b71 in process_psqlrc (argv0=<optimized out>) at startup.c:756\n\t#7 0x0000555555585adc in main (argc=<optimized out>, argv=0x7fffffffe418) at startup.c:315\n\nThe regression test seem to be crashing from \\d. For example, in\nsrc/test/regress/results/tablespace.out, once this line appears:\n\n\t\\d testschema.test_index1\n\nthe file ends, while the expected file has many more lines.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 28 Mar 2019 11:03:22 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "git head crash/regression failures"
},
{
"msg_contents": "On 2019-Mar-28, Bruce Momjian wrote:\n\n> I am seeing psql crash and massive regression test failures in git head.\n> The psql crash happens if .psqlrc contains:\n> \n> \t\\set COMP_KEYWORD_CASE upper\n> \n> and the crash backtrace is:\n> \n> \tProgram received signal SIGSEGV, Segmentation fault.\n> \t0x000055555557f350 in slash_yylex (yylval_param=yylval_param@entry=0x0, yyscanner=0x5555555fb6c0) at psqlscanslash.c:1325\n> \t1325 *yy_cp = yyg->yy_hold_char;\n> \t(gdb) bt\n> \t#0 0x000055555557f350 in slash_yylex (yylval_param=yylval_param@entry=0x0, yyscanner=0x5555555fb6c0) at psqlscanslash.c:1325\n> \t#1 0x00005555555806a2 in psql_scan_slash_command (state=state@entry=0x5555555f8b20) at psqlscanslash.l:510\n\nDid you try \"make maintainer-clean\"?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 12:10:23 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: git head crash/regression failures"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 12:10:23PM -0300, Alvaro Herrera wrote:\n> On 2019-Mar-28, Bruce Momjian wrote:\n> \n> > I am seeing psql crash and massive regression test failures in git head.\n> > The psql crash happens if .psqlrc contains:\n> > \n> > \t\\set COMP_KEYWORD_CASE upper\n> > \n> > and the crash backtrace is:\n> > \n> > \tProgram received signal SIGSEGV, Segmentation fault.\n> > \t0x000055555557f350 in slash_yylex (yylval_param=yylval_param@entry=0x0, yyscanner=0x5555555fb6c0) at psqlscanslash.c:1325\n> > \t1325 *yy_cp = yyg->yy_hold_char;\n> > \t(gdb) bt\n> > \t#0 0x000055555557f350 in slash_yylex (yylval_param=yylval_param@entry=0x0, yyscanner=0x5555555fb6c0) at psqlscanslash.c:1325\n> > \t#1 0x00005555555806a2 in psql_scan_slash_command (state=state@entry=0x5555555f8b20) at psqlscanslash.l:510\n> \n> Did you try \"make maintainer-clean\"?\n\nWow, that fixed it. I thought my work flow didn't require that, but\nobviously it does. Thanks so much.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 28 Mar 2019 11:14:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: git head crash/regression failures"
}
] |
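
[Editor's note: the crash in the thread above turned out to be stale flex output — psqlscanslash.c had been generated from an older psqlscanslash.l, and `make maintainer-clean` forced a rebuild. The underlying staleness condition is just an mtime comparison; the sketch below is illustrative only and is not PostgreSQL build code (file names are stand-ins).]

```python
import os
import tempfile

def stale_outputs(pairs):
    """Return generated files that are older than their sources --
    the state left behind when a generated file survives a branch
    switch and nothing forces it to be rebuilt."""
    return [gen for src, gen in pairs
            if os.path.getmtime(gen) < os.path.getmtime(src)]

# Throwaway files standing in for psqlscanslash.l and its flex output.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "psqlscanslash.l")
    gen = os.path.join(d, "psqlscanslash.c")
    for p in (src, gen):
        open(p, "w").close()
    os.utime(gen, (1000, 1000))  # scanner generated long ago
    os.utime(src, (2000, 2000))  # .l file changed since then
    print(stale_outputs([(src, gen)]))  # lists the stale .c file
```

Running `make maintainer-clean` simply deletes the generated side of every such pair, so the next build regenerates them from the current sources.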
[
{
"msg_contents": "SQLite has a bubble generator tool that they use to generate syntax\ndiagrams for their documentation:\n\nhttps://www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator.tcl?mimetype=text/plain\n\nI think that the results are rather good. See, for example, the INSERT\ndocumentation:\n\nhttps://www.sqlite.org/lang_insert.html\n\nNow that we have infrastructure that can add images to our\ndocumentation, we may want to consider something like this. Note that\nBison has an option that outputs a grammar as a Graphviz dot file:\n\nhttps://www.gnu.org/software/bison/manual/html_node/Graphviz.html\n\nIt's probably not possible to create a useful visualization/syntax\ndiagram with Bison's --graph option, but it might at least be an\ninteresting starting point.\n\nI don't think that it's necessary to discuss this now. This can be a\nplaceholder thread that we may come back to when we're all less busy.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Mar 2019 14:56:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Syntax diagrams in user documentation"
},
{
"msg_contents": "On 2019-Mar-28, Peter Geoghegan wrote:\n\n> SQLite has a bubble generator tool that they use to generate syntax\n> diagrams for their documentation:\n> \n> https://www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator.tcl?mimetype=text/plain\n\nInteresting. SQLite itself is in the public domain, so it's not totally\nunreasonable to borrow this code ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 19:25:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On Thu, 28 Mar 2019 at 17:56, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> SQLite has a bubble generator tool that they use to generate syntax\n> diagrams for their documentation:\n>\n>\n> https://www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator.tcl?mimetype=text/plain\n>\n> I think that the results are rather good. See, for example, the INSERT\n> documentation:\n>\n> https://www.sqlite.org/lang_insert.html\n>\n\nneato!\n\n(And no coincidence that GraphViz has a command by that name...\nhttps://www.graphviz.org/pdf/neatoguide.pdf)\n\nAn especially cool idea if we could automatically dig input directly from\nsrc/backend/parser/gram.y\n-- \nWhen confronted by a difficult problem, solve it by reducing it to the\nquestion, \"How would the Lone Ranger handle this?\"",
"msg_date": "Thu, 28 Mar 2019 18:35:20 -0400",
"msg_from": "Christopher Browne <cbbrowne@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On 3/28/19 14:56, Peter Geoghegan wrote:\n> SQLite has a bubble generator tool that they use to generate syntax\n> diagrams for their documentation:\n> \n> ...\n> \n> I don't think that it's necessary to discuss this now. This can be a\n> placeholder thread that we may come back to when we're all less busy.\n\nWe're just gearing up for the Google Season of Docs and I think this\nwould be a great task for a doc writer to help with. Any reason to\nexpect serious objections to syntax diagram graphics in the docs?\n\n(Peter E, I did notice that you just added the idea of more images to\nthe GSoD wiki.)\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Thu, 28 Mar 2019 15:45:57 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 3:46 PM Jeremy Schneider <schnjere@amazon.com> wrote:\n> We're just gearing up for the Google Season of Docs and I think this\n> would be a great task for a doc writer to help with. Any reason to\n> expect serious objections to syntax diagram graphics in the docs?\n\nIt might be hard to come to a consensus, because it's one of those\nthings that everybody can be expected to have an opinion on. It\nprobably won't be hard to get something committed that's clearly more\ninformative than what we have right now, though.\n\nThere is a question about how we maintain consistency between the\nsyntax diagrams in psql if we go this way, though. Not sure what to do\nabout that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 28 Mar 2019 15:49:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On 03/28/19 17:56, Peter Geoghegan wrote:\n> SQLite has a bubble generator tool that they use to generate syntax\n> diagrams for their documentation:\n> \n> https://www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator.tcl?mimetype=text/plain\n> \n> I think that the results are rather good. See, for example, the INSERT\n> documentation:\n> \n> https://www.sqlite.org/lang_insert.html\n> \n> Now that we have infrastructure that can add images to our\n> documentation, we may want to consider something like this. Note that\n> Bison has an option that outputs a grammar as a Graphviz dot file:\n> \n> https://www.gnu.org/software/bison/manual/html_node/Graphviz.html\n> \n> It's probably not possible to create a useful visualization/syntax\n> diagram with Bison's --graph option, but it might at least be an\n> interesting starting point.\n\nI did a thing at $work where a query from pg_authid and pg_auth_members\nproduces an XML file with elements for roles and elements for grant arcs,\nan XSL transform from that into Graphviz language, transformed into SVG\nby Viz.js (which is graphviz turned into javascript by Emscripten,\nrunning right in the browser and making SVG).\n\nSVG itself, being XML, is amenable to further XSL transforms too, so\nthere are several places it should be possible to intervene and filter/\ntweak the output.\n\nA quick glance at the bison --graph option makes me think it creates\na giant impractical dot file of the whole grammar at once. I'm thinking\nit would be more practical to use the --xml option to get the output\ninstead in a form that XSLT can pull individual productions from in\nisolation and produce dot (or svg) from those.\n\nI'm guessing the biggest automation challenges will be about where to\nbreak and wrap things.\n\n-Chap\n\n\n",
"msg_date": "Thu, 28 Mar 2019 19:01:17 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On Thu, Mar 28, 2019 at 6:49 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Thu, Mar 28, 2019 at 3:46 PM Jeremy Schneider <schnjere@amazon.com>\n> wrote:\n> > We're just gearing up for the Google Season of Docs and I think this\n> > would be a great task for a doc writer to help with. Any reason to\n> > expect serious objections to syntax diagram graphics in the docs?\n>\n> It might be hard to come to a consensus, because it's one of those\n> things that everybody can be expected to have an opinion on. It\n> probably won't be hard to get something committed that's clearly more\n> informative than what we have right now, though.\n>\n> There is a question about how we maintain consistency between the\n> syntax diagrams in psql if we go this way, though. Not sure what to do\n> about that.\n>\n\nThis discussion is highly relevant to an upcoming talk I have called \"In\nAid Of RTFM\", and the work I hope would follow from it.\n\nWhile I personally like these bubble charts because they remind me of my\nmisspent youth at IBM, they have some drawbacks:\n\n1. They look like something out of an IBM manual\n2. Images conceal information from visually impaired people\n3. They aren't copy paste-able text\n4. They aren't easily comparable\n5. They bake in the language of the comments\n\nThe merits of #1 can be argued forever, and it's possible that a more\nmodern bubble chart theme is possible.\n\n#2 is problematic, because things like ADA compliance and the EU\nAccessibility Requirements frown upon conveying text inside images. The way\naround this might be to have the alt-text of the image be the original\nsyntax as we have it now.\n\n#3 is important when attempting to relay the relevant excerpt of a very\nlarge documentation page via email or slack. Yes, I could right click and\ncopy the URL of the image (in this case\nhttps://www.sqlite.org/images/syntax/insert-stmt.gif and others), but\nthat's more work than copy-paste. We could add an HTML anchor to each image\n(my talk discusses our current lack of reference anchors) and that would\nmitigate it somewhat. Making the original text available via mouse-over or\na \"copy text\" link might work too.\n\n#3b As long as I live, I will never properly memorize the syntax for RANGE\nBETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. I will google this and\ncopy-paste it. I suspect I'm not alone. If it's available only in an image,\nthen I can't copy paste, and I *will* mistype some part of that at least\ntwice.\n\n#4 isn't such an immediate issue, but one of my points in the talk is that\nright now there is no way to easily distinguish text on a page that is new\nin the most recent version of pgsql (i.e. a red-line markup). We could of\ncourse flag that an image changed from version X-1 to X, but it would be\ntougher to convey which parts of the image changed.\n\n#5 is not such a big issue because most of what is in the diagram is pure\nsyntax, but comments will leak in, and those snippets of English will be\nburied very deep in bubble-markup.\n",
"msg_date": "Thu, 28 Mar 2019 22:53:38 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "Christopher Browne <cbbrowne@gmail.com> writes:\n> An especially cool idea if we could automatically dig input directly from\n> src/backend/parser/gram.y\n\nFWIW, I think the odds of getting desirable diagrams that way are nil.\nThere are *way* too many things about our Bison grammar that can\nbe described charitably as implementation details, or uncharitably\nas ugly hacks.\n\nIt may or may not be useful to present the grammar as railroad\ndiagrams or the like; but I think we need to expect that that'd be\nan abstraction of the syntax, not something that can be automatically\nreverse-engineered from the implementation.\n\nIt might be more useful to try to generate pretty pictures from\nthe SGML^H^H^H^HXML docs' <synopsis> sections.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 28 Mar 2019 23:42:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On 2019-03-29 03:53, Corey Huinker wrote:\n> #3b As long as I live, I will never properly memorize the syntax for\n> RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. I will google this\n> and copy-paste it. I suspect I'm not alone. If it's available only in an\n> image, then I can't copy paste, and I /will/ mistype some part of that\n> at least twice.\n\nI doubt that we would remove the current textual synopses. The graphics\nwould just be an addition.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 09:22:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
},
{
"msg_contents": "On 2019-03-28 23:45, Jeremy Schneider wrote:\n> We're just gearing up for the Google Season of Docs and I think this\n> would be a great task for a doc writer to help with. Any reason to\n> expect serious objections to syntax diagram graphics in the docs?\n\nIt's worth a thought, but I tend to think that this would not be a good\ntask for a \"technical writer\". It's either an automation task or\nbusywork transcribing the syntax to whatever new intermediate format.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 09:25:23 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Syntax diagrams in user documentation"
}
] |
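
[Editor's note: Tom Lane's suggestion in the thread above — derive the pictures from the docs' <synopsis> sections rather than from gram.y — could begin with something as small as the sketch below. It is illustrative only: it handles just a flat sequence of tokens (no brackets, alternation, or repetition, which a real converter would need) and emits Graphviz dot in the bubble-diagram style, with keywords as boxes and placeholders as ovals.]

```python
def synopsis_to_dot(synopsis):
    """Render a flat command synopsis as a left-to-right Graphviz chain.
    ALL-CAPS tokens (keywords) become boxes; everything else
    (placeholders like 'name') becomes ovals, roughly following the
    railroad-diagram convention."""
    tokens = synopsis.split()
    lines = ["digraph synopsis {", "  rankdir=LR;"]
    for i, tok in enumerate(tokens):
        shape = "box" if tok.isupper() else "oval"
        lines.append(f'  n{i} [label="{tok}", shape={shape}];')
    for i in range(len(tokens) - 1):
        lines.append(f"  n{i} -> n{i + 1};")
    lines.append("}")
    return "\n".join(lines)

print(synopsis_to_dot("DROP TABLE name"))
```

Feeding the output to `dot -Tsvg` would yield a left-to-right chain of boxes and ovals; the hard part, as the thread notes, is wrapping long productions and handling optional and repeated groups.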
[
{
"msg_contents": "Hi All,\n\nI noticed that irrespective of whoever grants privileges on an object,\nit's always the object owner who is seen as a grantor in the output of\ntable_privileges view. As an example, consider the following case.\n\ncreate user u1;\ncreate user u2 with superuser;\ncreate user u3;\n\n\\c postgres u1\ncreate table t1(a integer);\n\n\\c postgres u2\ngrant select on t1 to u3; -- it's u2 who is granting select privileges\non t1 to u3\n\n\\c postgres u3\nselect * from table_privileges where table_name = 't1';\n\npostgres=# \\c postgres u3\nYou are now connected to database \"postgres\" as user \"u3\".\n\npostgres=> select * from information_schema.table_privileges where\ntable_name = 't1';\n grantor | grantee | table_catalog | table_schema | table_name |\nprivilege_type | is_grantable | with_hierarchy\n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n u1 | u3 | postgres | public | t1 |\nSELECT | NO | YES\n(1 row)\n\npostgres=> select * from t1;\n a\n---\n(0 rows)\n\nThe above output of table_privileges shows 'u1' (the object owner of\nt1) as the grantor instead of u2. Isn't that wrong information? If\nit isn't wrong, then may I know why the postgresql\ndocumentation on \"table_privileges\" describes grantor as \"Name of the\nrole that granted the privilege\"? Here is the documentation link for\nthe table_privileges view.\n\nhttps://www.postgresql.org/docs/current/infoschema-table-privileges.html\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:57:56 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "table_privileges view always show object owner as a grantor"
},
{
"msg_contents": "Ashutosh Sharma wrote:\n> I noticed that irrespective of whoever grants privileges on an object,\n> it's always the object owner who is seen as a grantor in the output of\n> table_privileges view.\n\n> Isn't that a wrong information ? If\n> incase that isn't wrong then may i know why does the postgresql\n> documentation on \"table_privilegs\" describes grantor as \"Name of the\n> role that granted the privilege\". Here is the documentation link for\n> table_privilges view.\n> \n> https://www.postgresql.org/docs/current/infoschema-table-privileges.html\n\nCurrently the grantor of a privilege is the owner if a superuser\ngrants a privilege on the object.\n\nIf that were not so, how would you disambiguate between privileges\ngranted by a superuser and privileges passed on by somebody\nwho has been granted the privilege WITH GRANT OPTION?\n\nOr, with an example:\nIf A grants SELECT on a table WITH GRANT OPTION to B, and\nB grants the privilege to C, A cannot directly revoke the\nprivilege from C. All A can do is revoke the privilege from\nB with the CASCADE option.\n\nThis distinction would be lost if B could appear as grantor\njust because he has been superuser at some time in the past\n(and doesn't hold the privilege himself).\n\nSo I'd say the behavior is fine as it is, but it would not harm to\ndocument it better (or at all).\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 15:15:50 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: table_privileges view always show object owner as a grantor"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> Ashutosh Sharma wrote:\n>> I noticed that irrespective of whoever grants privileges on an object,\n>> it's always the object owner who is seen as a grantor in the output of\n>> table_privileges view.\n\nThe above is demonstrably false ...\n\nregression=# create user alice;\nCREATE ROLE\nregression=# create user bob;\nCREATE ROLE\nregression=# create user charlie;\nCREATE ROLE\nregression=# \\c - alice\nYou are now connected to database \"regression\" as user \"alice\".\nregression=> create table a1(f int);\nCREATE TABLE\nregression=> grant select on table a1 to bob with grant option;\nGRANT\nregression=> \\c - bob\nYou are now connected to database \"regression\" as user \"bob\".\nregression=> grant select on table a1 to charlie; \nGRANT\nregression=> select * from information_schema.table_privileges where table_name = 'a1';\n grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy \n---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n bob | charlie | regression | public | a1 | SELECT | NO | YES\n alice | bob | regression | public | a1 | SELECT | YES | YES\n(2 rows)\n\n> Currently the grantor of a privilege is the owner if a superuser\n> grants a privilege on the object.\n\nYes, that is true.\n\n> So I'd say the behavior is fine as it is, but it would not harm to\n> document it better (or at all).\n\nIt is documented, see under GRANT:\n\n If a superuser chooses to issue a GRANT or REVOKE command, the command\n is performed as though it were issued by the owner of the affected\n object. In particular, privileges granted via such a command will\n appear to have been granted by the object owner. (For role membership,\n the membership appears to have been granted by the containing role\n itself.)\n\n GRANT and REVOKE can also be done by a role that is not the owner of\n the affected object, but is a member of the role that owns the object,\n or is a member of a role that holds privileges WITH GRANT OPTION on\n the object. In this case the privileges will be recorded as having\n been granted by the role that actually owns the object or holds the\n privileges WITH GRANT OPTION. For example, if table t1 is owned by\n role g1, of which role u1 is a member, then u1 can grant privileges on\n t1 to u2, but those privileges will appear to have been granted\n directly by g1. Any other member of role g1 could revoke them later.\n\n If the role executing GRANT holds the required privileges indirectly\n via more than one role membership path, it is unspecified which\n containing role will be recorded as having done the grant. In such\n cases it is best practice to use SET ROLE to become the specific role\n you want to do the GRANT as.\n\nThe point about other members of the owning role being able to revoke\nthe privileges is why it's done this way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Mar 2019 10:45:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: table_privileges view always show object owner as a grantor"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 8:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > Ashutosh Sharma wrote:\n> >> I noticed that irrespective of whoever grants privileges on an object,\n> >> it's always the object owner who is seen as a grantor in the output of\n> >> table_privileges view.\n>\n> The above is demonstrably false ...\n>\n\nOkay. Seems like that is only true when the grantor of a privilege is a superuser.\n\n> regression=# create user alice;\n> CREATE ROLE\n> regression=# create user bob;\n> CREATE ROLE\n> regression=# create user charlie;\n> CREATE ROLE\n> regression=# \\c - alice\n> You are now connected to database \"regression\" as user \"alice\".\n> regression=> create table a1(f int);\n> CREATE TABLE\n> regression=> grant select on table a1 to bob with grant option;\n> GRANT\n> regression=> \\c - bob\n> You are now connected to database \"regression\" as user \"bob\".\n> regression=> grant select on table a1 to charlie;\n> GRANT\n> regression=> select * from information_schema.table_privileges where table_name = 'a1';\n> grantor | grantee | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy\n> ---------+---------+---------------+--------------+------------+----------------+--------------+----------------\n> bob | charlie | regression | public | a1 | SELECT | NO | YES\n> alice | bob | regression | public | a1 | SELECT | YES | YES\n> (2 rows)\n>\n> > Currently the grantor of a privilege is the owner if a superuser\n> > grants a privilege on the object.\n>\n> Yes, that is true.\n>\n> > So I'd say the behavior is fine as it is, but it would not harm to\n> > document it better (or at all).\n>\n> It is documented, see under GRANT:\n>\n\nOkay, thanks for the pointer. I was actually referring to the\ndocumentation on the table_privileges view, where the description of the\ngrantor column says: \"Name of the role that granted the privilege\"\n\n> If a superuser chooses to issue a GRANT or REVOKE command, the command\n> is performed as though it were issued by the owner of the affected\n> object. In particular, privileges granted via such a command will\n> appear to have been granted by the object owner. (For role membership,\n> the membership appears to have been granted by the containing role\n> itself.)\n>\n> GRANT and REVOKE can also be done by a role that is not the owner of\n> the affected object, but is a member of the role that owns the object,\n> or is a member of a role that holds privileges WITH GRANT OPTION on\n> the object. In this case the privileges will be recorded as having\n> been granted by the role that actually owns the object or holds the\n> privileges WITH GRANT OPTION. For example, if table t1 is owned by\n> role g1, of which role u1 is a member, then u1 can grant privileges on\n> t1 to u2, but those privileges will appear to have been granted\n> directly by g1. Any other member of role g1 could revoke them later.\n>\n> If the role executing GRANT holds the required privileges indirectly\n> via more than one role membership path, it is unspecified which\n> containing role will be recorded as having done the grant. In such\n> cases it is best practice to use SET ROLE to become the specific role\n> you want to do the GRANT as.\n>\n> The point about other members of the owning role being able to revoke\n> the privileges is why it's done this way.\n>\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Mar 2019 21:05:21 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: table_privileges view always show object owner as a grantor"
}
] |
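
[Editor's note: the recording rule Tom Lane quotes from the GRANT page can be restated as a small decision function. The sketch below is an illustrative model only — it ignores role membership and is not PostgreSQL's ACL code — but it reproduces both behaviors seen in the thread: a superuser's grant is recorded under the object owner, while a grant made via WITH GRANT OPTION is recorded under the actual grantor.]

```python
def recorded_grantor(actor, owner, superusers, grant_option_holders):
    """Decide which role the catalog records as grantor, following the
    GRANT documentation: a superuser (or the owner) acts 'as' the owner;
    a role granting via WITH GRANT OPTION is recorded as itself."""
    if actor in superusers or actor == owner:
        return owner
    if actor in grant_option_holders:
        return actor
    raise PermissionError(f"{actor} cannot grant on this object")

# Superuser u2 grants on u1's table: recorded grantor is u1, so that
# u1 (or any member of the owning role) can later revoke it.
print(recorded_grantor("u2", "u1", {"u2"}, set()))
# bob holds SELECT WITH GRANT OPTION on alice's table: recorded as bob,
# preserving the chain alice -> bob -> charlie for REVOKE ... CASCADE.
print(recorded_grantor("bob", "alice", set(), {"bob"}))
```

This is exactly the disambiguation Laurenz describes: if the superuser were recorded instead of the owner, grants made "as" the owner and grants passed on via WITH GRANT OPTION would be indistinguishable.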
[
{
"msg_contents": "Hi All,\n\nWhile trying to explore on CHR() function in PG,\nI found that few of the ASCII values are returning hex number values(like\n'\\x08', '\\x0B')\nand few are executing within SQL (i.e. chr(9) => Horizontal tab, chr(10)\n=> Line feed) as below example.\n\npostgres=# select 1|| chr(8)|| 2 || chr(9)||3 || chr(10)||4 || chr(11)||5\n|| chr(12)||6 || chr(13)||7 as col1;\n col1\n----------------\n 1*\\x08*2 3 * +*\n 4*\\x0B*5*\\x0C*6*\\r*7\n(1 row)\n\nMy question here is, why these inconsistencies in the behavior of CHR()\nfunction?\n\n-- \n\n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Corporation\n\nThe Postgres Database Company\n",
"msg_date": "Fri, 29 Mar 2019 16:55:26 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Inconsistencies in the behavior of CHR() function in PG."
},
{
"msg_contents": "Re: Prabhat Sahu 2019-03-29 <CANEvxPqaQqojU+XyKrfiwt729P+ZikjYsfn=hQhEzcTKm5iWoQ@mail.gmail.com>\n> While trying to explore on CHR() function in PG,\n> I found that few of the ASCII values are returning hex number values(like\n> '\\x08', '\\x0B')\n> and few are executing within SQL (i.e. chr(9) => Horizontal tab, chr(10)\n> => Line feed) as below example.\n\nThat's not a property of chr(), but generally of the \"text\" datatype:\n\n# select E'\\002'::text;\n text\n──────\n \\x02\n\nNon-printable characters are quoted. See also:\n\n# select i, chr(i) from generate_series(1, 256) g(i);\n\nChristoph\n\n\n",
"msg_date": "Fri, 29 Mar 2019 12:51:58 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies in the behavior of CHR() function in PG."
}
] |
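
[Editor's note: Christoph's point is that the '\x08'-style output comes from how psql displays non-printable bytes in a text value, not from chr() itself. The rough Python imitation of that display rule below is not psql's actual code — in particular its pass-through set of control characters is a simplification — but it makes the boundary between "printable" and "escaped" visible.]

```python
def display(s):
    """Show printable characters as-is and escape the rest as \\xNN,
    loosely imitating how psql renders non-printable bytes in text
    output (tab and newline are passed through here for simplicity)."""
    return "".join(ch if ch.isprintable() or ch in "\t\n"
                   else f"\\x{ord(ch):02X}" for ch in s)

# chr() itself is perfectly consistent; only the rendering differs.
for code in (8, 9, 10, 11, 12, 13):
    print(code, repr(display(chr(code))))
```

So chr(8) and chr(9) behave identically as functions; the tab simply has a visible rendering while backspace is shown as an escape.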
[
{
"msg_contents": "Do we need to review the fsync error handling in pg_receivewal and\npg_recvlogical, following the recent backend changes? The current\ndefault behavior is that these tools will log fsync errors and then\nreconnect and proceed with the next data streaming in. As a result, you\nmight then have some files in the accumulated WAL that have not been\nfsynced. Perhaps a hard exit would be more appropriate?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 29 Mar 2019 12:48:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 12:48:09PM +0100, Peter Eisentraut wrote:\n> Do we need to review the fsync error handling in pg_receivewal and\n> pg_recvlogical, following the recent backend changes? The current\n> default behavior is that these tools will log fsync errors and then\n> reconnect and proceed with the next data streaming in. As a result, you\n> might then have some files in the accumulated WAL that have not been\n> fsynced. Perhaps a hard exit would be more appropriate?\n\nYes, I think that we are going to need an equivalent of that for all\nfrontend tools. At various degrees, making sure that a fsync happens\nis also important for pg_dump, pg_basebackup, pg_rewind and\npg_checksums so it is not only a problem of the two tools you mention.\nIt seems to me that the most correct way to have those failures would\nbe to use directly exit(EXIT_FAILURE) in file_utils.c where\nappropriate.\n--\nMichael",
"msg_date": "Fri, 29 Mar 2019 22:05:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "On 2019-03-29 14:05, Michael Paquier wrote:\n> Yes, I think that we are going to need an equivalent of that for all\n> frontend tools. At various degrees, making sure that a fsync happens\n> is also important for pg_dump, pg_basebackup, pg_rewind and\n> pg_checksums so it is not only a problem of the two tools you mention.\n> It seems to me that the most correct way to have those failures would\n> be to use directly exit(EXIT_FAILURE) in file_utils.c where\n> appropriate.\n\nYeah, there is more to do. The reason I'm focusing on these two right\nnow is that they would typically run as a background service, and a\nclean exit is most important there. In the other cases, the program\nruns more often in the foreground and you can see error messages. There\nare also some cases where fsync() failures are intentionally ignored\n((void) casts), so some of that would need to be investigated further.\n\nHere is a patch to get started. Note that these calls don't go through\nfile_utils.c, so it's a separate issue anyway.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 25 Jun 2019 14:23:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 02:23:05PM +0200, Peter Eisentraut wrote:\n> Yeah, there is more to do. The reason I'm focusing on these two right\n> now is that they would typically run as a background service, and a\n> clean exit is most important there. In the other cases, the program\n> runs more often in the foreground and you can see error messages. There\n> are also some cases where fsync() failures are intentionally ignored\n> ((void) casts), so some of that would need to be investigated further.\n\nThe remaining three calls all go through file_utils.c.\n\n> Here is a patch to get started. Note that these calls don't go through\n> file_utils.c, so it's a separate issue anyway.\n\nWhy using a different error code. Using EXIT_FAILURE is a more common\npractice in the in-core binaries. The patch looks fine to me except\nthat, that's a good first cut.\n--\nMichael",
"msg_date": "Wed, 26 Jun 2019 13:11:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "Hi,\n\nTried out this patch and it applies, compiles, and passes check-world. Also\nflipped things around in pg_recvlogical.c to exit-on-success to ensure it's\nactually being called and that worked too. Outside of a more complicated\nharness that simulates fsync errors not sure how else to test this further.\n\nI did some searching and found a FUSE based on that looks interesting:\nCharybdeFS[1]. Rather than being fixed at mount time, it has a\nclient/server interface so you can change the handling of syscalls on the\nfly[2]. For example you can error out fsync calls halfway through a test\nrather than always or randomly. Haven't tried it out but leaving it here as\nit seems relevant.\n\n[1]: https://github.com/scylladb/charybdefs\n[2]:\nhttps://www.scylladb.com/2016/05/02/fault-injection-filesystem-cookbook/\n\nOn Wed, Jun 26, 2019 at 12:11 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> Why using a different error code. Using EXIT_FAILURE is a more common\n> practice in the in-core binaries. The patch looks fine to me except\n> that, that's a good first cut.\n>\n\nAn error code specific to fsync issues could help with tests as the harness\ncould check it to ensure things died for the right reasons. With a generic\n\"messed up fsync\" harness you might even be able to run some existing tests\nthat would otherwise pass and check for the fsync-specific exit code.\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/",
"msg_date": "Sat, 27 Jul 2019 13:02:33 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": false,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "While reviewing this patch I read through some of the other fsync\ncallsites and noticed this typo (walkdir is in file_utils.c, not\ninitdb.c) too:\n\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\nindex 315c74c745..9b79df2d7f 100644\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -3208,7 +3208,7 @@ SyncDataDirectory(void)\n *\n * Errors are reported at level elevel, which might be ERROR or less.\n *\n- * See also walkdir in initdb.c, which is a frontend version of this logic.\n+ * See also walkdir in file_utils.c, which is a frontend version of this logic.\n */\n static void\n walkdir(const char *path,\n\nRegards,\n-- Sehrope Sarkuni\nFounder & CEO | JackDB, Inc. | https://www.jackdb.com/\n\n\n",
"msg_date": "Sat, 27 Jul 2019 13:06:06 -0400",
"msg_from": "Sehrope Sarkuni <sehrope@jackdb.com>",
"msg_from_op": false,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 01:06:06PM -0400, Sehrope Sarkuni wrote:\n> While reviewing this patch I read through some of the other fsync\n> callsites and noticed this typo (walkdir is in file_utils.c, not\n> initdb.c) too:\n\nThanks, Sehrope. Applied.\n--\nMichael",
"msg_date": "Sun, 28 Jul 2019 16:23:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
},
{
"msg_contents": "On 2019-07-27 19:02, Sehrope Sarkuni wrote:\n> Tried out this patch and it applies, compiles, and passes check-world.\n> Also flipped things around in pg_recvlogical.c to exit-on-success to\n> ensure it's actually being called and that worked too. Outside of a more\n> complicated harness that simulates fsync errors not sure how else to\n> test this further.\n\nI have committed this, with the exit code changed back, as requested by\nMichael.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 29 Jul 2019 08:06:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: fsync error handling in pg_receivewal, pg_recvlogical"
}
] |
[
{
"msg_contents": "Hi,\n\nFound by one of the my colleague - Kashif Jeeshan , in PG 9.6 - make is \nfailing for test_decoding contrib module.\n\n[centos@centos-cpula test_decoding]$ make\ngcc -Wall -Wmissing-prototypes -Wpointer-arith \n-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute \n-Wformat-security -fno-strict-aliasing -fwrapv -O2 -fPIC -I. -I. \n-I../../src/include -D_GNU_SOURCE -c -o test_decoding.o test_decoding.c\nIn file included from ../../src/include/postgres.h:48,\n from test_decoding.c:13:\n../../src/include/utils/elog.h:71:28: error: utils/errcodes.h: No such \nfile or directory\nIn file included from ../../src/include/replication/slot.h:15,\n from ../../src/include/replication/logical.h:12,\n from test_decoding.c:23:\n../../src/include/storage/lwlock.h:129:33: error: storage/lwlocknames.h: \nNo such file or directory\ntest_decoding.c: In function ‘pg_decode_startup’:\ntest_decoding.c:127: error: ‘ERRCODE_INVALID_PARAMETER_VALUE’ undeclared \n(first use in this function)\ntest_decoding.c:127: error: (Each undeclared identifier is reported only \nonce\ntest_decoding.c:127: error: for each function it appears in.)\nmake: *** [test_decoding.o] Error 1\n[centos@centos-cpula test_decoding]$\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 17:54:30 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PG 9.6]make is failing for test_decoding contrib module."
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 8:24 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> Found by one of the my colleague - Kashif Jeeshan , in PG 9.6 - make is\n\nKashif Jeeshan?\n\n> failing for test_decoding contrib module.\n>\n> [centos@centos-cpula test_decoding]$ make\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute\n> -Wformat-security -fno-strict-aliasing -fwrapv -O2 -fPIC -I. -I.\n> -I../../src/include -D_GNU_SOURCE -c -o test_decoding.o test_decoding.c\n> In file included from ../../src/include/postgres.h:48,\n> from test_decoding.c:13:\n> ../../src/include/utils/elog.h:71:28: error: utils/errcodes.h: No such\n> file or directory\n> In file included from ../../src/include/replication/slot.h:15,\n> from ../../src/include/replication/logical.h:12,\n> from test_decoding.c:23:\n> ../../src/include/storage/lwlock.h:129:33: error: storage/lwlocknames.h:\n> No such file or directory\n> test_decoding.c: In function ‘pg_decode_startup’:\n> test_decoding.c:127: error: ‘ERRCODE_INVALID_PARAMETER_VALUE’ undeclared\n> (first use in this function)\n> test_decoding.c:127: error: (Each undeclared identifier is reported only\n> once\n> test_decoding.c:127: error: for each function it appears in.)\n> make: *** [test_decoding.o] Error 1\n> [centos@centos-cpula test_decoding]$\n\nI think your tree is not clean, or you haven't built the server\ncorrectly first. If this were actually broken, the buildfarm would be\nred:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_status.pl\n\nTry 'git clean -dfx'.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 29 Mar 2019 08:42:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PG 9.6]make is failing for test_decoding contrib module."
},
{
"msg_contents": "On 03/29/2019 06:12 PM, Robert Haas wrote:\n> On Fri, Mar 29, 2019 at 8:24 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n>> Found by one of the my colleague - Kashif Jeeshan , in PG 9.6 - make is\n> Kashif Jeeshan?\n:-) , actually he is also working on logical replication on standbys \ntesting - whenever he has some bandwidth (On/off) ..he found one issue .\n i suggested him to see the behavior on PG 9.6/ PG 10 and while doing \nso - got this issue when he performed make against test_decoding\n>> test_decoding.c:127: error: for each function it appears in.)\n>> make: *** [test_decoding.o] Error 1\n>> [centos@centos-cpula test_decoding]$\n> I think your tree is not clean, or you haven't built the server\n> correctly first. If this were actually broken, the buildfarm would be\n> red:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_status.pl\n>\n> Try 'git clean -dfx'.\n>\nYes, you are right.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 18:57:23 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PG 9.6]make is failing for test_decoding contrib module."
},
{
"msg_contents": "On 03/29/2019 06:12 PM, Robert Haas wrote:\n> Kashif Jeeshan?\n\nOhh, Please read - Kashif Zeeshan. Sorry for the typo.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Fri, 29 Mar 2019 19:55:32 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PG 9.6]make is failing for test_decoding contrib module."
}
] |
[
{
"msg_contents": "PFA patch with minor improvements to documentation.\n\nAlso, what do you think about changing user-facing language from\n\"check checksum\" to \"verify checksum\" ? I see that commit ed308d78 actually\nmoved in the other direction, but I preferred \"verify\".",
"msg_date": "Fri, 29 Mar 2019 09:32:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "clean up pg_checksums.sgml"
},
{
"msg_contents": "On Fri, Mar 29, 2019 at 09:32:10AM -0500, Justin Pryzby wrote:\n> PFA patch with minor improvements to documentation.\n\nPatch does not apply, and I have reworded the last paragraph about\nfailures while operating.\n\n> Also, what do you think about changing user-facing language from\n> \"check checksum\" to \"verify checksum\" ? I see that commit ed308d78\n> actually moved in the other direction, but I preferred \"verify\".\n\nYes, that's a debate that we had during the discussion for the new\nswitches, and we have decided to use --check over --verify for the\ndefault option. On the one hand, \"Check checksums\" is rather\nredundant, but that's more consistent with the option name. \"Verify\nchecksums\" is perhaps more elegant. My opinion is that having some\nconsistency between the option names and the docs is nicer.\n--\nMichael",
"msg_date": "Sat, 30 Mar 2019 10:51:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up pg_checksums.sgml"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 10:51:23AM +0900, Michael Paquier wrote:\n> On Fri, Mar 29, 2019 at 09:32:10AM -0500, Justin Pryzby wrote:\n> > PFA patch with minor improvements to documentation.\n> \n> Patch does not apply, and I have reworded the last paragraph about\n> failures while operating.\n\nSorry, the patch was on top of an brief effort I made to rename \"check\nchecksums\" to \"verify checksums\", before asking about the idea.\n\nPFA patch to master.\n\nJustin\n\n> > Also, what do you think about changing user-facing language from\n> > \"check checksum\" to \"verify checksum\" ? I see that commit ed308d78\n> > actually moved in the other direction, but I preferred \"verify\".\n> \n> Yes, that's a debate that we had during the discussion for the new\n> switches, and we have decided to use --check over --verify for the\n> default option. On the one hand, \"Check checksums\" is rather\n> redundant, but that's more consistent with the option name. \"Verify\n> checksums\" is perhaps more elegant. My opinion is that having some\n> consistency between the option names and the docs is nicer.",
"msg_date": "Sun, 7 Apr 2019 19:15:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up pg_checksums.sgml"
},
{
"msg_contents": "On Sun, Apr 07, 2019 at 07:15:46PM -0500, Justin Pryzby wrote:\n> Sorry, the patch was on top of an brief effort I made to rename \"check\n> checksums\" to \"verify checksums\", before asking about the idea.\n> \n> PFA patch to master.\n\nThanks for the patch, Justin. That looks indeed clearer after\nconsidering your proposal, so I have applied most of it. There were\nsome terms I found fuzzy though. For example, I have replaced\n\"checksum state\" by \"data checksum configuration\", but kept\n\"verifying\" because \"check checksums\" sounds kind of redundant.\n--\nMichael",
"msg_date": "Mon, 8 Apr 2019 15:37:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up pg_checksums.sgml"
}
] |
[
{
"msg_contents": "Hi everyone. I would like to flesh this out in terms fo feedback before\ncreating a patch.\n\nThe Problem\n\nIn large production systems often you can have problems when autovacuum is\nnot tuned aggressively enough. This leads to long autovacuum runs when\nthey happen, and autovacuum ends up eventually causing problems. A major\ndifficulty is that you cannot just make autovacuum more aggressive because\nthen you get very long autovacuum queues, which means very hot tables might\nnot be vacuumed before they end up being close to unusable.\n\nExample:\n\nImagine you have a 2TB database with 6000 tables with volumes ranging from\na few MB to 100GB in size per table. You tune autovacuum to make it more\naggressive and know you can handle 5 in parallel. So you set\nautovacuum_vacuum_scale_factor to a much lower value.\n\nOn the next autovacuum run, autovacuum detects that 3000 tables need to be\nvacuumed, and so creates 5 queues of 600 tables each. Nothing gets added\nto this queue until a queue completely empties.\n\nTo my experience I have not seen a case where analyze poses the same\nproblem but my solution would fold this in.\n\nCurrent workarounds.\n\n1. Periodically kill autovacuum sessions, forcing queue recalculation.\n2. 
Manually prevacuum everything that exceeds desired thresholds.\n\n\nProposed Solution\n\nI would propose a new GUC variable, autovacuum_max_queue_depth, defaulting\nto 0 (no limit).\n\nWhen autovacuum starts a run, it would sort the tables according to the\nfollowing formula if n_dead_tup > 0:\n\n((n_dead_tup - autovac_threshold) / (n_dead_tup + n_live_tup) -\n(autovacuum_scale_factor * (n_dead_tup)/(n_live_tup + n_dead_tup))\n\nFor analyze runs, n_dead_tup would have number of inserts since last\nanalyzed added to it.\n\nThen the top rows numbering autovacuum_max_queue_depth would be added to\neach autovacuum queue.\n\nIn the scenario presented above, if autovacuum_max_queue_depth were to be\nset to, say, 10, this would mean that after vacuuming 10 tables, each\nautovacuum worker would exit, and be started the next time autovacuum would\nwake up.\n\nThe goal here is to ensure that very hot tables rise to the top of the\nqueue and are vacuumed frequently even after setting Autovacuum to be far\nmore aggressive on a large production database.\n\nThoughts? Feedback? Waiting for a patch?\n\n-- \nBest Regards,\nChris Travers\nHead of Database\n\nTel: +49 162 9037 210 | Skype: einhverfr | www.adjust.com\nSaarbrücker Straße 37a, 10405 Berlin",
"msg_date": "Fri, 29 Mar 2019 17:43:06 +0100",
"msg_from": "Chris Travers <chris.travers@adjust.com>",
"msg_from_op": true,
"msg_subject": "Proposal: autovacuum_max_queue_depth"
}
] |
[
{
"msg_contents": "Before writing a patch, I'd like to hear discussion first.\n\nI've searched archives first and read following previous discussions on\nthis topic:\n- https://www.postgresql.org/message-id/4FCF6040.5030408%40redhat.com\n- https://www.postgresql.org/message-id/14899.974513046%40sss.pgh.pa.us\n\nThe problem (as I see it) is that everybody would like to move `/tmp`\nsocket dir to `/var/run`, or even `/var/run/postgresql` (or even\n`/run/postgresql`), but compatibility with old clients (which connect to\n/tmp by default) is a concern.\n\nOne reason to make this move is because any user can create PG socket in\n`/tmp`, and thus local clients will use that PG, instead of system one\n(which won't even start because it can't create socket - it is already\nused).\n\nI propose following 2 ideas:\n\n1. Add a Makefile parameter CONFIG_PGSOCKET_DIR to explicitly switch to new\nunix socket directory, and let distributions decide if they want this, and\nif they want, they should handle socket dir change on their own. For\nexample, switch to `/var/run/postgresql` require `/var/run/postgresql`\ndirectory to be created - an extra step compared to current situation.\n\nThis will allow remove some patches for many (many?) distributions.\n\nBy default (CONFIG_PGSOCKET_DIR undefined) unix socket dir should be set to\n`/tmp` - backward compatibility.\n\n2. The new socket directory shouldn't be hardcoded to single directory,\ninstead it should be detected dynamically.\n\nFor server:\n- if unix_socket_directory specified, use it\n- if not, check if /run/user/$(id -u) exists. If yes, use it as socket dir\n- if doesn't exist, check CONFIG_PGSOCKET_DIR exists. If yes, use it as\nsocket dir\n- else fail\n\nFor client:\n- if host explicitly set, use it\n- if not, check if /run/user/$(id -u) exists and socket file exists there.\nIf yes, use it as socket\n- if doesn't exist, check CONFIG_PGSOCKET_DIR exists. 
If yes, use it as\nsocket dir\n- else fail\n\nWhat will be solved:\n- no more local /tmp hijack\n- `pg_ctl start` and psql, when run as single user, will use same socket\ndirectory /run/user/$(id -u) - no need to create safe directory first\n- psql will still be able to connect to \"service\" PG - if socket is not\nfound in user runtime dir, then lookup in system (runtime) dir\n\nDrawbacks:\n- running pg_ctl as root will no longer make server accessible by default\nto other users, because /run/user/0 is readable only by root\n- if `postgres` user, under which postgresql service runs, is \"normal\"\nuser, and has /run/user/XXX directory, pg will require start-time -k\n/var/run/postgresql switch, to be accessible to other users' clients\n- there will no longer be a \"single\" directory to lookup sockets, so an\ninstructions on nuances of unix socket dir resolution for newcomers is\nrequired\n- non-systemd distributions won't benefit from this logic\n- /run/user/$(id -u) is opinionated. $XDG_RUNTIME_DIR would be better\n\nThoughts?",
"msg_date": "Fri, 29 Mar 2019 22:37:44 +0200",
"msg_from": "Danylo Hlynskyi <abcz2.uprola@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unix socket dir, an idea"
},
{
"msg_contents": "Danylo Hlynskyi <abcz2.uprola@gmail.com> writes:\n> The problem (as I see it) is that everybody would like to move `/tmp`\n> socket dir to `/var/run`, or even `/var/run/postgresql` (or even\n> `/run/postgresql`), but compatibility with old clients (which connect to\n> /tmp by default) is a concern.\n\n*Some* people would like to move the default socket location. Others\nof us see that as a recipe for chaos. If it's really easy to change\nthat, we're going to have a Babel of servers and clients that can't\ntalk to each other.\n\nI would also like to point out the extreme Unix-centricity (and\neven particular-distribution-centricity) of the alternative locations\nyou mention, as well as the fact that all those locations are unfriendly\nto running an unprivileged postmaster (i.e. one that hasn't been\nexplicitly blessed by whoever has root on the box).\n\n> 1. Add a Makefile parameter CONFIG_PGSOCKET_DIR to explicitly switch to new\n> unix socket directory, and let distributions decide if they want this, and\n> if they want, they should handle socket dir change on their own.\n\nWe already have DEFAULT_PGSOCKET_DIR in pg_config_manual.h, and distros\nthat want to change it typically carry a patch to adjust that header.\nI'm not sure we really want to make it any easier than that.\n\n> 2. The new socket directory shouldn't be hardcoded to single directory,\n> instead it should be detected dynamically.\n\nThis idea is just nuts. It makes each of the problems I mentioned above\nabout ten times worse.\n\n> For client:\n> - if host explicitly set, use it\n> - if not, check if /run/user/$(id -u) exists and socket file exists there.\n> If yes, use it as socket\n\nUh, how is a client supposed to know what UID the postmaster is running\nunder?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Mar 2019 20:40:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Unix socket dir, an idea"
},
{
"msg_contents": "Hi Tom, and much thanks for reply!\n\n> I would also like to point out the extreme Unix-centricity (and\n> even particular-distribution-centricity) of the alternative locations\n> you mention\n\nYes! The /run/user and /var/run directories are absent on MacOS. That's why\nI **don't** propose to change\ndefault directory to /var/run. Each distribution **may** set it on it's own\nor use default /tmp\n- Ubuntu/Debian can set to /var/run/postgresql\n- generic systemd distro can set to /run/postgresql\n- /tmp everywhere else, including MacOS. Actually, a default\n- (I think this is unrelated to Windows, but maybe windows has it's own\nnotion for runtime directories)\n\nAll those won't be hardcoded in PG source, it is build time param and\ndistribution holds all the responsibility\nfor changing the default.\n\n> as well as the fact that all those locations are unfriendly\n> to running an unprivileged postmaster (i.e. one that hasn't been\n> explicitly blessed by whoever has root on the box).\n\nYes! That's why I propose to use **user runtime directory** first, when\nit's available. Systemd distros do\nhave one (think of user's private /tmp), which is denoted by\nXDG_RUNTIME_DIR envvar. No need\nfor server to be root, and no way for other users to hijack server socket\n(which is currently possible\nwith 0777 /tmp)\n\nIf you are talking about two regular users, one of which runs server,\nanother client - they will have now\nto agree which socket directory to use, yes. And what is nice, they won't\nbe able to override system-level\npostgresql without having root rights (currently it is possible to do\nbetween pg restarts).\n\n> Uh, how is a client supposed to know what UID the postmaster is running\nunder?\n\nIt doesn't have to. 
It first looks up under current user runtime directory\n(XDG_RUNTIME_DIR or /run/user/$(id -u))\nand if it can't find socket there, it searches in CONFIG_PGSOCKET_DIR\n(which is common for both server and client)\n\n> we're going to have a Babel of servers and clients that can't talk to\neach other.\n\nI'd like to note, that exactly the current Babel of servers and clients made\nme write this email.\n1. Debian/Ubuntu care about security, so they move socket directory from\n0777 directory to 0755 directory\n(/var/run/postgresql)\n2. PG in Nix distro packageset used default setting (/tmp), and thus `psql`\ninstalled via Nix on Ubuntu didn't connect\nto Ubuntu server by default\n3. Because Debian did change default directory, `pg_ctl start` doesn't work\nwith default params:\n```\n~$ /usr/lib/postgresql/9.6/bin/pg_ctl -D temppg -o \"-p 5400\" start\nserver starting\nFATAL: could not create lock file \"/var/run/postgresql/.s.PGSQL.5400.lock\":\nPermission denied\n```\n\nThanks again for reading this!\n\nOn Sat, 30 Mar 2019 at 02:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Danylo Hlynskyi <abcz2.uprola@gmail.com> writes:\n> > The problem (as I see it) is that everybody would like to move `/tmp`\n> > socket dir to `/var/run`, or even `/var/run/postgresql` (or even\n> > `/run/postgresql`), but compatibility with old clients (which connect to\n> > /tmp by default) is a concern.\n>\n> *Some* people would like to move the default socket location. Others\n> of us see that as a recipe for chaos. If it's really easy to change\n> that, we're going to have a Babel of servers and clients that can't\n> talk to each other.\n>\n> I would also like to point out the extreme Unix-centricity (and\n> even particular-distribution-centricity) of the alternative locations\n> you mention, as well as the fact that all those locations are unfriendly\n> to running an unprivileged postmaster (i.e. one that hasn't been\n> explicitly blessed by whoever has root on the box).\n>\n> > 1. 
Add a Makefile parameter CONFIG_PGSOCKET_DIR to explicitly switch to\n> new\n> unix socket directory, and let distributions decide if they want this,\n> and\n> if they want, they should handle socket dir change on their own.\n>\n> We already have DEFAULT_PGSOCKET_DIR in pg_config_manual.h, and distros\n> that want to change it typically carry a patch to adjust that header.\n> I'm not sure we really want to make it any easier than that.\n>\n> > 2. The new socket directory shouldn't be hardcoded to single directory,\n> > instead it should be detected dynamically.\n>\n> This idea is just nuts. It makes each of the problems I mentioned above\n> about ten times worse.\n>\n> > For client:\n> > - if host explicitly set, use it\n> > - if not, check if /run/user/$(id -u) exists and socket file exists\n> there.\n> > If yes, use it as socket\n>\n> Uh, how is a client supposed to know what UID the postmaster is running\n> under?\n>\n>             regards, tom lane\n>",
"msg_date": "Sat, 30 Mar 2019 12:01:01 +0200",
"msg_from": "Danylo Hlynskyi <abcz2.uprola@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Unix socket dir, an idea"
}
] |
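Danylo's proposed client-side lookup order (user runtime directory first, then the build-time default) can be sketched roughly as below. This is an illustration of the proposal only — libpq does not behave this way, and the function name is invented:

```python
import os

def resolve_socket_dir(default_dir="/tmp", env=os.environ):
    """Sketch of the proposed client-side lookup order:
    1. the per-user runtime directory (XDG_RUNTIME_DIR, falling back to
       /run/user/<uid>), if a PostgreSQL socket already exists there;
    2. otherwise the build-time default (CONFIG_PGSOCKET_DIR, here /tmp).
    """
    runtime_dir = env.get("XDG_RUNTIME_DIR") or "/run/user/%d" % os.getuid()
    try:
        # PostgreSQL Unix sockets are named .s.PGSQL.<port>.
        if any(n.startswith(".s.PGSQL.") for n in os.listdir(runtime_dir)):
            return runtime_dir
    except OSError:
        pass  # no runtime dir on this platform (e.g. macOS): fall through
    return default_dir
```

As Tom points out, the weak spot is step 1: a client only ever sees its *own* runtime directory, so this lookup can only find a server started by the same user.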
[
{
"msg_contents": "Following example can reproduce the problem:\n\n```\ncreate table d(a int);\ncreate index di on d(a);\nset enable_seqscan=off;\nset enable_bitmapscan to off;\nprepare p as delete from d where a=3;\nexecute p;\nexecute p;\n```\n\nThe reason is that: ExecInitIndexScan will not lock index because it thinks\nInitPlan\nalready write-locked index. But in some cases, such as DELETE+cache plan\nwill\nnot lock index, then failed assert.\n\nSome thoughts on how to fix it:\n1. Disable the optimization in ExecInitModifyTable, don't skip\nExecOpenIndices for DELETE\n2. For DELETE, instead of open indices, just lock them\n3. Lock index of target rel in ExecInitIndexScan for DELETE\n\nPS: another question, why does ExecCloseIndices release index lock instead\nof\nkeeping them?\n\n-- \nGaoZengqi\npgf00a@gmail.com\nzengqigao@gmail.com\n\nFollowing example can reproduce the problem:```create table d(a int);create index di on d(a);set enable_seqscan=off;set enable_bitmapscan to off;prepare p as delete from d where a=3;execute p;execute p;```The reason is that: ExecInitIndexScan will not lock index because it thinks InitPlanalready write-locked index. But in some cases, such as DELETE+cache plan willnot lock index, then failed assert.Some thoughts on how to fix it:1. Disable the optimization in ExecInitModifyTable, don't skip ExecOpenIndices for DELETE2. For DELETE, instead of open indices, just lock them3. Lock index of target rel in ExecInitIndexScan for DELETEPS: another question, why does ExecCloseIndices release index lock instead ofkeeping them?-- GaoZengqipgf00a@gmail.comzengqigao@gmail.com",
"msg_date": "Sat, 30 Mar 2019 14:22:59 +0800",
"msg_from": "=?UTF-8?B?6auY5aKe55Cm?= <pgf00a@gmail.com>",
"msg_from_op": true,
"msg_subject": "Indexscan failed assert caused by using index without lock"
},
{
"msg_contents": "=?UTF-8?B?6auY5aKe55Cm?= <pgf00a@gmail.com> writes:\n> Following example can reproduce the problem:\n\nYeah, this is being discussed at\nhttps://www.postgresql.org/message-id/flat/19465.1541636036@sss.pgh.pa.us\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2019 10:24:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Indexscan failed assert caused by using index without lock"
}
] |
[
{
"msg_contents": "Hi PostgresSQL developers,\n\nI asked my question already on pgsql-general list and did not find an\nexplanation. Below is the question mainly copied from [0].\n----\n\nI am learning deeply how tuples are organized and column values are\naccessed in different databases. As far as undertood postgres does not\nstore all column positions in a tuple (e.g. in header or footer). In\ncontrast MySQL InnoDB stores column lengths in a record header [1].\n From the first glance it seems that a postgres format can have a\nsignificant performance penalty when accessing a single column which\nis located after multiple variable-length columns because searching a\ncolumn value position in a row requires multiple jumps. And in InnoDB\na position of a particular column can be found right after reading a\nheader.\n\nI found several related threads in pgsql-hackers archives [2,3]\ndescribing significant performance wins in a prototype.\n\nDoes anyone know why the format is still the same? Perhaps InnoDB and\nsimilar formats are not so good, are they?\n\nPlease respond if you have the clue!\n----\n\nI did a rough experiment to check if a difference is visible. I used\nfollowing table (200 pairs of columns):\ncreate table layout(\ni0 int,\ns0 varchar(255),\n...\ni199 int,\ns199 varchar(255)\n);\n\nAnd populated it with 1 million rows. And run following queries\nSELECT AVG(i0) FROM layout;\nSELECT AVG(i199) FROM layout;\n\nOn my machine calculating an average over column i0 took about 1\nsecond and about 2.5 seconds for column i199. And similar observations\nwere described in threads mentioned before. Quite significant\ndifference!\n\nI made a similar experiment for mysql as well (innodb). And results\nare the same for first and last columns.\n\nSome details how it is stored in innodb. They store varlen column\nlengths only in a tuple header (there is no length prefix before\ncolumn data itself). 
Having a tuple descriptor and lengths in a tuple\nheader it is always possible to calculate each column position without\njumping through an entire record. And seems that space requirements\nare same as in postgresql.\n\nIt seems that an innodb layout is better at least for reading. So, it\nis still unclear for me why postgresql does not employ similar layout\nif it can give significant benefits.\n----\n\n[0] https://www.postgresql.org/message-id/flat/CAOykqKc8Uoi3NKVfd5DpTmUzD4rJBWG9Gjo3pr7eaUGLtrstvw%40mail.gmail.com\n[1] https://dev.mysql.com/doc/refman/8.0/en/innodb-row-format.html#innodb-row-format-compact\n[2] https://www.postgresql.org/message-id/flat/c58979e50702201307w64b12892uf8dfc3d8bf117ec0%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/flat/87irj16umm.fsf%40enterprisedb.com\n\n-- \nBest regards,\nIvan Pavlukhin\n\n\n",
"msg_date": "Sat, 30 Mar 2019 09:35:39 +0300",
"msg_from": "=?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com>",
"msg_from_op": true,
"msg_subject": "Column lookup in a row performance"
},
{
"msg_contents": "=?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com> writes:\n> Does anyone know why the format is still the same?\n\n(1) Backwards compatibility, and (2) it's not clear that a different\nlayout would be a win for all cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2019 10:26:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Column lookup in a row performance"
},
{
"msg_contents": "Tom,\n\nThank you.\n> (1) Backwards compatibility, and (2) it's not clear that a different\n> layout would be a win for all cases.\n\nI am curious regarding (2), for my understanding it is good to find\nout at least one case when layout with lengths/offsets in a header\nwill be crucially worse. I will be happy if someone can elaborate.\n\nсб, 30 мар. 2019 г. в 17:26, Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> =?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com> writes:\n> > Does anyone know why the format is still the same?\n>\n> (1) Backwards compatibility, and (2) it's not clear that a different\n> layout would be a win for all cases.\n>\n> regards, tom lane\n\n\n\n-- \nBest regards,\nIvan Pavlukhin\n\n\n",
"msg_date": "Tue, 2 Apr 2019 08:48:31 +0300",
"msg_from": "=?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Column lookup in a row performance"
},
{
"msg_contents": "=?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com> writes:\n>> (1) Backwards compatibility, and (2) it's not clear that a different\n>> layout would be a win for all cases.\n\n> I am curious regarding (2), for my understanding it is good to find\n> out at least one case when layout with lengths/offsets in a header\n> will be crucially worse. I will be happy if someone can elaborate.\n\nIt seems like you think the only figure of merit here is how fast\ndeform_heap_tuple runs. That's not the case. There are at least\ntwo issues:\n\n1. You're not going to be able to do this without making tuples\nlarger overall in many cases; but more data means more I/O which\nmeans less performance. I base this objection on the observation\nthat our existing design allows single-byte length \"words\" in many\ncommon cases, but it's really hard to see how you could avoid\nstoring a full-size offset for each column if you want to be able\nto access each column in O(1) time without any examination of other\ncolumns.\n\n2. Our existing system design has an across-the-board assumption\nthat each variable-length datum has its length embedded in it,\nso that a single pointer carries enough information for any called\nfunction to work with the value. If you remove the length word\nand expect the length to be computed by subtracting two offsets that\nare not even physically adjacent to the datum, that stops working.\nThere is no fix for that that doesn't add performance costs and\ncomplexity.\n\nPractically speaking, even if we were willing to lose on-disk database\ncompatibility, point 2 breaks so many internal and extension APIs that\nthere's no chance whatever that we could remove the length-word datum\nheaders. That means that the added fields in tuple headers would be\npure added space with no offsetting savings in the data size, making\npoint 1 quite a lot worse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2019 11:41:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Column lookup in a row performance"
},
{
"msg_contents": "Tom, thanks for your answer. It definitely makes a picture in my mind\nmore clear.\n\nвт, 2 апр. 2019 г. в 18:41, Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> =?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com> writes:\n> >> (1) Backwards compatibility, and (2) it's not clear that a different\n> >> layout would be a win for all cases.\n>\n> > I am curious regarding (2), for my understanding it is good to find\n> > out at least one case when layout with lengths/offsets in a header\n> > will be crucially worse. I will be happy if someone can elaborate.\n>\n> It seems like you think the only figure of merit here is how fast\n> deform_heap_tuple runs. That's not the case. There are at least\n> two issues:\n>\n> 1. You're not going to be able to do this without making tuples\n> larger overall in many cases; but more data means more I/O which\n> means less performance. I base this objection on the observation\n> that our existing design allows single-byte length \"words\" in many\n> common cases, but it's really hard to see how you could avoid\n> storing a full-size offset for each column if you want to be able\n> to access each column in O(1) time without any examination of other\n> columns.\n>\n> 2. Our existing system design has an across-the-board assumption\n> that each variable-length datum has its length embedded in it,\n> so that a single pointer carries enough information for any called\n> function to work with the value. If you remove the length word\n> and expect the length to be computed by subtracting two offsets that\n> are not even physically adjacent to the datum, that stops working.\n> There is no fix for that that doesn't add performance costs and\n> complexity.\n>\n> Practically speaking, even if we were willing to lose on-disk database\n> compatibility, point 2 breaks so many internal and extension APIs that\n> there's no chance whatever that we could remove the length-word datum\n> headers. 
That means that the added fields in tuple headers would be\n> pure added space with no offsetting savings in the data size, making\n> point 1 quite a lot worse.\n>\n> regards, tom lane\n\n\n\n-- \nBest regards,\nIvan Pavlukhin\n\n\n",
"msg_date": "Wed, 3 Apr 2019 14:44:37 +0300",
"msg_from": "=?UTF-8?B?0J/QsNCy0LvRg9GF0LjQvSDQmNCy0LDQvQ==?= <vololo100@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Column lookup in a row performance"
}
] |
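Tom's trade-off can be made concrete with a toy model (an illustration only, not PostgreSQL's actual tuple format): a length-prefixed layout stores one byte per short column but forces an O(k) walk to reach column k, while an offset-array layout buys O(1) access with a wider header — exactly the extra space his point 1 warns about.

```python
def encode_walk(values):
    """Heap-tuple-like layout: a 1-byte length prefix before each value."""
    return b"".join(bytes([len(v)]) + v for v in values)

def walk_lookup(row, k):
    """O(k): columns 0..k-1 must be skipped to locate column k."""
    pos = 0
    for _ in range(k):
        pos += 1 + row[pos]          # skip length byte plus value
    return row[pos + 1 : pos + 1 + row[pos]]

def encode_offsets(values):
    """InnoDB-like layout: a fixed 2-byte end offset per column, then data."""
    body, ends = b"", []
    for v in values:
        body += v
        ends.append(len(body))
    header = b"".join(e.to_bytes(2, "little") for e in ends)
    return header, body

def offset_lookup(header, body, k):
    """O(1): both boundaries of column k are read straight from the header."""
    start = 0 if k == 0 else int.from_bytes(header[2 * k - 2 : 2 * k], "little")
    end = int.from_bytes(header[2 * k : 2 * k + 2], "little")
    return body[start:end]
```

With a handful of short columns, the 2-byte-per-column offset header already outweighs the single length bytes (point 1 in miniature); and note that `walk_lookup` returns a self-describing value while `offset_lookup` needs the header too, which mirrors point 2.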
[
{
"msg_contents": "Hi, hackers.\n\nBuild process on Windows includes compiling all source into object files,\nlinking them to binaries, and generating export symbol definitions, etc.\nWhen I watched the whole build process with a task manager, I discovered\nthat a lot of time was spent on generating export symbol definitions,\nwithout consuming much CPU or IO.\nThe script that doing this is src/tools/msvc/gendef.pl, it enumerates the\nwhole directory for \".obj\" files and call dumpbin utility to generate\n\".sym\" files one by one like this:\n\ndumpbin /symbols /out:a.sym a.obj >NUL\n\nActually the dumpbin utility accepts a wildcard file name, so we can\ngenerate the export symbols of all \".obj\" files in batch.\n\ndumpbin /symbols /out:all.sym *.obj >NUL\n\nThis will avoid wasting time by creating and destroying dumpbin process\nrepeatedly and can speed up the build process considerably.\nI've tested on my 4-core 8-thread Intel i7 CPU. I've set MSBFLAGS=/m to\nensure it can utilize all CPU cores.\nBuilding without this patch takes about 370 seconds. Building with this\npatch takes about 200 seconds. That's almost 2x speed up.\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Sat, 30 Mar 2019 15:42:39 +0900",
"msg_from": "Peifeng Qiu <pqiu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Speed up build on Windows by generating symbol definition in batch"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 03:42:39PM +0900, Peifeng Qiu wrote:\n> When I watched the whole build process with a task manager, I discovered\n> that a lot of time was spent on generating export symbol definitions,\n> without consuming much CPU or IO.\n> The script that doing this is src/tools/msvc/gendef.pl, it enumerates the\n> whole directory for \".obj\" files and call dumpbin utility to generate\n> \".sym\" files one by one like this:\n> \n> dumpbin /symbols /out:a.sym a.obj >NUL\n> \n> Actually the dumpbin utility accepts a wildcard file name, so we can\n> generate the export symbols of all \".obj\" files in batch.\n> \n> dumpbin /symbols /out:all.sym *.obj >NUL\n> \n> This will avoid wasting time by creating and destroying dumpbin process\n> repeatedly and can speed up the build process considerably.\n> I've tested on my 4-core 8-thread Intel i7 CPU. I've set MSBFLAGS=/m to\n> ensure it can utilize all CPU cores.\n> Building without this patch takes about 370 seconds. Building with this\n> patch takes about 200 seconds. That's almost 2x speed up.\n\nI, too, get a strong improvement, from 201s to 149s. I can confirm it yields\nidentical *.def files. Thanks for identifying this improvement.\n\n> -\tmy ($objfile, $symfile) = @_;\n> -\tmy ($symvol, $symdirs, $symbase) = splitpath($symfile);\n> -\tmy $tmpfile = catpath($symvol, $symdirs, \"symbols.out\");\n\nYou removed the last use of File::Spec::Functions, so remove its \"use\"\nstatement.\n\n> -\tsystem(\"dumpbin /symbols /out:$tmpfile $_ >NUL\")\n> -\t && die \"Could not call dumpbin\";\n\nThis error handling was crude, but don't replace it with zero error handling.\n\n> -\trename($tmpfile, $symfile);\n\nKeep the use of a temporary file, too.\n\n> +system(\"dumpbin /symbols /out:$symfile $ARGV[0]/*obj >NUL\");\n\nThat should be *.obj, not *obj.\n\n\n",
"msg_date": "Sat, 6 Apr 2019 23:31:10 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up build on Windows by generating symbol definition in\n batch"
},
{
"msg_contents": "Thanks for reviewing!\nI've updated the patch according to your comments.\n\nBest regards,\nPeifeng Qiu\n\nOn Sun, Apr 7, 2019 at 2:31 PM Noah Misch <noah@leadboat.com> wrote:\n\n> On Sat, Mar 30, 2019 at 03:42:39PM +0900, Peifeng Qiu wrote:\n> > When I watched the whole build process with a task manager, I discovered\n> > that a lot of time was spent on generating export symbol definitions,\n> > without consuming much CPU or IO.\n> > The script that doing this is src/tools/msvc/gendef.pl, it enumerates\n> the\n> > whole directory for \".obj\" files and call dumpbin utility to generate\n> > \".sym\" files one by one like this:\n> >\n> > dumpbin /symbols /out:a.sym a.obj >NUL\n> >\n> > Actually the dumpbin utility accepts a wildcard file name, so we can\n> > generate the export symbols of all \".obj\" files in batch.\n> >\n> > dumpbin /symbols /out:all.sym *.obj >NUL\n> >\n> > This will avoid wasting time by creating and destroying dumpbin process\n> > repeatedly and can speed up the build process considerably.\n> > I've tested on my 4-core 8-thread Intel i7 CPU. I've set MSBFLAGS=/m to\n> > ensure it can utilize all CPU cores.\n> > Building without this patch takes about 370 seconds. Building with this\n> > patch takes about 200 seconds. That's almost 2x speed up.\n>\n> I, too, get a strong improvement, from 201s to 149s. I can confirm it\n> yields\n> identical *.def files. 
Thanks for identifying this improvement.\n>\n> > - my ($objfile, $symfile) = @_;\n> > - my ($symvol, $symdirs, $symbase) = splitpath($symfile);\n> > - my $tmpfile = catpath($symvol, $symdirs, \"symbols.out\");\n>\n> You removed the last use of File::Spec::Functions, so remove its \"use\"\n> statement.\n>\n> > - system(\"dumpbin /symbols /out:$tmpfile $_ >NUL\")\n> > - && die \"Could not call dumpbin\";\n>\n> This error handling was crude, but don't replace it with zero error\n> handling.\n>\n> > - rename($tmpfile, $symfile);\n>\n> Keep the use of a temporary file, too.\n>\n> > +system(\"dumpbin /symbols /out:$symfile $ARGV[0]/*obj >NUL\");\n>\n> That should be *.obj, not *obj.\n>",
"msg_date": "Wed, 10 Apr 2019 14:27:26 +0800",
"msg_from": "Peifeng Qiu <pqiu@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Speed up build on Windows by generating symbol definition in\n batch"
},
{
"msg_contents": "On Wed, Apr 10, 2019 at 02:27:26PM +0800, Peifeng Qiu wrote:\n> I've updated the patch according to your comments.\n\nLooks good. Thanks. I plan to push this on Saturday.\n\n\n",
"msg_date": "Sun, 28 Apr 2019 19:49:54 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up build on Windows by generating symbol definition in\n batch"
},
{
"msg_contents": "On Mon, 29 Apr 2019 at 14:50, Noah Misch <noah@leadboat.com> wrote:\n>\n> On Wed, Apr 10, 2019 at 02:27:26PM +0800, Peifeng Qiu wrote:\n> > I've updated the patch according to your comments.\n>\n> Looks good. Thanks. I plan to push this on Saturday.\n\nI didn't really look at the patch in detail, but on testing it on a\nwindows machine with vs2017, it took built time from 5:00 to 4:10.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 29 Apr 2019 19:36:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up build on Windows by generating symbol definition in\n batch"
}
] |
[
{
"msg_contents": "Hello,\n\nPatch 0001 gets rid of the unconditional lseek() calls for SLRU I/O,\nas a small follow-up to commit c24dcd0c. Patch 0002 gets rid of a few\nplaces that usually do a good job of avoiding lseek() calls while\nreading and writing WAL, but it seems better to have no code at all.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Sat, 30 Mar 2019 22:13:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Removing a few more lseek() calls"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 2:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Hello,\n>\n> Patch 0001 gets rid of the unconditional lseek() calls for SLRU I/O,\n> as a small follow-up to commit c24dcd0c. Patch 0002 gets rid of a few\n> places that usually do a good job of avoiding lseek() calls while\n> reading and writing WAL, but it seems better to have no code at all.\n>\n\nI reviewed the changes and they look good to me. Code looks much cleaner\nafter 2nd patch.\nAfter these changes, only one usage of SLRU_SEEK_FAILED remains in\nSimpleLruDoesPhysicalPageExist().\n\nOn Sat, Mar 30, 2019 at 2:14 AM Thomas Munro <thomas.munro@gmail.com> wrote:Hello,\n\nPatch 0001 gets rid of the unconditional lseek() calls for SLRU I/O,\nas a small follow-up to commit c24dcd0c. Patch 0002 gets rid of a few\nplaces that usually do a good job of avoiding lseek() calls while\nreading and writing WAL, but it seems better to have no code at all.I reviewed the changes and they look good to me. Code looks much cleaner after 2nd patch.After these changes, only one usage of SLRU_SEEK_FAILED remains in SimpleLruDoesPhysicalPageExist().",
"msg_date": "Thu, 6 Jun 2019 10:16:04 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Removing a few more lseek() calls"
}
] |
[
{
"msg_contents": "Hi,\n\nThe Release Management Team (RMT) for the PostgreSQL 12 release\nhas been assembled and has determined that the feature freeze date\nfor the PostgreSQL 12 release will be April 7, 2019. This means that\nany feature that will be going into the PostgreSQL 12 release must be\ncommitted before 2019-04-08 00:00:00 AoE [1].\n\nThe exception to this are any patches related to pgindent rules which\nare purposefully being committed later on, and of course bug fixes.\nAfter the freeze is in effect, any open feature in the current commit\nfest will be moved into the subsequent one.\n\nOpen items for the PostgreSQL 12 release will be tracked here:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\nFor the PostgreSQL 12 release, the release management team is composed\nof:\n\n\tAndres Freund <andres(at)anarazel(dot)de>\n\tMichael Paquier <michael(at)paquier(dot)xyz>\n\tTomas Vondra <tv(at)fuzzy(dot)cz>\n\nFor the time being, if you have any questions about the process,\nplease feel free to email any member of the RMT. We will send out\nnotes with updates and additional guidance in the near future.\n\n[1]: https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\nThanks!\n--\nMichael",
"msg_date": "Sat, 30 Mar 2019 18:40:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 Release Management Team & Feature Freeze"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 06:40:43PM +0900, Michael Paquier wrote:\n> The Release Management Team (RMT) for the PostgreSQL 12 release\n> has been assembled and has determined that the feature freeze date\n> for the PostgreSQL 12 release will be April 7, 2019. This means that\n> any feature that will be going into the PostgreSQL 12 release must be\n> committed before 2019-04-08 00:00:00 AoE [1].\n\nFeature freeze is now effective, so let's stabilize everything now...\n\n> Open items for the PostgreSQL 12 release will be tracked here:\n> https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\nAnd there are a couple of items to work on already.\n--\nMichael",
"msg_date": "Tue, 9 Apr 2019 13:01:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 Release Management Team & Feature Freeze"
}
] |
[
{
"msg_contents": "On some machines (*cough* Mingw *cough*) installs are very slow. We've\nameliorated this by allowing temp installs to be reused, but the\npg_upgrade Makefile never got the message. Here's a patch that does\nthat. I'd like to backpatch it, at least to 9.5 where we switched the\npg_upgrade location. The risk seems appropriately low and it only\naffects our test regime.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 30 Mar 2019 16:42:16 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "On Saturday, March 30, 2019 9:42 PM, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote:\n\n> On some machines (cough Mingw cough) installs are very slow. We've\n> ameliorated this by allowing temp installs to be reused, but the\n> pg_upgrade Makefile never got the message. Here's a patch that does\n> that. I'd like to backpatch it, at least to 9.5 where we switched the\n> pg_upgrade location. The risk seems appropriately low and it only\n> affects our test regime.\n\nWhile I haven't tried the patch (yet), reading it it makes sense, so +1\non the fix. Nice catch!\n\ncheers ./daniel\n\n\n",
"msg_date": "Sat, 30 Mar 2019 21:08:48 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On some machines (*cough* Mingw *cough*) installs are very slow. We've\n> ameliorated this by allowing temp installs to be reused, but the\n> pg_upgrade Makefile never got the message. Here's a patch that does\n> that. I'd like to backpatch it, at least to 9.5 where we switched the\n> pg_upgrade location. The risk seems appropriately low and it only\n> affects our test regime.\n\nI haven't tested this, but it looks reasonable.\n\nI suspect you need double-quotes around the path values, as in\nthe adjacent usage of EXTRA_REGRESS_OPTS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Mar 2019 17:48:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi Andrew,\n\nOn 2019-03-30 16:42:16 -0400, Andrew Dunstan wrote:\n> On some machines (*cough* Mingw *cough*) installs are very slow. We've\n> ameliorated this by allowing temp installs to be reused, but the\n> pg_upgrade Makefile never got the message. Here's a patch that does\n> that. I'd like to backpatch it, at least to 9.5 where we switched the\n> pg_upgrade location. The risk seems appropriately low and it only\n> affects our test regime.\n\nI'm confused as to why this was done as a purely optional path, rather\nthan just ripping out the pg_upgrade specific install?\n\nSee also discussion around https://www.postgresql.org/message-id/21766.1558397960%40sss.pgh.pa.us\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 May 2019 18:58:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "\nOn 5/20/19 9:58 PM, Andres Freund wrote:\n> Hi Andrew,\n>\n> On 2019-03-30 16:42:16 -0400, Andrew Dunstan wrote:\n>> On some machines (*cough* Mingw *cough*) installs are very slow. We've\n>> ameliorated this by allowing temp installs to be reused, but the\n>> pg_upgrade Makefile never got the message. Here's a patch that does\n>> that. I'd like to backpatch it, at least to 9.5 where we switched the\n>> pg_upgrade location. The risk seems appropriately low and it only\n>> affects our test regime.\n> I'm confused as to why this was done as a purely optional path, rather\n> than just ripping out the pg_upgrade specific install?\n>\n> See also discussion around https://www.postgresql.org/message-id/21766.1558397960%40sss.pgh.pa.us\n>\n\nBy specifying NO_TEMP_INSTALL you are in effect certifying that there is\nalready a suitable temp install available. But that might well not be\nthe case. In fact, there have been several iterations of code to get the\nbuildfarm client to check reasonable reliably that there is such an\ninstall before it chooses to use the flag.\n\n\nNote that the buildfarm doesn't run \"make check-world\" for reasons I\nhave explained in the past. NO_TEMP_INSTALL is particularly valuable in\nsaving time when running the TAP tests, especially on Mingw.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 21 May 2019 14:48:27 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 5/20/19 9:58 PM, Andres Freund wrote:\n>> I'm confused as to why this was done as a purely optional path, rather\n>> than just ripping out the pg_upgrade specific install?\n\n> By specifying NO_TEMP_INSTALL you are in effect certifying that there is\n> already a suitable temp install available. But that might well not be\n> the case. In fact, there have been several iterations of code to get the\n> buildfarm client to check reasonable reliably that there is such an\n> install before it chooses to use the flag.\n\nRight. Issuing \"make check\" in src/bin/pg_upgrade certainly shouldn't\nskip making a new install. But if we're recursing down from a top-level\ncheck-world, we ought to be able to use the install it made.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 15:09:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 14:48:27 -0400, Andrew Dunstan wrote:\n> On 5/20/19 9:58 PM, Andres Freund wrote:\n> > Hi Andrew,\n> >\n> > On 2019-03-30 16:42:16 -0400, Andrew Dunstan wrote:\n> >> On some machines (*cough* Mingw *cough*) installs are very slow. We've\n> >> ameliorated this by allowing temp installs to be reused, but the\n> >> pg_upgrade Makefile never got the message. Here's a patch that does\n> >> that. I'd like to backpatch it, at least to 9.5 where we switched the\n> >> pg_upgrade location. The risk seems appropriately low and it only\n> >> affects our test regime.\n> > I'm confused as to why this was done as a purely optional path, rather\n> > than just ripping out the pg_upgrade specific install?\n> >\n> > See also discussion around https://www.postgresql.org/message-id/21766.1558397960%40sss.pgh.pa.us\n> >\n> \n> By specifying NO_TEMP_INSTALL you are in effect certifying that there is\n> already a suitable temp install available. But that might well not be\n> the case.\n\nBut all that takes is adding a dependency to temp-install in\nsrc/bin/pg_upgrade/Makefile's check target? Like many other regression\ntest? And the temp-install rule already honors NO_TEMP_INSTALL:\n\ntemp-install: | submake-generated-headers\nifndef NO_TEMP_INSTALL\nifneq ($(abs_top_builddir),)\nifeq ($(MAKELEVEL),0)\n\trm -rf '$(abs_top_builddir)'/tmp_install\n\t$(MKDIR_P) '$(abs_top_builddir)'/tmp_install/log\n\t$(MAKE) -C '$(top_builddir)' DESTDIR='$(abs_top_builddir)'/tmp_install install >'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\n\t$(MAKE) -j1 $(if $(CHECKPREP_TOP),-C $(CHECKPREP_TOP),) checkprep >>'$(abs_top_builddir)'/tmp_install/log/install.log 2>&1\nendif\nendif\nendif\n\nI'm not saying that you shouldn't have added NO_TEMP_INSTALL support or\nsomething, I'm confused as to why the support for custom installations\ninside test.sh was retained.\n\nRoughly like in the attached?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 21 May 2019 12:19:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 12:19:18 -0700, Andres Freund wrote:\n> Roughly like in the attached?\n\n> -check: test.sh all\n> -\tMAKE=$(MAKE) bindir=\"$(tbindir)\" libdir=\"$(tlibdir)\" EXTRA_REGRESS_OPTS=\"$(EXTRA_REGRESS_OPTS)\" $(SHELL) $< $(DOINST)\n> +check: test.sh all temp-install\n> +\tMAKE=$(MAKE) $(with_temp_install) bindir=$(abs_top_builddir)/tmp_install/$(bindir) MAKE=$(MAKE) EXTRA_REGRESS_OPTS=\"$(EXTRA_REGRESS_OPTS)\" $(SHELL) $< $(DOINST)\n\nminus the duplicated MAKE=$(MAKE) of course.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 12:41:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "On Wed, May 22, 2019 at 7:41 AM Andres Freund <andres@anarazel.de> wrote:>\n> On 2019-05-21 12:19:18 -0700, Andres Freund wrote:\n> > Roughly like in the attached?\n>\n> > -check: test.sh all\n> > - MAKE=$(MAKE) bindir=\"$(tbindir)\" libdir=\"$(tlibdir)\" EXTRA_REGRESS_OPTS=\"$(EXTRA_REGRESS_OPTS)\" $(SHELL) $< $(DOINST)\n> > +check: test.sh all temp-install\n> > + MAKE=$(MAKE) $(with_temp_install) bindir=$(abs_top_builddir)/tmp_install/$(bindir) MAKE=$(MAKE) EXTRA_REGRESS_OPTS=\"$(EXTRA_REGRESS_OPTS)\" $(SHELL) $< $(DOINST)\n>\n> minus the duplicated MAKE=$(MAKE) of course.\n\nAfter these commits (and Tom's commit \"Un-break pg_upgrade regression\ntest.\"), cfbot broke:\n\n(using postmaster on /tmp/pg_upgrade_check-YGuskp, port 54464)\n============== dropping database \"regression\" ==============\nsh: 1: /usr/local/pgsql/bin/psql: not found\ncommand failed: \"/usr/local/pgsql/bin/psql\" -X -c \"DROP DATABASE IF\nEXISTS \\\"regression\\\"\" \"postgres\"\nmake[1]: *** [installcheck-parallel] Error 2\nmake[1]: Leaving directory\n`/home/travis/build/postgresql-cfbot/postgresql/src/test/regress'\n\nBefore that it had been running happily like this:\n\n./configure --enable-debug --enable-cassert --enable-tap-tests\n--with-tcl --with-python --with-perl --with-ldap --with-openssl\n--with-gssapi --with-icu && echo \"COPT=-Wall -Werror\" >\nsrc/Makefile.custom && make -j4 all contrib docs && make check-world\n\nI added --prefix=$HOME/something and added \"make install\" before \"make\ncheck-world\", and now it's happy again. Was that expected?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 May 2019 21:26:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> After these commits (and Tom's commit \"Un-break pg_upgrade regression\n> test.\"), cfbot broke:\n\n> sh: 1: /usr/local/pgsql/bin/psql: not found\n\nI can confirm that here: check-world passes as long as I've done\n\"make install\" beforehand ... but of course that should not be\nnecessary. If I blow away the install tree, pg_upgrade's\ncheck fails at\n\n../../../src/test/regress/pg_regress --inputdir=. --bindir='/home/postgres/testversion/bin' --port=54464 --dlpath=. --max-concurrent-tests=20 --port=54464 --schedule=./parallel_schedule \n(using postmaster on /tmp/pg_upgrade_check-Nitf3h, port 54464)\n============== dropping database \"regression\" ==============\nsh: /home/postgres/testversion/bin/psql: No such file or directory\ncommand failed: \"/home/postgres/testversion/bin/psql\" -X -c \"DROP DATABASE IF EXISTS \\\"regression\\\"\" \"postgres\"\n\npg_regress is being told the wrong --bindir, ie\nthe final install location not the temp install.\n\n(More generally, should we rearrange the buildfarm test\nsequence so it doesn't run \"make install\" till after the\ntests that aren't supposed to require an installed tree?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 10:58:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 10:58:54 -0400, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > After these commits (and Tom's commit \"Un-break pg_upgrade regression\n> > test.\"), cfbot broke:\n\nI should just have finished working two hours earlier yesterday :(.\n\n\n> > sh: 1: /usr/local/pgsql/bin/psql: not found\n> \n> I can confirm that here: check-world passes as long as I've done\n> \"make install\" beforehand ... but of course that should not be\n> necessary. If I blow away the install tree, pg_upgrade's\n> check fails at\n\n> ../../../src/test/regress/pg_regress --inputdir=. --bindir='/home/postgres/testversion/bin' --port=54464 --dlpath=. --max-concurrent-tests=20 --port=54464 --schedule=./parallel_schedule \n> (using postmaster on /tmp/pg_upgrade_check-Nitf3h, port 54464)\n> ============== dropping database \"regression\" ==============\n> sh: /home/postgres/testversion/bin/psql: No such file or directory\n> command failed: \"/home/postgres/testversion/bin/psql\" -X -c \"DROP DATABASE IF EXISTS \\\"regression\\\"\" \"postgres\"\n> \n> pg_regress is being told the wrong --bindir, ie\n> the final install location not the temp install.\n\nYea, that's indeed the problem. I suspect that problem already exists in\nthe NO_TEMP_INSTALL solution committed in\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=47b3c26642e6850e8dfa7afe01db78320b11549e\nIt's just that nobody noticed that due to:\n\n> (More generally, should we rearrange the buildfarm test\n> sequence so it doesn't run \"make install\" till after the\n> tests that aren't supposed to require an installed tree?)\n\nSeems what we need to fix the immediate issue is to ressurect:\n\n# We need to make it use psql from our temporary installation,\n# because otherwise the installcheck run below would try to\n# use psql from the proper installation directory, which might\n# be outdated or missing. 
But don't override anything else that's\n# already in EXTRA_REGRESS_OPTS.\nEXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --bindir='$bindir'\"\nexport EXTRA_REGRESS_OPTS\n\nand put that into global scope. While that'd be unnecessary when\ninvoking ./test.sh from commandline with explicitly already installed\nbinaries, it should be harmless there.\n\nI wonder however, shouldn't the above stanza refer to $oldbindir?\n\nI think we need to backpatch the move of the above outside the --install\npath, because otherwise the buildfarm will break once we reorder the\nbuildfarm's scripts to do the make install later. Unless I miss\nsomething?\n\n\n> (More generally, should we rearrange the buildfarm test\n> sequence so it doesn't run \"make install\" till after the\n> tests that aren't supposed to require an installed tree?)\n\nSeems like a good idea. On buildfarm's master the order is:\n\nmake_check() unless $delay_check;\n\n# contrib is built under the standard build step for msvc\nmake_contrib() unless ($using_msvc);\n\nmake_testmodules()\n if (!$using_msvc && ($branch eq 'HEAD' || $branch ge 'REL9_5'));\n\nmake_doc() if (check_optional_step('build_docs'));\n\nmake_install();\n\n# contrib is installed under standard install for msvc\nmake_contrib_install() unless ($using_msvc);\n\nmake_testmodules_install()\n if (!$using_msvc && ($branch eq 'HEAD' || $branch ge 'REL9_5'));\n\nmake_check() if $delay_check;\n\nprocess_module_hooks('configure');\n\nprocess_module_hooks('build');\n\nprocess_module_hooks(\"check\") unless $delay_check;\n\nprocess_module_hooks('install');\n\nprocess_module_hooks(\"check\") if $delay_check;\n\nrun_bin_tests();\n\nrun_misc_tests();\n\n... locale tests, ecpg, typedefs\n\nSeems like we ought to at least move run_bin_tests, run_misc_tests up?\n\n\nI'm not quite sure what the idea of $delay_check is. I found:\n> * a new --delay-check switch delays the check step until after\n> install. 
This helps work around a bug or lack of capacity w.r.t.\n> LD_LIBRARY_PATH on Alpine Linux\n\nin an release announcement. But no further details.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 10:51:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Seems what we need to fix the immediate issue is to ressurect:\n\n> # We need to make it use psql from our temporary installation,\n> # because otherwise the installcheck run below would try to\n> # use psql from the proper installation directory, which might\n> # be outdated or missing. But don't override anything else that's\n> # already in EXTRA_REGRESS_OPTS.\n> EXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --bindir='$bindir'\"\n> export EXTRA_REGRESS_OPTS\n\n> and put that into global scope.\n\nNot sure about that last bit. pg_upgrade has the issue of possibly\nwanting to deal with 2 installations, unlike the rest of the tree,\nso I'm not sure that fixing its problem means there's something we\nneed to change everywhere else.\n\n(IOW, keep an eye on the cross-version-upgrade tests while\nyou mess with this...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 14:06:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 14:06:47 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Seems what we need to fix the immediate issue is to ressurect:\n> \n> > # We need to make it use psql from our temporary installation,\n> > # because otherwise the installcheck run below would try to\n> > # use psql from the proper installation directory, which might\n> > # be outdated or missing. But don't override anything else that's\n> > # already in EXTRA_REGRESS_OPTS.\n> > EXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --bindir='$bindir'\"\n> > export EXTRA_REGRESS_OPTS\n> \n> > and put that into global scope.\n> \n> Not sure about that last bit. pg_upgrade has the issue of possibly\n> wanting to deal with 2 installations, unlike the rest of the tree,\n> so I'm not sure that fixing its problem means there's something we\n> need to change everywhere else.\n\nI'm not quite following? We need to move it into global scope to fix the\nissue at hand (namely that we currently need to make install first, just\nto get psql). And at which scope could it be in master, other than\nglobal?\n\nI do think we will have to move it to the global scope in the back\nbranches too, because NO_TEMP_INSTALL does indeed fail without a global\ninstall first (rather than using the temp install, as intended):\n\nOn 11:\n\n$ make -j16 -s uninstall\n$ make -j16 -s temp-install\n$ make -j16 -s -C src/bin/pg_upgrade/ check NO_TEMP_INSTALL=1\n...\n../../../src/test/regress/pg_regress --inputdir=/home/andres/src/postgresql-11/src/test/regress --bindir='/home/andres/build/postgres/11-assert//install/bin' --port=60851 --dlpath=. 
--max-concurrent-tests=20 --port=60851 --schedule=/home/andres/src/postgresql-11/src/test/regress/serial_schedule \n(using postmaster on /tmp/pg_upgrade_check-uEwhDs, port 60851)\n============== dropping database \"regression\" ==============\nsh: 1: /home/andres/build/postgres/11-assert//install/bin/psql: not found\n\n$ make -j16 -s install\n$ make -j16 -s -C src/bin/pg_upgrade/ check NO_TEMP_INSTALL=1 && echo success\n...\nsuccess\n\nAs you can see it uses pg_regress etc from the temp installation, but\npsql from the full installation.\n\n\n> (IOW, keep an eye on the cross-version-upgrade tests while\n> you mess with this...)\n\nI will. If you refer to the buildfarm ones: As far as I can tell they\ndon't use test.sh at all. Which makes sense, as we need cleanup steps\ninbetween the regression run and pg_upgrade, and test.sh doesn't allow\nfor that.\n\nhttps://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 11:20:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-22 14:06:47 -0400, Tom Lane wrote:\n>> Not sure about that last bit. pg_upgrade has the issue of possibly\n>> wanting to deal with 2 installations, unlike the rest of the tree,\n>> so I'm not sure that fixing its problem means there's something we\n>> need to change everywhere else.\n\n> I'm not quite following? We need to move it into global scope to fix the\n> issue at hand (namely that we currently need to make install first, just\n> to get psql). And at which scope could it be in master, other than\n> global?\n\nMaybe I misunderstood you --- I thought you were talking about something\nlike defining EXTRA_REGRESS_OPTS in Makefile.global. If you mean\nrunning this unconditionally within test.sh, I've got no objection\nto that.\n\n> I do think we will have to move it to the global scope in the back\n> branches too, because NO_TEMP_INSTALL does indeed fail without a global\n> install first (rather than using the temp install, as intended):\n\nAgreed, we should fix it in all branches, because it seems like it's\nprobably testing the wrong thing, ie using the later branch's psql\nto run the earlier branch's regression tests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 14:27:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 14:27:51 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-22 14:06:47 -0400, Tom Lane wrote:\n> >> Not sure about that last bit. pg_upgrade has the issue of possibly\n> >> wanting to deal with 2 installations, unlike the rest of the tree,\n> >> so I'm not sure that fixing its problem means there's something we\n> >> need to change everywhere else.\n> \n> > I'm not quite following? We need to move it into global scope to fix the\n> > issue at hand (namely that we currently need to make install first, just\n> > to get psql). And at which scope could it be in master, other than\n> > global?\n> \n> Maybe I misunderstood you --- I thought you were talking about something\n> like defining EXTRA_REGRESS_OPTS in Makefile.global. If you mean\n> running this unconditionally within test.sh, I've got no objection\n> to that.\n\nOh, yes, that's what I meant.\n\n\n> > I do think we will have to move it to the global scope in the back\n> > branches too, because NO_TEMP_INSTALL does indeed fail without a global\n> > install first (rather than using the temp install, as intended):\n> \n> Agreed, we should fix it in all branches, because it seems like it's\n> probably testing the wrong thing, ie using the later branch's psql\n> to run the earlier branch's regression tests.\n\nOk, will do.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 11:42:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "\nOn 5/22/19 2:42 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-05-22 14:27:51 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2019-05-22 14:06:47 -0400, Tom Lane wrote:\n>>>> Not sure about that last bit. pg_upgrade has the issue of possibly\n>>>> wanting to deal with 2 installations, unlike the rest of the tree,\n>>>> so I'm not sure that fixing its problem means there's something we\n>>>> need to change everywhere else.\n>>> I'm not quite following? We need to move it into global scope to fix the\n>>> issue at hand (namely that we currently need to make install first, just\n>>> to get psql). And at which scope could it be in master, other than\n>>> global?\n>> Maybe I misunderstood you --- I thought you were talking about something\n>> like defining EXTRA_REGRESS_OPTS in Makefile.global. If you mean\n>> running this unconditionally within test.sh, I've got no objection\n>> to that.\n> Oh, yes, that's what I meant.\n>\n>\n>>> I do think we will have to move it to the global scope in the back\n>>> branches too, because NO_TEMP_INSTALL does indeed fail without a global\n>>> install first (rather than using the temp install, as intended):\n>> Agreed, we should fix it in all branches, because it seems like it's\n>> probably testing the wrong thing, ie using the later branch's psql\n>> to run the earlier branch's regression tests.\n\n\n\nIf I disable install, the buildfarm fails the upgrade check even when\nnot using NO_TEMP_INSTALL.\n\n\nexcerpts from the log:\n\n\n\nrm -rf '/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install\n/bin/mkdir -p\n'/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install/log\nmake -C '../../..'\nDESTDIR='/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install\ninstall\n>'/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install/log/install.log\n2>&1\nmake -j1 
checkprep\n>>'/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install/log/install.log\n2>&1\nMAKE=make\nPATH=\"/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build/tmp_install/home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin:$PATH\"\nLD_LIBRARY_PATH=\"/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build/tmp_install/home/pgl/npgl/pg_head/bfroot/HEAD/inst/lib\" \nbindir=/home/pgl/npgl/pg_h\nead/bfroot/HEAD/pgsql.build/tmp_install//home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin\nEXTRA_REGRESS_OPTS=\"--port=5678\" /bin/sh\n/home/pgl/npgl/pg_head/src/bin/pg_upgrade/test.sh\n\n\nrm -rf ./testtablespace\nmkdir ./testtablespace\n../../../src/test/regress/pg_regress\n--inputdir=/home/pgl/npgl/pg_head/src/test/regress\n--bindir='/home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin' --port=5678\n--port=54464 --dlpath=. --max-concurrent-tests=20 --port=5678\n--port=54464\n--schedule=/home/pgl/npgl/pg_head/src/test/regress/parallel_schedule \n(using postmaster on /tmp/pg_upgrade_check-GCUkGu, port 54464)\n============== dropping database \"regression\" ==============\nsh: /home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin/psql: No such file or\ndirectory\ncommand failed: \"/home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin/psql\" -X -c\n\"DROP DATABASE IF EXISTS \\\"regression\\\"\" \"postgres\"\nmake[1]: *** [GNUmakefile:141: installcheck-parallel] Error 2\nmake[1]: Leaving directory\n'/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build/src/test/regress'\nmake: *** [GNUmakefile:68: installcheck-parallel] Error 2\nmake: Leaving directory '/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'\nwaiting for server to shut down.... done\nserver stopped\nmake: *** [Makefile:48: check] Error 1\n\n\n\nIt looks to me like the bindir needs to be passed to the make called by\ntest.sh (maybe LD_LIBRARY_PATH too?)\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 22 May 2019 16:04:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "On 2019-05-22 16:04:34 -0400, Andrew Dunstan wrote:\n> \n> On 5/22/19 2:42 PM, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2019-05-22 14:27:51 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> On 2019-05-22 14:06:47 -0400, Tom Lane wrote:\n> >>>> Not sure about that last bit. pg_upgrade has the issue of possibly\n> >>>> wanting to deal with 2 installations, unlike the rest of the tree,\n> >>>> so I'm not sure that fixing its problem means there's something we\n> >>>> need to change everywhere else.\n> >>> I'm not quite following? We need to move it into global scope to fix the\n> >>> issue at hand (namely that we currently need to make install first, just\n> >>> to get psql). And at which scope could it be in master, other than\n> >>> global?\n> >> Maybe I misunderstood you --- I thought you were talking about something\n> >> like defining EXTRA_REGRESS_OPTS in Makefile.global. If you mean\n> >> running this unconditionally within test.sh, I've got no objection\n> >> to that.\n> > Oh, yes, that's what I meant.\n> >\n> >\n> >>> I do think we will have to move it to the global scope in the back\n> >>> branches too, because NO_TEMP_INSTALL does indeed fail without a global\n> >>> install first (rather than using the temp install, as intended):\n> >> Agreed, we should fix it in all branches, because it seems like it's\n> >> probably testing the wrong thing, ie using the later branch's psql\n> >> to run the earlier branch's regression tests.\n> \n> \n> \n> If I disable install, the buildfarm fails the upgrade check even when\n> not using NO_TEMP_INSTALL.\n> \n> \n> excerpts from the log:\n> \n> \n> \n> rm -rf '/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install\n> /bin/mkdir -p\n> '/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install/log\n> make -C '../../..'\n> DESTDIR='/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install\n> install\n> 
>'/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install/log/install.log\n> 2>&1\n> make -j1� checkprep\n> >>'/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'/tmp_install/log/install.log\n> 2>&1\n> MAKE=make\n> PATH=\"/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build/tmp_install/home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin:$PATH\"\n> LD_LIBRARY_PATH=\"/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build/tmp_install/home/pgl/npgl/pg_head/bfroot/HEAD/inst/lib\"�\n> bindir=/home/pgl/npgl/pg_h\n> ead/bfroot/HEAD/pgsql.build/tmp_install//home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin\n> EXTRA_REGRESS_OPTS=\"--port=5678\" /bin/sh\n> /home/pgl/npgl/pg_head/src/bin/pg_upgrade/test.sh\n> \n> \n> rm -rf ./testtablespace\n> mkdir ./testtablespace\n> ../../../src/test/regress/pg_regress\n> --inputdir=/home/pgl/npgl/pg_head/src/test/regress\n> --bindir='/home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin'�� --port=5678\n> --port=54464 --dlpath=. --max-concurrent-tests=20 --port=5678\n> --port=54464\n> --schedule=/home/pgl/npgl/pg_head/src/test/regress/parallel_schedule�\n> (using postmaster on /tmp/pg_upgrade_check-GCUkGu, port 54464)\n> ============== dropping database \"regression\"�������� ==============\n> sh: /home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin/psql: No such file or\n> directory\n> command failed: \"/home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin/psql\" -X -c\n> \"DROP DATABASE IF EXISTS \\\"regression\\\"\" \"postgres\"\n> make[1]: *** [GNUmakefile:141: installcheck-parallel] Error 2\n> make[1]: Leaving directory\n> '/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build/src/test/regress'\n> make: *** [GNUmakefile:68: installcheck-parallel] Error 2\n> make: Leaving directory '/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build'\n> waiting for server to shut down.... done\n> server stopped\n> make: *** [Makefile:48: check] Error 1\n> \n\nThat's the issue I was talking to Tom about above. 
Need to\nunconditionally have\n\n+# We need to make pg_regress use psql from the desired installation\n+# (likely a temporary one), because otherwise the installcheck run\n+# below would try to use psql from the proper installation directory,\n+# which might be outdated or missing. But don't override anything else\n+# that's already in EXTRA_REGRESS_OPTS.\n+EXTRA_REGRESS_OPTS=\"$EXTRA_REGRESS_OPTS --bindir='$oldbindir'\"\n+export EXTRA_REGRESS_OPTS\n\nin all branches (i.e. ressurect in master, do it not just in the\n--install case in the back branches, and reference $oldbindir rather\nthan $bindir in all branches).\n\n\n> It looks to me like the bindir needs to be passed to the make called by\n> test.sh (maybe LD_LIBRARY_PATH too?)\n\nThink we don't need LD_LIBRARY_PATH, due to the $(with_temp_install)\nlogic in the makefile. In the back branches the --install branch\ncontains adjustments to LD_LIBRARY_PATH (but still references $bindir\nrather than $oldbindr).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 13:08:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 13:08:41 -0700, Andres Freund wrote:\n> On 2019-05-22 16:04:34 -0400, Andrew Dunstan wrote:\n> > If I disable install, the buildfarm fails the upgrade check even when\n> > not using NO_TEMP_INSTALL.\n> > \n> > \n> > excerpts from the log:\n> > sh: /home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin/psql: No such file or\n> > directory\n> \n> That's the issue I was talking to Tom about above. Need to\n> unconditionally have\n> ....\n\nAndrew, after the latest set of changes, the reversed order should now\nwork reliably?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 May 2019 08:22:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Andrew, after the latest set of changes, the reversed order should now\n> work reliably?\n\nAlso, Thomas should be able to revert his cfbot hack ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 May 2019 11:37:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
},
{
"msg_contents": "\nOn 5/24/19 11:22 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2019-05-22 13:08:41 -0700, Andres Freund wrote:\n>> On 2019-05-22 16:04:34 -0400, Andrew Dunstan wrote:\n>>> If I disable install, the buildfarm fails the upgrade check even when\n>>> not using NO_TEMP_INSTALL.\n>>>\n>>>\n>>> excerpts from the log:\n>>> sh: /home/pgl/npgl/pg_head/bfroot/HEAD/inst/bin/psql: No such file or\n>>> directory\n>> That's the issue I was talking to Tom about above. Need to\n>> unconditionally have\n>> ....\n> Andrew, after the latest set of changes, the reversed order should now\n> work reliably?\n>\n\n\nWith the latest changes I don't get the above failure:\n\n\nandrew@emma:pg_head (master)$ ~/bf/client-code/run_build.pl \n--skip-steps=install\nmaster:HEAD [19:21:45] creating vpath build dir\n/home/pgl/npgl/pg_head/bfroot/HEAD/pgsql.build ...\nmaster:HEAD [19:21:45] running configure ...\nmaster:HEAD [19:21:52] running make ...\nmaster:HEAD [19:23:42] running make check ...\nmaster:HEAD [19:24:37] running make contrib ...\nmaster:HEAD [19:24:45] running make testmodules ...\nmaster:HEAD [19:24:45] checking pg_upgrade\nmaster:HEAD [19:26:41] checking test-decoding\nmaster:HEAD [19:26:59] running make ecpg check ...\nmaster:HEAD [19:27:25] OK\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 24 May 2019 19:29:42 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Teach pg_upgrade test to honor NO_TEMP_INSTALL"
}
] |
[
{
"msg_contents": "I reviewed docs like this:\ngit log -p remotes/origin/REL_11_STABLE..HEAD -- doc\n\nAnd split some into separate patches, which may be useful at least for\nreviewing.\n\nI'm mailing now rather than after feature freeze to avoid duplicative work and\nsee if there's any issue.\n\nNote, I also/already mailed this one separately:\n|Clean up docs for log_statement_sample_rate..\nhttps://www.postgresql.org/message-id/flat/20190328135918.GA27808%40telsasoft.com\n\nJustin",
"msg_date": "Sat, 30 Mar 2019 17:43:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "clean up docs for v12"
},
{
"msg_contents": "Find attached updated patches for v12 docs.\n\nNote that Alvaro applied an early patch for log_statement_sample_rate, but\nunfortunately I hadn't sent a v2 patch with additional change from myon, so\nthere's one remaining hunk included here.\n\nIf needed I can split up differently for review, or resend a couple on separate\nthreads, or resend inline.\n\nPatches are currently optimized for review, but maybe should be squished into\none and/or reindented before merging.\n\nJustin",
"msg_date": "Mon, 8 Apr 2019 09:18:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 09:18:28AM -0500, Justin Pryzby wrote:\n> Find attached updated patches for v12 docs.\n\nThanks for taking the time to dig into such things.\n\n> Note that Alvaro applied an early patch for log_statement_sample_rate, but\n> unfortunately I hadn't sent a v2 patch with additional change from myon, so\n> there's one remaining hunk included here.\n\nThis was in 0001. Committed separately from the rest as the author\nand discussion are different.\n\n> If needed I can split up differently for review, or resend a couple on separate\n> threads, or resend inline.\n>\n> Patches are currently optimized for review, but maybe should be squished into\n> one and/or reindented before merging.\n\nThat's helpful. However most of what you are proposing does not seem\nnecessary, and the current phrasing looks correct English to me, but I\nam not a native speaker. I am particularly referring to patches 0005\n(publications use \"a superuser\" in error messages as well which could\nbe fixed as well?), 0006, 0007, 0008, 0011 and 0012. I have committed\nthe most obvious mistakes extracted your patch set though.\n\nHere are some comments about portions which need more work based on\nwhat I looked at.\n\n- * Check if's guaranteed the all the desired attributes are available in\n+ * Check if it's guaranteed that all the desired attributes are available in\n * tuple. If so, we can start deforming. If not, need to make sure to\n\t * fetch the missing columns.\nHere I think that we should have \"Check if all the desired attributes\nare available in the tuple.\" for the first sentence.\n\n * If this is the first attribute, slot->tts_nvalid was 0. Therefore\n- * reset offset to 0 to, it be from a previous execution.\n+ * also reset offset to 0, it may be from a previous execution.\nThe last part should be \"as it may be from a previous execution\"?\n\nAndres, perhaps you have comments on these?\n--\nMichael",
"msg_date": "Fri, 19 Apr 2019 17:00:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Thanks for committing those portions.\n\nOn Fri, Apr 19, 2019 at 05:00:26PM +0900, Michael Paquier wrote:\n> I am particularly referring to patches 0005\n> (publications use \"a superuser\" in error messages as well which could\n> be fixed as well?),\n\nI deliberately avoided changing thesee \"errhint\" messages, since the style\nguide indicates that errhint should be a complete sentence.\n\npryzbyj@pryzbyj:~/src/postgres$ git grep 'must be a superuser' |grep errhint\nsrc/backend/commands/event_trigger.c: errhint(\"The owner of an event trigger must be a superuser.\")));\nsrc/backend/commands/foreigncmds.c: errhint(\"The owner of a foreign-data wrapper must be a superuser.\")));\nsrc/backend/commands/publicationcmds.c: errhint(\"The owner of a FOR ALL TABLES publication must be a superuser.\")));\nsrc/backend/commands/subscriptioncmds.c: errhint(\"The owner of a subscription must be a superuser.\")));\n\nJustin\n\n\n",
"msg_date": "Fri, 19 Apr 2019 09:43:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Fri, Apr 19, 2019 at 09:43:01AM -0500, Justin Pryzby wrote:\n> Thanks for committing those portions.\n\nI have done an extra pass on your patch set to make sure that I am\nmissing nothing, and the last two remaining places which need some\ntweaks are the comments from the JIT code you pointed out. Attached\nis a patch with these adjustments.\n--\nMichael",
"msg_date": "Mon, 22 Apr 2019 14:48:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 14:48:26 +0900, Michael Paquier wrote:\n> On Fri, Apr 19, 2019 at 09:43:01AM -0500, Justin Pryzby wrote:\n> > Thanks for committing those portions.\n> \n> I have done an extra pass on your patch set to make sure that I am\n> missing nothing, and the last two remaining places which need some\n> tweaks are the comments from the JIT code you pointed out. Attached\n> is a patch with these adjustments.\n> --\n> Michael\n\n> diff --git a/src/backend/jit/llvm/llvmjit_deform.c b/src/backend/jit/llvm/llvmjit_deform.c\n> index 94b4635218..e7aa92e274 100644\n> --- a/src/backend/jit/llvm/llvmjit_deform.c\n> +++ b/src/backend/jit/llvm/llvmjit_deform.c\n> @@ -298,9 +298,9 @@ slot_compile_deform(LLVMJitContext *context, TupleDesc desc,\n> \t}\n> \n> \t/*\n> -\t * Check if's guaranteed the all the desired attributes are available in\n> -\t * tuple. If so, we can start deforming. If not, need to make sure to\n> -\t * fetch the missing columns.\n> +\t * Check if all the desired attributes are available in the tuple. If so,\n> +\t * we can start deforming. If not, we need to make sure to fetch the\n> +\t * missing columns.\n> \t */\n\nThat's imo not an improvement. The guaranteed bit is actually\nrelevant. What this block is doing is eliding the check against the\ntuple header for the number of attributes, if NOT NULL attributes for\nlater columns guarantee that the desired columns are present in the NULL\nbitmap. But the rephrasing makes it sound like we're actually checking\nagainst the tuple.\n\nI think it'd be better just to fix s/the all/that all/.\n\n\n> \tif ((natts - 1) <= guaranteed_column_number)\n> \t{\n> @@ -383,7 +383,7 @@ slot_compile_deform(LLVMJitContext *context, TupleDesc desc,\n> \n> \t\t/*\n> \t\t * If this is the first attribute, slot->tts_nvalid was 0. 
Therefore\n> -\t\t * reset offset to 0 to, it be from a previous execution.\n> +\t\t * reset offset to 0 too, as it may be from a previous execution.\n> \t\t */\n> \t\tif (attnum == 0)\n> \t\t{\n\nThat obviously makes sense.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:08:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On 2019-Apr-22, Andres Freund wrote:\n\n> On 2019-04-22 14:48:26 +0900, Michael Paquier wrote:\n\n> > \t/*\n> > -\t * Check if's guaranteed the all the desired attributes are available in\n> > -\t * tuple. If so, we can start deforming. If not, need to make sure to\n> > -\t * fetch the missing columns.\n> > +\t * Check if all the desired attributes are available in the tuple. If so,\n> > +\t * we can start deforming. If not, we need to make sure to fetch the\n> > +\t * missing columns.\n> > \t */\n> \n> That's imo not an improvement. The guaranteed bit is actually\n> relevant. What this block is doing is eliding the check against the\n> tuple header for the number of attributes, if NOT NULL attributes for\n> later columns guarantee that the desired columns are present in the NULL\n> bitmap. But the rephrasing makes it sound like we're actually checking\n> against the tuple.\n> \n> I think it'd be better just to fix s/the all/that all/.\n\n(and s/if's/if it's/)\n\n> \n> > \tif ((natts - 1) <= guaranteed_column_number)\n> > \t{\n> > @@ -383,7 +383,7 @@ slot_compile_deform(LLVMJitContext *context, TupleDesc desc,\n> > \n> > \t\t/*\n> > \t\t * If this is the first attribute, slot->tts_nvalid was 0. Therefore\n> > -\t\t * reset offset to 0 to, it be from a previous execution.\n> > +\t\t * reset offset to 0 too, as it may be from a previous execution.\n> > \t\t */\n> > \t\tif (attnum == 0)\n> > \t\t{\n> \n> That obviously makes sense.\n\nHmm, I think \"as it *is*\", not \"as it *may be*\", right?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Apr 2019 12:19:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Apr-22, Andres Freund wrote:\n>> On 2019-04-22 14:48:26 +0900, Michael Paquier wrote:\n>>> /*\n>>> -\t * Check if's guaranteed the all the desired attributes are available in\n>>> -\t * tuple. If so, we can start deforming. If not, need to make sure to\n>>> -\t * fetch the missing columns.\n>>> +\t * Check if all the desired attributes are available in the tuple. If so,\n>>> +\t * we can start deforming. If not, we need to make sure to fetch the\n>>> +\t * missing columns.\n>>> */\n\n>> That's imo not an improvement. The guaranteed bit is actually\n>> relevant. What this block is doing is eliding the check against the\n>> tuple header for the number of attributes, if NOT NULL attributes for\n>> later columns guarantee that the desired columns are present in the NULL\n>> bitmap. But the rephrasing makes it sound like we're actually checking\n>> against the tuple.\n>> \n>> I think it'd be better just to fix s/the all/that all/.\n\n> (and s/if's/if it's/)\n\nISTM that Michael's proposed wording change shows that the existing\ncomment is easily misinterpreted. I don't think these minor grammatical\nfixes will avoid the misinterpretation problem, and so some more-extensive\nrewording is called for.\n\nBut TBH, now that I look at the code, I think the entire optimization\nis a bad idea and should be removed. Am I right in thinking that the\npresence of a wrong attnotnull marker could cause the generated code to\nactually crash, thanks to not checking the tuple's natts field? I don't\nhave enough faith in our enforcement of those constraints to want to see\nJIT taking that risk to save a nanosecond or two.\n\n(Possibly I'd not think this if I weren't fresh off a couple of days\nwith my nose in the ALTER TABLE SET NOT NULL code. But right now,\nI think that believing that that code does not and never will have\nany bugs is just damfool.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 12:33:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 12:33:24 -0400, Tom Lane wrote:\n> ISTM that Michael's proposed wording change shows that the existing\n> comment is easily misinterpreted. I don't think these minor grammatical\n> fixes will avoid the misinterpretation problem, and so some more-extensive\n> rewording is called for.\n\nFair enough.\n\n\n> But TBH, now that I look at the code, I think the entire optimization\n> is a bad idea and should be removed. Am I right in thinking that the\n> presence of a wrong attnotnull marker could cause the generated code to\n> actually crash, thanks to not checking the tuple's natts field? I don't\n> have enough faith in our enforcement of those constraints to want to see\n> JIT taking that risk to save a nanosecond or two.\n\nIt's not a minor optimization, it's very measurable. Without the check\nthere's no pipeline stall when the memory for the tuple header is not in\nthe CPU cache (very common, especially for seqscans and such, due to the\n\"backward\" memory location ordering of tuples in seqscans, which CPUs\ndon't predict). Server grade CPUs of the last ~5 years just march on and\nstart the work to fetch the first attributes (especially if they're NOT\nNULL) - but can't do that if natts has to be checked. And starting to\ncheck the NULL bitmap for NOT NULL attributes, would make that even\nworse - and would required if we don't trust attnotnull.\n\n\n> (Possibly I'd not think this if I weren't fresh off a couple of days\n> with my nose in the ALTER TABLE SET NOT NULL code. But right now,\n> I think that believing that that code does not and never will have\n> any bugs is just damfool.)\n\nBut there's plenty places where we rely on NOT NULL actually working?\nWe'll return wrong query results, and even crash in non-JIT places\nbecause we thought there was guaranteed to be datum?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:43:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 09:43:56 -0700, Andres Freund wrote:\n> On 2019-04-22 12:33:24 -0400, Tom Lane wrote:\n> > ISTM that Michael's proposed wording change shows that the existing\n> > comment is easily misinterpreted. I don't think these minor grammatical\n> > fixes will avoid the misinterpretation problem, and so some more-extensive\n> > rewording is called for.\n> \n> Fair enough.\n\nThe computation of that variable above has:\n\n\t\t * If the column is possibly missing, we can't rely on its (or\n\t\t * subsequent) NOT NULL constraints to indicate minimum attributes in\n\t\t * the tuple, so stop here.\n\t\t */\n\t\tif (att->atthasmissing)\n\t\t\tbreak;\n\n\t\t/*\n\t\t * Column is NOT NULL and there've been no preceding missing columns,\n\t\t * it's guaranteed that all columns up to here exist at least in the\n\t\t * NULL bitmap.\n\t\t */\n\t\tif (att->attnotnull)\n\t\t\tguaranteed_column_number = attnum;\n\nand only then the comment referenced in the discussion here follows:\n\t/*\n\t * Check if's guaranteed the all the desired attributes are available in\n\t * tuple. If so, we can start deforming. If not, need to make sure to\n\t * fetch the missing columns.\n\t */\n\n\nI think just reformulating that to something like\n\n\t/*\n\t * Check if it's guaranteed that all the desired attributes are available\n\t * in the tuple (but still possibly NULL), by dint of either the last\n\t * to-be-deformed column being NOT NULL, or subsequent ones not accessed\n\t * here being NOT NULL. If that's not guaranteed the tuple headers natt's\n\t * has to be checked, and missing attributes potentially have to be\n\t * fetched (using slot_getmissingattrs().\n\t*/\n\nshould make that clearer?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 09:53:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
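The comment Andres reformulates in the preceding message describes two deforming paths: a fast path when column presence is guaranteed, and a fallback that checks the tuple header's natts and fetches "missing" attributes. The fallback can be illustrated with a small standalone sketch; this is a hypothetical Python model for illustration only (the `deform` function and its parameters are invented, not PostgreSQL's actual API):

```python
# Hypothetical model of the fallback path described above: when the
# guarantee doesn't hold, the deformer must consult the tuple's own
# attribute count and supply stored "missing" defaults for attributes
# that the on-disk tuple predates (analogous to slot_getmissingattrs()).

def deform(tuple_values, missing_defaults, wanted):
    """tuple_values: attributes physically stored in the tuple.
    missing_defaults: per-column default recorded for columns added later.
    wanted: number of leading attributes the caller asked for."""
    natts = len(tuple_values)  # the header check elided on the fast path
    out = []
    for attnum in range(wanted):
        if attnum < natts:
            # attribute physically present in the stored tuple
            out.append(tuple_values[attnum])
        else:
            # attribute absent: fill in the recorded "missing" value
            out.append(missing_defaults[attnum])
    return out
```

The point of the optimization debated in this thread is that the `attnum < natts` comparison (and the pipeline stall of loading the tuple header it implies) can be skipped entirely when metadata proves every wanted attribute is present.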
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-22 12:33:24 -0400, Tom Lane wrote:\n>> But TBH, now that I look at the code, I think the entire optimization\n>> is a bad idea and should be removed. Am I right in thinking that the\n>> presence of a wrong attnotnull marker could cause the generated code to\n>> actually crash, thanks to not checking the tuple's natts field? I don't\n>> have enough faith in our enforcement of those constraints to want to see\n>> JIT taking that risk to save a nanosecond or two.\n\n> It's not a minor optimization, it's very measurable.\n\nDoesn't matter, if it's unsafe.\n\n>> (Possibly I'd not think this if I weren't fresh off a couple of days\n>> with my nose in the ALTER TABLE SET NOT NULL code. But right now,\n>> I think that believing that that code does not and never will have\n>> any bugs is just damfool.)\n\n> But there's plenty places where we rely on NOT NULL actually working?\n\nI do not think there are any other places where we make this particular\nassumption. 
Given the number of ways in which we rely on there being\nnatts checks to avoid rewriting tables, I'm very afraid of the idea\nthat JIT is making more assumptions than the mainline code does.\n\nIn hopes of putting some fear into you too, I exhibit the following\nbehavior, which is not a bug according to our current definitions:\n\nregression=# create table pp(f1 int);\nCREATE TABLE\nregression=# create table cc() inherits (pp);\nCREATE TABLE\nregression=# insert into cc values(1);\nINSERT 0 1\nregression=# insert into cc values(2);\nINSERT 0 1\nregression=# insert into cc values(null);\nINSERT 0 1\nregression=# alter table pp add column f2 text;\nALTER TABLE\nregression=# alter table pp add column f3 text;\nALTER TABLE\nregression=# alter table only pp alter f3 set not null;\nALTER TABLE\nregression=# select * from pp;\n f1 | f2 | f3 \n----+----+----\n 1 | | \n 2 | | \n | | \n(3 rows)\n\nThe tuples coming out of cc will still have natts = 1, I believe.\nIf they were deformed according to pp's tupdesc, there'd be a\nproblem. Now, we shouldn't do that, because this is not the only\npossible discrepancy between parent and child tupdescs --- but\nI think this example shows that attnotnull is a lot spongier\nthan you are assuming, even without considering the possibility\nof outright bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 13:18:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The computation of that variable above has:\n\n> \t\t * If the column is possibly missing, we can't rely on its (or\n> \t\t * subsequent) NOT NULL constraints to indicate minimum attributes in\n> \t\t * the tuple, so stop here.\n> \t\t */\n> \t\tif (att->atthasmissing)\n> \t\t\tbreak;\n\nBTW, why do we have to stop? ISTM that a not-null column without\natthasmissing is enough to prove this, regardless of the state of prior\ncolumns. (This is assuming that you trust attnotnull for this, which\nas I said I don't, but that's not relevant to this question.) I wonder\nalso if it wouldn't be smart to explicitly check that the \"guaranteeing\"\ncolumn is not attisdropped.\n\n> I think just reformulating that to something like\n\n> \t/*\n> \t * Check if it's guaranteed that all the desired attributes are available\n> \t * in the tuple (but still possibly NULL), by dint of either the last\n> \t * to-be-deformed column being NOT NULL, or subsequent ones not accessed\n> \t * here being NOT NULL. If that's not guaranteed the tuple headers natt's\n> \t * has to be checked, and missing attributes potentially have to be\n> \t * fetched (using slot_getmissingattrs().\n> \t*/\n\n> should make that clearer?\n\nOK by me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 13:27:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 13:27:17 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The computation of that variable above has:\n> \n> > \t\t * If the column is possibly missing, we can't rely on its (or\n> > \t\t * subsequent) NOT NULL constraints to indicate minimum attributes in\n> > \t\t * the tuple, so stop here.\n> > \t\t */\n> > \t\tif (att->atthasmissing)\n> > \t\t\tbreak;\n> \n> BTW, why do we have to stop? ISTM that a not-null column without\n> atthasmissing is enough to prove this, regardless of the state of prior\n> columns. (This is assuming that you trust attnotnull for this, which\n> as I said I don't, but that's not relevant to this question.)\n\nAre you wondering if we could also use this kind of logic to infer the\nlength of the null bitmap if there's preceding columns with\natthasmissing true as long as there's a later !hasmissing column that's\nNOT NULL? Right. The logic could be made more powerful - I implemented\nthe above after Andrew's commit of fast-not-null broke JIT (not because\nof that logic, but because it simply didn't look up the missing\ncolumns). I assume it doesn't terribly matter to be fast once\nattributes after a previously missing one are accessed - it's likely not\ngoing to be the hotly accessed data?\n\n\n> I wonder\n> also if it wouldn't be smart to explicitly check that the \"guaranteeing\"\n> column is not attisdropped.\n\nYea, that probably would be smart. I don't think there's an active\nproblem, because we remove NOT NULL when deleting an attribute, but it\nseems good to be doubly sure / explain why that's safe:\n\n\t\t/* Remove any NOT NULL constraint the column may have */\n\t\tattStruct->attnotnull = false;\n\nI'm a bit unsure whether to make it an assert, elog(ERROR) or just not\nassume column presence?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 10:55:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-22 13:27:17 -0400, Tom Lane wrote:\n>> I wonder\n>> also if it wouldn't be smart to explicitly check that the \"guaranteeing\"\n>> column is not attisdropped.\n\n> Yea, that probably would be smart. I don't think there's an active\n> problem, because we remove NOT NULL when deleting an attribute, but it\n> seems good to be doubly sure / explain why that's safe:\n> \t\t/* Remove any NOT NULL constraint the column may have */\n> \t\tattStruct->attnotnull = false;\n> I'm a bit unsure whether to make it an assert, elog(ERROR) or just not\n> assume column presence?\n\nI'd just make the code look like\n\n /*\n * If it's NOT NULL then it must be present in every tuple,\n * unless there's a \"missing\" entry that could provide a non-null\n * value for it. Out of paranoia, also check !attisdropped.\n */\n if (att->attnotnull &&\n !att->atthasmissing &&\n !att->attisdropped)\n guaranteed_column_number = attnum;\n\nI don't think the extra check is so expensive as to be worth obsessing\nover.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Apr 2019 14:17:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 14:17:48 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-22 13:27:17 -0400, Tom Lane wrote:\n> >> I wonder\n> >> also if it wouldn't be smart to explicitly check that the \"guaranteeing\"\n> >> column is not attisdropped.\n> \n> > Yea, that probably would be smart. I don't think there's an active\n> > problem, because we remove NOT NULL when deleting an attribute, but it\n> > seems good to be doubly sure / explain why that's safe:\n> > \t\t/* Remove any NOT NULL constraint the column may have */\n> > \t\tattStruct->attnotnull = false;\n> > I'm a bit unsure whether to make it an assert, elog(ERROR) or just not\n> > assume column presence?\n> \n> I'd just make the code look like\n> \n> /*\n> * If it's NOT NULL then it must be present in every tuple,\n> * unless there's a \"missing\" entry that could provide a non-null\n> * value for it. Out of paranoia, also check !attisdropped.\n> */\n> if (att->attnotnull &&\n> !att->atthasmissing &&\n> !att->attisdropped)\n> guaranteed_column_number = attnum;\n> \n> I don't think the extra check is so expensive as to be worth obsessing\n> over.\n\nOh, yea, the cost is irrelevant here - it's one-off work basically, and\npales in comparison to the cost of JITing. I was more thinking about\nwhether it's worth \"escalating\" the violation of assumptions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:22:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
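The check Tom sketches above amounts to a small inference over per-column catalog flags. The loop it sits in can be modeled as a standalone routine; the following is a hedged Python sketch of that logic (the dict-based column representation is invented for illustration, not the C implementation's actual structures):

```python
# Hypothetical sketch of the guaranteed_column_number inference discussed
# above: scan columns in order; a column with a "missing" default ends the
# inference, while a NOT NULL, non-dropped column proves that every tuple
# stores at least this many attributes, at minimum as NULL-bitmap entries.

def guaranteed_column_number(columns):
    guaranteed = -1  # sentinel: no column is guaranteed to be present
    for attnum, att in enumerate(columns):
        # A possibly-missing column may be absent from old tuples, so
        # later NOT NULL markers no longer bound the bitmap length.
        if att["hasmissing"]:
            break
        # NOT NULL (and, out of paranoia, not dropped) means the column
        # must be present in every tuple.
        if att["notnull"] and not att["dropped"]:
            guaranteed = attnum
    return guaranteed
```

With this value in hand, the tuple header's attribute count need not be consulted whenever `(natts - 1) <= guaranteed_column_number`, as in the diff quoted earlier in the thread.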
{
"msg_contents": "Hi,\n\nOn 2019-04-22 13:18:18 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> (Possibly I'd not think this if I weren't fresh off a couple of days\n> >> with my nose in the ALTER TABLE SET NOT NULL code. But right now,\n> >> I think that believing that that code does not and never will have\n> >> any bugs is just damfool.)\n> \n> > But there's plenty places where we rely on NOT NULL actually working?\n> \n> I do not think there are any other places where we make this particular\n> assumption.\n\nSure, not exactly the assumtion that JITed deforming benefits from, but\nas far as I can tell, plenty things would be broken just as well if we\nallowed NOT NULL columns to not be present (whether \"physically\" present\nor present via atthasmissing) for tuples in a table. Fast defaults\nwouldn't work, Assert(!isnull) checks would fire, primary keys would be\nbroken etc.\n\n\n\n> In hopes of putting some fear into you too, I exhibit the following\n> behavior, which is not a bug according to our current definitions:\n> \n> regression=# create table pp(f1 int);\n> CREATE TABLE\n> regression=# create table cc() inherits (pp);\n> CREATE TABLE\n> regression=# insert into cc values(1);\n> INSERT 0 1\n> regression=# insert into cc values(2);\n> INSERT 0 1\n> regression=# insert into cc values(null);\n> INSERT 0 1\n> regression=# alter table pp add column f2 text;\n> ALTER TABLE\n> regression=# alter table pp add column f3 text;\n> ALTER TABLE\n> regression=# alter table only pp alter f3 set not null;\n> ALTER TABLE\n> regression=# select * from pp;\n> f1 | f2 | f3 \n> ----+----+----\n> 1 | | \n> 2 | | \n> | | \n> (3 rows)\n> \n> The tuples coming out of cc will still have natts = 1, I believe.\n> If they were deformed according to pp's tupdesc, there'd be a\n> problem. 
Now, we shouldn't do that, because this is not the only\n> possible discrepancy between parent and child tupdescs --- but\n> I think this example shows that attnotnull is a lot spongier\n> than you are assuming, even without considering the possibility\n> of outright bugs.\n\nUnortunately it doesn't really put the fear into me - given that\nattribute numbers don't even have to match between inheritance children,\nmaking inferrences about the length of the NULL bitmap seems peanuts\ncompared to the breakage of using the wrong tupdesc to deform.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Apr 2019 11:22:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Mon, Apr 22, 2019 at 12:19:55PM -0400, Alvaro Herrera wrote:\n> On 2019-Apr-22, Andres Freund wrote:\n>> I think it'd be better just to fix s/the all/that all/.\n> \n> (and s/if's/if it's/)\n\nFWIW, I have noticed that part when gathering all the pieces for what\nbecame 148266f, still the full paragraph was sort of confusing, so I\nhave just fixed the most obvious issues reported first.\n--\nMichael",
"msg_date": "Tue, 23 Apr 2019 11:50:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn Tue, Apr 23, 2019 at 11:50:42AM +0900, Michael Paquier wrote:\n> On Mon, Apr 22, 2019 at 12:19:55PM -0400, Alvaro Herrera wrote:\n> > On 2019-Apr-22, Andres Freund wrote:\n> >> I think it'd be better just to fix s/the all/that all/.\n> > \n> > (and s/if's/if it's/)\n> \n> FWIW, I have noticed that part when gathering all the pieces for what\n> became 148266f, still the full paragraph was sort of confusing, so I\n> have just fixed the most obvious issues reported first.\n\nI saw you closed the item here:\nhttps://wiki.postgresql.org/index.php?title=PostgreSQL_12_Open_Items&diff=33390&oldid=33389\n\nBut I think the biggest part of the patch is still not even reviewed ?\nI'm referring to ./*review-docs-for-pg12dev.patch \n\nI haven't updated the JIT changes since there's larger discussion regarding the\ncode.\n\nJustin\n\n\n",
"msg_date": "Fri, 26 Apr 2019 12:17:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 12:17:22PM -0500, Justin Pryzby wrote:\n> But I think the biggest part of the patch is still not even reviewed ?\n> I'm referring to ./*review-docs-for-pg12dev.patch\n\nNope. I looked at the patch, and as mentioned upthread the suggested\nchanges did not seem like improvements as the existing sentences make\nsense, at least to me. Do you have any particular part of your patch\nwhere you think your wording is an improvement? Why do you think so?\n--\nMichael",
"msg_date": "Sat, 27 Apr 2019 09:44:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 09:44:20AM +0900, Michael Paquier wrote:\n> On Fri, Apr 26, 2019 at 12:17:22PM -0500, Justin Pryzby wrote:\n> > But I think the biggest part of the patch is still not even reviewed ?\n> > I'm referring to ./*review-docs-for-pg12dev.patch\n> \n> Nope. I looked at the patch, and as mentioned upthread the suggested\n> changes did not seem like improvements as the existing sentences make\n> sense, at least to me. Do you have any particular part of your patch\n> where you think your wording is an improvement? Why do you think so?\n\nThat's mostly new language from v12 commits which I specifically reviewed and\nworth cleaning up before release.\n\nIf nobody else is interested then I'll forget about it, but they're *all*\n(minor) improvements IMO. \n\nI don't think it's be useful to enumerate justifications for each hunk; if one\nof them isn't agreed to be an improvement, I'd just remove it.\n\nBut here's some one-liner excerpts.\n\n- is <literal>2</literal> bits and maximum is <literal>4095</literal>. Parameters for\n+ is <literal>2</literal> bits and the maximum is <literal>4095</literal>. Parameters for\n\nAdding \"the\" makes it a complete sentence and not a fragment.\n\n- all autovacuum actions. Minus-one (the default) disables logging\n+ all autovacuum actions. <literal>-1</literal> (the default) disables logging\n\nThere's nothing else that says \"minus-one\" anywhere else on that page. 
I just\nfound one in auto-explain.sgml, which I changed.\n\n- than 16KB; <function>gss_wrap_size_limit()</function> should be used by the\n+ than 16kB; <function>gss_wrap_size_limit()</function> should be used by the\n\nEvery other use in documentation has a lowercase \"kay\", and PG itself doesn't\naccept \"KB\" unit suffix.\n\n- A few features included in the C99 standard are, at this time, not be\n+ A few features included in the C99 standard are, at this time, not\n permitted to be used in core <productname>PostgreSQL</productname>\n\nIndisputably wrong ?\n\nJustin\n\n\n",
"msg_date": "Fri, 26 Apr 2019 21:56:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Fri, Apr 26, 2019 at 09:56:47PM -0500, Justin Pryzby wrote:\n> But here's some one-liner excerpts.\n> \n> - is <literal>2</literal> bits and maximum is <literal>4095</literal>. Parameters for\n> + is <literal>2</literal> bits and the maximum is <literal>4095</literal>. Parameters for\n> \n> Adding \"the\" makes it a complete sentence and not a fragment.\n\nNot sure here either that it matters.\n\n> - all autovacuum actions. Minus-one (the default) disables logging\n> + all autovacuum actions. <literal>-1</literal> (the default) disables logging\n> \n> There's nothing else that says \"minus-one\" anywhere else on that page. I just\n> found one in auto-explain.sgml, which I changed.\n\nThat's one of these I am not sure about.\n\n> - than 16KB; <function>gss_wrap_size_limit()</function> should be used by the\n> + than 16kB; <function>gss_wrap_size_limit()</function> should be used by the\n> \n> Every other use in documentation has a lowercase \"kay\", and PG itself doesn't\n> accept \"KB\" unit suffix.\n\nRight. There are more places like that, particularly in the comments\nof the code.\n\n> - A few features included in the C99 standard are, at this time, not be\n> + A few features included in the C99 standard are, at this time, not\n> permitted to be used in core <productname>PostgreSQL</productname>\n> \n> Indisputably wrong ?\n\nYep, this one is wrong as-is.\n--\nMichael",
"msg_date": "Sat, 27 Apr 2019 13:57:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Apr 26, 2019 at 09:56:47PM -0500, Justin Pryzby wrote:\n>> - all autovacuum actions. Minus-one (the default) disables logging\n>> + all autovacuum actions. <literal>-1</literal> (the default) disables logging\n>> \n>> There's nothing else that says \"minus-one\" anywhere else on that page. I just\n>> found one in auto-explain.sgml, which I changed.\n\n> That's one of these I am not sure about.\n\nFWIW, I think we generally write this the way Justin suggests. It's\nmore precise, at least if you're reading it in a way that makes\n<literal> text distinguishable from plain text: what to put into\nthe config file is exactly \"-1\", and not for instance \"minus-one\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Apr 2019 11:10:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Sat, Apr 27, 2019 at 11:10:46AM -0400, Tom Lane wrote:\n> FWIW, I think we generally write this the way Justin suggests. It's\n> more precise, at least if you're reading it in a way that makes\n> <literal> text distinguishable from plain text: what to put into\n> the config file is exactly \"-1\", and not for instance \"minus-one\".\n\nOkay, sold. I have done and extra pass on v2-0002 and included again\nsome obvious mistakes. I have noticed a couple of things on the way.\n--\nMichael",
"msg_date": "Sun, 28 Apr 2019 23:05:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-22 14:17:48 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-04-22 13:27:17 -0400, Tom Lane wrote:\n> >> I wonder\n> >> also if it wouldn't be smart to explicitly check that the \"guaranteeing\"\n> >> column is not attisdropped.\n> \n> > Yea, that probably would be smart. I don't think there's an active\n> > problem, because we remove NOT NULL when deleting an attribute, but it\n> > seems good to be doubly sure / explain why that's safe:\n> > \t\t/* Remove any NOT NULL constraint the column may have */\n> > \t\tattStruct->attnotnull = false;\n> > I'm a bit unsure whether to make it an assert, elog(ERROR) or just not\n> > assume column presence?\n> \n> I'd just make the code look like\n> \n> /*\n> * If it's NOT NULL then it must be present in every tuple,\n> * unless there's a \"missing\" entry that could provide a non-null\n> * value for it. Out of paranoia, also check !attisdropped.\n> */\n> if (att->attnotnull &&\n> !att->atthasmissing &&\n> !att->attisdropped)\n> guaranteed_column_number = attnum;\n> \n> I don't think the extra check is so expensive as to be worth obsessing\n> over.\n\nPushed. Did so separately from Justin's changes, since this is a small\nfunctional change.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 16:48:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-08 09:18:28 -0500, Justin Pryzby wrote:\n> From aae1a84b74436951222dba42b21de284ed8b1ac9 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sat, 30 Mar 2019 17:24:35 -0500\n> Subject: [PATCH v2 03/12] JIT typos..\n> \n> ..which I sent to Andres some time ago and which I noticed were never applied\n> (nor rejected).\n> \n> https://www.postgresql.org/message-id/20181127184133.GM10913%40telsasoft.com\n> ---\n> src/backend/jit/llvm/llvmjit_deform.c | 22 +++++++++++-----------\n> src/backend/jit/llvm/llvmjit_inline.cpp | 2 +-\n> 2 files changed, 12 insertions(+), 12 deletions(-)\n\nI pushed these, minus the ones that were obsoleted by the slightly\nlarger changes resulting from the discussion of this patch.\n\nThanks for the patch!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 30 Apr 2019 16:50:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Sat, Mar 30, 2019 at 05:43:33PM -0500, Justin Pryzby wrote:\n> I reviewed docs like this:\n> git log -p remotes/origin/REL_11_STABLE..HEAD -- doc\n\nOn Fri, Apr 19, 2019 at 05:00:26PM +0900, Michael Paquier wrote:\n> However most of what you are proposing does not seem necessary, and the\n> current phrasing looks correct English to me, but I am not a native speaker.\n\nMichael reviewed and committed several portions of previous version of this\npatch, but I think each of the remaining changes are individually each minor\nimprovements and combine are a significant improvement, so I'm requesting\nadditional review. I think it'd be much better than submission and review of\ndozens of separate mails while individually rediscovering the same things..\n\nThanks in advance for any review.\n\nJustin",
"msg_date": "Mon, 20 May 2019 13:20:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-20 13:20:01 -0500, Justin Pryzby wrote:\n> On Sat, Mar 30, 2019 at 05:43:33PM -0500, Justin Pryzby wrote:\n> Thanks in advance for any review.\n\nI find these pretty tedious to work with. I'm somewhat dyslexic, not a\nnative speaker. So it requires a lot of concentration to go through\nthem...\n\n\n> @@ -3052,7 +3052,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n> simplifies <command>ATTACH/DETACH PARTITION</command> operations:\n> the partition dependencies need only be added or removed.\n> Example: a child partitioned index is made partition-dependent\n> - on both the partition table it is on and the parent partitioned\n> + on both the table partition and the parent partitioned\n> index, so that it goes away if either of those is dropped, but\n> not otherwise. The dependency on the parent index is primary,\n> so that if the user tries to drop the child partitioned index,\n\n> @@ -3115,7 +3115,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n> Note that it's quite possible for two objects to be linked by more than\n> one <structname>pg_depend</structname> entry. For example, a child\n> partitioned index would have both a partition-type dependency on its\n> - associated partition table, and an auto dependency on each column of\n> + associated table partition, and an auto dependency on each column of\n> that table that it indexes. This sort of situation expresses the union\n> of multiple dependency semantics. A dependent object can be dropped\n> without <literal>CASCADE</literal> if any of its dependencies satisfies\n\nHm, that's not an improvement from my POV? 
The version before isn't great either,\nbut it seems to improve this'd require a somewhat bigger hammer.\n\n\n\n> @@ -6947,8 +6948,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;\n> <para>\n> Causes each action executed by autovacuum to be logged if it ran for at\n> least the specified number of milliseconds. Setting this to zero logs\n> - all autovacuum actions. <literal>-1</literal> (the default) disables\n> - logging autovacuum actions. For example, if you set this to\n> + all autovacuum actions. <literal>-1</literal> (the default) disables logging\n> + autovacuum actions. For example, if you set this to\n> <literal>250ms</literal> then all automatic vacuums and analyzes that run\n> 250ms or longer will be logged. In addition, when this parameter is\n> set to any value other than <literal>-1</literal>, a message will be\n\nHm?\n\n\n> diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\n> index a0a7435..cfe83a8 100644\n> --- a/doc/src/sgml/ddl.sgml\n> +++ b/doc/src/sgml/ddl.sgml\n> @@ -3867,12 +3867,12 @@ CREATE INDEX ON measurement (logdate);\n> \n> <para>\n> Normally the set of partitions established when initially defining the\n> - table are not intended to remain static. It is common to want to\n> - remove old partitions of data and periodically add new partitions for\n> + table are not intended to remain static. It's common to\n> + remove partitions of old data and add partitions for\n> new data. One of the most important advantages of partitioning is\n> - precisely that it allows this otherwise painful task to be executed\n> + allowing this otherwise painful task to be executed\n> nearly instantaneously by manipulating the partition structure, rather\n> - than physically moving large amounts of data around.\n> + than physically moving around large amounts of data.\n> </para>\n\nI don't understand what the point of changing things like \"It is\" to\n\"It's\". 
There's more uses of the former in the docs.\n\nI'm also not sure that I like the removal of \"to want\" etc,\nbecause just because it's a common desire, it's not automatically common\npractice. And I think the 'periodically' is actually a reasonable hint\nthat partition creation doesn't happen automatically or in the\nforeground.\n\n\n> <row>\n> <entry><structfield>partitions_done</structfield></entry>\n> <entry><type>bigint</type></entry>\n> <entry>\n> - When creating an index on a partitioned table, this column is set to\n> - the number of partitions on which the index has been completed.\n> + When creating an index on a partitioned table, this is\n> + the number of partitions for which the process is complete.\n> </entry>\n> </row>\n> </tbody>\n\n> @@ -3643,9 +3643,9 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,\n> <entry>\n> The index is being built by the access method-specific code. In this phase,\n> access methods that support progress reporting fill in their own progress data,\n> - and the subphase is indicated in this column. 
Typically,\n> + and the subphase is indicated in this column.\n> <structname>blocks_total</structname> and <structname>blocks_done</structname>\n> - will contain progress data, as well as potentially\n> + will contain progress data, as may\n> <structname>tuples_total</structname> and <structname>tuples_done</structname>.\n> </entry>\n\nHm, if you're intent on removing \"this column\", why not here?\n\nIs the removal of \"typically\" and \"potentially\" actually correct here?\nSomebody clearly wanted to indicate it's not guaranteed?\n\n\n> @@ -3922,9 +3922,9 @@ SELECT pg_stat_get_backend_pid(s.backendid) AS pid,\n> <title>CLUSTER Progress Reporting</title>\n> \n> <para>\n> - Whenever <command>CLUSTER</command> or <command>VACUUM FULL</command> is\n> - running, the <structname>pg_stat_progress_cluster</structname> view will\n> - contain a row for each backend that is currently running either command.\n> + The <structname>pg_stat_progress_cluster</structname> view will contain a\n> + row for each backend that is running either\n> + <command>CLUSTER</command> or <command>VACUUM FULL</command>.\n> The tables below describe the information that will be reported and\n> provide information about how to interpret it.\n> </para>\n\nUnrelated to your change, but I noticed it in this hunk. Isn't it weird\nto say \"will contain\" rather than just \"contains\" for this type of\nreference documentation?\n\n\n> diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml\n> index a84be85..65c161b 100644\n> --- a/doc/src/sgml/perform.sgml\n> +++ b/doc/src/sgml/perform.sgml\n> @@ -899,10 +899,10 @@ EXPLAIN ANALYZE SELECT * FROM tenk1 WHERE unique1 < 100 AND unique2 > 9000\n> Generally, the <command>EXPLAIN</command> output will display details for\n> every plan node which was generated by the query planner. 
However, there\n> are cases where the executor is able to determine that certain nodes are\n> - not required; currently, the only node types to support this are the\n> - <literal>Append</literal> and <literal>MergeAppend</literal> nodes. These\n> - node types have the ability to discard subnodes which they are able to\n> - determine won't contain any records required by the query. It is possible\n> + not required; currently, the only node types to support this are\n> + <literal>Append</literal> and <literal>MergeAppend</literal>, which\n> + are able to discard subnodes when it's deduced that\n> + they will not contain any records required by the query. It is possible\n> to determine that nodes have been removed in this way by the presence of a\n> \"Subplans Removed\" property in the <command>EXPLAIN</command> output.\n> </para>\n\nShrug.\n\n\n> diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\n> index 49b081a..dd80bd0 100644\n> --- a/doc/src/sgml/ref/alter_table.sgml\n> +++ b/doc/src/sgml/ref/alter_table.sgml\n> @@ -219,7 +219,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> \n> <para>\n> <literal>SET NOT NULL</literal> may only be applied to a column\n> - providing none of the records in the table contain a\n> + provided none of the records in the table contain a\n> <literal>NULL</literal> value for the column. 
Ordinarily this is\n> checked during the <literal>ALTER TABLE</literal> by scanning the\n> entire table; however, if a valid <literal>CHECK</literal> constraint is\n\nIt'd be easier to review / apply this kind of thing if stylistic\nchoices, especially borderline ones, were separated from clear typos\n(like this one).\n\n\n> diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml\n> index 30bb38b..bf4f550 100644\n> --- a/doc/src/sgml/ref/create_index.sgml\n> +++ b/doc/src/sgml/ref/create_index.sgml\n> @@ -181,8 +181,8 @@ CREATE [ UNIQUE ] INDEX [ CONCURRENTLY ] [ [ IF NOT EXISTS ] <replaceable class=\n> </para>\n> \n> <para>\n> - Currently, the B-tree and the GiST index access methods support this\n> - feature. In B-tree and the GiST indexes, the values of columns listed\n> + Currently, only the B-tree and GiST index access methods support this\n> + feature. In B-tree and GiST indexes, the values of columns listed\n> in the <literal>INCLUDE</literal> clause are included in leaf tuples\n> which correspond to heap tuples, but are not included in upper-level\n> index entries used for tree navigation.\n\nHm. External index AMs also can also support uniqueness. So perhaps not\nadding only is actually better? I realize that other places in the same\nfile already say \"only\".\n\n\n> diff --git a/doc/src/sgml/ref/pg_rewind.sgml b/doc/src/sgml/ref/pg_rewind.sgml\n> index 4d91eeb..6f6d220 100644\n> --- a/doc/src/sgml/ref/pg_rewind.sgml\n> +++ b/doc/src/sgml/ref/pg_rewind.sgml\n> @@ -106,15 +106,14 @@ PostgreSQL documentation\n> </para>\n> \n> <para>\n> - <application>pg_rewind</application> will fail immediately if it finds\n> - files it cannot write directly to. This can happen for example when\n> - the source and the target server use the same file mapping for read-only\n> - SSL keys and certificates. 
If such files are present on the target server\n> + <application>pg_rewind</application> will fail immediately if it experiences\n> + a write error. This can happen for example when\n> + the source and target server use the same file mapping for read-only\n> + SSL keys and certificates. If such files are present on the target server,\n> it is recommended to remove them before running\n\nThat's not really the same. The failure will often be *before* a write\nerror. E.g.\n\tmode = O_WRONLY | O_CREAT | PG_BINARY;\n\tif (trunc)\n\t\tmode |= O_TRUNC;\n\tdstfd = open(dstpath, mode, pg_file_create_mode);\n\tif (dstfd < 0)\n\t\tpg_fatal(\"could not open target file \\\"%s\\\": %m\",\n\t\t\t\t dstpath);\nwill fail, rather than a write().\n\n\n\n\n> @@ -474,10 +474,8 @@ pgbench <optional> <replaceable>options</replaceable> </optional> <replaceable>d\n> </listitem>\n> </itemizedlist>\n> \n> - Because in \"prepared\" mode <application>pgbench</application> reuses\n> - the parse analysis result for the second and subsequent query\n> - iteration, <application>pgbench</application> runs faster in the\n> - prepared mode than in other modes.\n> + <application>pgbench</application> runs faster in prepared mode because the\n> + parse analysis happens only during the first query.\n> </para>\n\nThat text seems wrong before and after. It's not just parse analysis,\nbut also planning?\n\n\n> --- a/doc/src/sgml/runtime.sgml\n> +++ b/doc/src/sgml/runtime.sgml\n> @@ -2634,8 +2634,9 @@ openssl x509 -req -in server.csr -text -days 365 \\\n> using <acronym>GSSAPI</acronym> to encrypt client/server communications for\n> increased security. 
Support requires that a <acronym>GSSAPI</acronym>\n> implementation (such as MIT krb5) is installed on both client and server\n> - systems, and that support in <productname>PostgreSQL</productname> is\n> - enabled at build time (see <xref linkend=\"installation\"/>).\n> + systems, and must be enabled at the time\n> + <productname>PostgreSQL</productname> is built (see <xref\n> + linkend=\"installation\"/>).\n> </para>\n\nThis is weird before and after. I'd just say \"that PostgreSQL has been\nbuilt with GSSAPI support\".\n\n\n> <para>\n> - For example <literal>_StaticAssert()</literal> and\n> + For example, <literal>_StaticAssert()</literal> and\n> <literal>__builtin_constant_p</literal> are currently used, even though\n> - they are from newer revisions of the C standard and a\n> - <productname>GCC</productname> extension respectively. If not available\n> - we respectively fall back to using a C99 compatible replacement that\n> - performs the same checks, but emits rather cryptic messages and do not\n> + they are from a newer revision of the C standard and a\n> + <productname>GCC</productname> extension, respectively. If not available, in the first case, \n> + we fall back to using a C99 compatible replacement that\n> + performs the same checks, but emits rather cryptic messages; in the second case, we do not\n> use <literal>__builtin_constant_p</literal>.\n> </para>\n\nTo me the point of changing just about equivalent formulations into\nanother isn't clear.\n\n\n> @@ -3381,7 +3381,7 @@ if (!ptr)\n> The parallel safety property (<literal>PARALLEL\n> UNSAFE</literal>, <literal>PARALLEL RESTRICTED</literal>, or\n> <literal>PARALLEL SAFE</literal>) must also be specified if you hope\n> - to use the function in parallelized queries.\n> + queries calling the function to use parallel query.\n> It can also be useful to specify the function's estimated execution\n> cost, and/or the number of rows a set-returning function is estimated\n> to return. 
However, the declarative way of specifying those two\n\nSeems like a larger rewrite is needed\n\n\n> @@ -3393,7 +3393,7 @@ if (!ptr)\n> It is also possible to attach a <firstterm>planner support\n> function</firstterm> to a SQL-callable function (called\n> its <firstterm>target function</firstterm>), and thereby provide\n> - knowledge about the target function that is too complex to be\n> + information about the target function that is too complex to be\n> represented declaratively. Planner support functions have to be\n> written in C (although their target functions might not be), so this is\n> an advanced feature that relatively few people will use.\n\nWhy s/knowledge/information/?\n\n\n> From d8c8b5416726909203db53cfa73ea2c7cc0fe9a0 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 29 Mar 2019 19:37:35 -0500\n> Subject: [PATCH v3 02/12] Add comma for readability\n\nYou did plenty of that in the previous set of changes...\n\n> ---\n> doc/src/sgml/backup.sgml | 2 +-\n> doc/src/sgml/bki.sgml | 2 +-\n> doc/src/sgml/client-auth.sgml | 4 ++--\n> doc/src/sgml/config.sgml | 4 ++--\n> doc/src/sgml/ddl.sgml | 2 +-\n> doc/src/sgml/indices.sgml | 2 +-\n> doc/src/sgml/installation.sgml | 2 +-\n> doc/src/sgml/logical-replication.sgml | 2 +-\n> doc/src/sgml/protocol.sgml | 6 +++---\n> doc/src/sgml/ref/create_table.sgml | 2 +-\n> doc/src/sgml/ref/create_table_as.sgml | 2 +-\n> doc/src/sgml/ref/pgupgrade.sgml | 2 +-\n> doc/src/sgml/ref/psql-ref.sgml | 2 +-\n> doc/src/sgml/sources.sgml | 14 +++++++-------\n> doc/src/sgml/wal.sgml | 2 +-\n> doc/src/sgml/xoper.sgml | 2 +-\n> 16 files changed, 26 insertions(+), 26 deletions(-)\n> \n> diff --git a/doc/src/sgml/backup.sgml b/doc/src/sgml/backup.sgml\n> index b67da89..ae41bc7 100644\n> --- a/doc/src/sgml/backup.sgml\n> +++ b/doc/src/sgml/backup.sgml\n> @@ -1024,7 +1024,7 @@ SELECT pg_start_backup('label', true);\n> consider during this backup.\n> </para>\n> <para>\n> - As noted above, if the server 
crashes during the backup it may not be\n> + As noted above, if the server crashes during the backup, it may not be\n> possible to restart until the <literal>backup_label</literal> file has\n> been manually deleted from the <envar>PGDATA</envar> directory. Note\n> that it is very important to never remove the\n> diff --git a/doc/src/sgml/bki.sgml b/doc/src/sgml/bki.sgml\n> index aa3d6f8..e27fa76 100644\n> --- a/doc/src/sgml/bki.sgml\n> +++ b/doc/src/sgml/bki.sgml\n> @@ -403,7 +403,7 @@\n> 8000—9999. This minimizes the risk of OID collisions with other\n> patches being developed concurrently. To keep the 8000—9999\n> range free for development purposes, after a patch has been committed\n> - to the master git repository its OIDs should be renumbered into\n> + to the master git repository, its OIDs should be renumbered into\n> available space below that range. Typically, this will be done\n> near the end of each development cycle, moving all OIDs consumed by\n> patches committed in that cycle at the same time. The script\n> diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml\n> index ffed887..098900e 100644\n> --- a/doc/src/sgml/client-auth.sgml\n> +++ b/doc/src/sgml/client-auth.sgml\n> @@ -157,7 +157,7 @@ hostnogssenc <replaceable>database</replaceable> <replaceable>user</replaceable\n> </para>\n> \n> <para>\n> - To make use of this option the server must be built with\n> + To make use of this option, the server must be built with\n> <acronym>SSL</acronym> support. Furthermore,\n> <acronym>SSL</acronym> must be enabled\n> by setting the <xref linkend=\"guc-ssl\"/> configuration parameter (see\n> @@ -189,7 +189,7 @@ hostnogssenc <replaceable>database</replaceable> <replaceable>user</replaceable\n> </para>\n> \n> <para>\n> - To make use of this option the server must be built with\n> + To make use of this option, the server must be built with\n> <acronym>GSSAPI</acronym> support. 
Otherwise,\n> the <literal>hostgssenc</literal> record is ignored except for logging\n> a warning that it cannot match any connections.\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index 73eb768..54b91d3 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -3458,7 +3458,7 @@ restore_command = 'copy \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n> reached. The default is <literal>pause</literal>, which means recovery will\n> be paused. <literal>promote</literal> means the recovery process will finish\n> and the server will start to accept connections.\n> - Finally <literal>shutdown</literal> will stop the server after reaching the\n> + Finally, <literal>shutdown</literal> will stop the server after reaching the\n> recovery target.\n> </para>\n> <para>\n> @@ -4188,7 +4188,7 @@ ANY <replaceable class=\"parameter\">num_sync</replaceable> ( <replaceable class=\"\n> </para>\n> <para>\n> The delay occurs once the database in recovery has reached a consistent\n> - state, until the standby is promoted or triggered. After that the standby\n> + state, until the standby is promoted or triggered. After that, the standby\n> will end recovery without further waiting.\n> </para>\n> <para>\n> diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\n> index cfe83a8..8135507 100644\n> --- a/doc/src/sgml/ddl.sgml\n> +++ b/doc/src/sgml/ddl.sgml\n> @@ -3866,7 +3866,7 @@ CREATE INDEX ON measurement (logdate);\n> <title>Partition Maintenance</title>\n> \n> <para>\n> - Normally the set of partitions established when initially defining the\n> + Normally, the set of partitions established when initially defining the\n> table are not intended to remain static. It's common to\n> remove partitions of old data and add partitions for\n> new data. 
One of the most important advantages of partitioning is\n> diff --git a/doc/src/sgml/indices.sgml b/doc/src/sgml/indices.sgml\n> index 95c0a19..e940ddb 100644\n> --- a/doc/src/sgml/indices.sgml\n> +++ b/doc/src/sgml/indices.sgml\n> @@ -1081,7 +1081,7 @@ SELECT x FROM tab WHERE x = 'key' AND z < 42;\n> scan. Even in the successful case, this approach trades visibility map\n> accesses for heap accesses; but since the visibility map is four orders\n> of magnitude smaller than the heap it describes, far less physical I/O is\n> - needed to access it. In most situations the visibility map remains\n> + needed to access it. In most situations, the visibility map remains\n> cached in memory all the time.\n> </para>\n> \n> diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml\n> index 4493862..847e028 100644\n> --- a/doc/src/sgml/installation.sgml\n> +++ b/doc/src/sgml/installation.sgml\n> @@ -2527,7 +2527,7 @@ xcodebuild -version -sdk macosx Path\n> </programlisting>\n> Note that building an extension using a different sysroot version than\n> was used to build the core server is not really recommended; in the\n> - worst case it could result in hard-to-debug ABI inconsistencies.\n> + worst case, it could result in hard-to-debug ABI inconsistencies.\n> </para>\n> \n> <para>\n> diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\n> index 3f2f674..31c814c 100644\n> --- a/doc/src/sgml/logical-replication.sgml\n> +++ b/doc/src/sgml/logical-replication.sgml\n> @@ -201,7 +201,7 @@\n> \n> <para>\n> Subscriptions are dumped by <command>pg_dump</command> if the current user\n> - is a superuser. Otherwise a warning is written and subscriptions are\n> + is a superuser. 
Otherwise, a warning is written and subscriptions are\n> skipped, because non-superusers cannot read all subscription information\n> from the <structname>pg_subscription</structname> catalog.\n> </para>\n> diff --git a/doc/src/sgml/protocol.sgml b/doc/src/sgml/protocol.sgml\n> index 70f7286..3f907f6 100644\n> --- a/doc/src/sgml/protocol.sgml\n> +++ b/doc/src/sgml/protocol.sgml\n> @@ -1449,7 +1449,7 @@ SELECT 1/0;\n> <literal>S</literal>, perform an <acronym>SSL</acronym> startup handshake\n> (not described here, part of the <acronym>SSL</acronym>\n> specification) with the server. If this is successful, continue\n> - with sending the usual StartupMessage. In this case the\n> + with sending the usual StartupMessage. In this case, the\n> StartupMessage and all subsequent data will be\n> <acronym>SSL</acronym>-encrypted. To continue after\n> <literal>N</literal>, send the usual StartupMessage and proceed without\n> @@ -1462,7 +1462,7 @@ SELECT 1/0;\n> the server predates the addition of <acronym>SSL</acronym> support\n> to <productname>PostgreSQL</productname>. (Such servers are now very ancient,\n> and likely do not exist in the wild anymore.)\n> - In this case the connection must\n> + In this case, the connection must\n> be closed, but the frontend might choose to open a fresh connection\n> and proceed without requesting <acronym>SSL</acronym>.\n> </para>\n> @@ -1528,7 +1528,7 @@ SELECT 1/0;\n> The frontend should also be prepared to handle an ErrorMessage\n> response to GSSENCRequest from the server. This would only occur if\n> the server predates the addition of <acronym>GSSAPI</acronym> encryption\n> - support to <productname>PostgreSQL</productname>. In this case the\n> + support to <productname>PostgreSQL</productname>. In this case, the\n> connection must be closed, but the frontend might choose to open a fresh\n> connection and proceed without requesting <acronym>GSSAPI</acronym>\n> encryption. 
Given the length limits specified above, the ErrorMessage\n> diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml\n> index 44a61ef..dbb1468 100644\n> --- a/doc/src/sgml/ref/create_table.sgml\n> +++ b/doc/src/sgml/ref/create_table.sgml\n> @@ -1189,7 +1189,7 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> This clause specifies optional storage parameters for a table or index;\n> see <xref linkend=\"sql-createtable-storage-parameters\"\n> endterm=\"sql-createtable-storage-parameters-title\"/> for more\n> - information. For backward-compatibility the <literal>WITH</literal>\n> + information. For backward-compatibility, the <literal>WITH</literal>\n> clause for a table can also include <literal>OIDS=FALSE</literal> to\n> specify that rows of the new table should not contain OIDs (object\n> identifiers), <literal>OIDS=TRUE</literal> is not supported anymore.\n> diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml\n> index b5c4ce6..0880459 100644\n> --- a/doc/src/sgml/ref/create_table_as.sgml\n> +++ b/doc/src/sgml/ref/create_table_as.sgml\n> @@ -142,7 +142,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI\n> This clause specifies optional storage parameters for the new table;\n> see <xref linkend=\"sql-createtable-storage-parameters\"\n> endterm=\"sql-createtable-storage-parameters-title\"/> for more\n> - information. For backward-compatibility the <literal>WITH</literal>\n> + information. 
For backward-compatibility, the <literal>WITH</literal>\n> clause for a table can also include <literal>OIDS=FALSE</literal> to\n> specify that rows of the new table should contain no OIDs (object\n> identifiers), <literal>OIDS=TRUE</literal> is not supported anymore.\n> diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml\n> index 8288676..6a898d9 100644\n> --- a/doc/src/sgml/ref/pgupgrade.sgml\n> +++ b/doc/src/sgml/ref/pgupgrade.sgml\n> @@ -746,7 +746,7 @@ psql --username=postgres --file=script.sql postgres\n> <application>pg_upgrade</application> launches short-lived postmasters in\n> the old and new data directories. Temporary Unix socket files for\n> communication with these postmasters are, by default, made in the current\n> - working directory. In some situations the path name for the current\n> + working directory. In some situations, the path name for the current\n> directory might be too long to be a valid socket name. In that case you\n> can use the <option>-s</option> option to put the socket files in some\n> directory with a shorter path name. For security, be sure that that\n> diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml\n> index b867640..55cc78c 100644\n> --- a/doc/src/sgml/ref/psql-ref.sgml\n> +++ b/doc/src/sgml/ref/psql-ref.sgml\n> @@ -1053,7 +1053,7 @@ testdb=>\n> These operations are not as efficient as the <acronym>SQL</acronym>\n> <command>COPY</command> command with a file or program data source or\n> destination, because all data must pass through the client/server\n> - connection. For large amounts of data the <acronym>SQL</acronym>\n> + connection. 
For large amounts of data, the <acronym>SQL</acronym>\n> command might be preferable.\n> </para>\n> </tip>\n> diff --git a/doc/src/sgml/sources.sgml b/doc/src/sgml/sources.sgml\n> index 2b9f59d..520bbae 100644\n> --- a/doc/src/sgml/sources.sgml\n> +++ b/doc/src/sgml/sources.sgml\n> @@ -100,7 +100,7 @@ less -x4\n> <para>\n> There are two required elements for every message: a severity level\n> (ranging from <literal>DEBUG</literal> to <literal>PANIC</literal>) and a primary\n> - message text. In addition there are optional elements, the most\n> + message text. In addition, there are optional elements, the most\n> common of which is an error identifier code that follows the SQL spec's\n> SQLSTATE conventions.\n> <function>ereport</function> itself is just a shell function that exists\n> @@ -473,7 +473,7 @@ Hint: the addendum\n> \n> <para>\n> Rationale: Messages are not necessarily displayed on terminal-type\n> - displays. In GUI displays or browsers these formatting instructions are\n> + displays. In GUI displays or browsers, these formatting instructions are\n> at best ignored.\n> </para>\n> \n> @@ -897,14 +897,14 @@ BETTER: unrecognized node type: 42\n> <simplesect>\n> <title>Function-Like Macros and Inline Functions</title>\n> <para>\n> - Both, macros with arguments and <literal>static inline</literal>\n> - functions, may be used. The latter are preferable if there are\n> + Both macros with arguments and <literal>static inline</literal>\n> + functions may be used. The latter are preferable if there are\n> multiple-evaluation hazards when written as a macro, as e.g. the\n> case with\n> <programlisting>\n> #define Max(x, y) ((x) > (y) ? (x) : (y))\n> </programlisting>\n> - or when the macro would be very long. In other cases it's only\n> + or when the macro would be very long. In other cases, it's only\n> possible to use macros, or at least easier. 
For example because\n> expressions of various types need to be passed to the macro.\n> </para>\n> @@ -936,7 +936,7 @@ MemoryContextSwitchTo(MemoryContext context)\n> <simplesect>\n> <title>Writing Signal Handlers</title>\n> <para>\n> - To be suitable to run inside a signal handler code has to be\n> + To be suitable to run inside a signal handler, code has to be\n> written very carefully. The fundamental problem is that, unless\n> blocked, a signal handler can interrupt code at any time. If code\n> inside the signal handler uses the same state as code outside,\n> @@ -945,7 +945,7 @@ MemoryContextSwitchTo(MemoryContext context)\n> interrupted code.\n> </para>\n> <para>\n> - Barring special arrangements code in signal handlers may only\n> + Barring special arrangements, code in signal handlers may only\n> call async-signal safe functions (as defined in POSIX) and access\n> variables of type <literal>volatile sig_atomic_t</literal>. A few\n> functions in <command>postgres</command> are also deemed signal safe; specifically,\n> diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml\n> index 4eb8feb..30bde24 100644\n> --- a/doc/src/sgml/wal.sgml\n> +++ b/doc/src/sgml/wal.sgml\n> @@ -326,7 +326,7 @@\n> before returning a success indication to the client. The client is\n> therefore guaranteed that a transaction reported to be committed will\n> be preserved, even in the event of a server crash immediately after.\n> - However, for short transactions this delay is a major component of the\n> + However, for short transactions, this delay is a major component of the\n> total transaction time. 
Selecting asynchronous commit mode means that\n> the server returns success as soon as the transaction is logically\n> completed, before the <acronym>WAL</acronym> records it generated have\n> diff --git a/doc/src/sgml/xoper.sgml b/doc/src/sgml/xoper.sgml\n> index 260e43c..55cd3b1 100644\n> --- a/doc/src/sgml/xoper.sgml\n> +++ b/doc/src/sgml/xoper.sgml\n> @@ -375,7 +375,7 @@ table1.column1 OP table2.column2\n> Another example is that on machines that meet the <acronym>IEEE</acronym>\n> floating-point standard, negative zero and positive zero are different\n> values (different bit patterns) but they are defined to compare equal.\n> - If a float value might contain negative zero then extra steps are needed\n> + If a float value might contain negative zero, then extra steps are needed\n> to ensure it generates the same hash value as positive zero.\n> </para>\n> \n> -- \n> 2.7.4\n> \n\n> From 4d7e3b99d6e9e203e539cc8658554294121732b1 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 29 Mar 2019 19:40:49 -0500\n> Subject: [PATCH v3 03/12] Consistent language: \"must be superuser\"\n> \n> ---\n> src/backend/storage/ipc/signalfuncs.c | 6 +++---\n> 1 file changed, 3 insertions(+), 3 deletions(-)\n> \n> diff --git a/src/backend/storage/ipc/signalfuncs.c b/src/backend/storage/ipc/signalfuncs.c\n> index 4bfbd57..1df5861 100644\n> --- a/src/backend/storage/ipc/signalfuncs.c\n> +++ b/src/backend/storage/ipc/signalfuncs.c\n> @@ -115,7 +115,7 @@ pg_cancel_backend(PG_FUNCTION_ARGS)\n> \tif (r == SIGNAL_BACKEND_NOSUPERUSER)\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t (errmsg(\"must be a superuser to cancel superuser query\"))));\n> +\t\t\t\t (errmsg(\"must be superuser to cancel superuser query\"))));\n> \n> \tif (r == SIGNAL_BACKEND_NOPERMISSION)\n> \t\tereport(ERROR,\n> @@ -139,12 +139,12 @@ pg_terminate_backend(PG_FUNCTION_ARGS)\n> \tif (r == SIGNAL_BACKEND_NOSUPERUSER)\n> \t\tereport(ERROR,\n> 
\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t (errmsg(\"must be a superuser to terminate superuser process\"))));\n> +\t\t\t\t (errmsg(\"must be superuser to terminate superuser process\"))));\n>\nThere's a number of\nerrhint(\"The owner of a subscription must be a superuser.\")));\nstyle messages, if you're trying for further consistency here...\n\n\n... Out of steam. And, as it turns out, battery power.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 May 2019 15:59:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi,\n\nOn 2019/05/21 7:59, Andres Freund wrote:\n> On 2019-05-20 13:20:01 -0500, Justin Pryzby wrote:\n>> @@ -3052,7 +3052,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n>> simplifies <command>ATTACH/DETACH PARTITION</command> operations:\n>> the partition dependencies need only be added or removed.\n>> Example: a child partitioned index is made partition-dependent\n>> - on both the partition table it is on and the parent partitioned\n>> + on both the table partition and the parent partitioned\n>> index, so that it goes away if either of those is dropped, but\n>> not otherwise. The dependency on the parent index is primary,\n>> so that if the user tries to drop the child partitioned index,\n> \n>> @@ -3115,7 +3115,7 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n>> Note that it's quite possible for two objects to be linked by more than\n>> one <structname>pg_depend</structname> entry. For example, a child\n>> partitioned index would have both a partition-type dependency on its\n>> - associated partition table, and an auto dependency on each column of\n>> + associated table partition, and an auto dependency on each column of\n>> that table that it indexes. This sort of situation expresses the union\n>> of multiple dependency semantics. A dependent object can be dropped\n>> without <literal>CASCADE</literal> if any of its dependencies satisfies\n> \n> Hm, that's not an improvement from my POV? The version before isn't great either,\n> but it seems to improve this'd require a somewhat bigger hammer.\n\nThe original \"partition table\" is meant as \"table that is a partition\", so\nnot wrong as such, though I agree about the bigger hammer part,\nespecially seeing \"a child partitioned index\" in both the sentences that\nJustin's patch touches, which should really be \"an index partition\". 
So\nthe two sentences could be modified as follows, including Justin's change\nfor consistency of the use of \"partition\":\n\n@@ -3051,13 +3051,12 @@ SCRAM-SHA-256$<replaceable><iteration\ncount></replaceable>:<replaceable>&l\n instead of, any dependencies the object would normally have. This\n simplifies <command>ATTACH/DETACH PARTITION</command> operations:\n the partition dependencies need only be added or removed.\n- Example: a child partitioned index is made partition-dependent\n- on both the partition table it is on and the parent partitioned\n- index, so that it goes away if either of those is dropped, but\n- not otherwise. The dependency on the parent index is primary,\n- so that if the user tries to drop the child partitioned index,\n- the error message will suggest dropping the parent index instead\n- (not the table).\n+ Example: an index partition is made partition-dependent on both the\n+ table partition it is on and the parent partitioned index, so that it\n+ goes away if either of those is dropped, but not otherwise.\n+ The dependency on the parent index is primary, so that if the user\n+ tries to drop the index partition, the error will suggest dropping the\n+ parent index instead (not the table).\n </para>\n </listitem>\n </varlistentry>\n@@ -3113,10 +3112,10 @@ SCRAM-SHA-256$<replaceable><iteration\ncount></replaceable>:<replaceable>&l\n\n <para>\n Note that it's quite possible for two objects to be linked by more than\n- one <structname>pg_depend</structname> entry. For example, a child\n- partitioned index would have both a partition-type dependency on its\n- associated partition table, and an auto dependency on each column of\n- that table that it indexes. This sort of situation expresses the union\n+ one <structname>pg_depend</structname> entry. For example, an index\n+ partition would have both a partition-type dependency on its associated\n+ table partition, and an auto dependency on each column of that table that\n+ it indexes. This sort of situation expresses the union\n of multiple dependency semantics. A dependent object can be dropped\n without <literal>CASCADE</literal> if any of its dependencies satisfies\n its condition for automatic dropping. Conversely, all the\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 21 May 2019 13:04:48 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hello,\n\nI'm sorry if this is the wrong place for this or it's already been\ncovered (I did scan through this whole thread and a couple others), but\nI noticed the docs at\nhttps://www.postgresql.org/docs/devel/ddl-partitioning.html still say\nyou can't create a foreign key referencing a partitioned table, even\nthough the docs for\nhttps://www.postgresql.org/docs/devel/sql-createtable.html have been\nupdated (compared to v11). My understanding is that foreign keys\n*still* don't work as expected when pointing at traditional INHERITS\ntables, but they *will* work with declaratively-partitioned tables. In\nthat case I suggest this change:\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex a0a7435a03..3b4f43bbad 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3966,14 +3966,6 @@ ALTER TABLE measurement ATTACH PARTITION\nmeasurement_y2008m02\n\n- <listitem>\n- <para>\n- While primary keys are supported on partitioned tables, foreign\n- keys referencing partitioned tables are not supported. (Foreign key\n- references from a partitioned table to some other table are supported.)\n- </para>\n- </listitem>\n-\n <listitem>\n <para>\n <literal>BEFORE ROW</literal> triggers, if necessary, must be defined\n on individual partitions, not the partitioned table.\n </para>\n@@ -4366,6 +4358,14 @@ ALTER TABLE measurement_y2008m02 INHERIT measurement;\n </para>\n </listitem>\n\n+ <listitem>\n+ <para>\n+ While primary keys are supported on inheritance-partitioned\ntables, foreign\n+ keys referencing these tables are not supported. (Foreign key\n+ references from an inheritance-partitioned table to some other\ntable are supported.)\n+ </para>\n+ </listitem>\n+\n <listitem>\n <para>\n If you are using manual <command>VACUUM</command> or\n\n(I've also attached it as a patch file.) In other words, we should\nmove this caveat from the section on declaratively-partitioned tables\nto the section on inheritance-partitioned tables.\n\nSorry again if this is the wrong conversation for this!\n\nYours,\nPaul",
"msg_date": "Mon, 20 May 2019 21:25:39 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "Hi Paul,\n\nOn 2019/05/21 13:25, Paul A Jungwirth wrote:\n> I'm sorry if this is the wrong place for this or it's already been\n> covered (I did scan though this whole thread and a couple others), but\n> I noticed the docs at\n> https://www.postgresql.org/docs/devel/ddl-partitioning.html still say\n> you can't create a foreign key referencing a partitioned table, even\n> though the docs for\n> https://www.postgresql.org/docs/devel/sql-createtable.html have been\n> updated (compared to v11). My understanding is that foreign keys\n> *still* don't work as expected when pointing at traditional INHERITS\n> tables, but they *will* work with declaratively-partitioned tables.\n\nYou're right. I think it's simply an oversight of f56f8f8da6, which\nmissed updating ddl.sgml\n\n> (I've also attached it as a patch file.) In other words, we should\n> move this caveat from the section on declaratively-partitioned tables\n> to the section on inheritance-partitioned tables.\n> \n> Sorry again if this is the wrong conversation for this!\n\nThanks for the patch. To avoid it getting lost in the discussions of this\nthread, it might be better to post the patch to a separate thread.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 21 May 2019 13:35:50 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Mon, May 20, 2019 at 9:36 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> Thanks for the patch. To avoid it getting lost in the discussions of this\n> thread, it might be better to post the patch to a separate thread.\n\nOkay, I'll make a new thread and a new CF entry. Thanks!\n\n\n",
"msg_date": "Mon, 20 May 2019 21:39:51 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On 2019/05/21 13:39, Paul A Jungwirth wrote:\n> On Mon, May 20, 2019 at 9:36 PM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> Thanks for the patch. To avoid it getting lost in the discussions of this\n>> thread, it might be better to post the patch to a separate thread.\n> \n> Okay, I'll make a new thread and a new CF entry. Thanks!\n\nThis sounds more like an open item to me [1], not something that has to\nbe postponed until the next CF.\n\nThanks,\nAmit\n\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\n\n\n",
"msg_date": "Tue, 21 May 2019 13:43:46 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Mon, May 20, 2019 at 9:44 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> This sounds more like an open item to me [1], not something that have to\n> be postponed until the next CF.\n>\n> [1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\nOh sorry, I already created the CF entry. Should I withdraw it? I'll\nask on -infra about getting editor permission for the wiki and add a\nnote there instead.\n\nPaul\n\n\n",
"msg_date": "Mon, 20 May 2019 21:47:47 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On 2019/05/21 13:47, Paul A Jungwirth wrote:\n> On Mon, May 20, 2019 at 9:44 PM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> This sounds more like an open item to me [1], not something that have to\n>> be postponed until the next CF.\n>>\n>> [1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n> \n> Oh sorry, I already created the CF entry. Should I withdraw it? I'll\n> ask on -infra about getting editor permission for the wiki and add a\n> note there instead.\n\nYou could link the CF entry from the wiki (the open item), but then it\nwill have to be closed when the open entry will be closed, so double work\nfor whoever does the cleaning up duties. Maybe, it's better to withdraw\nit now.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 21 May 2019 13:55:46 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Tue, May 21, 2019 at 01:55:46PM +0900, Amit Langote wrote:\n> You could link the CF entry from the wiki (the open item), but then it\n> will have to be closed when the open entry will be closed, so double work\n> for whoever does the cleaning up duties. Maybe, it's better to withdraw\n> it now.\n\nIf you could clean up the CF entry, and keep only the open item in the\nlist, that would be nice. Thanks.\n--\nMichael",
"msg_date": "Tue, 21 May 2019 14:22:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Mon, May 20, 2019 at 10:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n> If you could clean up the CF entry, and keep only the open item in the\n> list, that would be nice. Thanks.\n\nI withdrew the CF entry; hopefully that is all that needs to be done,\nbut if I should do anything else let me know.\n\nThanks,\nPaul\n\n\n",
"msg_date": "Mon, 20 May 2019 22:36:33 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On 2019-May-20, Paul A Jungwirth wrote:\n\n> On Mon, May 20, 2019 at 9:44 PM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > This sounds more like an open item to me [1], not something that have to\n> > be postponed until the next CF.\n> >\n> > [1] https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n> \n> Oh sorry, I already created the CF entry. Should I withdraw it? I'll\n> ask on -infra about getting editor permission for the wiki and add a\n> note there instead.\n\nYou didn't actually ask, but I did it anyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 20 Jun 2019 19:29:06 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "This patch was applied as f73293aba4d4. Thanks, Paul and Michael.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 20 Jun 2019 19:34:10 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "On Thu, Jun 20, 2019 at 07:34:10PM -0400, Alvaro Herrera wrote:\n> This patch was applied as f73293aba4d4. Thanks, Paul and Michael.\n\nThanks for the thread update, Alvaro. I completely forgot to mention\nthe commit on this thread.\n--\nMichael",
"msg_date": "Fri, 21 Jun 2019 14:33:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: clean up docs for v12"
},
{
"msg_contents": "I made a bunch of changes based on Andres' review and I split some more\nindisputable 1 line changes from the large commit, hoping it will be easier to\nreview both. Several bits and pieces of the patch have been applied piecemeal,\nbut I was hoping to avoid continuing to do that.\n\nI think at least these are also necessary.\nv5-0002-Say-it-more-naturally.patch \nv5-0010-spelling-and-typos.patch \n\nI suggest to anyone reading to look at the large patch last, since its changes\nare longer and less easy to read. Many of the changes are intended to improve\nthe text rather than to fix a definite error.\n\nJustin",
"msg_date": "Tue, 9 Jul 2019 11:12:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: clean up docs for v12"
}
]
[
{
"msg_contents": "Hi,\n\nwhile rebasing the remaining tableam patches (luckily a pretty small set\nnow!), I had a few conflicts with ExecComputeStoredGenerated(). While\nresolving I noticed:\n\n\toldtuple = ExecFetchSlotHeapTuple(slot, true, &should_free);\n\tnewtuple = heap_modify_tuple(oldtuple, tupdesc, values, nulls, replaces);\n\tExecForceStoreHeapTuple(newtuple, slot);\n\tif (should_free)\n\t\theap_freetuple(oldtuple);\n\n\tMemoryContextSwitchTo(oldContext);\n\nFirst off, I'm not convinced this is correct:\n\nISTM you'd need at least an ExecMaterializeSlot() before the\nMemoryContextSwitchTo() in ExecComputeStoredGenerated().\n\nBut what actually brought me to reply was that it seems like it'll cause\nunnecessary slowdowns for !heap AMs. First, it'll form a heaptuple if\nthe slot isn't in that form, and then it'll cause a conversion by\nstoring a heap tuple even if the target doesn't use heap representation.\n\nISTM the above would be much more efficiently - even more efficient if\nonly heap is used - implemented as something roughly akin to:\n\n slot_getallattrs(slot);\n memcpy(values, slot->tts_values, ...);\n memcpy(nulls, slot->tts_isnull, ...);\n\n for (int i = 0; i < natts; i++)\n {\n if (TupleDescAttr(tupdesc, i)->attgenerated == ATTRIBUTE_GENERATED_STORED)\n {\n values[i] = ...\n }\n else\n values[i] = datumCopy(...);\n }\n\n ExecClearTuple(slot);\n memcpy(slot->tts_values, values, ...);\n memcpy(slot->tts_isnull, nulls, ...);\n ExecStoreVirtualTuple(slot);\n ExecMaterializeSlot(slot);\n\nthat's not perfect, but more efficient than your version...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 30 Mar 2019 19:57:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-03-30 19:57:44 -0700, Andres Freund wrote:\n> while rebasing the remaining tableam patches (luckily a pretty small set\n> now!), I had a few conflicts with ExecComputeStoredGenerated(). While\n> resolving I noticed:\n> \n> \toldtuple = ExecFetchSlotHeapTuple(slot, true, &should_free);\n> \tnewtuple = heap_modify_tuple(oldtuple, tupdesc, values, nulls, replaces);\n> \tExecForceStoreHeapTuple(newtuple, slot);\n> \tif (should_free)\n> \t\theap_freetuple(oldtuple);\n> \n> \tMemoryContextSwitchTo(oldContext);\n> \n> First off, I'm not convinced this is correct:\n> \n> ISTM you'd need at least an ExecMaterializeSlot() before the\n> MemoryContextSwitchTo() in ExecComputeStoredGenerated().\n> \n> But what actually brought me to reply was that it seems like it'll cause\n> unnecessary slowdowns for !heap AMs. First, it'll form a heaptuple if\n> the slot isn't in that form, and then it'll cause a conversion by\n> storing a heap tuple even if the target doesn't use heap representation.\n> \n> ISTM the above would be much more efficiently - even more efficient if\n> only heap is used - implemented as something roughly akin to:\n> \n> slot_getallattrs(slot);\n> memcpy(values, slot->tts_values, ...);\n> memcpy(nulls, slot->tts_isnull, ...);\n> \n> for (int i = 0; i < natts; i++)\n> {\n> if (TupleDescAttr(tupdesc, i)->attgenerated == ATTRIBUTE_GENERATED_STORED)\n> {\n> values[i] = ...\n> }\n> else\n> values[i] = datumCopy(...);\n> }\n> \n> ExecClearTuple(slot);\n> memcpy(slot->tts_values, values, ...);\n> memcpy(slot->tts_isnull, nulls, ...);\n> ExecStoreVirtualTuple(slot);\n> ExecMaterializeSlot(slot);\n> \n> that's not perfect, but more efficient than your version...\n\nAlso, have you actually benchmarked this code? ISTM that adding a\nstored generated column would cause quite noticable slowdowns in the\nCOPY path based on this code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 30 Mar 2019 20:00:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On 2019-03-31 04:57, Andres Freund wrote:\n> while rebasing the remaining tableam patches (luckily a pretty small set\n> now!), I had a few conflicts with ExecComputeStoredGenerated(). While\n> resolving I noticed:\n\nThe core of that code was written a long time ago and perhaps hasn't\ncaught up with all the refactoring going on. I'll look through your\nproposal and update the code.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Apr 2019 11:23:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On 2019-03-31 05:00, Andres Freund wrote:\n> Also, have you actually benchmarked this code? ISTM that adding a\n> stored generated column would cause quite noticable slowdowns in the\n> COPY path based on this code.\n\nYes, it'll be slower than not having it, but it's much faster than the\nequivalent trigger.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Apr 2019 11:25:46 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-01 11:25:46 +0200, Peter Eisentraut wrote:\n> On 2019-03-31 05:00, Andres Freund wrote:\n> > Also, have you actually benchmarked this code? ISTM that adding a\n> > stored generated column would cause quite noticable slowdowns in the\n> > COPY path based on this code.\n> \n> Yes, it'll be slower than not having it, but it's much faster than the\n> equivalent trigger.\n\nIt at the moment is quite noticably slower than directly inserting the\ngenerated column.\n\npostgres[11993][1]=# CREATE TABLE foo_without_generated(id int, copy_of_int int);\nCREATE TABLE\nTime: 0.625 ms\npostgres[11993][1]=# CREATE TABLE foo_with_generated(id int, copy_of_int int generated always as (id) stored);\nCREATE TABLE\nTime: 0.771 ms\npostgres[11993][1]=# INSERT INTO foo_without_generated SELECT g.i, g.i FROM generate_series(1, 1000000) g(i);\nINSERT 0 1000000\nTime: 691.533 ms\npostgres[11993][1]=# INSERT INTO foo_with_generated SELECT g.i FROM generate_series(1, 1000000) g(i);\nINSERT 0 1000000\nTime: 825.471 ms\npostgres[11993][1]=# COPY foo_without_generated TO '/tmp/foo_without_generated';\nCOPY 1000000\nTime: 194.051 ms\npostgres[11993][1]=# COPY foo_with_generated TO '/tmp/foo_with_generated';\nCOPY 1000000\nTime: 153.146 ms\npostgres[11993][1]=# ;TRUNCATE foo_without_generated ;COPY foo_without_generated FROM '/tmp/foo_without_generated';\nTime: 0.178 ms\nTRUNCATE TABLE\nTime: 8.456 ms\nCOPY 1000000\nTime: 394.990 ms\npostgres[11993][1]=# ;TRUNCATE foo_with_generated ;COPY foo_with_generated FROM '/tmp/foo_with_generated';\nTime: 0.147 ms\nTRUNCATE TABLE\nTime: 8.043 ms\nCOPY 1000000\nTime: 508.918 ms\n\n From a quick profile that's indeed largely because\nExecComputeStoredGenerated() is really inefficient - and it seems\nlargely unnecessarily so. 
I think this should at least be roughly as\nefficient as getting the additional data from the client.\n\n\nMinor other point: I'm not a fan of defining more general infrastructure\nlike ExecComputedStoredGenerated() in nodeModifyTable.c - it's already\nlarge and confusing, and it's not obvious that e.g. COPY would call into\nit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Apr 2019 14:58:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On 2019-04-01 11:23, Peter Eisentraut wrote:\n> On 2019-03-31 04:57, Andres Freund wrote:\n>> while rebasing the remaining tableam patches (luckily a pretty small set\n>> now!), I had a few conflicts with ExecComputeStoredGenerated(). While\n>> resolving I noticed:\n> \n> The core of that code was written a long time ago and perhaps hasn't\n> caught up with all the refactoring going on. I'll look through your\n> proposal and update the code.\n\nThe attached patch is based on your sketch. It's clearly better in the\nlong term not to rely on heap tuples here. But in testing this change\nseems to make it slightly slower, certainly not a speedup as you were\napparently hoping for.\n\n\nTest setup:\n\ncreate table t0 (a int, b int);\ninsert into t0 select generate_series (1, 10000000); -- 10 million\n\\copy t0 (a) to 'test.dat';\n\n-- for comparison, without generated column\ntruncate t0;\n\\copy t0 (a) from 'test.dat';\n\n-- master\ncreate table t1 (a int, b int generated always as (a * 2) stored);\ntruncate t1;\n\\copy t1 (a) from 'test.dat';\n\n-- patched\ntruncate t1;\n\\copy t1 (a) from 'test.dat';\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 23 Apr 2019 10:23:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On Tue, 23 Apr 2019 at 20:23, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The attached patch is based on your sketch. It's clearly better in the\n> long term not to rely on heap tuples here. But in testing this change\n> seems to make it slightly slower, certainly not a speedup as you were\n> apparently hoping for.\n>\n>\n> Test setup:\n>\n> create table t0 (a int, b int);\n> insert into t0 select generate_series (1, 10000000); -- 10 million\n> \\copy t0 (a) to 'test.dat';\n>\n> -- for comparison, without generated column\n> truncate t0;\n> \\copy t0 (a) from 'test.dat';\n>\n> -- master\n> create table t1 (a int, b int generated always as (a * 2) stored);\n> truncate t1;\n> \\copy t1 (a) from 'test.dat';\n>\n> -- patched\n> truncate t1;\n> \\copy t1 (a) from 'test.dat';\n\nI didn't do the exact same test, but if I use COPY instead of \\copy,\nthen for me patched is faster.\n\nNormal table:\n\n\npostgres=# copy t0 (a) from '/home/drowley/test.dat';\nCOPY 10000000\nTime: 5437.768 ms (00:05.438)\npostgres=# truncate t0;\nTRUNCATE TABLE\nTime: 20.775 ms\npostgres=# copy t0 (a) from '/home/drowley/test.dat';\nCOPY 10000000\nTime: 5272.228 ms (00:05.272)\n\nMaster:\n\npostgres=# copy t1 (a) from '/home/drowley/test.dat';\nCOPY 10000000\nTime: 6570.031 ms (00:06.570)\npostgres=# truncate t1;\nTRUNCATE TABLE\nTime: 17.813 ms\npostgres=# copy t1 (a) from '/home/drowley/test.dat';\nCOPY 10000000\nTime: 6486.253 ms (00:06.486)\n\nPatched:\n\npostgres=# copy t1 (a) from '/home/drowley/test.dat';\nCOPY 10000000\nTime: 5359.338 ms (00:05.359)\npostgres=# truncate table t1;\nTRUNCATE TABLE\nTime: 25.551 ms\npostgres=# copy t1 (a) from '/home/drowley/test.dat';\nCOPY 10000000\nTime: 5347.596 ms (00:05.348)\n\n\nFor the patch, I wonder if you need this line:\n\n+ memcpy(values, slot->tts_values, sizeof(*values) * natts);\n\nIf you got rid of that and changed the datumCopy to use\nslot->tts_values[i] instead.\n\nMaybe it's also worth getting rid 
of the first memcpy for the null\narray and just assign the element in the else clause.\n\nIt might also be cleaner to assign TupleDescAttr(tupdesc, i) to a\nvariable instead of using the macro 3 times. It'd make that datumCopy\nline shorter too.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 24 Apr 2019 10:26:56 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On 2019-04-24 00:26, David Rowley wrote:\n> I didn't do the exact same test, but if I use COPY instead of \\copy,\n> then for me patched is faster.\n\nOK, confirmed that way, too.\n\n> For the patch, I wonder if you need this line:\n> \n> + memcpy(values, slot->tts_values, sizeof(*values) * natts);\n> \n> If you got rid of that and changed the datumCopy to use\n> slot->tts_values[i] instead.\n\ndone\n\n> Maybe it's also worth getting rid of the first memcpy for the null\n> array and just assign the element in the else clause.\n\nTried that, seems to be slower. So I left it as is.\n\n> It might also be cleaner to assign TupleDescAttr(tupdesc, i) to a\n> variable instead of using the macro 3 times. It'd make that datumCopy\n> line shorter too.\n\nAlso done.\n\nUpdated patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 15 May 2019 19:44:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On Thu, 16 May 2019 at 05:44, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Updated patch attached.\n\nThis patch looks okay to me.\n\nIt's not for this patch, or probably for PG12, but it would be good if\nwe could avoid the formation of the Tuple until right before the new\ntuple is inserted.\n\nI see heap_form_tuple() is called 3 times for a single INSERT with:\n\ncreate table t (a text, b text, c text generated always as (b || b) stored);\n\ncreate or replace function t_trigger() returns trigger as $$\nbegin\nNEW.b = UPPER(NEW.a);\nRETURN NEW;\nend;\n$$ language plpgsql;\n\ncreate trigger t_on_insert before insert on t for each row execute\nfunction t_trigger();\n\ninsert into t (a) values('one');\n\nand heap_deform_tuple() is called once for each additional\nheap_form_tuple(). That's pretty wasteful :-(\n\nMaybe Andres can explain if this is really required, or if it's just\nsomething that's not well optimised yet.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 20 May 2019 14:23:34 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-20 14:23:34 +1200, David Rowley wrote:\n> It's not for this patch, or probably for PG12, but it would be good if\n> we could avoid the formation of the Tuple until right before the new\n> tuple is inserted.\n>\n> I see heap_form_tuple() is called 3 times for a single INSERT with:\n>\n> create table t (a text, b text, c text generated always as (b || b) stored);\n>\n> create or replace function t_trigger() returns trigger as $$\n> begin\n> NEW.b = UPPER(NEW.a);\n> RETURN NEW;\n> end;\n> $$ language plpgsql;\n>\n> create trigger t_on_insert before insert on t for each row execute\n> function t_trigger();\n>\n> insert into t (a) values('one');\n>\n> and heap_deform_tuple() is called once for each additional\n> heap_form_tuple(). That's pretty wasteful :-(\n>\n> Maybe Andres can explain if this is really required, or if it's just\n> something that's not well optimised yet.\n\nI think we can optimize this further, but it's not unexpected.\n\nI see:\n\n1) ExecCopySlot() call in in ExecModifyTable(). For INSERT SELECT the\n input will be in a virtual slot. We might be able to have some\n trickery to avoid this one in some case. Not sure how much it'd help\n - I think we most of the time would just move the forming of the\n tuple around - ExecInsert() wants to materialize the slot.\n\n2) plpgsql form/deform due to updating a field. I don't see how we could\n easily fix that. We'd have to invent a mechanism that allows plpgsql to pass\n slots around. I guess it's possible you could make that work somehow?\n But I think we'd also need to change the external trigger interface -\n which currently specifies that the return type is a HeapTuple\n\n3) ExecComputeStoredGenerated(). I suspect it's not particularly useful\n to get rid of the heap_form_tuple (from with ExecMaterialize())\n here. When actually inserting we'll have to actually form the tuple\n anyway. 
But what I do wonder is whether it would make sense to move\n the materialization outside of that function. If there are constraints\n or partitioning, we'll have to deform (parts of) the tuple to access\n the necessary columns.\n\nCurrently materializing an unmaterialized slot (i.e. making it\nindependent from anything but memory referenced by the slot) also means\nthat later accesses will need to deform again. I'm fairly sure we can\nimprove that for many cases (IIRC we don't need to do that for virtual\nslots, but that's irrelevant here).\n\nI'm pretty sure we can get rid of most of this, but it'll be some work. I'm\nalso not sure how important it is - for INSERT/UPDATE, in how many cases\nis the bottleneck those copies, rather than other parts of query\nexecution? I suspect you can measure it for some INSERT ... SELECT type\ncases - but probably the overhead of triggers and GENERATED is going to\nbe higher.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 19:50:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
},
{
"msg_contents": "On 2019-05-20 04:23, David Rowley wrote:\n> On Thu, 16 May 2019 at 05:44, Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> Updated patch attached.\n> \n> This patch looks okay to me.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 18:43:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does ExecComputeStoredGenerated() form a heap tuple"
}
] |
[
{
"msg_contents": "Hi, hackers,\n\nI'm Rui Guo, a PhD student focusing on database at the University of\nCalifornia, Irvine. I'm interested in the \"GiST API advancement\" project\nfor the Google Summer of Code 2019 which is listed at\nhttps://wiki.postgresql.org/wiki/GSoC_2019#GiST_API_advancement_.282019.29 .\n\nI'm still reading about RR*-tree, GiST and the PostgreSQL source code to\nhave a better idea on my proposal. Meanwhile, I have a very basic and\nsimple question:\n\nSince the chooseSubtree() algorithm in both R*-tree and RR*-tree are\nheuristic and somehow greedy (e.g. pick the MBB that needs to enlarge the\nleast), is it possible to apply *machine learning* algorithm to improve it?\nThe only related reference I got is to use deep learning in database join\noperation (https://arxiv.org/abs/1808.03196). Is it not suitable to use\nmachine learning here or someone already did?\n\nThanks,\nRui Guo\n\nHi, hackers,I'm Rui Guo, a PhD student focusing on database at the University of California, Irvine. I'm interested in the \"GiST API advancement\" project for the Google Summer of Code 2019 which is listed at https://wiki.postgresql.org/wiki/GSoC_2019#GiST_API_advancement_.282019.29 .I'm still reading about RR*-tree, GiST and the PostgreSQL source code to have a better idea on my proposal. Meanwhile, I have a very basic and simple question:Since the chooseSubtree() algorithm in both R*-tree and RR*-tree are heuristic and somehow greedy (e.g. pick the MBB that needs to enlarge the least), is it possible to apply machine learning algorithm to improve it? The only related reference I got is to use deep learning in database join operation (https://arxiv.org/abs/1808.03196). Is it not suitable to use machine learning here or someone already did?Thanks,Rui Guo",
"msg_date": "Sun, 31 Mar 2019 02:58:35 -0700",
"msg_from": "GUO Rui <ruig2@uci.edu>",
"msg_from_op": true,
"msg_subject": "Google Summer of Code: question about GiST API advancement project"
},
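The greedy heuristic mentioned in the question above, "pick the MBB that needs to enlarge the least", is the core of the R*-tree chooseSubtree() step. A minimal Python sketch of it follows; the names are illustrative, and in GiST proper this role is played by the per-opclass penalty support function rather than a function with this shape:

```python
def mbb_volume(mbb):
    # mbb is a list of (lo, hi) intervals, one per dimension
    v = 1.0
    for lo, hi in mbb:
        v *= (hi - lo)
    return v

def mbb_union(mbb, point):
    # smallest MBB containing both the box and the new point
    return [(min(lo, x), max(hi, x)) for (lo, hi), x in zip(mbb, point)]

def choose_subtree(entries, point):
    # greedy heuristic: pick the index of the entry whose MBB needs
    # the least volume enlargement to absorb the new point
    def enlargement(mbb):
        return mbb_volume(mbb_union(mbb, point)) - mbb_volume(mbb)
    return min(range(len(entries)), key=lambda i: enlargement(entries[i]))
```

Note that for grid-aligned data (all points sharing one coordinate value) every MBB has zero volume, so this volume-based tie-breaking degenerates, which is one of the cases the RR*-tree refinements address.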
{
"msg_contents": "Hi!\n\n> 31 марта 2019 г., в 14:58, GUO Rui <ruig2@uci.edu> написал(а):\n> \n> I'm Rui Guo, a PhD student focusing on database at the University of California, Irvine. I'm interested in the \"GiST API advancement\" project for the Google Summer of Code 2019 which is listed at https://wiki.postgresql.org/wiki/GSoC_2019#GiST_API_advancement_.282019.29 .\n> \n> I'm still reading about RR*-tree, GiST and the PostgreSQL source code to have a better idea on my proposal. Meanwhile, I have a very basic and simple question:\n> \n> Since the chooseSubtree() algorithm in both R*-tree and RR*-tree are heuristic and somehow greedy (e.g. pick the MBB that needs to enlarge the least), is it possible to apply machine learning algorithm to improve it? The only related reference I got is to use deep learning in database join operation (https://arxiv.org/abs/1808.03196). Is it not suitable to use machine learning here or someone already did?\n\nIf you are interested in ML and DBs you should definitely look into [0]. You do not have to base your proposal on mentor ideas, you can use your own. Implementing learned indexes - seems reasonable.\n\nRR*-tree algorithms are heuristic in some specific parts, but in general they are designed to optimize very clear metrics. Generally, ML algorithms tend to compose much bigger pile of heuristics and solve less mathematically clear tasks than splitting subtrees or choosing subtree for insertion.\nR*-tree algorithms are heuristic only to be faster.\n\nBest regards, Andrey Borodin.\n\n[0] https://arxiv.org/pdf/1712.01208.pdf\n\n",
"msg_date": "Sun, 31 Mar 2019 22:52:31 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
},
{
"msg_contents": "Dear Andrey Borodin,\n\nI discussed the above topic with the professors at my school, and got the\nfollowing points:\n\n1. The case that the volume of an MBB is 0 should be very rare, and if the\ndata is skewed (e.g. only a few nodes have non-NULL value on a dimension)\nthen the data can be pre-proceeded and normalized before it goes to the\ndatabase, thus the storage and query can be much faster;\n2. The performance of the R-tree family may depend on the specific data set\nand even the order of the data insertions, so one algorithm may be better\non one dataset and slower on another, thus the benchmark should include\ndifferent datasets;\n\nI totally agree that by adopting the RR*-tree algorithm we can improve the\nperformance of PostgreSQL. For my proposal, I'll:\n1. Document the benchmarks I found available online (e.g.\nhttps://github.com/ambling/rtree-benchmark), and then state how we'd like\nto generate data ourselves (e.g. data with a Gaussian distribution, or the\nsame dataset but different insertion order...) to test with for a wilder\ncoverage;\n2. Create tools to generate a report on current PostgreSQL performance with\nthe benchmark;\n3. Plan to improve the R-tree and GiST part of PostgreSQL. For the\ndiscussion in the email thread\nhttps://www.postgresql.org/message-id/flat/CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg%40mail.gmail.com#CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg@mail.gmail.com\n<https://www.postgresql.org/message-id/flat/CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg%40mail.gmail.com#CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg@mail.gmail.com>\n, I prefer to do a *scale-based* trick rather than using bits in a float or\ncreating a new struct;\n4. Generate a performance report on PostgreSQL with the above R-tree patch;\nThe following would be marked as *optional*:\n5. Optimize GiST with New APIs (e.g. non-penalty-based choose subtree\nfunction, also discussed in the above email thread);\n6. 
For skewed data, try to warn the user, and then suggest methods to cook\nthe data (e.g. the normalization algorithms in ML); pre-proceeding the data\nshould not be the duty of the database;\n7. Other advanced features of RR*-tree and GiST bulk loading;\n\nAny comments or feedback on the above ideas? I'll work on a draft proposal\nASAP.\n\nMany thanks,\nRui Guo\n\n\n\n\nOn Sun, Mar 31, 2019 at 10:53 AM Andrey Borodin <x4mmm@yandex-team.ru>\nwrote:\n\n> Hi!\n>\n> > 31 марта 2019 г., в 14:58, GUO Rui <ruig2@uci.edu> написал(а):\n> >\n> > I'm Rui Guo, a PhD student focusing on database at the University of\n> California, Irvine. I'm interested in the \"GiST API advancement\" project\n> for the Google Summer of Code 2019 which is listed at\n> https://wiki.postgresql.org/wiki/GSoC_2019#GiST_API_advancement_.282019.29\n> .\n> >\n> > I'm still reading about RR*-tree, GiST and the PostgreSQL source code to\n> have a better idea on my proposal. Meanwhile, I have a very basic and\n> simple question:\n> >\n> > Since the chooseSubtree() algorithm in both R*-tree and RR*-tree are\n> heuristic and somehow greedy (e.g. pick the MBB that needs to enlarge the\n> least), is it possible to apply machine learning algorithm to improve it?\n> The only related reference I got is to use deep learning in database join\n> operation (https://arxiv.org/abs/1808.03196). Is it not suitable to use\n> machine learning here or someone already did?\n>\n> If you are interested in ML and DBs you should definitely look into [0].\n> You do not have to base your proposal on mentor ideas, you can use your\n> own. Implementing learned indexes - seems reasonable.\n>\n> RR*-tree algorithms are heuristic in some specific parts, but in general\n> they are designed to optimize very clear metrics. 
Generally, ML algorithms\n> tend to compose much bigger pile of heuristics and solve less\n> mathematically clear tasks than splitting subtrees or choosing subtree for\n> insertion.\n> R*-tree algorithms are heuristic only to be faster.\n>\n> Best regards, Andrey Borodin.\n>\n> [0] https://arxiv.org/pdf/1712.01208.pdf",
"msg_date": "Wed, 3 Apr 2019 22:55:03 -0700",
"msg_from": "GUO Rui <ruig2@uci.edu>",
"msg_from_op": true,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
},
{
"msg_contents": "I drafted my proposal about the above topic at\nhttps://docs.google.com/document/d/1X7Lw-c0rLYuSjwLNfw6qXpN5Cf1_0u2gXtgEgLkNezA/edit?usp=sharing\n. Looking forward to your feedback.\n\nOn Wed, Apr 3, 2019 at 10:55 PM GUO Rui <ruig2@uci.edu> wrote:\n\n> Dear Andrey Borodin,\n>\n> I discussed the above topic with the professors at my school, and got the\n> following points:\n>\n> 1. The case that the volume of an MBB is 0 should be very rare, and if the\n> data is skewed (e.g. only a few nodes have non-NULL value on a dimension)\n> then the data can be pre-proceeded and normalized before it goes to the\n> database, thus the storage and query can be much faster;\n> 2. The performance of the R-tree family may depend on the specific data\n> set and even the order of the data insertions, so one algorithm may be\n> better on one dataset and slower on another, thus the benchmark should\n> include different datasets;\n>\n> I totally agree that by adopting the RR*-tree algorithm we can improve the\n> performance of PostgreSQL. For my proposal, I'll:\n> 1. Document the benchmarks I found available online (e.g.\n> https://github.com/ambling/rtree-benchmark), and then state how we'd like\n> to generate data ourselves (e.g. data with a Gaussian distribution, or the\n> same dataset but different insertion order...) to test with for a wilder\n> coverage;\n> 2. Create tools to generate a report on current PostgreSQL performance\n> with the benchmark;\n> 3. Plan to improve the R-tree and GiST part of PostgreSQL. 
For the\n> discussion in the email thread\n> https://www.postgresql.org/message-id/flat/CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg%40mail.gmail.com#CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg@mail.gmail.com\n> <https://www.postgresql.org/message-id/flat/CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg%40mail.gmail.com#CAJEAwVFMo-FXaJ6Lkj8Wtb1br0MtBY48EGMVEJBOodROEGykKg@mail.gmail.com>\n> , I prefer to do a *scale-based* trick rather than using bits in a float\n> or creating a new struct;\n> 4. Generate a performance report on PostgreSQL with the above R-tree patch;\n> The following would be marked as *optional*:\n> 5. Optimize GiST with New APIs (e.g. non-penalty-based choose subtree\n> function, also discussed in the above email thread);\n> 6. For skewed data, try to warn the user, and then suggest methods to cook\n> the data (e.g. the normalization algorithms in ML); pre-proceeding the data\n> should not be the duty of the database;\n> 7. Other advanced features of RR*-tree and GiST bulk loading;\n>\n> Any comments or feedback on the above ideas? I'll work on a draft proposal\n> ASAP.\n>\n> Many thanks,\n> Rui Guo\n>\n>\n>\n>\n> On Sun, Mar 31, 2019 at 10:53 AM Andrey Borodin <x4mmm@yandex-team.ru>\n> wrote:\n>\n>> Hi!\n>>\n>> > 31 марта 2019 г., в 14:58, GUO Rui <ruig2@uci.edu> написал(а):\n>> >\n>> > I'm Rui Guo, a PhD student focusing on database at the University of\n>> California, Irvine. I'm interested in the \"GiST API advancement\" project\n>> for the Google Summer of Code 2019 which is listed at\n>> https://wiki.postgresql.org/wiki/GSoC_2019#GiST_API_advancement_.282019.29\n>> .\n>> >\n>> > I'm still reading about RR*-tree, GiST and the PostgreSQL source code\n>> to have a better idea on my proposal. Meanwhile, I have a very basic and\n>> simple question:\n>> >\n>> > Since the chooseSubtree() algorithm in both R*-tree and RR*-tree are\n>> heuristic and somehow greedy (e.g. 
pick the MBB that needs to enlarge the\n>> least), is it possible to apply machine learning algorithm to improve it?\n>> The only related reference I got is to use deep learning in database join\n>> operation (https://arxiv.org/abs/1808.03196). Is it not suitable to use\n>> machine learning here or someone already did?\n>>\n>> If you are interested in ML and DBs you should definitely look into [0].\n>> You do not have to base your proposal on mentor ideas, you can use your\n>> own. Implementing learned indexes - seems reasonable.\n>>\n>> RR*-tree algorithms are heuristic in some specific parts, but in general\n>> they are designed to optimize very clear metrics. Generally, ML algorithms\n>> tend to compose much bigger pile of heuristics and solve less\n>> mathematically clear tasks than splitting subtrees or choosing subtree for\n>> insertion.\n>> R*-tree algorithms are heuristic only to be faster.\n>>\n>> Best regards, Andrey Borodin.\n>>\n>> [0] https://arxiv.org/pdf/1712.01208.pdf\n>\n>",
"msg_date": "Fri, 5 Apr 2019 02:07:10 -0700",
"msg_from": "GUO Rui <ruig2@uci.edu>",
"msg_from_op": true,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
},
{
"msg_contents": "Hi!\n\n> 5 апр. 2019 г., в 14:07, GUO Rui <ruig2@uci.edu> написал(а):\n> \n> I drafted my proposal about the above topic at https://docs.google.com/document/d/1X7Lw-c0rLYuSjwLNfw6qXpN5Cf1_0u2gXtgEgLkNezA/edit?usp=sharing . Looking forward to your feedback.\nI'd recommend planning some time to review other patches. If you plan to post your patches to commitfest, you are expected to review work some work of comparable complexity of others.\n\n> 1. The case that the volume of an MBB is 0 should be very rare\nNope, if you index data points tend to cluster on grid-alligned planes (which happens with geo data, all points have same height above ocean) if always have zero volume of MBB.\n\nBTW look at PostGIS, they are main GiST users, so indexing their data is important.\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Fri, 5 Apr 2019 15:13:29 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
},
{
"msg_contents": "I added more details about GiST use cases (PostGIS and ScalaGiST) in my\nproposal and created one more entry for reviewing other patches in the time\ntable.\n\nI'll try to polish the proposal in the remaining three days to the GSoC\ndeadline. I don't think I have much time to modify PostgreSQL source code\nand test against it myself though ;(. Many thanks to your feedback.\n\nOn Fri, Apr 5, 2019 at 3:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> Hi!\n>\n> > 5 апр. 2019 г., в 14:07, GUO Rui <ruig2@uci.edu> написал(а):\n> >\n> > I drafted my proposal about the above topic at\n> https://docs.google.com/document/d/1X7Lw-c0rLYuSjwLNfw6qXpN5Cf1_0u2gXtgEgLkNezA/edit?usp=sharing\n> . Looking forward to your feedback.\n> I'd recommend planning some time to review other patches. If you plan to\n> post your patches to commitfest, you are expected to review work some work\n> of comparable complexity of others.\n>\n> > 1. The case that the volume of an MBB is 0 should be very rare\n> Nope, if you index data points tend to cluster on grid-alligned planes\n> (which happens with geo data, all points have same height above ocean) if\n> always have zero volume of MBB.\n>\n> BTW look at PostGIS, they are main GiST users, so indexing their data is\n> important.\n>\n> Best regards, Andrey Borodin.\n>\n>\n\nI added more details about GiST use cases (PostGIS and ScalaGiST) in my proposal and created one more entry for reviewing other patches in the time table.I'll try to polish the proposal in the remaining three days to the GSoC deadline. I don't think I have much time to modify PostgreSQL source code and test against it myself though ;(. Many thanks to your feedback.On Fri, Apr 5, 2019 at 3:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:Hi!\n\n> 5 апр. 2019 г., в 14:07, GUO Rui <ruig2@uci.edu> написал(а):\n> \n> I drafted my proposal about the above topic at https://docs.google.com/document/d/1X7Lw-c0rLYuSjwLNfw6qXpN5Cf1_0u2gXtgEgLkNezA/edit?usp=sharing . 
Looking forward to your feedback.\nI'd recommend planning some time to review other patches. If you plan to post your patches to commitfest, you are expected to review work some work of comparable complexity of others.\n\n> 1. The case that the volume of an MBB is 0 should be very rare\nNope, if you index data points tend to cluster on grid-alligned planes (which happens with geo data, all points have same height above ocean) if always have zero volume of MBB.\n\nBTW look at PostGIS, they are main GiST users, so indexing their data is important.\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 5 Apr 2019 18:38:09 -0700",
"msg_from": "GUO Rui <ruig2@uci.edu>",
"msg_from_op": true,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
},
{
"msg_contents": "Hi!\n\n> 6 апр. 2019 г., в 6:38, GUO Rui <ruig2@uci.edu> написал(а):\n> \n> I added more details about GiST use cases (PostGIS and ScalaGiST) in my proposal and created one more entry for reviewing other patches in the time table.\n\nScalaGiST is quite distant project. It is not PostgreSQL part, it is MR subsystem.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 6 Apr 2019 10:43:45 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
},
{
"msg_contents": "Yes, it is a different project, and we cannot run it on top of PostgreSQL\ndirectly.\n\nMaybe we can learn from it by:\n1. Study its benchmark. The benchmark used is YCSB [1], and maybe we can\ngenerate data and run queries in a similar way as YCSB in our project. YCSB\nalready discussed the order to insert data and the distribution models of\ndata.\n2. Study how they use R-tree and GiST in their system, and perhaps we can\nbe inspired and be able to borrow some ideas when we implement our patch.\n\nI'm still learning the codebase of PostgreSQL and how R-tree/GiST can be\nimplemented. Does that make sense?\n\n[1] Cooper, Brian F., et al. \"Benchmarking cloud serving systems with\nYCSB.\" *Proceedings of the 1st ACM symposium on Cloud computing*. ACM, 2010.\n\nOn Fri, Apr 5, 2019 at 10:44 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> Hi!\n>\n> > 6 апр. 2019 г., в 6:38, GUO Rui <ruig2@uci.edu> написал(а):\n> >\n> > I added more details about GiST use cases (PostGIS and ScalaGiST) in my\n> proposal and created one more entry for reviewing other patches in the time\n> table.\n>\n> ScalaGiST is quite distant project. It is not PostgreSQL part, it is MR\n> subsystem.\n>\n> Best regards, Andrey Borodin.\n\nYes, it is a different project, and we cannot run it on top of PostgreSQL directly.Maybe we can learn from it by:1. Study its benchmark. The benchmark used is YCSB [1], and maybe we can generate data and run queries in a similar way as YCSB in our project. YCSB already discussed the order to insert data and the distribution models of data.2. Study how they use R-tree and GiST in their system, and perhaps we can be inspired and be able to borrow some ideas when we implement our patch. I'm still learning the codebase of PostgreSQL and how R-tree/GiST can be implemented. Does that make sense?[1] Cooper, Brian F., et al. \"Benchmarking cloud serving systems with YCSB.\" Proceedings of the 1st ACM symposium on Cloud computing. 
ACM, 2010.On Fri, Apr 5, 2019 at 10:44 PM Andrey Borodin <x4mmm@yandex-team.ru> wrote:Hi!\n\n> 6 апр. 2019 г., в 6:38, GUO Rui <ruig2@uci.edu> написал(а):\n> \n> I added more details about GiST use cases (PostGIS and ScalaGiST) in my proposal and created one more entry for reviewing other patches in the time table.\n\nScalaGiST is quite distant project. It is not PostgreSQL part, it is MR subsystem.\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 6 Apr 2019 00:26:52 -0700",
"msg_from": "GUO Rui <ruig2@uci.edu>",
"msg_from_op": true,
"msg_subject": "Re: Google Summer of Code: question about GiST API advancement\n project"
}
] |
[
{
"msg_contents": "Hello everyone!\nThank you for your interest in this topic.\n\nI would like to propose a Compressed Storage Manager for PostgreSQL.\n\nThe problem:\nWhen you store log-like data or time-series data in your tables, you\nmay face high disk space consumption simply because of the volume of\ndata. It is a good idea to compress such tables, especially if the\ndata is compressible and the usage scenario is OLAP or WORM (write\nonce, read many).\n\nCurrent ways to solve this problem:\nToday this can be solved with a compressing file system such as BTRFS\nor ZFS. This approach has a mixed impact on performance and\ncomplicates administration.\n\nOther DBs' approaches:\nPostgres Pro Enterprise has CFS [1][2] built in for this purpose.\nMySQL InnoDB has two compression options - table-level compression\n(zlib only) [3] and transparent page compression (zlib, LZ4) [4]\nvia hole punching [5].\n\nMy proposal:\nImplement an LZ4 Compressed Storage Manager. It should compress pages\non writing to block files and decompress them on reading. I would like\nto start with LZ4 because it has low CPU overhead and is available\nunder the BSD 2-clause license.\n\nCompressed Storage Manager operation description (TLDR: the algorithm\ncould be similar to MySQL table-level compression):\n - It should store compressed pages in a block file, but because\n compressed pages vary in size, it needs an additional file holding\n the offset of each page.\n - When it reads a page, it translates the upper PostgreSQL layers'\n file/offset request into the actual page offset, reads the\n compressed page bytes, decompresses them and fills the requested\n buffer with the decompressed page.\n - Writing a new page is quite simple: compress the page, append it\n to the block file and record the page offset in the pointers\n file.\n - When a changed page must be written, it has to check whether the\n compressed page is smaller than or equal to the previous version.\n If it is bigger, the page should be written to the end of the\n block file and the page pointer updated. The old page version\n becomes dead.\n - A free-space release mechanism is possible; for instance,\n MySQL uses hole punching (which has a mixed impact on\n performance [6]). Initially, dead pages could be freed\n via VACUUM FULL.\n\n pointers file\n +====+====+====+\n | p1 | p2 | p3 | \n +=|==+==|=+==|=+ \n | | |_________________________________\n | |____________________ |\n | | | block file\n +=|======+=================+=|===============+=|==================+\n | p1 len | p1 ####data#### | p2 len | p2 #d# | p3 len | p3 #data# |\n +========+=================+=================+====================+\n\n\nTest of possible compression (database [7], table ticket_flights [8]):\n 547M 47087 <- uncompressed\n 200M 47087.lz4.1.pages.compressed <-- pages compression (37%)\n\nPros:\n- decreases disk space usage\n- decreases disk reads\nCons:\n- possibly increases random-access I/O\n- increases CPU usage\n- possible conflicts with PostgreSQL expectations \n of Storage Manager behaviour\n- could conflict with the pg_basebackup and pg_upgrade utilities\n- compression requires additional memory\n\nWhy implement it at the Storage Manager level instead of using the\nPluggable storage API [9]?\n - From my perspective, a Storage Manager level implementation\n allows focusing on the actual I/O operations and compression,\n and permits a much simpler implementation, because the\n Pluggable storage API forces you to implement more complex\n interfaces. To be honest, I am really hesitant about this point,\n especially because the Pluggable storage API allows creating an\n extension without core code modification and potentially allows\n using more effective compression algorithms (a Table Access\n Manager gives you more information about the stored data).\n\nI would like to implement a proof of concept\nand have a couple of questions:\n - What is your opinion on the necessity of this feature\n (a Compressed Storage Manager)?\n - Is it a good idea to implement DB compression at the Storage\n Manager level? Perhaps it is better to use the Pluggable\n storage API.\n - Is there any reason to reject this proposal?\n - Are there any circumstances that would prevent implementing a\n Compressed Storage Manager?\n\nRegards,\nNikolay P.\n\n[1] - https://postgrespro.com/docs/enterprise/9.6/cfs\n[2] - https://afiskon.github.io/static/2017/postgresql-in-core-compression-pgconf2017.pdf (page 17)\n[3] - https://dev.mysql.com/doc/refman/8.0/en/innodb-table-compression.html\n[4] - https://dev.mysql.com/doc/refman/8.0/en/innodb-page-compression.html\n[5] - https://lwn.net/Articles/415889/\n[6] - https://www.percona.com/blog/2017/11/20/innodb-page-compression/\n[7] - https://postgrespro.com/education/demodb\n[8] - https://postgrespro.com/docs/postgrespro/10/apjs02\n[9] - https://commitfest.postgresql.org/22/1283/\n\n\n\n",
"msg_date": "Sun, 31 Mar 2019 17:25:51 +0300",
    "msg_from": "Николай Петров <nik.petrov.ua@yandex.ru>",
"msg_from_op": true,
"msg_subject": "[HACKERS][Proposal] LZ4 Compressed Storage Manager"
},
{
"msg_contents": "31.03.2019, 17:26, \"Nikolay Petrov\" <nik.petrov.ua@yandex.ru>:\n> Hello everyone!\n> Thank you for your interest to this topic.\n>\n> I would like to propose Compressed Storage Manager for PostgreSQL.\n\nPrevious thread here \nhttps://www.postgresql.org/message-id/flat/op.ux8if71gcigqcu%40soyouz\nAnd the result of previous investigation \nhttps://www.postgresql.org/message-id/op.uyhszpgkcke6l8%40soyouz\n\nRegards, \nNikolay P.\n\n\n",
"msg_date": "Sun, 31 Mar 2019 19:51:54 +0300",
"msg_from": "Nikolay Petrov <nik.petrov.ua@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager"
},
{
"msg_contents": "On 31.03.2019 17:25, Николай Петров wrote:\n> Hello everyone!\n> Thank you for your interest to this topic.\n>\n> I would like to propose Compressed Storage Manager for PostgreSQL.\n>\n> The problem:\n> In cases when you store some log-like data in your tables, or when you\n> store time-series data you may face with high disk space consumption\n> because of a lot of data. It is a good idea to compress tables,\n> especially if you have a compressible data and OLAP\n> WORM (write once read many) usage scenarios.\n>\n> Current ways to solve this problem:\n> Now this could be solved via a compressible file system such as BTRFS\n> or ZFS. This approach has a contradictory impact on performance and\n> connected with difficulties of administration.\n>\n> Other's DB approaches:\n> Postgres Pro Enterprise has embedded CFS [1][2] for this purposes.\n> MySQL InnoDB has two options of compression - table level compression\n> (zlib only) [3] and transparency pages compression (zlib, LZ4) [4]\n> via hole punching [5].\n>\n> My offer:\n> Implement LZ4 Compressed Storage Manager. It should compress pages on\n> writing to block files and decompress on reading. 
I would like to\n> offer LZ4 at first, because it has low CPU consumption and it is\n> available under BSD 2 clause license.\n>\n> Compressed Storage Manager operation description (TLDR: algorithm could\n> be similar to MySQL table level compression):\n> - It should store compressed pages in a block file, but because of\n> different size of compressed data, it should have an additional\n> file with offset for each pages.\n> - When it reads a page, it translates upper PostgreSQL layers\n> file/offset query to actual page offset, read compressed page\n> bytes, decompress them and fill the requested buffer with\n> decompressed page.\n> - New pages writing quite a simple, it has to compress the page,\n> write it to block file and write page offset into a file with\n> pointers.\n> - In cases when it's necessary to write changed page, it has to\n> check that the size of the compressed page smaller or equal to\n> previous version. If it's bigger, it is should to write page\n> to the end of the block file and change the page pointer. The\n> old page version became dead.\n> - There is an ability to make free space release mechanism, for instance,\n> MySQL use hole punching (what contradictory impact on\n> performance [6]). 
At first time dead pages could be freed\n> via VACUUM FULL.\n>\n> pointers file\n> +====+====+====+\n> | p1 | p2 | p3 |\n> +=|==+==|=+==|=+\n> | | |_________________________________\n> | |____________________ |\n> | | | block file\n> +=|======+=================+=|===============+=|==================+\n> | p1 len | p1 ####data#### | p2 len | p2 #d# | p3 len | p3 #data# |\n> +========+=================+=================+====================+\n>\n>\n> Test of possible compression (database [7], table ticket_flights [8]):\n> 547M 47087 <- uncompressed\n> 200M 47087.lz4.1.pages.compressed <-- pages compression (37%)\n>\n> Pros:\n> - decreases disk space usage\n> - decreases disk reads\n> Cons:\n> - possible increases random access I/O\n> - increases CPU usage\n> - possible conflicts with PostgreSQL expectations\n> of Storage Manager behaviour\n> - could conflict with pg_basebackup and pg_upgrade utilities\n> - compression requires additional memory\n>\n> Why it should be implemented on Storage Manager level instead of usage\n> Pluggable storage API [9]?\n> - From my perspective view Storage Manager level implementation\n> allows to focus on proper I/O operations and compression.\n> It allows to write much more simple realization. It's because of\n> Pluggable storage API force you to implement more complex\n> interfaces. To be honest, I am really hesitating about this point,\n> especially because of Pluggable storage API allows to create\n> extension without core code modification and it potentially allows\n> to use more perfective compression algorithms (Table Access Manager\n> allows you to get more information about storing data).\n>\n> I would like to implement a proof of concept\n> and have a couple of questions:\n> - your opinion about necessity of this feature\n> (Compressed Storage Manager)\n> - Is it good idea to implement DB compression on Storage Manager\n> level? 
Perhaps it is better to use Pluggable storage API.\n> - Is there any reason to refuse this proposal?\n> - Are there any circumstances what didn't allow to implement\n> Compressed Storage Manager?\n>\n> Regards,\n> Nikolay P.\n>\n> [1] - https://postgrespro.com/docs/enterprise/9.6/cfs\n> [2] - https://afiskon.github.io/static/2017/postgresql-in-core-compression-pgconf2017.pdf (page 17)\n> [3] - https://dev.mysql.com/doc/refman/8.0/en/innodb-table-compression.html\n> [4] - https://dev.mysql.com/doc/refman/8.0/en/innodb-page-compression.html\n> [5] - https://lwn.net/Articles/415889/\n> [6] - https://www.percona.com/blog/2017/11/20/innodb-page-compression/\n> [7] - https://postgrespro.com/education/demodb\n> [8] - https://postgrespro.com/docs/postgrespro/10/apjs02\n> [9] - https://commitfest.postgresql.org/22/1283/\n>\n>\n>\n\nI can share my experience of developing CFS for Postgres Pro.\nFirst of all, I want to note that it will most likely not be possible\nto isolate all changes in Postgres at the Storage Manager level.\nThere are many places in Postgres (basebackup, vacuum, ...) which make\nassumptions about the content of the Postgres data directory.\nSo if a compressed storage manager provides an alternative file\nlayout, then other parts of Postgres have to know about it.\n\nThe most difficult thing in CFS development is certainly\ndefragmentation. In CFS it is done using background garbage\ncollection, by one or more GC worker processes. The main challenges\nwere to minimize its interference with normal operation of the system,\nmake it fault tolerant, and prevent unlimited growth of data segments.\n\nCFS does not introduce its own storage manager; it is mostly embedded\nin the existing Postgres file access layer (fd.c, md.c). This allows\nreusing the code responsible for mapping relations and the file\ndescriptor cache. As was recently discussed on -hackers, it may be a\ngood idea to separate the questions \"how to map blocks to filenames\nand offsets\" and \"how to actually perform IO\". That would make it\neasier to implement a compressed storage manager.\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 1 Apr 2019 12:30:22 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager"
},
{
"msg_contents": "On Sun, Mar 31, 2019 at 05:25:51PM +0300, Николай Петров wrote:\n> Why it should be implemented on Storage Manager level instead of usage\n> Pluggable storage API [9]?\n> - From my perspective view Storage Manager level implementation \n> allows to focus on proper I/O operations and compression. \n> It allows to write much more simple realization. It's because of \n> Pluggable storage API force you to implement more complex \n> interfaces. To be honest, I am really hesitating about this point, \n> especially because of Pluggable storage API allows to create \n> extension without core code modification and it potentially allows \n> to use more perfective compression algorithms (Table Access Manager\n> allows you to get more information about storing data). \n> \n> I would like to implement a proof of concept \n> and have a couple of questions:\n> - your opinion about necessity of this feature \n> (Compressed Storage Manager)\n> - Is it good idea to implement DB compression on Storage Manager \n> level? Perhaps it is better to use Pluggable storage API.\n> - Is there any reason to refuse this proposal?\n> - Are there any circumstances what didn't allow to implement \n> Compressed Storage Manager?\n\nStepping back a bit, there are several levels of compression:\n\n1. single field\n2. across all fields in a row\n3. across rows on a single page\n4. across all rows in a table\n5. across tables in a database\n\nWe currently do #1 with TOAST, and your approach would allow the first\nthree. #4 feels like it is getting near the features of columnar\nstorage. 
I think it is unclear if adding #2 and #3 produce enough of a\nbenefit to warrant special storage, given the complexity and overhead of\nimplementing it.\n\nI do think the Pluggable storage API is the right approach, and, if you\nare going to go that route, adding #4 compression seems very worthwhile.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 11 Apr 2019 12:18:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager"
},
{
"msg_contents": "čt 11. 4. 2019 v 18:18 odesílatel Bruce Momjian <bruce@momjian.us> napsal:\n\n> On Sun, Mar 31, 2019 at 05:25:51PM +0300, Николай Петров wrote:\n> > Why it should be implemented on Storage Manager level instead of usage\n> > Pluggable storage API [9]?\n> > - From my perspective view Storage Manager level implementation\n> > allows to focus on proper I/O operations and compression.\n> > It allows to write much more simple realization. It's because of\n> > Pluggable storage API force you to implement more complex\n> > interfaces. To be honest, I am really hesitating about this point,\n> > especially because of Pluggable storage API allows to create\n> > extension without core code modification and it potentially allows\n> > to use more perfective compression algorithms (Table Access Manager\n> > allows you to get more information about storing data).\n> >\n> > I would like to implement a proof of concept\n> > and have a couple of questions:\n> > - your opinion about necessity of this feature\n> > (Compressed Storage Manager)\n> > - Is it good idea to implement DB compression on Storage Manager\n> > level? Perhaps it is better to use Pluggable storage API.\n> > - Is there any reason to refuse this proposal?\n> > - Are there any circumstances what didn't allow to implement\n> > Compressed Storage Manager?\n>\n> Stepping back a bit, there are several levels of compression:\n>\n> 1. single field\n> 2. across all fields in a row\n> 3. across rows on a single page\n> 4. across all rows in a table\n> 5. across tables in a database\n>\n\n> We currently do #1 with TOAST, and your approach would allow the first\n> three. #4 feels like it is getting near the features of columnar\n> storage. I think it is unclear if adding #2 and #3 produce enough of a\n> benefit to warrant special storage, given the complexity and overhead of\n> implementing it.\n>\n\n@4 compression over columns on page are probably much more effective. 
But\ncompression over columns on a page could be preceded by a\npreprocessing stage where rows are transformed to columns.\n\nThis doesn't need a real column store, and can help a lot. A real\ncolumn store makes sense when columns are separated into different\npages. But for compression, we can transform rows to columns without\nreal column storage.\n\nProbably an 8kB page is too small for this case.\n\nRegards\n\nPavel\n\n\n\n\n\n> I do think the Pluggable storage API is the right approach, and, if you\n> are going to go that route, adding #4 compression seems very worthwhile.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + As you are, so once was I. As I am, so you will be. +\n> + Ancient Roman grave inscription +\n>\n>\n>\n",
"msg_date": "Thu, 11 Apr 2019 18:31:19 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS][Proposal] LZ4 Compressed Storage Manager"
}
] |
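The append-or-overwrite pointer-file scheme debated in the thread above can be sketched with a tiny in-memory model: a pointer table maps each logical page number to the offset and length of its compressed image in the block file; a rewrite that shrinks is done in place, while one that grows is appended and repointed, leaving a dead page behind. All names here (`PagePtr`, `csm_write_page`, `csm_read_page`) are hypothetical illustrations, not PostgreSQL APIs, and the actual LZ4 compression/decompression calls are elided.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative in-memory model of the proposed scheme; the real thing
 * would operate on files and run LZ4 over page images. */
#define MAX_PAGES 16
#define FILE_CAP  4096

typedef struct { uint32_t off; uint32_t len; } PagePtr;

static PagePtr  ptrs[MAX_PAGES];      /* the "pointers file" */
static uint8_t  blockfile[FILE_CAP];  /* the "block file" */
static uint32_t file_end;             /* current append position */

/* Write a compressed image of logical page blkno.  If the new image
 * fits in the existing slot, overwrite in place; otherwise append at
 * the end and repoint, turning the old image into a dead page. */
static void csm_write_page(int blkno, const uint8_t *cdata, uint32_t clen)
{
    if (ptrs[blkno].len != 0 && clen <= ptrs[blkno].len)
    {
        memcpy(blockfile + ptrs[blkno].off, cdata, clen);
        ptrs[blkno].len = clen;
    }
    else
    {
        memcpy(blockfile + file_end, cdata, clen);
        ptrs[blkno].off = file_end;
        ptrs[blkno].len = clen;
        file_end += clen;
    }
}

/* Read path: translate blkno to (off, len) and fetch the compressed
 * bytes; the caller would LZ4-decompress them into a page buffer. */
static uint32_t csm_read_page(int blkno, uint8_t *dst)
{
    memcpy(dst, blockfile + ptrs[blkno].off, ptrs[blkno].len);
    return ptrs[blkno].len;
}
```

A grown page is relocated to the end of the block file; the hole it leaves behind is exactly the dead space that the defragmentation/GC discussion in the thread is about.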
[
{
"msg_contents": "Hello,\n\nBuilding on the excellent work begun by commit e529cd4ffa60, I would\nlike to propose a do-what-I-mean mode for psql. Please find a POC\npatch attached. It works like this:\n\npostgres=# select datnaam from pg_database where ooid = 12917;\nERROR: column \"datnaam\" does not exist\nLINE 1: select datnaam from pg_database where ooid = 12917;\n ^\nHINT: Perhaps you meant to reference the column \"pg_database.datname\".\npostgres=# YES\n datname\n----------\n postgres\n(1 row)\n\nAs you can see, by \"shouting\" a new keyword at the computer, it will\ntake its own hint and run the corrected query. To avoid having to do\nthis in two steps, you can also shout the whole query for the same\neffect:\n\npostgres=# SELECT DATNAAM FROM PG_DATABASE WHERE OOID = 12917;\n datname\n----------\n postgres\n(1 row)\n\nThe next version will be able to fix permissions problems and override\nerrors automatically as follows, though that is proving trickier to\nget working. Example:\n\npostgres=# SUDO DROP TABLE PG_DATABASS;\nNO CARRIER\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Mon, 1 Apr 2019 09:52:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "DWIM mode for psql"
},
{
"msg_contents": "On 2019-04-01 09:52:34 +1300, Thomas Munro wrote:\n> +/*\n> + * This program is free software: you can redistribute it and/or modify\n> + * it under the terms of the GNU General Public License as published by\n> + * the Free Software Foundation, either version 3 of the License, or\n> + * (at your option) any later version.\n\nIndentation bug. You really need to work a bit more careful.\n\n\n",
"msg_date": "Sun, 31 Mar 2019 14:04:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: DWIM mode for psql"
},
{
"msg_contents": "On Sun, Mar 31, 2019 at 5:04 PM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-04-01 09:52:34 +1300, Thomas Munro wrote:\n> > +/*\n> > + * This program is free software: you can redistribute it and/or modify\n> > + * it under the terms of the GNU General Public License as published by\n> > + * the Free Software Foundation, either version 3 of the License, or\n> > + * (at your option) any later version.\n>\n> Indentation bug. You really need to work a bit more careful.\n>\n\nThe patch applies cleanly, and passes \"make check\", but it generated an\nexecutable called \"mongodb\".\nShould I have run \"make maintainer-clean\" first?",
"msg_date": "Sun, 31 Mar 2019 17:32:23 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DWIM mode for psql"
},
{
"msg_contents": "On 3/31/19 10:52 PM, Thomas Munro wrote:> Building on the excellent work \nbegun by commit e529cd4ffa60, I would\n> like to propose a do-what-I-mean mode for psql. Please find a POC\n> patch attached. It works like this:\n> \n> postgres=# select datnaam from pg_database where ooid = 12917;\n> ERROR: column \"datnaam\" does not exist\n> LINE 1: select datnaam from pg_database where ooid = 12917;\n> ^\n> HINT: Perhaps you meant to reference the column \"pg_database.datname\".\n> postgres=# YES\n> datname\n> ----------\n> postgres\n> (1 row)\n\nI think it is potentially confusing that YES and NO does not look like \nother psql commands. Let's pick something which is more in line with \nexisting commands like \\y and \\n.\n\nAndreas\n\n\n",
"msg_date": "Sun, 31 Mar 2019 23:49:51 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: DWIM mode for psql"
},
{
"msg_contents": "Hi Thomas,\n\nThanks for working on this.\n\nOn Mon, Apr 1, 2019 at 5:53 Thomas Munro <thomas.munro@gmail.com> wrote:\n\nHello,\n>\n> Building on the excellent work begun by commit e529cd4ffa60, I would\n> like to propose a do-what-I-mean mode for psql. Please find a POC\n> patch attached. It works like this:\n>\n> postgres=# select datnaam from pg_database where ooid = 12917;\n> ERROR: column \"datnaam\" does not exist\n> LINE 1: select datnaam from pg_database where ooid = 12917;\n> ^\n> HINT: Perhaps you meant to reference the column \"pg_database.datname\".\n> postgres=# YES\n> datname\n> ----------\n> postgres\n> (1 row)\n>\n> As you can see, by \"shouting\" a new keyword at the computer, it will\n> take its own hint and run the corrected query. To avoid having to do\n> this in two steps, you can also shout the whole query for the same\n> effect:\n>\n> postgres=# SELECT DATNAAM FROM PG_DATABASE WHERE OOID = 12917;\n> datname\n> ----------\n> postgres\n> (1 row)\n\n\nNeat.\n\nThe next version will be able to fix permissions problems and override\n> errors automatically as follows, though that is proving trickier to\n> get working. Example:\n>\n> postgres=# SUDO DROP TABLE PG_DATABASS;\n> NO CARRIER\n\n\nHave you tried rebooting the machine?\n\nThanks,\nAmit\n\n>\n",
"msg_date": "Mon, 1 Apr 2019 10:12:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DWIM mode for psql"
},
{
"msg_contents": "Andreas Karlsson wrote\n> On 3/31/19 10:52 PM, Thomas Munro wrote:> Building on the excellent work \n> begun by commit e529cd4ffa60, I would\n>> like to propose a do-what-I-mean mode for psql. Please find a POC\n>> patch attached. It works like this:\n>> \n>> postgres=# select datnaam from pg_database where ooid = 12917;\n>> ERROR: column \"datnaam\" does not exist\n>> LINE 1: select datnaam from pg_database where ooid = 12917;\n>> ^\n>> HINT: Perhaps you meant to reference the column \"pg_database.datname\".\n>> postgres=# YES\n>> datname\n>> ----------\n>> postgres\n>> (1 row)\n> \n> I think it is potentially confusing that YES and NO does not look like \n> other psql commands. Let's pick something which is more in line with \n> existing commands like \\y and \\n.\n> \n> Andreas\n\n+1\nRegards\n>-)))°>\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Mon, 1 Apr 2019 13:48:27 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DWIM mode for psql"
}
] |
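Joke aside, the HINT that the thread's example session shows is real: when a column name is not found, the server ranks the available column names by string distance and suggests the closest one. A rough sketch of that idea, assuming a plain Levenshtein distance over short ASCII identifiers (the in-core implementation is more elaborate and handles multibyte strings; `lev` and `suggest` are illustrative names, not PostgreSQL functions):

```c
#include <assert.h>
#include <string.h>

/* Classic dynamic-programming Levenshtein distance; assumes
 * identifiers shorter than 64 bytes. */
static int lev(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    int d[64][64];

    for (size_t i = 0; i <= la; i++) d[i][0] = (int) i;
    for (size_t j = 0; j <= lb; j++) d[0][j] = (int) j;
    for (size_t i = 1; i <= la; i++)
        for (size_t j = 1; j <= lb; j++)
        {
            int sub = d[i - 1][j - 1] + (a[i - 1] != b[j - 1]);
            int del = d[i - 1][j] + 1;
            int ins = d[i][j - 1] + 1;
            int m = sub < del ? sub : del;
            d[i][j] = m < ins ? m : ins;
        }
    return d[la][lb];
}

/* Return the known column closest to the misspelled one -- the basis
 * of the "Perhaps you meant ..." HINT quoted in the thread. */
static const char *suggest(const char *typo, const char **cols, int n)
{
    const char *best = NULL;
    int bestd = 1 << 30;

    for (int i = 0; i < n; i++)
    {
        int dd = lev(typo, cols[i]);
        if (dd < bestd)
        {
            bestd = dd;
            best = cols[i];
        }
    }
    return best;
}
```

With `pg_database`'s column names as candidates, the typo "datnaam" is closest to "datname", matching the hint in the example.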
[
{
"msg_contents": "In copydir.c:copy_file() I read\n\n\t/* Use palloc to ensure we get a maxaligned buffer */\n\tbuffer = palloc(COPY_BUF_SIZE);\n\nNo data type wider than a single byte is used to access the data in the\nbuffer, and neither read() nor write() should require any specific alignment.\nCan someone please explain why alignment matters here?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 01 Apr 2019 10:01:05 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Question on alignment"
},
{
"msg_contents": "On 01/04/2019 11:01, Antonin Houska wrote:\n> In copydir.c:copy_file() I read\n> \n> \t/* Use palloc to ensure we get a maxaligned buffer */\n> \tbuffer = palloc(COPY_BUF_SIZE);\n> \n> No data type wider than a single byte is used to access the data in the\n> buffer, and neither read() nor write() should require any specific alignment.\n> Can someone please explain why alignment matters here?\n\nAn aligned buffer can allow optimizations in the kernel, when it copies \nthe data. So it's not strictly required, but potentially makes the \nread() and write() faster.\n\n- Heikki\n\n\n",
"msg_date": "Mon, 1 Apr 2019 12:09:09 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Question on alignment"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 01/04/2019 11:01, Antonin Houska wrote:\n> > In copydir.c:copy_file() I read\n> >\n> > \t/* Use palloc to ensure we get a maxaligned buffer */\n> > \tbuffer = palloc(COPY_BUF_SIZE);\n> >\n> > No data type wider than a single byte is used to access the data in the\n> > buffer, and neither read() nor write() should require any specific alignment.\n> > Can someone please explain why alignment matters here?\n> \n> An aligned buffer can allow optimizations in the kernel, when it copies the\n> data. So it's not strictly required, but potentially makes the read() and\n> write() faster.\n\nThanks. Your response reminds me of buffer alignment:\n\n/*\n * Preferred alignment for disk I/O buffers. On some CPUs, copies between\n * user space and kernel space are significantly faster if the user buffer\n * is aligned on a larger-than-MAXALIGN boundary. Ideally this should be\n * a platform-dependent value, but for now we just hard-wire it.\n */\n#define ALIGNOF_BUFFER\t32\n\nIs this what you mean? Since palloc() only ensures MAXIMUM_ALIGNOF, that\nwouldn't help here anyway.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 01 Apr 2019 11:21:44 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Question on alignment"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Since palloc() only ensures MAXIMUM_ALIGNOF, that wouldn't help here anyway.\n\nAfter some more search I'm not sure about that. The following comment\nindicates that MAXALIGN helps too:\n\n/*\n * Use this, not \"char buf[BLCKSZ]\", to declare a field or local variable\n * holding a page buffer, if that page might be accessed as a page and not\n * just a string of bytes. Otherwise the variable might be under-aligned,\n * causing problems on alignment-picky hardware. (In some places, we use\n * this to declare buffers even though we only pass them to read() and\n * write(), because copying to/from aligned buffers is usually faster than\n * using unaligned buffers.) We include both \"double\" and \"int64\" in the\n * union to ensure that the compiler knows the value must be MAXALIGN'ed\n * (cf. configure's computation of MAXIMUM_ALIGNOF).\n */\ntypedef union PGAlignedBlock\n{\n\tchar\t\tdata[BLCKSZ];\n\tdouble\t\tforce_align_d;\n\tint64\t\tforce_align_i64;\n} PGAlignedBlock;\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 01 Apr 2019 14:38:30 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Question on alignment"
},
{
"msg_contents": "On Mon, Apr 01, 2019 at 02:38:30PM +0200, Antonin Houska wrote:\n> After some more search I'm not sure about that. The following comment\n> indicates that MAXALIGN helps too:\n\nThe performance argument is true, now the reason why PGAlignedBlock\nhas been introduced is here:\nhttps://www.postgresql.org/message-id/1535618100.1286.3.camel@credativ.de\n--\nMichael",
"msg_date": "Mon, 1 Apr 2019 22:37:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Question on alignment"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> writes:\n> Antonin Houska <ah@cybertec.at> wrote:\n>> Since palloc() only ensures MAXIMUM_ALIGNOF, that wouldn't help here anyway.\n\n> After some more search I'm not sure about that. The following comment\n> indicates that MAXALIGN helps too:\n\nWell, there is more than one thing going on here, and more than one\nlevel of potential optimization. On just about any hardware I know,\nmisalignment below the machine's natural word width is going to cost\ncycles in memcpy (or whatever equivalent the kernel is using). Intel\nCPUs tend to throw many many transistors at minimizing such costs, but\nthat still doesn't make it zero. On some hardware, you can get further\nspeedups with alignment to a bigger-than-word-width boundary, allowing\nmemcpy to use specialized instructions (SSE2 stuff on Intel, IIRC).\nBut there's a point of diminishing returns there, plus it takes extra\nwork and more wasted space to arrange for anything to have extra\nalignment. So we generally only bother with ALIGNOF_BUFFER for shared\nbuffers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2019 09:40:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question on alignment"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Antonin Houska <ah@cybertec.at> writes:\n> > Antonin Houska <ah@cybertec.at> wrote:\n> >> Since palloc() only ensures MAXIMUM_ALIGNOF, that wouldn't help here anyway.\n> \n> > After some more search I'm not sure about that. The following comment\n> > indicates that MAXALIGN helps too:\n> \n> Well, there is more than one thing going on here, and more than one\n> level of potential optimization. On just about any hardware I know,\n> misalignment below the machine's natural word width is going to cost\n> cycles in memcpy (or whatever equivalent the kernel is using). Intel\n> CPUs tend to throw many many transistors at minimizing such costs, but\n> that still doesn't make it zero. On some hardware, you can get further\n> speedups with alignment to a bigger-than-word-width boundary, allowing\n> memcpy to use specialized instructions (SSE2 stuff on Intel, IIRC).\n> But there's a point of diminishing returns there, plus it takes extra\n> work and more wasted space to arrange for anything to have extra\n> alignment.\n\nThanks for this summary.\n\n> So we generally only bother with ALIGNOF_BUFFER for shared buffers.\n\nok, I'll consider this a (reasonable) convention.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 05 Apr 2019 17:25:51 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Question on alignment"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\nmy CLOBBER_CACHE_ALWAYS animal, jaguarundi, has gotten stuck in \"make \r\ncheck\"'s initdb three times in a row now.\r\n\r\nI have trace output covering about the final minute of initdb. It mainly \r\nconsists of ~90000 iterations of\r\n\r\n- Open base/1/1259 [pg_class]\r\n- Seek to end [twice]\r\n- Open global/pg_filenode.map\r\n- Read 512 bytes\r\n- Close global/pg_filenode.map\r\n- Open base/1/pg_filenode.map\r\n- Read 512 bytes\r\n- Close base/1/pg_filenode.map\r\n- Close base/1/1259\r\n\r\nwith some operations on other heap files in between. At the very end, it \r\nwrites 8K of zeros to 1259_fsm at offset 0x10000, then it starts waiting \r\non a semaphore and never finishes.\r\n\r\nIf someone would like the 0.5 GiB of trace output (FreeBSD ktrace), it \r\ncompresses to 1.75 MiB.\r\n\r\n\r\nAll the best,\r\n\r\n-- \r\nChristian\r\n",
"msg_date": "Mon, 1 Apr 2019 08:38:56 +0000",
"msg_from": "Christian Ullrich <chris@chrullrich.net>",
"msg_from_op": true,
"msg_subject": "C_C_A animal on HEAD gets stuck in initdb"
},
{
"msg_contents": "On Mon, Apr 1, 2019 at 9:39 PM Christian Ullrich <chris@chrullrich.net> wrote:\n> my CLOBBER_CACHE_ALWAYS animal, jaguarundi, has gotten stuck in \"make\n> check\"'s initdb three times in a row now.\n\nCould it be the same as this?\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLCwPF0S4Mk7S8qw%2BDK0Bq65LueN9rofAA3HHSYikW-Zw%40mail.gmail.com\n\nI see that its first failure was after commit 558a9165e0 (along with others).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Apr 2019 22:26:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: C_C_A animal on HEAD gets stuck in initdb"
},
{
"msg_contents": "* Thomas Munro wrote:\r\n\r\n> On Mon, Apr 1, 2019 at 9:39 PM Christian Ullrich <chris@chrullrich.net> wrote:\r\n>> my CLOBBER_CACHE_ALWAYS animal, jaguarundi, has gotten stuck in \"make\r\n>> check\"'s initdb three times in a row now.\r\n> \r\n> Could it be the same as this?\r\n> \r\n> https://www.postgresql.org/message-id/CA%2BhUKGLCwPF0S4Mk7S8qw%2BDK0Bq65LueN9rofAA3HHSYikW-Zw%40mail.gmail.com\r\n> \r\n> I see that its first failure was after commit 558a9165e0 (along with others).\r\n\r\nIt does look very similar. I don't have a working gdb on the box, hence \r\nthis is from lldb.\r\n\r\n(lldb) bt\r\n* thread #1, name = 'postgres'\r\n * frame #0: 0x00000008020e4ce8 libc.so.7`_umtx_op + 8\r\n frame #1: 0x00000008020d0e5e libc.so.7`_sem_clockwait_np [inlined] \r\nusem_wait(sem=<unavailable>, clock_id=0, rmtp=<unavailable>) at \r\ncancelpoints_sem_new.c:365\r\n frame #2: 0x00000008020d0e4f \r\nlibc.so.7`_sem_clockwait_np(sem=<unavailable>, clock_id=0, \r\nflags=<unavailable>, rqtp=0x0000000000000000, rmtp=<unavailable>) at \r\ncancelpoints_sem_new.c:424\r\n frame #3: 0x00000000007104e8 \r\npostgres`PGSemaphoreLock(sema=0x00000008032031b0) at pg_sema.c:316\r\n frame #4: 0x0000000000796889 \r\npostgres`LWLockAcquire(lock=0x0000000803725924, mode=LW_SHARED) at \r\nlwlock.c:1244\r\n frame #5: 0x000000000077157a \r\npostgres`LockBuffer(buffer=<unavailable>, mode=1) at bufmgr.c:0\r\n frame #6: 0x00000000004f54f1 postgres`_bt_getroot [inlined] \r\n_bt_getbuf(rel=<unavailable>, blkno=<unavailable>, access=1) at \r\nnbtpage.c:806\r\n frame #7: 0x00000000004f54cd \r\npostgres`_bt_getroot(rel=0x000000080314d080, access=1) at nbtpage.c:323\r\n frame #8: 0x00000000004fa7fa \r\npostgres`_bt_search(rel=0x000000080314d080, key=0x00007fffffffb508, \r\nbufP=0x00007fffffffbf28, access=1, snapshot=0x0000000000dccc48) at \r\nnbtsearch.c:99\r\n frame #9: 0x00000000004fbe5a \r\npostgres`_bt_first(scan=0x00000008031bb4f0, dir=<unavailable>) at \r\nnbtsearch.c:1247\r\n 
frame #10: 0x00000000004f9736 \r\npostgres`btgettuple(scan=0x00000008031bb4f0, dir=ForwardScanDirection) \r\nat nbtree.c:245\r\n frame #11: 0x00000000004efec1 \r\npostgres`index_getnext_tid(scan=0x00000008031bb4f0, \r\ndirection=<unavailable>) at indexam.c:550\r\n frame #12: 0x00000000004f0052 \r\npostgres`index_getnext_slot(scan=0x00000008031bb4f0, \r\ndirection=ForwardScanDirection, slot=0x00000008031bcf80) at indexam.c:642\r\n frame #13: 0x00000000004eefc2 \r\npostgres`systable_getnext(sysscan=0x00000008031bdcd0) at genam.c:450\r\n frame #14: 0x00000000008ccaf0 \r\npostgres`ScanPgRelation(targetRelId=<unavailable>, \r\nindexOK=<unavailable>, force_non_historic=false) at relcache.c:365\r\n frame #15: 0x00000000008c5ea6 postgres`RelationClearRelation at \r\nrelcache.c:2288\r\n frame #16: 0x00000000008c5e51 \r\npostgres`RelationClearRelation(relation=0x00000008031460b0, \r\nrebuild=true) at relcache.c:2421\r\n frame #17: 0x00000000008c788d postgres`RelationCacheInvalidate at \r\nrelcache.c:2854\r\n frame #18: 0x00000000008bd0aa postgres`AcceptInvalidationMessages \r\n[inlined] InvalidateSystemCaches at inval.c:649\r\n frame #19: 0x00000000008bd09b postgres`AcceptInvalidationMessages \r\nat inval.c:708\r\n frame #20: 0x000000000078a929 postgres`LockRelationOid(relid=1213, \r\nlockmode=<unavailable>) at lmgr.c:133\r\n frame #21: 0x000000000049baa2 \r\npostgres`relation_open(relationId=1213, lockmode=1) at relation.c:56\r\n frame #22: 0x000000000051624c \r\npostgres`table_open(relationId=<unavailable>, lockmode=<unavailable>) at \r\ntable.c:43\r\n frame #23: 0x00000000008bc707 \r\npostgres`SearchCatCacheMiss(cache=0x000000080313e900, nkeys=1, \r\nhashValue=1761185739, hashIndex=3, v1=1663, v2=0, v3=0, v4=0) at \r\ncatcache.c:1357\r\n frame #24: 0x00000000008bae3b \r\npostgres`SearchCatCacheInternal(cache=0x000000080313e900, \r\nnkeys=<unavailable>, v1=<unavailable>, v2=<unavailable>, \r\nv3=<unavailable>, v4=0) at catcache.c:1299\r\n frame #25: 0x00000000008ce406 
\r\npostgres`get_tablespace(spcid=<unavailable>) at spccache.c:136\r\n frame #26: 0x00000000008ce4a9 \r\npostgres`get_tablespace_io_concurrency(spcid=<unavailable>) at \r\nspccache.c:217\r\n frame #27: 0x00000000004db9c6 \r\npostgres`heap_compute_xid_horizon_for_tuples(rel=0x00000008031460b0, \r\ntids=0x00000008031ba7f8, nitems=<unavailable>) at heapam.c:6980\r\n frame #28: 0x00000000004eed06 \r\npostgres`index_compute_xid_horizon_for_tuples [inlined] \r\ntable_compute_xid_horizon_for_tuples(rel=<unavailable>, \r\nitems=<unavailable>, nitems=<unavailable>) at tableam.h:973\r\n frame #29: 0x00000000004eecf1 \r\npostgres`index_compute_xid_horizon_for_tuples(irel=<unavailable>, \r\nhrel=0x00000008031460b0, ibuf=<unavailable>, itemnos=0x00007fffffffc7a0, \r\nnitems=3) at genam.c:306\r\n frame #30: 0x00000000004f6b14 \r\npostgres`_bt_delitems_delete(rel=0x000000080314d080, buf=49, \r\nitemnos=<unavailable>, nitems=3, heapRel=<unavailable>) at nbtpage.c:1111\r\n frame #31: 0x00000000004f4c2c \r\npostgres`_bt_vacuum_one_page(rel=<unavailable>, buffer=<unavailable>, \r\nheapRel=<unavailable>) at nbtinsert.c:2270\r\n frame #32: 0x00000000004f13a2 postgres`_bt_doinsert [inlined] \r\n_bt_findinsertloc(rel=<unavailable>, heapRel=0x00000008031460b0) at \r\nnbtinsert.c:736\r\n frame #33: 0x00000000004f136d \r\npostgres`_bt_doinsert(rel=<unavailable>, itup=0x000000080306b678, \r\ncheckUnique=UNIQUE_CHECK_YES, heapRel=0x00000008031460b0) at nbtinsert.c:281\r\n frame #34: 0x00000000004f9017 \r\npostgres`btinsert(rel=0x000000080314d080, values=<unavailable>, \r\nisnull=<unavailable>, ht_ctid=0x00000008031b85c8, heapRel=<unavailable>, \r\ncheckUnique=<unavailable>, indexInfo=0x000000080c2a40a0) at nbtree.c:203\r\n frame #35: 0x000000000063c992 \r\npostgres`ExecInsertIndexTuples(slot=<unavailable>, estate=<unavailable>, \r\nnoDupErr=false, specConflict=0x0000000000000000, \r\narbiterIndexes=0x0000000000000000) at execIndexing.c:391\r\n frame #36: 0x0000000000667598 
\r\npostgres`ExecUpdate(mtstate=<unavailable>, tupleid=0x00007fffffffdd00, \r\noldtuple=0x0000000000000000, slot=0x00000008031b8598, \r\nplanSlot=0x00000008031b82e8, epqstate=0x000000080c2a4b38, \r\nestate=0x000000080306b118, canSetTag=<unavailable>) at \r\nnodeModifyTable.c:1407\r\n frame #37: 0x0000000000665d45 \r\npostgres`ExecModifyTable(pstate=<unavailable>) at nodeModifyTable.c:2182\r\n frame #38: 0x000000000063e0ed postgres`standard_ExecutorRun \r\n[inlined] ExecProcNode(node=<unavailable>) at executor.h:239\r\n frame #39: 0x000000000063e0d8 postgres`standard_ExecutorRun \r\n[inlined] ExecutePlan(estate=<unavailable>, \r\nplanstate=0x000000080c2a4a40, operation=<unavailable>, numberTuples=0, \r\ndirection=NoMovementScanDirection, dest=<unavailable>, \r\nexecute_once=<unavailable>) at execMain.c:1647\r\n frame #40: 0x000000000063e09a \r\npostgres`standard_ExecutorRun(queryDesc=<unavailable>, \r\ndirection=NoMovementScanDirection, count=0, execute_once=<unavailable>) \r\nat execMain.c:365\r\n frame #41: 0x00000000007abd6c \r\npostgres`ProcessQuery(plan=0x000000080c29f380, sourceText=\"\\nUPDATE \r\npg_class SET relacl = (SELECT array_agg(a.acl) FROM (SELECT \r\nE'=r/\\\"pgbf\\\"' as acl UNION SELECT unnest(pg_catalog.acldefault( \r\nCASE WHEN relkind = 'S' THEN 's' ELSE 'r' \r\nEND::\\\"char\\\",10::oid)) ) as a) WHERE relkind IN ('r', 'v', 'm', 'S') \r\nAND relacl IS NULL;\\n\", params=0x0000000000000000, \r\nqueryEnv=0x0000000000000000, dest=0x0000000000ab4b38, completionTag=\"\") \r\nat pquery.c:161\r\n frame #42: 0x00000000007ab340 \r\npostgres`PortalRunMulti(portal=0x000000080310c118, isTopLevel=true, \r\nsetHoldSnapshot=false, dest=0x0000000000ab4b38, \r\naltdest=0x0000000000ab4b38, completionTag=\"\") at pquery.c:0\r\n frame #43: 0x00000000007aac69 \r\npostgres`PortalRun(portal=0x000000080310c118, count=<unavailable>, \r\nisTopLevel=true, run_once=<unavailable>, dest=0x0000000000ab4b38, \r\naltdest=0x0000000000ab4b38, completionTag=\"\") at 
pquery.c:796\r\n frame #44: 0x00000000007a9bda \r\npostgres`exec_simple_query(query_string=\"\\nUPDATE pg_class SET relacl \r\n= (SELECT array_agg(a.acl) FROM (SELECT E'=r/\\\"pgbf\\\"' as acl UNION \r\nSELECT unnest(pg_catalog.acldefault( CASE WHEN relkind = 'S' THEN 's' \r\n ELSE 'r' END::\\\"char\\\",10::oid)) ) as a) WHERE relkind IN \r\n('r', 'v', 'm', 'S') AND relacl IS NULL;\\n\") at postgres.c:1215\r\n frame #45: 0x00000000007a7b07 \r\npostgres`PostgresMain(argc=<unavailable>, argv=<unavailable>, \r\ndbname=<unavailable>, username=<unavailable>) at postgres.c:0\r\n frame #46: 0x000000000068fbfb postgres`main(argc=10, \r\nargv=0x00007fffffffe2d8) at main.c:224\r\n frame #47: 0x000000000048e7df postgres`_start + 383\r\n\r\n\r\n-- \r\nChristian\r\n",
"msg_date": "Mon, 1 Apr 2019 11:31:23 +0000",
"msg_from": "Christian Ullrich <chris@chrullrich.net>",
"msg_from_op": true,
"msg_subject": "Re: C_C_A animal on HEAD gets stuck in initdb"
},
{
"msg_contents": "* Christian Ullrich wrote:\r\n\r\n> * Thomas Munro wrote:\r\n> \r\n>> On Mon, Apr 1, 2019 at 9:39 PM Christian Ullrich \r\n>> <chris@chrullrich.net> wrote:\r\n\r\n>>> my CLOBBER_CACHE_ALWAYS animal, jaguarundi, has gotten stuck in \"make\r\n>>> check\"'s initdb three times in a row now.\r\n>>\r\n>> Could it be the same as this?\r\n>>\r\n>> https://www.postgresql.org/message-id/CA%2BhUKGLCwPF0S4Mk7S8qw%2BDK0Bq65LueN9rofAA3HHSYikW-Zw%40mail.gmail.com \r\n>>\r\n>>\r\n>> I see that its first failure was after commit 558a9165e0 (along with \r\n>> others).\r\n> \r\n> It does look very similar. I don't have a working gdb on the box, hence \r\n> this is from lldb.\r\n\r\nI think the patch in the linked message works; it doesn't get stuck \r\nanymore. It's still slow as molasses with C_C_A; this animal can take \r\n12+ hours to complete.\r\n\r\n-- \r\nChristian\r\n",
"msg_date": "Mon, 1 Apr 2019 11:52:09 +0000",
"msg_from": "Christian Ullrich <chris@chrullrich.net>",
"msg_from_op": true,
"msg_subject": "Re: C_C_A animal on HEAD gets stuck in initdb"
},
{
"msg_contents": "On Mon, Apr 1, 2019 at 4:31 AM Christian Ullrich <chris@chrullrich.net> wrote:\n> It does look very similar. I don't have a working gdb on the box, hence\n> this is from lldb.\n>\n> (lldb) bt\n\nI am almost certain that it's the same issue, based on this stack trace.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 1 Apr 2019 12:14:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: C_C_A animal on HEAD gets stuck in initdb"
},
{
"msg_contents": "On Tue, Apr 2, 2019 at 12:52 AM Christian Ullrich <chris@chrullrich.net> wrote:\n> I think the patch in the linked message works; it doesn't get stuck\n> anymore. It's still slow as molasses with C_C_A; this animal can take\n> 12+ hours to complete.\n\nThanks. I pushed the fix.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Apr 2019 09:39:09 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: C_C_A animal on HEAD gets stuck in initdb"
}
] |
[
{
"msg_contents": "Some buildfarm runs have failed like this:\n\n============== dropping database \"pl_regression\" ==============\nERROR: database \"pl_regression\" is being accessed by other users\nDETAIL: There is 1 other session using the database.\n\nAffected runs:\n\n axolotl │ PLCheck-C │ REL9_5_STABLE │ 2015-08-21 19:29:19 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2015-08-21%2019:29:19\n axolotl │ PLCheck-C │ REL9_6_STABLE │ 2017-03-16 17:43:16 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2017-03-16%2017:43:16\n mandrill │ PLCheck-C │ HEAD │ 2017-05-13 17:14:12 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2017-05-13%2017:14:12\n tern │ PLCheck-C │ HEAD │ 2017-09-05 20:45:17 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2017-09-05%2020:45:17\n mandrill │ PLCheck-C │ HEAD │ 2017-11-15 13:34:12 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2017-11-15%2013:34:12\n mandrill │ PLCheck-en_US.ISO8859-1 │ REL_10_STABLE │ 2018-03-15 05:24:41 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2018-03-15%2005:24:41\n frogfish │ TestModulesCheck-en_US.utf8 │ REL_11_STABLE │ 2019-01-29 01:32:51 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=frogfish&dt=2019-01-29%2001:32:51\n hornet │ PLCheck-C │ REL_11_STABLE │ 2019-01-29 01:52:29 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2019-01-29%2001:52:29\n\nI can reproduce that reliably by combining \"make -C src/pl installcheck\" with\nthis hack:\n\n proc_exit(int code)\n {\n+\tpg_usleep(7000000);\n\nThis happens because dropdb()'s call to CountOtherDBBackends() waits up to 5s\nfor the database to become vacant. If the last plpython test backend takes\nmore than 5s to exit, the pltcl suite fails. Most test suites are unaffected,\nthanks to USE_MODULE_DB=1 in the buildfarm script. However, PL suites ignore\nUSE_MODULE_DB. 
So do the three src/test/modules directories that contain no C\ncode and define $(REGRESS). Isolation suites, too, ignore USE_MODULE_DB.\n\nI would like to fix this as follows. When MODULES and MODULE_big are both\nunset, instead of using a constant string, derive the database name from the\nfirst element of $(REGRESS) or $(ISOLATION). I considered $(EXTENSION), but\nsrc/test/modules/commit_ts does not set it. $(REGRESS) and $(ISOLATION) are\nrobust; in their absence, a directory simply won't invoke pg_regress to drop\nand/or create a database. I considered introducing a TESTDB_SUFFIX variable\nthat src/test/modules directories could define, but that felt like needless\nflexibility. Treat src/pl in a similar fashion. With the attached patch,\ninstallcheck-world and check-world no longer reuse any database name in a\ngiven postmaster. Next, I'll mail this buildfarm client patch, after which\nany non-MSVC, v9.5+ (due to ddc2504) buildfarm run would no longer reuse any\ndatabase name in a given postmaster:\n\n--- a/run_build.pl\n+++ b/run_build.pl\n@@ -1677 +1677,2 @@ sub make_pl_install_check\n-\t\t@checklog = run_log(\"cd $pgsql/src/pl && $make installcheck\");\n+\t\tmy $cmd = \"cd $pgsql/src/pl && $make USE_MODULE_DB=1 installcheck\";\n+\t\t@checklog = run_log($cmd);\n\nI plan to back-patch the PostgreSQL patch, to combat buildfarm noise. Perhaps\nsomeone has test automation that sets USE_MODULE_DB and nonetheless probes the\nexact database name \"pl_regression\", but I'm not too worried. The original\nrationale for USE_MODULE_DB, in commit ad69bd0, was to facilitate pg_upgrade\ntesting. Folks using \"make installcheck-world\" to populate a cluster for\npg_upgrade testing will see additional test coverage, which may cause\nadditional failures. I'm fine with that, too.",
"msg_date": "Mon, 1 Apr 2019 06:52:13 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Extending USE_MODULE_DB to more test suite types"
},
{
"msg_contents": "On Mon, Apr 1, 2019 at 9:52 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> Some buildfarm runs have failed like this:\n>\n> ============== dropping database \"pl_regression\" ==============\n> ERROR: database \"pl_regression\" is being accessed by other users\n> DETAIL: There is 1 other session using the database.\n>\n> Affected runs:\n>\n> axolotl │ PLCheck-C │ REL9_5_STABLE │ 2015-08-21 19:29:19 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2015-08-21%2019:29:19\n> axolotl │ PLCheck-C │ REL9_6_STABLE │ 2017-03-16 17:43:16 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2017-03-16%2017:43:16\n> mandrill │ PLCheck-C │ HEAD │ 2017-05-13 17:14:12 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2017-05-13%2017:14:12\n> tern │ PLCheck-C │ HEAD │ 2017-09-05 20:45:17 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2017-09-05%2020:45:17\n> mandrill │ PLCheck-C │ HEAD │ 2017-11-15 13:34:12 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2017-11-15%2013:34:12\n> mandrill │ PLCheck-en_US.ISO8859-1 │ REL_10_STABLE │ 2018-03-15 05:24:41 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2018-03-15%2005:24:41\n> frogfish │ TestModulesCheck-en_US.utf8 │ REL_11_STABLE │ 2019-01-29 01:32:51 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=frogfish&dt=2019-01-29%2001:32:51\n> hornet │ PLCheck-C │ REL_11_STABLE │ 2019-01-29 01:52:29 │ https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2019-01-29%2001:52:29\n>\n> I can reproduce that reliably by combining \"make -C src/pl installcheck\" with\n> this hack:\n>\n> proc_exit(int code)\n> {\n> + pg_usleep(7000000);\n>\n> This happens because dropdb()'s call to CountOtherDBBackends() waits up to 5s\n> for the database to become vacant. If the last plpython test backend takes\n> more than 5s to exit, the pltcl suite fails. 
Most test suites are unaffected,\n> thanks to USE_MODULE_DB=1 in the buildfarm script. However, PL suites ignore\n> USE_MODULE_DB. So do the three src/test/modules directories that contain no C\n> code and define $(REGRESS). Isolation suites, too, ignore USE_MODULE_DB.\n>\n> I would like to fix this as follows. When MODULES and MODULE_big are both\n> unset, instead of using a constant string, derive the database name from the\n> first element of $(REGRESS) or $(ISOLATION). I considered $(EXTENSION), but\n> src/test/modules/commit_ts does not set it. $(REGRESS) and $(ISOLATION) are\n> robust; in their absence, a directory simply won't invoke pg_regress to drop\n> and/or create a database. I considered introducing a TESTDB_SUFFIX variable\n> that src/test/modules directories could define, but that felt like needless\n> flexibility. Treat src/pl in a similar fashion. With the attached patch,\n> installcheck-world and check-world no longer reuse any database name in a\n> given postmaster. Next, I'll mail this buildfarm client patch, after which\n> any non-MSVC, v9.5+ (due to ddc2504) buildfarm run would no longer reuse any\n> database name in a given postmaster:\n>\n> --- a/run_build.pl\n> +++ b/run_build.pl\n> @@ -1677 +1677,2 @@ sub make_pl_install_check\n> - @checklog = run_log(\"cd $pgsql/src/pl && $make installcheck\");\n> + my $cmd = \"cd $pgsql/src/pl && $make USE_MODULE_DB=1 installcheck\";\n> + @checklog = run_log($cmd);\n>\n> I plan to back-patch the PostgreSQL patch, to combat buildfarm noise. Perhaps\n> someone has test automation that sets USE_MODULE_DB and nonetheless probes the\n> exact database name \"pl_regression\", but I'm not too worried. The original\n> rationale for USE_MODULE_DB, in commit ad69bd0, was to facilitate pg_upgrade\n> testing. Folks using \"make installcheck-world\" to populate a cluster for\n> pg_upgrade testing will see additional test coverage, which may cause\n> additional failures. I'm fine with that, too.\n\n\nExcellent. 
Extending use of USE_MODULE_DB has been on my list of\nthings to do. I'll add the buildfarm patch right away. It should be\nharmless before these changes are made.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 1 Apr 2019 13:01:11 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Extending USE_MODULE_DB to more test suite types"
},
{
"msg_contents": "On 2019-04-01 06:52:13 -0700, Noah Misch wrote:\n> I plan to back-patch the PostgreSQL patch, to combat buildfarm noise. Perhaps\n> someone has test automation that sets USE_MODULE_DB and nonetheless probes the\n> exact database name \"pl_regression\", but I'm not too worried. The original\n> rationale for USE_MODULE_DB, in commit ad69bd0, was to facilitate pg_upgrade\n> testing. Folks using \"make installcheck-world\" to populate a cluster for\n> pg_upgrade testing will see additional test coverage, which may cause\n> additional failures. I'm fine with that, too.\n\n+1 for all of that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 1 Apr 2019 10:06:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Extending USE_MODULE_DB to more test suite types"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-04-01 06:52:13 -0700, Noah Misch wrote:\n>> I plan to back-patch the PostgreSQL patch, to combat buildfarm noise. Perhaps\n>> someone has test automation that sets USE_MODULE_DB and nonetheless probes the\n>> exact database name \"pl_regression\", but I'm not too worried. The original\n>> rationale for USE_MODULE_DB, in commit ad69bd0, was to facilitate pg_upgrade\n>> testing. Folks using \"make installcheck-world\" to populate a cluster for\n>> pg_upgrade testing will see additional test coverage, which may cause\n>> additional failures. I'm fine with that, too.\n\n> +1 for all of that.\n\nI haven't tested the patch, but also +1 for the idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2019 13:36:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Extending USE_MODULE_DB to more test suite types"
},
{
"msg_contents": "On Mon, Apr 01, 2019 at 01:01:11PM -0400, Andrew Dunstan wrote:\n> On Mon, Apr 1, 2019 at 9:52 AM Noah Misch <noah@leadboat.com> wrote:\n> > I plan to back-patch the PostgreSQL patch, to combat buildfarm noise. Perhaps\n> > someone has test automation that sets USE_MODULE_DB and nonetheless probes the\n> > exact database name \"pl_regression\", but I'm not too worried. The original\n> > rationale for USE_MODULE_DB, in commit ad69bd0, was to facilitate pg_upgrade\n> > testing. Folks using \"make installcheck-world\" to populate a cluster for\n> > pg_upgrade testing will see additional test coverage, which may cause\n> > additional failures. I'm fine with that, too.\n\nPushed. It looks like XversionUpgradeSave relies on having a\n\"contrib_regression\" database:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=crake&br=REL9_5_STABLE\n\n> Excellent. Extending use of USE_MODULE_DB has been on my list of\n> things to do. I'll add the buildfarm patch right away. It should be\n> harmless before these changes are made.\n\nThanks.\n\n\n",
"msg_date": "Wed, 3 Apr 2019 22:56:58 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: Extending USE_MODULE_DB to more test suite types"
}
] |
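The naming scheme Noah describes (when MODULES and MODULE_big are both unset, derive the test database name from the first element of $(REGRESS) or $(ISOLATION)) can be sketched in plain shell. The variable names mirror the makefile ones, but this is an illustrative sketch of the selection logic, not the committed Makefile.global code, and the sample REGRESS list is hypothetical.

```shell
# Sketch: choose a per-suite database name so that no two suites in a
# check-world run reuse the same name in one postmaster.
MODULE_big=""
MODULES=""
REGRESS="plpgsql_call plpgsql_control"
ISOLATION=""

if [ -n "$MODULE_big" ]; then
    testdb="regression_$MODULE_big"
elif [ -n "$MODULES" ]; then
    # Use the first word of MODULES ("%% *" strips everything after
    # the first space).
    testdb="regression_${MODULES%% *}"
elif [ -n "$REGRESS" ]; then
    # No C code in this directory: fall back to the first regression
    # test name, which is always present if pg_regress runs at all.
    testdb="regression_${REGRESS%% *}"
else
    testdb="regression_${ISOLATION%% *}"
fi

echo "$testdb"
```

With the sample values above this prints `regression_plpgsql_call`, giving the plpgsql suite a name distinct from, say, the pltcl suite, so a slow-to-exit backend from one suite can no longer block the dropdb() of the next.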
[
{
"msg_contents": "Hello,\nI’m Youssef Khedher, 19 years old student from Tunisia.\nI’m a computer science student in Tunisia, and also an online student for Harvard CS50 Program.\nI’m interested in the ” pgBackRest port to Windows (2019)”.\nI have a good knowledge in C and in IT in general, I believe that nothing is impossible and that I can handle and understand this project \neven if I have to stay 24h/24h. I love challenges and I’m someone who never stops not matter what is the obstacle is.\nWorking in an open source project with a well-known organization like PostgreSQL is a pleasure and a dream for every young \nstudent. That’s why I’m fully motivated to the challenge with you. \nTo be frank with you, I’m not an expert in C, but I’m kind of people who never give up until the problem is solved on my own.\nIn order to achieve the project goal, all I need from you MR. Stephen Frost or MR. David Steele is some help to understand perfectly the task and to help me enter to world of software development. \nI’m feeling comfortable to express what’s in my head honestly because PostgreSQL said that their mentors are wonderfully kind and friendly people who want you to learn and succeed!\nThanks for your time. \nLooking forward to your response!\n\nSincerely,\n \" YOUSSEF KHEDHER \"\n————————————————————\nComputer-Science Student. \nPhone:(+216) 25 460 276 \nAddress : 28 Rue Amir Abedlkader, Jemmal 5020, Monastir, Tunisia.\n\n\n\nHello,I’m Youssef Khedher, 19 years old student from Tunisia.I’m a computer science student in Tunisia, and also an online student for Harvard CS50 Program.I’m interested in the ” pgBackRest port to Windows (2019)”.I have a good knowledge in C and in IT in general, I believe that nothing is impossible and that I can handle and understand this project even if I have to stay 24h/24h. 
I love challenges and I’m someone who never stops not matter what is the obstacle is.Working in an open source project with a well-known organization like PostgreSQL is a pleasure and a dream for every young student. That’s why I’m fully motivated to the challenge with you. To be frank with you, I’m not an expert in C, but I’m kind of people who never give up until the problem is solved on my own.In order to achieve the project goal, all I need from you MR. Stephen Frost or MR. David Steele is some help to understand perfectly the task and to help me enter to world of software development. I’m feeling comfortable to express what’s in my head honestly because PostgreSQL said that their mentors are wonderfully kind and friendly people who want you to learn and succeed!Thanks for your time. Looking forward to your response! Sincerely, \" YOUSSEF KHEDHER \"————————————————————Computer-Science Student. Phone:(+216) 25 460 276 Address : 28 Rue Amir Abedlkader, Jemmal 5020, Monastir, Tunisia.",
"msg_date": "Mon, 1 Apr 2019 17:16:13 +0100",
"msg_from": "Youssef Khedher <youssefkhedher.contact@gmail.com>",
"msg_from_op": true,
"msg_subject": "GCoS2019--pgBackRest port to Windows (2019)"
},
{
"msg_contents": "Hello Youssef,\n\nOn 4/1/19 5:16 PM, Youssef Khedher wrote:\n> Hello,\n> \n> I’m Youssef Khedher, 19 years old student from Tunisia.\n> \n> I’m a computer science student in Tunisia, and also an online student \n> for Harvard CS50 Program.\n> \n> I’m interested in the ” pgBackRest port to Windows (2019)”.\n\nExcellent!\n\n > To be frank with you, I’m not an expert in C\n\nYou'll need to be able to understand and modify many areas of the C code \nin order to be successful in this project. I would encourage you to \nreview the code and make sure you are able to follow what it's doing in \ngeneral before submitting an application:\n\nhttps://github.com/pgbackrest/pgbackrest/tree/master/src\n\nI don't think you'll be writing a bunch of new code, but anything is \npossible when porting software.\n\n> In order to achieve the project goal, all I need from you MR. Stephen \n> Frost or MR. David Steele is some help to understand perfectly the task \n> and to help me enter to world of software development.\n\nFor this project it's important to have knowledge about how Windows \ndiffers from Unix, e.g. the way child processes are spawned. 
In the \ncore code we only use fork() with an immediate exec(), so this should be \nstraightforward enough to port:\n\nhttps://github.com/pgbackrest/pgbackrest/blob/master/src/command/archive/get/get.c#L235\n\nWe also use fork() quite a bit in our testing and I was thinking the \nHARNESS_FORK*() macros could be enhanced to use threads instead on Windows:\n\nhttps://github.com/pgbackrest/pgbackrest/blob/master/test/src/common/harnessFork.h\n\nIf not, then the tests will need to be adjusted to accommodate whatever \ntesting method is developed.\n\npgBackRest uses SSH to communicate with remote processes:\n\nhttps://github.com/pgbackrest/pgbackrest/blob/master/src/protocol/helper.c#L288\n\nEventually we would like to move away from requiring SSH, but for this \nport I think the best idea would be to get pgBackRest working with some \nopen source SSH solution such as OpenSSH (which is easily installed on \nrecent versions of Windows, but not sure about older versions). If \nthere is time at the end we might look at alternate solutions.\n\nThere may be other minor areas in the code that need be adjusted or \n#ifdef'd to work with Windows. We've tried to keep this to a minimum by \nenforcing C99 and Posix standards, but there will be some differences. \nThe config code that enforces Unix path structure is an obvious area \nthat will need to be updated:\n\nhttps://github.com/pgbackrest/pgbackrest/blob/master/src/config/parse.c#L1034\n\nNote that we want to port to native Windows without the presence of \nCygwin (or similar) in production. My preference would be to use \nsomething like Strawberry Perl for testing, and then as few dependencies \nas possible for the production distribution.\n\nA CI testing platform for Windows will need to be selected -- mostly \nlikely AppVeyor.\n\nThe documentation will also need to be updated for Windows.\n\nYou should delve into the areas mentioned above and propose possible \nsolutions when writing your proposal. 
Feel free to ask questions.\n\nPorting code from one platform to another can be quite complicated, but \nwe believe this project can be accomplished over the summer by a skilled \nand motivated student.\n\nIf you are interested in proceeding, you should create an issue here:\n\nhttps://github.com/pgbackrest/pgbackrest/issues\n\nWe do our development on Github and issues are the way we discuss \nprojects, enhancements, and bugs.\n\nGood luck!\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 2 Apr 2019 12:32:42 +0100",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: GCoS2019--pgBackRest port to Windows (2019)"
}
] |
[
{
"msg_contents": "Unified logging system for command-line programs\n\nThis unifies the various ad hoc logging (message printing, error\nprinting) systems used throughout the command-line programs.\n\nFeatures:\n\n- Program name is automatically prefixed.\n\n- Message string does not end with newline. This removes a common\n source of inconsistencies and omissions.\n\n- Additionally, a final newline is automatically stripped, simplifying\n use of PQerrorMessage() etc., another common source of mistakes.\n\n- I converted error message strings to use %m where possible.\n\n- As a result of the above several points, more translatable message\n strings can be shared between different components and between\n frontends and backend, without gratuitous punctuation or whitespace\n differences.\n\n- There is support for setting a \"log level\". This is not meant to be\n user-facing, but can be used internally to implement debug or\n verbose modes.\n\n- Lazy argument evaluation, so no significant overhead if logging at\n some level is disabled.\n\n- Some color in the messages, similar to gcc and clang. Set\n PG_COLOR=auto to try it out. Some colors are predefined, but can be\n customized by setting PG_COLORS.\n\n- Common files (common/, fe_utils/, etc.) can handle logging much more\n simply by just using one API without worrying too much about the\n context of the calling program, requiring callbacks, or having to\n pass \"progname\" around everywhere.\n\n- Some programs called setvbuf() to make sure that stderr is\n unbuffered, even on Windows. But not all programs did that. This\n is now done centrally.\n\nSoft goals:\n\n- Reduces vertical space use and visual complexity of error reporting\n in the source code.\n\n- Encourages more deliberate classification of messages. 
For example,\n in some cases it wasn't clear without analyzing the surrounding code\n whether a message was meant as an error or just an info.\n\n- Concepts and terms are vaguely aligned with popular logging\n frameworks such as log4j and Python logging.\n\nThis is all just about printing stuff out. Nothing affects program\nflow (e.g., fatal exits). The uses are just too varied to do that.\nSome existing code had wrappers that do some kind of print-and-exit,\nand I adapted those.\n\nI tried to keep the output mostly the same, but there is a lot of\nhistorical baggage to unwind and special cases to consider, and I\nmight not always have succeeded. One significant change is that\npg_rewind used to write all error messages to stdout. That is now\nchanged to stderr.\n\nReviewed-by: Donald Dong <xdong@csumb.edu>\nReviewed-by: Arthur Zakirov <a.zakirov@postgrespro.ru>\nDiscussion: https://www.postgresql.org/message-id/flat/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/cc8d41511721d25d557fc02a46c053c0a602fed0\n\nModified Files\n--------------\ndoc/src/sgml/ref/clusterdb.sgml | 11 +\ndoc/src/sgml/ref/createdb.sgml | 11 +\ndoc/src/sgml/ref/createuser.sgml | 11 +\ndoc/src/sgml/ref/dropdb.sgml | 11 +\ndoc/src/sgml/ref/dropuser.sgml | 11 +\ndoc/src/sgml/ref/initdb.sgml | 11 +\ndoc/src/sgml/ref/pg_basebackup.sgml | 6 +\ndoc/src/sgml/ref/pg_checksums.sgml | 11 +\ndoc/src/sgml/ref/pg_controldata.sgml | 11 +\ndoc/src/sgml/ref/pg_dump.sgml | 11 +\ndoc/src/sgml/ref/pg_dumpall.sgml | 11 +\ndoc/src/sgml/ref/pg_isready.sgml | 7 +\ndoc/src/sgml/ref/pg_receivewal.sgml | 6 +\ndoc/src/sgml/ref/pg_recvlogical.sgml | 7 +\ndoc/src/sgml/ref/pg_resetwal.sgml | 17 ++\ndoc/src/sgml/ref/pg_restore.sgml | 11 +\ndoc/src/sgml/ref/pg_rewind.sgml | 7 +\ndoc/src/sgml/ref/pg_waldump.sgml | 26 ++\ndoc/src/sgml/ref/psql-ref.sgml | 11 +\ndoc/src/sgml/ref/reindexdb.sgml | 11 +\ndoc/src/sgml/ref/vacuumdb.sgml | 11 
+\nsrc/backend/access/transam/xlog.c | 2 +-\nsrc/backend/utils/misc/pg_controldata.c | 8 +-\nsrc/bin/initdb/initdb.c | 266 +++++++----------\nsrc/bin/initdb/nls.mk | 5 +-\nsrc/bin/pg_archivecleanup/Makefile | 4 +-\nsrc/bin/pg_archivecleanup/nls.mk | 4 +-\nsrc/bin/pg_archivecleanup/pg_archivecleanup.c | 53 ++--\nsrc/bin/pg_basebackup/nls.mk | 5 +-\nsrc/bin/pg_basebackup/pg_basebackup.c | 412 ++++++++++----------------\nsrc/bin/pg_basebackup/pg_receivewal.c | 122 ++++----\nsrc/bin/pg_basebackup/pg_recvlogical.c | 147 ++++-----\nsrc/bin/pg_basebackup/receivelog.c | 204 ++++++-------\nsrc/bin/pg_basebackup/streamutil.c | 97 +++---\nsrc/bin/pg_basebackup/walmethods.c | 16 +-\nsrc/bin/pg_checksums/Makefile | 4 +-\nsrc/bin/pg_checksums/nls.mk | 4 +-\nsrc/bin/pg_checksums/pg_checksums.c | 72 +++--\nsrc/bin/pg_controldata/Makefile | 4 +-\nsrc/bin/pg_controldata/pg_controldata.c | 11 +-\nsrc/bin/pg_ctl/Makefile | 4 +-\nsrc/bin/pg_ctl/pg_ctl.c | 8 +-\nsrc/bin/pg_dump/common.c | 124 +++-----\nsrc/bin/pg_dump/compress_io.c | 46 +--\nsrc/bin/pg_dump/nls.mk | 16 +-\nsrc/bin/pg_dump/parallel.c | 62 ++--\nsrc/bin/pg_dump/pg_backup_archiver.c | 250 +++++++---------\nsrc/bin/pg_dump/pg_backup_archiver.h | 15 +-\nsrc/bin/pg_dump/pg_backup_custom.c | 94 +++---\nsrc/bin/pg_dump/pg_backup_db.c | 55 ++--\nsrc/bin/pg_dump/pg_backup_directory.c | 72 ++---\nsrc/bin/pg_dump/pg_backup_null.c | 4 +-\nsrc/bin/pg_dump/pg_backup_tar.c | 88 +++---\nsrc/bin/pg_dump/pg_backup_utils.c | 58 +---\nsrc/bin/pg_dump/pg_backup_utils.h | 6 +-\nsrc/bin/pg_dump/pg_dump.c | 334 ++++++++++-----------\nsrc/bin/pg_dump/pg_dump.h | 1 -\nsrc/bin/pg_dump/pg_dump_sort.c | 26 +-\nsrc/bin/pg_dump/pg_dumpall.c | 125 ++++----\nsrc/bin/pg_dump/pg_restore.c | 36 +--\nsrc/bin/pg_dump/t/001_basic.pl | 54 ++--\nsrc/bin/pg_dump/t/002_pg_dump.pl | 24 +-\nsrc/bin/pg_resetwal/Makefile | 4 +-\nsrc/bin/pg_resetwal/nls.mk | 4 +-\nsrc/bin/pg_resetwal/pg_resetwal.c | 175 +++++------\nsrc/bin/pg_resetwal/t/002_corrupted.pl | 4 
+-\nsrc/bin/pg_rewind/Makefile | 4 +-\nsrc/bin/pg_rewind/copy_fetch.c | 38 +--\nsrc/bin/pg_rewind/datapagemap.c | 5 +-\nsrc/bin/pg_rewind/file_ops.c | 60 ++--\nsrc/bin/pg_rewind/filemap.c | 34 +--\nsrc/bin/pg_rewind/libpq_fetch.c | 37 +--\nsrc/bin/pg_rewind/logging.c | 73 +----\nsrc/bin/pg_rewind/logging.h | 18 +-\nsrc/bin/pg_rewind/nls.mk | 7 +-\nsrc/bin/pg_rewind/parsexlog.c | 33 +--\nsrc/bin/pg_rewind/pg_rewind.c | 85 +++---\nsrc/bin/pg_rewind/pg_rewind.h | 1 -\nsrc/bin/pg_rewind/timeline.c | 17 +-\nsrc/bin/pg_test_fsync/Makefile | 4 +-\nsrc/bin/pg_test_fsync/pg_test_fsync.c | 19 +-\nsrc/bin/pg_upgrade/pg_upgrade.c | 4 +-\nsrc/bin/pg_waldump/Makefile | 3 +-\nsrc/bin/pg_waldump/nls.mk | 6 +-\nsrc/bin/pg_waldump/pg_waldump.c | 71 ++---\nsrc/bin/pgbench/pgbench.c | 23 +-\nsrc/bin/psql/command.c | 157 +++++-----\nsrc/bin/psql/common.c | 81 ++---\nsrc/bin/psql/common.h | 2 -\nsrc/bin/psql/copy.c | 42 ++-\nsrc/bin/psql/crosstabview.c | 19 +-\nsrc/bin/psql/describe.c | 75 ++---\nsrc/bin/psql/help.c | 3 +-\nsrc/bin/psql/input.c | 15 +-\nsrc/bin/psql/large_obj.c | 15 +-\nsrc/bin/psql/mainloop.c | 14 +-\nsrc/bin/psql/nls.mk | 8 +-\nsrc/bin/psql/psqlscanslash.l | 11 +-\nsrc/bin/psql/startup.c | 50 +++-\nsrc/bin/psql/tab-complete.c | 2 +-\nsrc/bin/psql/variables.c | 11 +-\nsrc/bin/psql/variables.h | 2 +-\nsrc/bin/scripts/clusterdb.c | 20 +-\nsrc/bin/scripts/common.c | 30 +-\nsrc/bin/scripts/createdb.c | 22 +-\nsrc/bin/scripts/createuser.c | 13 +-\nsrc/bin/scripts/dropdb.c | 11 +-\nsrc/bin/scripts/dropuser.c | 12 +-\nsrc/bin/scripts/nls.mk | 6 +-\nsrc/bin/scripts/pg_isready.c | 10 +-\nsrc/bin/scripts/reindexdb.c | 46 +--\nsrc/bin/scripts/vacuumdb.c | 67 ++---\nsrc/common/controldata_utils.c | 41 ++-\nsrc/common/file_utils.c | 84 +++---\nsrc/common/pgfnames.c | 31 +-\nsrc/common/restricted_token.c | 23 +-\nsrc/common/rmtree.c | 27 +-\nsrc/fe_utils/Makefile | 2 +-\nsrc/fe_utils/logging.c | 228 ++++++++++++++\nsrc/fe_utils/psqlscan.l | 3 
+-\nsrc/include/common/controldata_utils.h | 6 +-\nsrc/include/common/file_utils.h | 13 +-\nsrc/include/common/restricted_token.h | 4 +-\nsrc/include/fe_utils/logging.h | 95 ++++++\nsrc/include/fe_utils/psqlscan.h | 7 -\nsrc/interfaces/ecpg/test/Makefile | 2 +\nsrc/nls-global.mk | 8 +\nsrc/test/isolation/Makefile | 3 +-\nsrc/test/perl/TestLib.pm | 1 +\nsrc/test/regress/GNUmakefile | 4 +-\nsrc/test/regress/pg_regress.c | 6 +-\nsrc/tools/msvc/Mkvcbuild.pm | 8 +-\n132 files changed, 2555 insertions(+), 2686 deletions(-)\n\n",
"msg_date": "Mon, 01 Apr 2019 18:25:56 +0000",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Unified logging system for command-line programs"
},
{
"msg_contents": "Re: Peter Eisentraut 2019-04-01 <E1hB1d6-00051m-1s@gemulon.postgresql.org>\n> - Some color in the messages, similar to gcc and clang. Set\n> PG_COLOR=auto to try it out. Some colors are predefined, but can be\n> customized by setting PG_COLORS.\n\nCan we rename PG_COLOR to PGCOLOR? This is the only PG* environment\nvariable prefixed with the extra underscore, and remembering that will\nbe confusing. (Like pgbench should really be named pg_bench for\nconsistency.) Even if it's not a libpq variable, but that's an\nimplementation detail that users shouldn't have to worry about.\n\n From reindexdb(1):\n\nNAME\n reindexdb - reindex a PostgreSQL database\n\nENVIRONMENT\n PGDATABASE\n PGHOST\n PGPORT\n PGUSER\n Default connection parameters\n\n PG_COLOR\n Specifies whether to use color in diagnostics messages. Possible values are always, auto, never.\n\nAlso, why doesn't this default to 'auto'? Lots of programs have moved\nto using colors by default over the last years, including git and gcc.\n\nChristoph\n\n\n",
"msg_date": "Tue, 9 Apr 2019 11:22:21 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
},
{
"msg_contents": "On 2019-04-09 11:22, Christoph Berg wrote:\n> Can we rename PG_COLOR to PGCOLOR? This is the only PG* environment\n> variable prefixed with the extra underscore, and remembering that will\n> be confusing. (Like pgbench should really be named pg_bench for\n> consistency.) Even if it's not a libpq variable, but that's an\n> implementation detail that users shouldn't have to worry about.\n\nI'm okay with changing it. As you indicate, I chose the name so that it\ndoesn't look like a libpq variable. There are some other PG_ variables\nthroughout the code, but those appear to be mostly for internal use.\nAlso, there is GCC_COLORS, LS_COLORS, etc. But perhaps this wisdom will\nbe lost on users who just read the man page and get confused. ;-)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 9 Apr 2019 12:55:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
},
{
"msg_contents": "Re: Peter Eisentraut 2019-04-01 <E1hB1d6-00051m-1s@gemulon.postgresql.org>\n> - There is support for setting a \"log level\". This is not meant to be\n> user-facing, but can be used internally to implement debug or\n> verbose modes.\n\nI'm not entirely sure what happened here, but I think this made\npg_restore verbose by default (and there is no --quiet option).\nAt least that's what the apt.pg.o upgrade regression tests say for\n11->12:\n\n09:12:49 ok 59 - pg_upgradecluster reported successful operation\n09:12:49 not ok 60 - no error messages during upgrade\n09:12:49 # Failed test 'no error messages during upgrade'\n09:12:49 # at ./t/040_upgrade.t line 160.\n09:12:49 pg_restore: connecting to database for restore\n09:12:49 pg_restore: executing BLOB 1234\n09:12:49 pg_restore: disabling triggers for tstab\n09:12:49 pg_restore: processing data for table \"public.tstab\"\n09:12:49 pg_restore: enabling triggers for tstab\n09:12:49 pg_restore: processing BLOBS\n09:12:49 pg_restore: restoring large object with OID 1234\n09:12:49 pg_restore: restored 1 large object\n09:12:49 pg_restore: connecting to database for restore\n09:12:49 pg_restore: connecting to database for restore\n09:12:49 pg_restore: disabling triggers for nums\n09:12:49 pg_restore: processing data for table \"public.nums\"\n09:12:49 pg_restore: enabling triggers for nums\n09:12:49 pg_restore: connecting to database for restore\n09:12:49 pg_restore: connecting to database for restore\n09:12:49 pg_restore: disabling triggers for old\n09:12:49 pg_restore: processing data for table \"old.old\"\n09:12:49 pg_restore: enabling triggers for old\n09:12:49 pg_restore: disabling triggers for phone\n09:12:49 pg_restore: processing data for table \"public.phone\"\n09:12:49 pg_restore: enabling triggers for phone\n09:12:49 pg_restore: executing SEQUENCE SET odd10ok 61 - pg_lsclusters -h\n09:12:49 ok 62 - pg_lsclusters 
output\n\nhttps://pgdgbuild.dus.dg-i.net/view/Testsuite/job/upgrade-11-12/lastFailedBuild/architecture=amd64,distribution=sid/console\n\nThe code running there is:\n\n print 'Upgrading database ', $db, \"...\\n\";\n open SOURCE, '-|', $pg_dump, '-h', $oldsocket, '-p', $info{'port'},\n '-Fc', '--quote-all-identifiers', $db or\n error 'Could not execute pg_dump for old cluster';\n\n # start pg_restore and copy over everything\n my @restore_argv = ($pg_restore, '-h', $newsocket, '-p',\n $newinfo{'port'}, '--data-only', '-d', $db,\n '--disable-triggers', '--no-data-for-failed-tables');\n open SINK, '|-', @restore_argv or\n error 'Could not execute pg_restore for new cluster';\n\nhttps://salsa.debian.org/postgresql/postgresql-common/blob/master/pg_upgradecluster#L511-521\n\nChristoph\n\n\n",
"msg_date": "Tue, 9 Apr 2019 13:58:14 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unified logging system for command-line programs"
},
{
"msg_contents": "Re: Peter Eisentraut 2019-04-09 <d483cdb6-db98-9b2f-7f2b-eed0f4bd975d@2ndquadrant.com>\n> I'm okay with changing it. As you indicate, I chose the name so that it\n> doesn't look like a libpq variable. There are some other PG_ variables\n> throughout the code, but those appear to be mostly for internal use.\n> Also, there is GCC_COLORS, LS_COLORS, etc. But perhaps this wisdom will\n> be lost on users who just read the man page and get confused. ;-)\n\nDo we need two variables to control this? I was only looking at\nPG_COLOR, and noticed PG_COLORS only later. Keeping PG_COLORS aligned\nwith {GCC,LS}_COLORS makes sense. How about removing PG_COLOR, and\nmaking \"auto\" the default? (Maybe we could still support \"PG_COLORS=off\")\n\nChristoph\n\n\n",
"msg_date": "Tue, 9 Apr 2019 14:01:12 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
},
{
"msg_contents": "On 2019-04-09 13:58, Christoph Berg wrote:\n> I'm not entirely sure what happened here, but I think this made\n> pg_restore verbose by default (and there is no --quiet option).\n\nThat was by accident. Fixed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 10 Apr 2019 11:50:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Unified logging system for command-line programs"
},
{
"msg_contents": "On Tue, Apr 9, 2019 at 9:01 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Peter Eisentraut 2019-04-09 <d483cdb6-db98-9b2f-7f2b-eed0f4bd975d@2ndquadrant.com>\n> > I'm okay with changing it. As you indicate, I chose the name so that it\n> > doesn't look like a libpq variable. There are some other PG_ variables\n> > throughout the code, but those appear to be mostly for internal use.\n> > Also, there is GCC_COLORS, LS_COLORS, etc. But perhaps this wisdom will\n> > be lost on users who just read the man page and get confused. ;-)\n>\n> Do we need two variables to control this? I was only looking at\n> PG_COLOR, and noticed PG_COLORS only later. Keeping PG_COLORS aligned\n> with {GCC,LS}_COLORS makes sense. How about removing PG_COLOR, and\n> making \"auto\" the default? (Maybe we could still support \"PG_COLORS=off\")\n>\n\nI think if we keep two variables, users can set the same value to\nboth GCC_COLORS and PG_COLORS. Rather, I think it's a problem that\nthere is no documentation of PG_COLORS. Thoughts?\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 6 Jun 2019 18:08:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
},
{
"msg_contents": "On 2019-06-06 11:08, Masahiko Sawada wrote:\n> On Tue, Apr 9, 2019 at 9:01 PM Christoph Berg <myon@debian.org> wrote:\n>>\n>> Re: Peter Eisentraut 2019-04-09 <d483cdb6-db98-9b2f-7f2b-eed0f4bd975d@2ndquadrant.com>\n>>> I'm okay with changing it. As you indicate, I chose the name so that it\n>>> doesn't look like a libpq variable. There are some other PG_ variables\n>>> throughout the code, but those appear to be mostly for internal use.\n>>> Also, there is GCC_COLORS, LS_COLORS, etc. But perhaps this wisdom will\n>>> be lost on users who just read the man page and get confused. ;-)\n>>\n>> Do we need two variables to control this? I was only looking at\n>> PG_COLOR, and noticed PG_COLORS only later. Keeping PG_COLORS aligned\n>> with {GCC,LS}_COLORS makes sense. How about removing PG_COLOR, and\n>> making \"auto\" the default? (Maybe we could still support \"PG_COLORS=off\")\n>>\n> \n> I think the if we keep two variables user can set the same value to\n> both GCC_COLORS and PG_COLORS. Rather I think it's a problem that\n> there is no documentation of PG_COLORS. Thoughts?\n\nIt looks like there is documentation for PG_COLORS in the release notes\nnow, which seems like an odd place. Suggestions for a better place?\n\nAnd any more opinions for PG_COLORS vs PGCOLORS naming?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 18 Sep 2019 13:03:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-06-06 11:08, Masahiko Sawada wrote:\n>>> Do we need two variables to control this? I was only looking at\n>>> PG_COLOR, and noticed PG_COLORS only later. Keeping PG_COLORS aligned\n>>> with {GCC,LS}_COLORS makes sense. How about removing PG_COLOR, and\n>>> making \"auto\" the default? (Maybe we could still support \"PG_COLORS=off\")\n\n>> I think the if we keep two variables user can set the same value to\n>> both GCC_COLORS and PG_COLORS. Rather I think it's a problem that\n>> there is no documentation of PG_COLORS. Thoughts?\n\n> It looks like there is documentation for PG_COLORS in the release notes\n> now, which seems like an odd place. Suggestions for a better place?\n\nI stuck that in because Bruce's text didn't make any sense to me,\nso I went and read the code to see what it was actually doing.\nI didn't know that it hadn't been correctly documented in the first\nplace ;-)\n\nI'm not for forcing \"auto\" mode all the time; that will surely break\nthings for some people. So I think the behavior is fine and\nwe should just fix the docs. (Possibly my opinion is biased here\nby the fact that I hate all forms of colorized output with a deep,\nabiding passion, as Robert would put it. So off-by-default is just\nfine with me.)\n\n> And any more opinions for PG_COLORS vs PGCOLORS naming?\n\nFollowing the precedent of LS_COLORS makes sense from here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Sep 2019 11:19:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> It looks like there is documentation for PG_COLORS in the release notes\n> now, which seems like an odd place. Suggestions for a better place?\n\nBTW, as far as that goes, it looks like PG_COLOR is documented separately\nin each frontend program's \"Environment\" man page section. That's a bit\nduplicative but I don't think we have a better answer right now. Seems\nlike you just need to add boilerplate text about PG_COLORS alongside\neach of those.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Sep 2019 19:27:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGCOLOR? (Re: pgsql: Unified logging system for command-line\n programs)"
}
] |
[
{
"msg_contents": "Hello,\n\nI think the following conditional code is misleading, and I wonder if\nit would be better like so:\n\n--- a/src/backend/storage/smgr/md.c\n+++ b/src/backend/storage/smgr/md.c\n@@ -1787,8 +1787,13 @@ _mdfd_openseg(SMgrRelation reln, ForkNumber\nforknum, BlockNumber segno,\n if (fd < 0)\n return NULL;\n\n- if (segno <= reln->md_num_open_segs[forknum])\n- _fdvec_resize(reln, forknum, segno + 1);\n+ /*\n+ * Segments are always opened in order from lowest to highest,\nso we must\n+ * be adding a new one at the end.\n+ */\n+ Assert(segno == reln->md_num_open_segs[forknum]);\n+\n+ _fdvec_resize(reln, forknum, segno + 1);\n\n /* fill the entry */\n v = &reln->md_seg_fds[forknum][segno];\n\nI think the condition is always true, and with == it would also always\nbe true. If that weren't the case, the call to _fdvec_resize() code\nwould effectively leak vfds.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Apr 2019 17:14:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "At Wed, 3 Apr 2019 17:14:36 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in <CA+hUKG+NBw+uSzxF1os-SO6gUuw=cqO5DAybk6KnHKzgGvxhxA@mail.gmail.com>\n> Hello,\n> \n> I think the following conditional code is misleading, and I wonder if\n> it would be better like so:\n> \n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -1787,8 +1787,13 @@ _mdfd_openseg(SMgrRelation reln, ForkNumber\n> forknum, BlockNumber segno,\n> if (fd < 0)\n> return NULL;\n> \n> - if (segno <= reln->md_num_open_segs[forknum])\n> - _fdvec_resize(reln, forknum, segno + 1);\n> + /*\n> + * Segments are always opened in order from lowest to highest,\n> so we must\n> + * be adding a new one at the end.\n> + */\n> + Assert(segno == reln->md_num_open_segs[forknum]);\n> +\n> + _fdvec_resize(reln, forknum, segno + 1);\n> \n> /* fill the entry */\n> v = &reln->md_seg_fds[forknum][segno];\n> \n> I think the condition is always true, and with == it would also always\n> be true. If that weren't the case, the call to _fdvec_resize() code\n> would effectively leak vfds.\n\nI may be missing something, but it seems possible that\n_mdfd_getseg calls it with segno > opensegs.\n\n| for (nextsegno = reln->md_num_open_segs[forknum];\n| nextsegno <= targetseg; nextsegno++)\n| ...\n| v = _mdfd_openseg(reln, forknum, nextsegno, flags);\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Wed, 03 Apr 2019 13:33:57 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 5:34 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> I may be missing something, but it seems possible that\n> _mdfd_getseg calls it with segno > opensegs.\n>\n> | for (nextsegno = reln->md_num_open_segs[forknum];\n\nHere nextsegno starts out equal to opensegs.\n\n> | nextsegno <= targetseg; nextsegno++)\n\nHere we add one to nextsegno...\n\n> | ...\n> | v = _mdfd_openseg(reln, forknum, nextsegno, flags);\n\n... after adding one to opensegs. So they're always equal. This fits\na general programming pattern when appending to an array, the next\nelement's index is the same as the number of elements. But I claim\nthe coding is weird, because _mdfd_openseg's *looks* like it can\nhandle opening segments in any order, except that the author\naccidentally wrote \"<=\" instead of \">=\". In fact it can't open them\nin any order, because we don't support \"holes\" in the array. So I\nthink it should really be \"==\", and it should be an assertion, not a\ncondition.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Apr 2019 09:24:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "Hi,\n\nOn 2019-04-04 09:24:49 +1300, Thomas Munro wrote:\n> On Wed, Apr 3, 2019 at 5:34 PM Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > I may be missing something, but it seems possible that\n> > _mdfd_getseg calls it with segno > opensegs.\n> >\n> > | for (nextsegno = reln->md_num_open_segs[forknum];\n> \n> Here nextsegno starts out equal to opensegs.\n> \n> > | nextsegno <= targetseg; nextsegno++)\n> \n> Here we add one to nextsegno...\n> \n> > | ...\n> > | v = _mdfd_openseg(reln, forknum, nextsegno, flags);\n> \n> ... after adding one to opensegs. So they're always equal. This fits\n> a general programming pattern when appending to an array, the next\n> element's index is the same as the number of elements. But I claim\n> the coding is weird, because _mdfd_openseg's *looks* like it can\n> handle opening segments in any order, except that the author\n> accidentally wrote \"<=\" instead of \">=\". In fact it can't open them\n> in any order, because we don't support \"holes\" in the array. So I\n> think it should really be \"==\", and it should be an assertion, not a\n> condition.\n\nYea, I totally agree it's weird. I'm not sure if I'd go for an assertion\nof equality, or just invert the >= (which I agree I probably just\nscrewed up and didn't notice when reviewing the patch because it looked\nclose enough to correct and it didn't have a measurable effect).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Apr 2019 13:47:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "Hello.\n\nAt Wed, 3 Apr 2019 13:47:46 -0700, Andres Freund <andres@anarazel.de> wrote in <20190403204746.2yumq7c2mirmodsg@alap3.anarazel.de>\n> Hi,\n> \n> On 2019-04-04 09:24:49 +1300, Thomas Munro wrote:\n> > On Wed, Apr 3, 2019 at 5:34 PM Kyotaro HORIGUCHI\n> > <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > > I may be missing something, but it seems possible that\n> > > _mdfd_getseg calls it with segno > opensegs.\n> > >\n> > > | for (nextsegno = reln->md_num_open_segs[forknum];\n> > \n> > Here nextsegno starts out equal to opensegs.\n> > \n> > > | nextsegno <= targetseg; nextsegno++)\n> > \n> > Here we add one to nextsegno...\n> > \n> > > | ...\n> > > | v = _mdfd_openseg(reln, forknum, nextsegno, flags);\n> > \n> > ... after adding one to opensegs. So they're always equal. This fits\n> > a general programming pattern when appending to an array, the next\n> > element's index is the same as the number of elements. But I claim\n> > the coding is weird, because _mdfd_openseg's *looks* like it can\n> > handle opening segments in any order, except that the author\n> > accidentally wrote \"<=\" instead of \">=\". In fact it can't open them\n> > in any order, because we don't support \"holes\" in the array. So I\n> > think it should really be \"==\", and it should be an assertion, not a\n> > condition.\n> \n> Yea, I totally agree it's weird. I'm not sure if I'd go for an assertion\n> of equality, or just invert the >= (which I agree I probably just\n> screwed up and didn't notice when reviewing the patch because it looked\n> close enough to correct and it didn't have a measurable effect).\n\nI looked there and agreed. _mdfd_openseg is always called just to\nadd one new segment after the last opened segment by the two\ncallers. So I think == is better.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n\n",
"msg_date": "Thu, 04 Apr 2019 12:15:52 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 4:16 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> At Wed, 3 Apr 2019 13:47:46 -0700, Andres Freund <andres@anarazel.de> wrote in <20190403204746.2yumq7c2mirmodsg@alap3.anarazel.de>\n> > Yea, I totally agree it's weird. I'm not sure if I'd go for an assertion\n> > of equality, or just invert the >= (which I agree I probably just\n> > screwed up and didn't notice when reviewing the patch because it looked\n> > close enough to correct and it didn't have a measurable effect).\n>\n> I looked there and agreed. _mdfd_openseg is always called just to\n> add one new segment after the last opened segment by the two\n> callers. So I think == is better.\n\nThanks. Some other little things I noticed while poking around in this area:\n\nCallers of _mdfd_getseg(EXTENSION_RETURN_NULL) expect errno to be set if\nit returns NULL, and it expects the same of\nmdopen(EXTENSION_RETURN_NULL), and yet the latter does:\n\n fd = PathNameOpenFile(path, O_RDWR | PG_BINARY);\n\n if (fd < 0)\n {\n if ((behavior & EXTENSION_RETURN_NULL) &&\n FILE_POSSIBLY_DELETED(errno))\n {\n pfree(path);\n return NULL;\n }\n\n1. I guess that needs save_errno treatment to protect it from being\nclobbered by pfree()?\n2. It'd be nice if function documentation explicitly said which\nfunctions set errno on error (and perhaps which syscalls).\n3. Why does some code use \"file < 0\" and other code \"file <= 0\" to\ndetect errors from fd.c functions that return File?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Apr 2019 18:44:15 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "At Fri, 5 Apr 2019 18:44:15 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in <CA+hUKGKa-OKiNEsWOs+SWugpSE-C7MebejK-dDipaoS17BkRNw@mail.gmail.com>\n> On Thu, Apr 4, 2019 at 4:16 PM Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > At Wed, 3 Apr 2019 13:47:46 -0700, Andres Freund <andres@anarazel.de> wrote in <20190403204746.2yumq7c2mirmodsg@alap3.anarazel.de>\n> > > Yea, I totally agree it's weird. I'm not sure if I'd go for an assertion\n> > > of equality, or just invert the >= (which I agree I probably just\n> > > screwed up and didn't notice when reviewing the patch because it looked\n> > > close enough to correct and it didn't have a measurable effect).\n> >\n> > I looked there and agreed. _mdfd_openseg is always called just to\n> > add one new segment after the last opened segment by the two\n> > callers. So I think == is better.\n> \n> Thanks. Some other little things I noticed while poking around in this area:\n> \n> Callers of _mdgetseg(EXTENSION_RETURN_NULL) expect errno to be set if\n> it returns NULL, and it expects the same of\n\nOnly mdsyncfiletag seems expecting that and it is documented. But\n_mdfd_getseg is not documented as the same. mdopen also is not.\n\n> mdopen(EXTERNSION_RETURN_NULL), and yet the latter does:\n> \n> fd = PathNameOpenFile(path, O_RDWR | PG_BINARY);\n> \n> if (fd < 0)\n> {\n> if ((behavior & EXTENSION_RETURN_NULL) &&\n> FILE_POSSIBLY_DELETED(errno))\n> {\n> pfree(path);\n> return NULL;\n> }\n\n> 1. I guess that needs save_errno treatment to protect it from being\n> clobbered by pfree()?\n\nIf both elog() and free() don't change errno, we don't need to do\nthat at least for AllocSetFree, and is seems to be the same for\nother allocators. I think it is better to guarantee (and\ndocument) that errno does not change by pfree(), rather than to\nprotect in the caller side.a\n\n> 2. 
It'd be nice if function documentation explicitly said which\n> functions set errno on error (and perhaps which syscalls).\n\nI agree about errno. I'm not sure about syscall (names?).\n\n> 3. Why does some code use \"file < 0\" and other code \"file <= 0\" to\n> detect errors from fd.c functions that return File?\n\nThat seems like just a thinko, or a difference in how to think about\ninvalid (or impossible) values. Vfd == 0 is invalid and\nimpossible, so file <= 0 and file < 0 are effectively\nequivalent. I think we should treat 0 as an error rather than\nsuccess. I don't think it is worth adding Assert(file != 0).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Mon, 08 Apr 2019 15:34:33 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
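The save_errno question raised in point 1 above can be sketched in isolation. This is a hypothetical stand-alone illustration, not PostgreSQL's actual md.c code: `cleanup()` stands in for pfree() as a call that is not (yet) guaranteed to preserve errno, and `open_segment_failing()` plays the role of an EXTENSION_RETURN_NULL failure path that must hand the original errno back to its caller.

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Stand-in for pfree(): a cleanup call that may clobber errno
 * (here it deliberately zeroes it to make the hazard visible). */
static void
cleanup(void *ptr)
{
    free(ptr);
    errno = 0;
}

/* Failure path in the style of mdopen(EXTENSION_RETURN_NULL): return
 * "no result", but keep the original errno visible to the caller by
 * saving it around the cleanup call. */
static int
open_segment_failing(void)
{
    void *path = malloc(32);
    int   save_errno;

    errno = ENOENT;          /* simulate PathNameOpenFile() failing */

    save_errno = errno;      /* the save_errno treatment from point 1 */
    cleanup(path);
    errno = save_errno;

    return -1;               /* caller inspects errno on this result */
}
```

Without the save/restore pair, the caller would see errno == 0 after the cleanup and report a nonsensical error, which is exactly the hazard point 1 is about.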
{
"msg_contents": "On Thu, Apr 04, 2019 at 12:15:52PM +0900, Kyotaro HORIGUCHI wrote:\n> At Wed, 3 Apr 2019 13:47:46 -0700, Andres Freund <andres@anarazel.de> wrote in <20190403204746.2yumq7c2mirmodsg@alap3.anarazel.de>\n> > On 2019-04-04 09:24:49 +1300, Thomas Munro wrote:\n> > > On Wed, Apr 3, 2019 at 5:34 PM Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > > > I may be missing something, but it seems possible that\n> > > > _mdfd_getseg calls it with segno > opensegs.\n> > > >\n> > > > | for (nextsegno = reln->md_num_open_segs[forknum];\n> > > \n> > > Here nextsegno starts out equal to opensegs.\n> > > \n> > > > | nextsegno <= targetseg; nextsegno++)\n> > > \n> > > Here we add one to nextsegno...\n> > > \n> > > > | ...\n> > > > | v = _mdfd_openseg(reln, forknum, nextsegno, flags);\n> > > \n> > > ... after adding one to opensegs. So they're always equal. This fits\n> > > a general programming pattern when appending to an array, the next\n> > > element's index is the same as the number of elements. But I claim\n> > > the coding is weird, because _mdfd_openseg's *looks* like it can\n> > > handle opening segments in any order, except that the author\n> > > accidentally wrote \"<=\" instead of \">=\". In fact it can't open them\n> > > in any order, because we don't support \"holes\" in the array. So I\n> > > think it should really be \"==\", and it should be an assertion, not a\n> > > condition.\n> > \n> > Yea, I totally agree it's weird. I'm not sure if I'd go for an assertion\n> > of equality, or just invert the >= (which I agree I probably just\n> > screwed up and didn't notice when reviewing the patch because it looked\n> > close enough to correct and it didn't have a measurable effect).\n> \n> I looked there and agreed. _mdfd_openseg is always called just to\n> add one new segment after the last opened segment by the two\n> callers. So I think == is better.\n\nAgreed. 
The rest of md.c won't cope with a hole in this array, so allowing\nless-than-or-equal here is futile. The patch in the original post looks fine.\n\n\n",
"msg_date": "Sat, 25 Jan 2020 11:23:07 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
},
{
"msg_contents": "On Sun, Jan 26, 2020 at 8:23 AM Noah Misch <noah@leadboat.com> wrote:\n> Agreed. The rest of md.c won't cope with a hole in this array, so allowing\n> less-than-or-equal here is futile. The patch in the original post looks fine.\n\nThanks. Pushed.\n\n\n",
"msg_date": "Mon, 27 Jan 2020 09:18:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Strange coding in _mdfd_openseg()"
}
] |
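The invariant this thread settles on — each _mdfd_openseg() call may only open the segment directly after the last open one, so the old "<=" tolerance becomes an equality assertion — can be sketched with a toy model. The names here (FakeSMgrRelation, fake_openseg, fake_getseg) are illustrative stand-ins, not the real md.c structures:

```c
#include <assert.h>

#define MAX_SEGS 8

/* Toy mirror of md.c's per-fork bookkeeping: an array of open
 * segment numbers with no holes allowed. */
typedef struct
{
    int num_open_segs;
    int segnos[MAX_SEGS];
} FakeSMgrRelation;

/* Mirror of the fixed _mdfd_openseg() contract: appending the next
 * element of an array means its index equals the current count, so
 * the condition is an equality assertion, not a "<=" check. */
static void
fake_openseg(FakeSMgrRelation *rel, int segno)
{
    assert(segno == rel->num_open_segs);    /* no holes in the array */
    rel->segnos[rel->num_open_segs++] = segno;
}

/* Loop shaped like the caller in _mdfd_getseg(): extend from the
 * current count up to the target segment, one segment at a time. */
static void
fake_getseg(FakeSMgrRelation *rel, int targetseg)
{
    int nextsegno;

    for (nextsegno = rel->num_open_segs; nextsegno <= targetseg; nextsegno++)
        fake_openseg(rel, nextsegno);
}
```

Because nextsegno starts out equal to num_open_segs and both advance by one per iteration, the assertion always holds for these callers, which is why the equality form documents the contract better than the inverted comparison did.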
[
{
"msg_contents": "Hi all,\n(Adding Simon as the author of toast_tuple_target, as well Andrew and \nPavan in CC.)\n\ntoast_tuple_target has been introduced in 2017 by c251336 as of v11.\nAnd while reviewing Pavan's patch to have more complex control over\nthe compression threshold of a tuple, I have bumped into some\nsurprising code:\nhttps://www.postgresql.org/message-id/20190403044916.GD3298@paquier.xyz\n\nAs far as I understand it, even with this option we don't try to toast\ntuples in heap_prepare_insert() and heap_update() where\nTOAST_TUPLE_THRESHOLD gets used to define if a tuple can be toasted or\nnot. The same applies to raw_heap_insert() in rewriteheap.c, and\nneeds_toast_table() in toasting.c.\n\nShouldn't we use the reloption instead of the compiled threshold to\ndetermine if a tuple should be toasted or not? Perhaps I am missing\nsomething? It seems to me that this is a bug that should be\nback-patched, but it could also be qualified as a behavior change for\nexisting relations.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 3 Apr 2019 15:37:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 2:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Hi all,\n> (Adding Simon as the author of toast_tuple_target, as well Andrew and\n> Pavan in CC.)\n>\n> toast_tuple_target has been introduced in 2017 by c251336 as of v11.\n> And while reviewing Pavan's patch to have more complex control over\n> the compression threshold of a tuple, I have bumped into some\n> surprising code:\n> https://www.postgresql.org/message-id/20190403044916.GD3298@paquier.xyz\n>\n> As far as I understand it, even with this option we don't try to toast\n> tuples in heap_prepare_insert() and heap_update() where\n> TOAST_TUPLE_THRESHOLD gets used to define if a tuple can be toasted or\n> not. The same applies to raw_heap_insert() in rewriteheap.c, and\n> needs_toast_table() in toasting.c.\n>\n> Shouldn't we use the reloption instead of the compiled threshold to\n> determine if a tuple should be toasted or not? Perhaps I am missing\n> something? It seems to me that this is a bug that should be\n> back-patched, but it could also be qualified as a behavior change for\n> existing relations.\n\nCould you explain a bit more clearly what you think the bug is?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 3 Apr 2019 12:13:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Wed, Apr 03, 2019 at 12:13:51PM -0400, Robert Haas wrote:\n> On Wed, Apr 3, 2019 at 2:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Shouldn't we use the reloption instead of the compiled threshold to\n>> determine if a tuple should be toasted or not? Perhaps I am missing\n>> something? It seems to me that this is a bug that should be\n>> back-patched, but it could also be qualified as a behavior change for\n>> existing relations.\n> \n> Could you explain a bit more clearly what you think the bug is?\n\nI mean that toast_tuple_target is broken as-is, because it should be\nused on the new tuples of a relation as a threshold to decide if this\ntuple should be toasted or not, but we don't actually use the\nreloption value for that decision-making: the default threshold\nTOAST_TUPLE_THRESHOLD gets used instead all the time! The code does\nnot even create a toast table in some cases.\n\nYou may want to look at the attached patch if those words make little\nsense as code may be easier to explain than words here. Here is also\na simple example:\nCREATE TABLE toto (b text) WITH (toast_tuple_target = 1024);\nINSERT INTO toto SELECT string_agg('', md5(random()::text))\n FROM generate_series(1,10); -- 288 bytes\nSELECT pg_relation_size(reltoastrelid) = 0 AS is_empty FROM\n pg_class where relname = 'toto';\nINSERT INTO toto SELECT string_agg('', md5(random()::text))\n FROM generate_series(1,40); -- 1248 bytes\nSELECT pg_relation_size(reltoastrelid) = 0 AS is_empty FROM\n pg_class where relname = 'toto';\n\nOn HEAD, the second INSERT does *not* toast the tuple (is_empty is\ntrue). With the patch attached, the second INSERT toasts the tuple as\nI would expect (is_empty is *false*) per the custom threshold\ndefined.\n\nWhile on it, I think that the extra argument for\nRelationGetToastTupleTarget() is useless as the default value should\nbe TOAST_TUPLE_THRESHOLD all the time.\n\nDoes this make sense?\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 15:06:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 11:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n>\n> I mean that toast_tuple_target is broken as-is, because it should be\n> used on the new tuples of a relation as a threshold to decide if this\n> tuple should be toasted or not, but we don't actually use the\n> reloption value for that decision-making: the default threshold\n> TOAST_TUPLE_THRESHOLD gets used instead all the time! The code does\n> not even create a toast table in some cases.\n>\n\nI think the problem with the existing code is that while it allows users to\nset toast_tuple_target to be less than TOAST_TUPLE_THRESHOLD, the same is\nnot honoured while deciding whether to toast a row or not. AFAICS it works\nok when toast_tuple_target is greater than or equal to\nTOAST_TUPLE_THRESHOLD i.e. it won't toast the rows unless they are larger\nthan toast_tuple_target.\n\nIMV it makes sense to simply cap the lower limit of toast_tuple_target to\nthe compile time default and update docs to reflect that. Otherwise, we\nneed to deal with the possibility of dynamically creating the toast table\nif the relation option is lowered after creating the table. Your proposed\npatch handles the case when the toast_tuple_target is specified at CREATE\nTABLE, but we would still have problem with ALTER TABLE, no? But there\nmight be side effects of changing the lower limit for pg_dump/pg_restore.\nSo we would need to think about that too.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 5 Apr 2019 10:00:52 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Fri, 5 Apr 2019 at 17:31, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n> IMV it makes sense to simply cap the lower limit of toast_tuple_target to the compile time default and update docs to reflect that. Otherwise, we need to deal with the possibility of dynamically creating the toast table if the relation option is lowered after creating the table. Your proposed patch handles the case when the toast_tuple_target is specified at CREATE TABLE, but we would still have problem with ALTER TABLE, no? But there might be side effects of changing the lower limit for pg_dump/pg_restore. So we would need to think about that too.\n\nFWIW I independently stumbled upon this problem today and I concluded\nthe same thing: we can only make the lower limit for the\ntoast_tuple_target reloption the same as TOAST_TUPLE_THRESHOLD. (I\nwas unaware of this thread, so I reported in [1].)\n\nI only quickly looked at Michael's patch, and it does not seem to do\nanything for the case where no toast table exists and the user\nlowers the reloption; nothing seems to be there to build a new\ntoast table.\n\nI mentioned over in [1] that:\n> It does not seem possible to add/remove the toast table when the\n> reloption is changed either, as we're only obtaining a\n> ShareUpdateExclusiveLock to set it. We'd likely need to upgrade that\n> to an AccessExclusiveLock to do that.\n\nReading over the original discussion in [2], Simon seemed more\ninterested in delaying the toasting for tuples larger than 2040 bytes,\nnot making it happen sooner. This makes sense, since smaller datums are\nincreasingly less likely to compress the smaller they are.\n\nThe question is: can we increase the lower limit? We don't want\npg_upgrade or pg_dump reloads failing from older versions. Perhaps we\ncan just silently set the reloption to TOAST_TUPLE_THRESHOLD when the\nuser gives us some lower value. 
At least then lower values would\ndisappear over time.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9vrJ55oYe7un+rakTzwaGh3my5MA0RBfyNngAXu7eVeQ@mail.gmail.com\n[2] https://postgr.es/m/CANP8+jKsVmw6CX6YP9z7zqkTzcKV1+Uzr3XjKcZW=2Ya00OyQQ@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 5 Apr 2019 23:20:46 +1300",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Fri, 5 Apr 2019 at 17:31, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n> IMV it makes sense to simply cap the lower limit of toast_tuple_target to the compile time default and update docs to reflect that. Otherwise, we need to deal with the possibility of dynamically creating the toast table if the relation option is lowered after creating the table. Your proposed patch handles the case when the toast_tuple_target is specified at CREATE TABLE, but we would still have problem with ALTER TABLE, no? But there might be side effects of changing the lower limit for pg_dump/pg_restore. So we would need to think about that too.\n\nI've attached a patch which increases the lower limit up to\nTOAST_TUPLE_TARGET. Unfortunately, reloptions don't have an\nassign_hook like GUCs do. Unless we add those we've no way to still\naccept lower values without an error. Does anyone think that's worth\nadding for this? Without it, it's possible that pg_restore /\npg_upgrade could fail if someone happened to have toast_tuple_target\nset lower than 2032 bytes.\n\nI didn't bother capping RelationGetToastTupleTarget() to ensure it\nnever returns less than TOAST_TUPLE_TARGET since the code that\ncurrently uses it can't trigger if it's lower than that value.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 16 Apr 2019 23:30:51 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Tue, 16 Apr 2019 at 23:30, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I've attached a patch which increases the lower limit up to\n> TOAST_TUPLE_TARGET. Unfortunately, reloptions don't have an\n> assign_hook like GUCs do. Unless we add those we've no way to still\n> accept lower values without an error. Does anyone think that's worth\n> adding for this? Without it, it's possible that pg_restore /\n> pg_upgrade could fail if someone happened to have toast_tuple_target\n> set lower than 2032 bytes.\n\nHi Michael,\n\nI'm just wondering if you've had any thoughts on this?\n\nTo recap, Pavan and I think it's a problem to allow the\ntoast_tuple_target to be reduced as the relation in question may not\nhave a toast table defined. It does not seem very possible to call\ncreate_toast_table() when the toast_tuple_target is changed since we\nonly obtain a ShareUpdateExclusiveLock when doing that.\n\nThe options seem to be:\n1. Make the lower limit of toast_tuple_target the same as\nTOAST_TUPLE_THRESHOLD; or\n2. Require an AccessExclusiveLock when setting toast_tuple_target and\ncall create_toast_table() to ensure we get a toast table when the\nsetting is reduced to a level that means the target table may require\na toast relation.\n\nI sent a patch for #1, but maybe someone thinks we should do #2? The\nreason I've not explored #2 more is that after reading the original\ndiscussion when this patch was being developed, the main interest\nseemed to be keeping the values inline longer. Moving them out of\nline sooner seems to make less sense since smaller values are less\nlikely to compress as well as larger values.\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 30 Apr 2019 14:20:27 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Tue, Apr 30, 2019 at 02:20:27PM +1200, David Rowley wrote:\n> The options seem to be:\n> 1. Make the lower limit of toast_tuple_target the same as\n> TOAST_TUPLE_THRESHOLD; or\n> 2. Require an AccessExclusiveLock when setting toast_tuple_target and\n> call create_toast_table() to ensure we get a toast table when the\n> setting is reduced to a level that means the target table may require\n> a toast relation.\n\nActually, the patch I sent upthread does call create_toast_table() to\nattempt to create a toast table. However it fails lamentably because\nit lacks an exclusive lock when setting toast_tuple_target as you\noutlined:\ncreate table ab (a char(300));\nalter table ab set (toast_tuple_target = 128);\ninsert into ab select string_agg('', md5(random()::text))\n from generate_series(1,10); -- 288 bytes\n\nThis fails for the ALTER TABLE step like that:\nERROR: XX000: AccessExclusiveLock required to add toast table.\n\nNow if I upgrade to AccessExclusiveLock then the thing is able to\ntoast tuples related to the lower threshold set. Here is the stack if\nyou are interested:\n#0 create_toast_table (rel=0x7f8af648d178, toastOid=0,\ntoastIndexOid=0, reloptions=0, lockmode=8, check=true) at\ntoasting.c:131\n#1 0x00005627da7a4eca in CheckAndCreateToastTable (relOid=16385,\nreloptions=0, lockmode=8, check=true) at toasting.c:86\n#2 0x00005627da7a4e1e in AlterTableCreateToastTable (relOid=16385,\nreloptions=0, lockmode=8) at toasting.c:63\n#3 0x00005627da87f479 in ATRewriteCatalogs (wqueue=0x7fffb77cfae8,\nlockmode=8) at tablecmds.c:4185\n\n> I sent a patch for #1, but maybe someone thinks we should do #2? The\n> reason I've not explored #2 more is that after reading the original\n> discussion when this patch was being developed, the main interest\n> seemed to be keeping the values inline longer. 
Moving them out of\n> line sooner seems to make less sense since smaller values are less\n> likely to compress as well as larger values.\n\nYes, I have been chewing on the original thread from Simon, and it\nreally seems that he was interested in larger values when working on\nthis patch. And anyway, on HEAD we currently allow a toast table to\nbe created only if the threshold is at least TOAST_TUPLE_THRESHOLD,\nso we have an inconsistency between reloptions.c and\nneeds_toast_table().\n\nThere could be an argument for allowing lower thresholds, but let's\nsee if somebody has a better use-case for it. In this case they would\nneed to upgrade the lock needed to set toast_tuple_target. I actually\ndon't have an argument in favor of that, thinking about it more.\n\nNow, can we really increase the minimum value as you and Pavan\npropose? For now anything between 128 and TOAST_TUPLE_TARGET gets\nsilently ignored, but if we increase the threshold as you propose we\ncould prevent some dumps from being restored, and as storage parameters\nare defined as part of a WITH clause in CREATE TABLE, this could break\nrestores for a lot of users. We could tell pg_dump to coerce any\nvalues between 128 and TOAST_TUPLE_THRESHOLD up to\nTOAST_TUPLE_THRESHOLD; still, that's a lot of complication just to take\ncare of one inconsistency.\n\nHence, based on those arguments, there is option #3: do\nnothing. Perhaps the surrounding comments could make the current\nbehavior less confusing, though.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 15:49:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Tue, 14 May 2019 at 18:49, Michael Paquier <michael@paquier.xyz> wrote:\n> Now, can we really increase the minimum value as you and Pavan\n> propose? For now anything between 128 and TOAST_TUPLE_TARGET gets\n> silently ignored, but if we increase the threshold as you propose we\n> could prevent some dumps to be restored, and as storage parameters are\n> defined as part of a WITH clause in CREATE TABLE, this could break\n> restores for a lot of users. We could tell pg_dump to enforce any\n> values between 128 and TOAST_TUPLE_THRESHOLD to be enforced to\n> TOAST_TUPLE_THRESHOLD, still that's a lot of complication just to take\n> care of one inconsistency.\n\nIf we had reloption validation functions then we could, but we don't,\nso it seems we'd have no choice but reporting a hard ERROR.\n\nI guess it's not impossible for pg_dump to fail on this even without\nthis change. If someone had increased the limit on an instance with\nsay 16k page to something over what TOAST_TUPLE_TARGET_MAIN would be\non a standard instance, then restoring onto the 8k page instance will\nfail. Of course, that's less likely since it's a whole other factor\nin the equation, and it's still not impossible, so maybe we need to\nthink about it harder.\n\n> Hence, based on that those arguments, there is option #3 to do\n> nothing. Perhaps the surrounding comments could make the current\n> behavior less confusing though.\n\nI see this item has been moved to the \"Nothing to do\" section of the\nopen items list. I'd really like to see a few other people comment\nbefore we go and ignore this. We only get 1 opportunity to release a\nfix like this per year and it would be good to get further\nconfirmation if we want to leave this.\n\nIn my view, someone who has to go to the trouble of changing this\nsetting is probably going to have quite a bit of data in their\ndatabase and is quite unlikely to be using pg_dump due to that. Does\nthat mean we can make this cause an ERROR?... 
I don't know, but would\nbe good to hear what others think.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 12:33:54 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
},
{
"msg_contents": "On Tue, May 21, 2019 at 12:33:54PM +1200, David Rowley wrote:\n> I guess it's not impossible for pg_dump to fail on this even without\n> this change. If someone had increased the limit on an instance with\n> say 16k page to something over what TOAST_TUPLE_TARGET_MAIN would be\n> on a standard instance, then restoring onto the 8k page instance will\n> fail. Of course, that's less likely since it's a whole other factor\n> in the equation, and it's still not impossible, so maybe we need to\n> think about it harder.\n\nSure, this one would be possible as well. Much less likely, I guess, as\nI don't imagine much of our user base performs upgrades to new\ninstances by changing the page size. One way to sidestep that would be\nto use a ratio of the page size instead. I would imagine that the odds\nof compile-time constraints changing when moving to a new version\nincrease now that we have logical replication, since you can move things\nwith close to zero downtime without relying on the physical page\nsize.\n\n> I see this item has been moved to the \"Nothing to do\" section of the\n> open items list. I'd really like to see a few other people comment\n> before we go and ignore this. We only get 1 opportunity to release a\n> fix like this per year and it would be good to get further\n> confirmation if we want to leave this.\n\nYes, I moved this item without seeing any replies. Anyway, it's\nreally the kind of thing I'd rather not touch post beta, and I\nsee disadvantages in doing what you and Pavan propose as well. There\nis also the argument that toast_tuple_target is so specialized that\nclose to zero people are using it, hence changing its lower bound\nwould impact nobody.\n\n> In my view, someone who has to go to the trouble of changing this\n> setting is probably going to have quite a bit of data in their\n> database and is quite unlikely to be using pg_dump due to that. Does\n> that mean we can make this cause an ERROR?... 
I don't know, but would\n> be good to hear what others think.\n\nSure. Other opinions are welcome. Perhaps I lack insight and user\nstories on the matter, but I unfortunately see downsides in all things\ndiscussed. I am a rather pessimistic guy by nature.\n--\nMichael",
"msg_date": "Tue, 21 May 2019 13:18:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Caveats from reloption toast_tuple_target"
}
] |
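Option #1 discussed in the thread above (capping the reloption at the compile-time threshold so that unusable low values silently disappear over time) amounts to a simple clamp. A minimal sketch follows; FAKE_TOAST_TUPLE_THRESHOLD is an assumed stand-in for the real constant, which is derived from the block size (about 2032 bytes on standard 8k pages, per the thread), and `effective_toast_target()` is a hypothetical helper, not a PostgreSQL function:

```c
#include <assert.h>

/* Illustrative stand-in for the compile-time TOAST_TUPLE_THRESHOLD;
 * the real value depends on the block size (~2032 bytes on 8k pages). */
#define FAKE_TOAST_TUPLE_THRESHOLD 2032

/* Silently clamp a user-supplied toast_tuple_target to the compile-time
 * lower bound: values below it can never take effect (no toast table
 * would exist for them), so they are coerced up instead of raising a
 * hard error that could break pg_dump / pg_upgrade restores. */
static int
effective_toast_target(int requested)
{
    if (requested < FAKE_TOAST_TUPLE_THRESHOLD)
        return FAKE_TOAST_TUPLE_THRESHOLD;
    return requested;
}
```

The design trade-off the thread weighs is exactly this: clamping keeps old dumps restorable, while a hard lower-bound error (option raised later in the thread) would reject WITH (toast_tuple_target = 128) clauses emitted by older servers.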
[
{
"msg_contents": "Hi all,\n\nSome tests for toast_tuple_target introduced by c251336 check if a\ntoast relation is empty or not using this:\n+select 0 = pg_relation_size('pg_toast.pg_toast_'||(select oid from\npg_class where relname =\n'toasttest'))/current_setting('block_size')::integer as blocks;\n\nThis is overcomplicated, as there is no need to build the relation's\ntoast name; reltoastrelid can be used directly, like this:\nSELECT pg_relation_size(reltoastrelid) = 0 AS data_size\n FROM pg_class where relname = 'toasttest';\n\nAny objections if I simplify those tests as per the attached?\n--\nMichael",
"msg_date": "Wed, 3 Apr 2019 15:59:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Simplify readability of some tests for toast_tuple_target in\n strings.sql"
},
{
"msg_contents": "On Wednesday, April 3, 2019 8:59 AM, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi all,\n>\n> Some tests for toast_tuple_target introduced by c251336 check if a\n> toast relation is empty or not using that:\n> +select 0 = pg_relation_size('pg_toast.pg_toast_'||(select oid from\n> pg_class where relname =\n> 'toasttest'))/current_setting('block_size')::integer as blocks;\n>\n> This is overcomplicated as there is not need to compile the relation\n> toast name, and reltoastrelid can be used directly, like that:\n> SELECT pg_relation_size(reltoastrelid) = 0 AS data_size\n> FROM pg_class where relname = 'toasttest';\n>\n> Any objections if I simplify those tests as per the attached?\n\n+1, that's much more readable. Thanks!\n\ncheers ./daniel\n\n\n",
"msg_date": "Wed, 03 Apr 2019 09:38:57 +0000",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Simplify readability of some tests for toast_tuple_target in\n strings.sql"
},
{
"msg_contents": "On Wed, Apr 03, 2019 at 09:38:57AM +0000, Daniel Gustafsson wrote:\n> +1, that's much more readable. Thanks!\n\nThanks for the lookup, done.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 10:26:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Simplify readability of some tests for toast_tuple_target in\n strings.sql"
}
] |
[
{
"msg_contents": "As pointed out by Michael Banck as a comment on my blogpost, the pg_rewind\ndocumentation says it requires superuser permissions on the remote server.\n\nIs that really so, though? I haven't tested it, but from a quick look at\nthe code it looks like it needs pg_ls_dir(), pg_stat_file() and\npg_read_binary_file(), all of which are independently grantable.\n\nOr am I missing something?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Wed, 3 Apr 2019 11:28:50 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "pg_rewind vs superuser"
},
{
"msg_contents": "On Wed, Apr 03, 2019 at 11:28:50AM +0200, Magnus Hagander wrote:\n> As pointed out by Michael Banck as a comment on my blogpost, the pg_rewind\n> documentation says it requires superuser permissions on the remote server.\n> \n> Is that really so, though? I haven't tested it, but from a quick look at\n> the code it looks like it needs pg_ls_dir(), pg_stat_file() and\n> pg_read_binary_file(), all of which are independently grantable.\n> \n> Or am I missing something?\n\nSomebody I heard of has mentioned that stuff on his blog some time\nago:\nhttps://paquier.xyz/postgresql-2/postgres-11-superuser-rewind/\n\nAnd what you need to do is just that:\nCREATE USER rewind_user LOGIN;\nGRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean)\nTO rewind_user;\nGRANT EXECUTE ON function pg_catalog.pg_stat_file(text, boolean) TO\nrewind_user;\nGRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO\nrewind_user;\nGRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint,\nbigint, boolean) TO rewind_user;\n\nI think that we should document that and back-patch, as now the docs\nonly say that a superuser should be used, but that is wrong.\n\nAt the same time, let's also document that we need to use a checkpoint\non the promoted standby so that the control file gets refreshed and\npg_rewind is able to work properly. I promised that some time ago and\ngot reminded of that issue after seeing this thread...\n\nWhat do you think about the attached?\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 13:11:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 6:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Apr 03, 2019 at 11:28:50AM +0200, Magnus Hagander wrote:\n> > As pointed out by Michael Banck as a comment on my blogpost, the\n> pg_rewind\n> > documentation says it requires superuser permissions on the remote\n> server.\n> >\n> > Is that really so, though? I haven't tested it, but from a quick look at\n> > the code it looks like it needs pg_ls_dir(), pg_stat_file() and\n> > pg_read_binary_file(), all, of which are independently grantable.\n> >\n> > Or am I missing something?\n>\n> Somebody I heard of has mentioned that stuff on his blog some time\n> ago:\n> https://paquier.xyz/postgresql-2/postgres-11-superuser-rewind/\n\n\nHah. I usually read your blog, but I had forgotten about that one :)\n\n\nAnd what you need to do is just that:\n> CREATE USER rewind_user LOGIN;\n> GRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean)\n> TO rewind_user;\n> GRANT EXECUTE ON function pg_catalog.pg_stat_file(text, boolean) TO\n> rewind_user;\n> GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO\n> rewind_user;\n> GRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint,\n> bigint, boolean) TO rewind_user;\n>\n> I think that we should document that and back-patch, as now the docs\n> only say that a superuser should be used, but that is wrong.\n>\n> At the same time, let's also document that we need to use a checkpoint\n> on the promoted standby so as the control file gets a refresh and\n> pg_rewind is able to work properly. I promised that some time ago and\n> got reminded of that issue after seeing this thread...\n>\n> What do you think about the attached?\n>\n\nLooks good. Maybe we should list the \"role having sufficient permissions\"\nbefore superuser, \"just because\", but not something I feel strongly about.\n\nThe part about CHECKPOINT also looks pretty good, but that's entirely\nunrelated, right? 
:)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 4, 2019 at 6:11 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Apr 03, 2019 at 11:28:50AM +0200, Magnus Hagander wrote:\n> As pointed out by Michael Banck as a comment on my blogpost, the pg_rewind\n> documentation says it requires superuser permissions on the remote server.\n> \n> Is that really so, though? I haven't tested it, but from a quick look at\n> the code it looks like it needs pg_ls_dir(), pg_stat_file() and\n> pg_read_binary_file(), all, of which are independently grantable.\n> \n> Or am I missing something?\n\nSomebody I heard of has mentioned that stuff on his blog some time\nago:\nhttps://paquier.xyz/postgresql-2/postgres-11-superuser-rewind/Hah. I usually read your blog, but I had forgotten about that one :)\nAnd what you need to do is just that:\nCREATE USER rewind_user LOGIN;\nGRANT EXECUTE ON function pg_catalog.pg_ls_dir(text, boolean, boolean)\nTO rewind_user;\nGRANT EXECUTE ON function pg_catalog.pg_stat_file(text, boolean) TO\nrewind_user;\nGRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text) TO\nrewind_user;\nGRANT EXECUTE ON function pg_catalog.pg_read_binary_file(text, bigint,\nbigint, boolean) TO rewind_user;\n\nI think that we should document that and back-patch, as now the docs\nonly say that a superuser should be used, but that is wrong.\n\nAt the same time, let's also document that we need to use a checkpoint\non the promoted standby so as the control file gets a refresh and\npg_rewind is able to work properly. I promised that some time ago and\ngot reminded of that issue after seeing this thread...\n\nWhat do you think about the attached?Looks good. 
Maybe we should list the \"role having sufficient permissions\" before superuser, \"just because\", but not something I feel strongly about.The part about CHECKPOINT also looks pretty good, but that's entirely unrelated, right? :)-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 4 Apr 2019 10:18:45 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 10:18:45AM +0200, Magnus Hagander wrote:\n> Looks good. Maybe we should list the \"role having sufficient permissions\"\n> before superuser, \"just because\", but not something I feel strongly about.\n\nListing the superuser after sounds fine to me.\n\n> The part about CHECKPOINT also looks pretty good, but that's entirely\n> unrelated, right? :)\n\nCompletely unrelated, but as we are on this part of the documentation\nnow, and as we discussed that stuff face-to-face last September where\nI actually promised to write a patch without doing it for seven\nmonths, I see no problems to tackle this issue as well now. Better\nlater than never :)\n\nI would like to apply this down to 9.5 for the checkpoint part and\ndown to 11 for the role part, so if anybody has any comments, please\nfeel free.\n--\nMichael",
"msg_date": "Thu, 4 Apr 2019 19:43:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 12:43 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Apr 04, 2019 at 10:18:45AM +0200, Magnus Hagander wrote:\n> > Looks good. Maybe we should list the \"role having sufficient permissions\"\n> > before superuser, \"just because\", but not something I feel strongly\n> about.\n>\n> Listing the superuser after sounds fine to me.\n>\n> > The part about CHECKPOINT also looks pretty good, but that's entirely\n> > unrelated, right? :)\n>\n> Completely unrelated, but as we are on this part of the documentation\n> now, and as we discussed that stuff face-to-face last September where\n> I actually promised to write a patch without doing it for seven\n> months, I see no problems to tackle this issue as well now. Better\n> later than never :)\n>\n\n:) Nope, I definitely think we need to include that.\n\n\nI would like to apply this down to 9.5 for the checkpoint part and\n> down to 11 for the role part, so if anybody has any comments, please\n> feel free.\n>\n\nAll of it, or just the checkpoint part? I assume just the checkpoint part?\nAFAIK it does require superuser in those earlier versions?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Apr 4, 2019 at 12:43 PM Michael Paquier <michael@paquier.xyz> wrote:On Thu, Apr 04, 2019 at 10:18:45AM +0200, Magnus Hagander wrote:\n> Looks good. Maybe we should list the \"role having sufficient permissions\"\n> before superuser, \"just because\", but not something I feel strongly about.\n\nListing the superuser after sounds fine to me.\n\n> The part about CHECKPOINT also looks pretty good, but that's entirely\n> unrelated, right? 
:)\n\nCompletely unrelated, but as we are on this part of the documentation\nnow, and as we discussed that stuff face-to-face last September where\nI actually promised to write a patch without doing it for seven\nmonths, I see no problems to tackle this issue as well now. Better\nlater than never :):) Nope, I definitely think we need to include that.\nI would like to apply this down to 9.5 for the checkpoint part and\ndown to 11 for the role part, so if anybody has any comments, please\nfeel free.All of it, or just the checkpoint part? I assume just the checkpoint part? AFAIK it does require superuser in those earlier versions? -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 4 Apr 2019 13:19:44 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 01:19:44PM +0200, Magnus Hagander wrote:\n> All of it, or just the checkpoint part? I assume just the checkpoint part?\n> AFAIK it does require superuser in those earlier versions?\n\nI meant of course the checkpoint part down to 9.5, and the rest down\nto 11, so done this way.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2019 10:40:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 04, 2019 at 01:11:29PM +0900, Michael Paquier wrote:\n> At the same time, let's also document that we need to use a checkpoint\n> on the promoted standby so as the control file gets a refresh and\n> pg_rewind is able to work properly. I promised that some time ago and\n> got reminded of that issue after seeing this thread...\n\nIs there a good reason why Postgres doesn't just issue a CHECKPOINT\nafter promote itself? After all, this seems to be about making the\ncontrol file having the proper content, which sounds like a good thing\nto have in general.\n\nCould this be a problem for anything else besides pg_rewind?\n\nThis looks like a needless footgun waiting to happen, and just\ndocumenting it in pg_rewind's notes section looks a bit too hidden to me\n(but is certainly an improvement).\n\n\nMichael\n\n\n",
"msg_date": "Fri, 5 Apr 2019 09:41:58 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 05, 2019 at 09:41:58AM +0200, Michael Banck wrote:\n> Is there a good reason why Postgres doesn't just issue a CHECKPOINT\n> after promote itself? After all, this seems to be about making the\n> control file having the proper content, which sounds like a good thing\n> to have in general.\n\nThe startup process requests a checkpoint since 9.3, and before that\nit was doing the checkpoint by itself (grep for fast_promoted and\nRequestCheckpoint() around line 7579 in xlog.c). This allows the\nrecovery to finish much faster.\n\n> Could this be a problem for anything else besides pg_rewind?\n\nNot that I know of, at least not in the tree.\n\n> This looks like a needless footgun waiting to happen, and just\n> documenting it in pg_rewind's notes section looks a bit too hidden to me\n> (but is certainly an improvement).\n\nWe had a couple of reports on the matter over the past years. Perhaps\nwe could use a big fat warning but that feels a bit overdoing it.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2019 16:56:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 9:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 05, 2019 at 09:41:58AM +0200, Michael Banck wrote:\n> > Is there a good reason why Postgres doesn't just issue a CHECKPOINT\n> > after promote itself? After all, this seems to be about making the\n> > control file having the proper content, which sounds like a good thing\n> > to have in general.\n>\n> The startup process requests a checkpoint since 9.3, and before that\n> it was doing the checkpoint by itself (grep for fast_promoted and\n> RequestCheckpoint() around line 7579 in xlog.c). This allows the\n> recovery to finish much faster.\n>\n> > Could this be a problem for anything else besides pg_rewind?\n>\n> Not that I know of, at least not in the tree.\n>\n> > This looks like a needless footgun waiting to happen, and just\n> > documenting it in pg_rewind's notes section looks a bit too hidden to me\n> > (but is certainly an improvement).\n>\n> We had a couple of reports on the matter over the past years. Perhaps\n> we could use a big fat warning but that feels a bit overdoing it.\n>\n\nA related question is, could we (for 12+) actually make the problem go\naway? As in, can we detect the state and just have pg_rewind issue the\ncheckpoint as needed?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 5, 2019 at 9:56 AM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Apr 05, 2019 at 09:41:58AM +0200, Michael Banck wrote:\n> Is there a good reason why Postgres doesn't just issue a CHECKPOINT\n> after promote itself? 
After all, this seems to be about making the\n> control file having the proper content, which sounds like a good thing\n> to have in general.\n\nThe startup process requests a checkpoint since 9.3, and before that\nit was doing the checkpoint by itself (grep for fast_promoted and\nRequestCheckpoint() around line 7579 in xlog.c). This allows the\nrecovery to finish much faster.\n\n> Could this be a problem for anything else besides pg_rewind?\n\nNot that I know of, at least not in the tree.\n\n> This looks like a needless footgun waiting to happen, and just\n> documenting it in pg_rewind's notes section looks a bit too hidden to me\n> (but is certainly an improvement).\n\nWe had a couple of reports on the matter over the past years. Perhaps\nwe could use a big fat warning but that feels a bit overdoing it.A related question is, could we (for 12+) actually make the problem go away? As in, can we detect the state and just have pg_rewind issue the checkpoint as needed? -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 5 Apr 2019 09:59:29 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 05, 2019 at 09:59:29AM +0200, Magnus Hagander wrote:\n> A related question is, could we (for 12+) actually make the problem go\n> away? As in, can we detect the state and just have pg_rewind issue the\n> checkpoint as needed?\n\nI am not sure as you can still bump into the legit case where one is \ntrying to rewind an instance which is on the same timeline as the\nsource, and nothing should happen in this case.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2019 17:06:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 10:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 05, 2019 at 09:59:29AM +0200, Magnus Hagander wrote:\n> > A related question is, could we (for 12+) actually make the problem go\n> > away? As in, can we detect the state and just have pg_rewind issue the\n> > checkpoint as needed?\n>\n> I am not sure as you can still bump into the legit case where one is\n> trying to rewind an instance which is on the same timeline as the\n> source, and nothing should happen in this case.\n>\n\nIf that is the case, would running a CHECKPOINT actually cause a problem?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 5, 2019 at 10:06 AM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Apr 05, 2019 at 09:59:29AM +0200, Magnus Hagander wrote:\n> A related question is, could we (for 12+) actually make the problem go\n> away? As in, can we detect the state and just have pg_rewind issue the\n> checkpoint as needed?\n\nI am not sure as you can still bump into the legit case where one is \ntrying to rewind an instance which is on the same timeline as the\nsource, and nothing should happen in this case.If that is the case, would running a CHECKPOINT actually cause a problem? -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 5 Apr 2019 10:11:22 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "Hi,\n\nOn Fri, Apr 05, 2019 at 04:56:32PM +0900, Michael Paquier wrote:\n> On Fri, Apr 05, 2019 at 09:41:58AM +0200, Michael Banck wrote:\n> > Is there a good reason why Postgres doesn't just issue a CHECKPOINT\n> > after promote itself? After all, this seems to be about making the\n> > control file having the proper content, which sounds like a good thing\n> > to have in general.\n>\n> The startup process requests a checkpoint since 9.3, and before that\n> it was doing the checkpoint by itself (grep for fast_promoted and\n> RequestCheckpoint() around line 7579 in xlog.c). This allows the\n> recovery to finish much faster.\n\nOk, so the problem is that that checkpoint might be still ongoing when\nyou quickly issue a pg_rewind from the other side? If not, can you\nexplain once more the actual problem?\n\nIn any case, the updated documentation says:\n\n|When executing pg_rewind using an online cluster as source which has\n|been recently promoted, it is necessary to execute a CHECKPOINT after\n|promotion so as its control file reflects up-to-date timeline\n|information\n\nI think it might be useful to specify more exactly which of the two\nservers (the remote one AIUI) needs a CHECKPOINT in the abvoe. Also, if\nit is the case that a CHECKPOINT is done automatically (see above), that\nparagraph could be rewritten to say something like \"pg_rewind needs to\nwait for the checkoint on the remote server to finish. This can be\nensured by issueing an explicit checkpoint on the remote server prior to\nrunning pg_rewind.\"\n\nFinally, (and still, if I got the above correctly), to the suggestion of\nMagnus of pg_rewind running the checkpoint itself on the remote: would\nthat again mean that pg_rewind needs SUPERUSER rights or is there\na(nother) GRANTable function that could be added to the list in this\ncase?\n\nSorry for being a bit dense here.\n\n\nMichael\n\n\n",
"msg_date": "Fri, 5 Apr 2019 10:39:26 +0200",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 05, 2019 at 10:11:22AM +0200, Magnus Hagander wrote:\n> If that is the case, would running a CHECKPOINT actually cause a problem?\n\nIf you exclude the point that it may not be necessary and the\npotential extra I/O, no. However we would come back to the point of\npg_rewind requiring a superuser :(\n--\nMichael",
"msg_date": "Fri, 5 Apr 2019 19:59:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 05, 2019 at 10:39:26AM +0200, Michael Banck wrote:\n> Ok, so the problem is that that checkpoint might be still ongoing when\n> you quickly issue a pg_rewind from the other side?\n\nThe end-of-recovery checkpoint may not have even begun.\n\n> I think it might be useful to specify more exactly which of the two\n> servers (the remote one AIUI) needs a CHECKPOINT in the above. Also, if\n> it is the case that a CHECKPOINT is done automatically (see above), that\n> paragraph could be rewritten to say something like \"pg_rewind needs to\n> wait for the checkoint on the remote server to finish. This can be\n> ensured by issueing an explicit checkpoint on the remote server prior to\n> running pg_rewind.\"\n\nWell, the target server needs to be cleanly shut down, so it seems\npretty clear to me which one needs to have a checkpoint :)\n\n> Finally, (and still, if I got the above correctly), to the suggestion of\n> Magnus of pg_rewind running the checkpoint itself on the remote: would\n> that again mean that pg_rewind needs SUPERUSER rights or is there\n> a(nother) GRANTable function that could be added to the list in this\n> case?\n\npg_rewind would require again a superuser. So this could be\noptional. In one HA workflow I maintain, what I actually do is to\nenforce directly a checkpoint immediately after the promotion is done\nto make sure that the data is up-to-date, and I don't meddle with\npg_rewind workflow.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2019 20:05:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Fri, Apr 5, 2019 at 1:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 05, 2019 at 10:39:26AM +0200, Michael Banck wrote:\n> > Ok, so the problem is that that checkpoint might be still ongoing when\n> > you quickly issue a pg_rewind from the other side?\n>\n> The end-of-recovery checkpoint may not have even begun.\n>\n\nSo can we *detect* that this is the case? Because if so, we could perhaps\njust wait for it to be done? Because there will always be one?\n\nThe main point is -- we know from experience that it's pretty fragile to\nassume the user read the documentation :) So if we can find *any* way to\nhandle this in code rather than docs, that'd be great. We would still\nabsolutely want the docs change for back branches of course.\n\n\n> I think it might be useful to specify more exactly which of the two\n> > servers (the remote one AIUI) needs a CHECKPOINT in the above. Also, if\n> > it is the case that a CHECKPOINT is done automatically (see above), that\n> > paragraph could be rewritten to say something like \"pg_rewind needs to\n> > wait for the checkoint on the remote server to finish. This can be\n> > ensured by issueing an explicit checkpoint on the remote server prior to\n> > running pg_rewind.\"\n>\n> Well, the target server needs to be cleanly shut down, so it seems\n> pretty clear to me which one needs to have a checkpoint :)\n>\n\nClear to you and us of course, but quite possibly not to everybody. I'm\nsure there are a *lot* of users out there who do not realize that \"cleanly\nshut down\" means \"ran a checkpoint just before it shut down\".\n\n\n> Finally, (and still, if I got the above correctly), to the suggestion of\n> > Magnus of pg_rewind running the checkpoint itself on the remote: would\n> > that again mean that pg_rewind needs SUPERUSER rights or is there\n> > a(nother) GRANTable function that could be added to the list in this\n> > case?\n>\n> pg_rewind would require again a superuser. 
So this could be\n>\n\nUgh, you are right of course.\n\n\n\n> optional. In one HA workflow I maintain, what I actually do is to\n> enforce directly a checkpoint immediately after the promotion is done\n> to make sure that the data is up-to-date, and I don't meddle with\n> pg_rewind workflow.\n>\n\nSure. And every other HA setup also has to take care of it. That's why it\nwould make sense to centralize it into the tool itself when it's\n*mandatory* to deal with it somehow.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 5, 2019 at 1:05 PM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Apr 05, 2019 at 10:39:26AM +0200, Michael Banck wrote:\n> Ok, so the problem is that that checkpoint might be still ongoing when\n> you quickly issue a pg_rewind from the other side?\n\nThe end-of-recovery checkpoint may not have even begun.So can we *detect* that this is the case? Because if so, we could perhaps just wait for it to be done? Because there will always be one?The main point is -- we know from experience that it's pretty fragile to assume the user read the documentation :) So if we can find *any* way to handle this in code rather than docs, that'd be great. We would still absolutely want the docs change for back branches of course.\n> I think it might be useful to specify more exactly which of the two\n> servers (the remote one AIUI) needs a CHECKPOINT in the above. Also, if\n> it is the case that a CHECKPOINT is done automatically (see above), that\n> paragraph could be rewritten to say something like \"pg_rewind needs to\n> wait for the checkoint on the remote server to finish. 
This can be\n> ensured by issueing an explicit checkpoint on the remote server prior to\n> running pg_rewind.\"\n\nWell, the target server needs to be cleanly shut down, so it seems\npretty clear to me which one needs to have a checkpoint :)Clear to you and us of course, but quite possibly not to everybody. I'm sure there are a *lot* of users out there who do not realize that \"cleanly shut down\" means \"ran a checkpoint just before it shut down\".\n> Finally, (and still, if I got the above correctly), to the suggestion of\n> Magnus of pg_rewind running the checkpoint itself on the remote: would\n> that again mean that pg_rewind needs SUPERUSER rights or is there\n> a(nother) GRANTable function that could be added to the list in this\n> case?\n\npg_rewind would require again a superuser. So this could beUgh, you are right of course. \noptional. In one HA workflow I maintain, what I actually do is to\nenforce directly a checkpoint immediately after the promotion is done\nto make sure that the data is up-to-date, and I don't meddle with\npg_rewind workflow.Sure. And every other HA setup also has to take care of it. That's why it would make sense to centralize it into the tool itself when it's *mandatory* to deal with it somehow. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sun, 7 Apr 2019 15:06:56 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Sun, Apr 07, 2019 at 03:06:56PM +0200, Magnus Hagander wrote:\n> So can we *detect* that this is the case? Because if so, we could perhaps\n> just wait for it to be done? Because there will always be one?\n\nYes, this one is technically possible. We could add a timeout option\nwhich checks each N seconds the control file of the online source and\nsees if its timeline differs or not with the target, waiting for the\nchange to happen. If we do that, we may want to revisit the behavior\nof not issuing an error if the source and the target are detected as\nbeing on the same timeline, and consider it as a failure.\n\n> The main point is -- we know from experience that it's pretty fragile to\n> assume the user read the documentation :) So if we can find *any* way to\n> handle this in code rather than docs, that'd be great. We would still\n> absolutely want the docs change for back branches of course.\n\nAny veeeeery recent experience on the matter perhaps? :)\n--\nMichael",
"msg_date": "Mon, 8 Apr 2019 15:17:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "At Mon, 8 Apr 2019 15:17:25 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190408061725.GF2712@paquier.xyz>\n> On Sun, Apr 07, 2019 at 03:06:56PM +0200, Magnus Hagander wrote:\n> > So can we *detect* that this is the case? Because if so, we could perhaps\n> > just wait for it to be done? Because there will always be one?\n> \n> Yes, this one is technically possible. We could add a timeout option\n> which checks each N seconds the control file of the online source and\n> sees if its timeline differs or not with the target, waiting for the\n> change to happen. If we do that, we may want to revisit the behavior\n> of not issuing an error if the source and the target are detected as\n> being on the same timeline, and consider it as a failure.\n> \n> > The main point is -- we know from experience that it's pretty fragile to\n> > assume the user read the documentation :) So if we can find *any* way to\n> > handle this in code rather than docs, that'd be great. We would still\n> > absolutely want the docs change for back branches of course.\n> \n> Any veeeeery recent experience on the matter perhaps? :)\n\nI (am not Magnus) saw a similar but a bit different case. Just\nafter master's promote, standby was killed in immediate mode\nafter catching up to master's latest TLI but before restartpoint\nfinished. They are in different TLIs in control data so *the\ntool* decides to try pg_rewind. Restart->shutdown (*1) sequence\nfor cleanup made standby catch up to the master's TLI but their\nhistories have diverged from each other in the latest TLI. Of\ncourse, pg_rewind says \"no need to rewind since they're on the\nsame TLI\". The subsequent replication starts from the segment\nbeginning and overwrote the WAL records already applied on the\nstandby. The result was a broken database. 
I suspect that it is\nthe result of a kind of misoperation and sane operation won't\ncause the situation, but such situation could be \"cleaned up\" if\npg_rewind did the work for a replication set on the same TLI.\n\nI haven't found exactly what happened yet in the case.\n\n*1: It is somewhat strange, that recovery reaches to the next TLI\n    despite that I heard that the restart is in non-standby,\n    non-recovery mode.. Something should be wrong.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Mon, 08 Apr 2019 16:14:42 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On 2019-04-04 12:43, Michael Paquier wrote:\n> I would like to apply this down to 9.5 for the checkpoint part and\n> down to 11 for the role part, so if anybody has any comments, please\n> feel free.\n\nHow about some tests to show that this is actually true?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 8 Apr 2019 10:03:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Mon, Apr 08, 2019 at 10:03:48AM +0200, Peter Eisentraut wrote:\n> How about some tests to show that this is actually true?\n\nSure. With something like the attached? I don't think that there is\nmuch point to complicate the test code with multiple roles if the\ndefault is a superuser.\n--\nMichael",
"msg_date": "Tue, 9 Apr 2019 10:38:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Tue, Apr 09, 2019 at 10:38:19AM +0900, Michael Paquier wrote:\n> Sure. With something like the attached? I don't think that there is\n> much point to complicate the test code with multiple roles if the\n> default is a superuser.\n\nAs this topic differs from the original thread, I haev started a new\nthread, so let's discuss the proposed patch there:\nhttps://www.postgresql.org/message-id/20190411041336.GM2728@paquier.xyz\n--\nMichael",
"msg_date": "Thu, 11 Apr 2019 13:14:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Mon, Apr 8, 2019 at 8:17 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sun, Apr 07, 2019 at 03:06:56PM +0200, Magnus Hagander wrote:\n> > So can we *detect* that this is the case? Because if so, we could perhaps\n> > just wait for it to be done? Because there will always be one?\n>\n> Yes, this one is technically possible. We could add a timeout option\n> which checks each N seconds the control file of the online source and\n> sees if its timeline differs or not with the target, waiting for the\n> change to happen. If we do that, we may want to revisit the behavior\n> of not issuing an error if the source and the target are detected as\n> being on the same timeline, and consider it as a failure.\n>\n\nI think doing something like that would be a good idea.\n\nI mean, we should *always* detect if if we can, since it's a condition\nwhere things don't work properly.\n\nAnd I think it would make sense to wait by default, but we could then also\nhave a commandline parameter that says \"don't wait, instead error out in\ncase the checkpoint isn't done\".\n\nOr something like that?\n\n\n\n> The main point is -- we know from experience that it's pretty fragile to\n> > assume the user read the documentation :) So if we can find *any* way to\n> > handle this in code rather than docs, that'd be great. We would still\n> > absolutely want the docs change for back branches of course.\n>\n> Any veeeeery recent experience on the matter perhaps? :)\n>\n\nActually no, I've been considering it for some time due to the number of\nquestions we get on it that get exactly the same answer. 
And then you doing\nthe docs patch reminded me of it :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Apr 8, 2019 at 8:17 AM Michael Paquier <michael@paquier.xyz> wrote:On Sun, Apr 07, 2019 at 03:06:56PM +0200, Magnus Hagander wrote:\n> So can we *detect* that this is the case? Because if so, we could perhaps\n> just wait for it to be done? Because there will always be one?\n\nYes, this one is technically possible. We could add a timeout option\nwhich checks each N seconds the control file of the online source and\nsees if its timeline differs or not with the target, waiting for the\nchange to happen. If we do that, we may want to revisit the behavior\nof not issuing an error if the source and the target are detected as\nbeing on the same timeline, and consider it as a failure.I think doing something like that would be a good idea.I mean, we should *always* detect if if we can, since it's a condition where things don't work properly.And I think it would make sense to wait by default, but we could then also have a commandline parameter that says \"don't wait, instead error out in case the checkpoint isn't done\".Or something like that?\n> The main point is -- we know from experience that it's pretty fragile to\n> assume the user read the documentation :) So if we can find *any* way to\n> handle this in code rather than docs, that'd be great. We would still\n> absolutely want the docs change for back branches of course.\n\nAny veeeeery recent experience on the matter perhaps? :)Actually no, I've been considering it for some time due to the number of questions we get on it that get exactly the same answer. And then you doing the docs patch reminded me of it :) -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 11 Apr 2019 10:33:13 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind vs superuser"
},
{
"msg_contents": "On Thu, Apr 11, 2019 at 10:33:13AM +0200, Magnus Hagander wrote:\n> And I think it would make sense to wait by default, but we could then also\n> have a commandline parameter that says \"don't wait, instead error out in\n> case the checkpoint isn't done\".\n> \n> Or something like that?\n\nYes, that would be the idea. You still need to cover the case where\nboth instances are on the same timeline, in which case you could wait\nfor a checkpoint forever, so we'd need to change the current behavior\na bit by making sure that we always throw an error if both nodes are\nstill on the same timeline after the timeout (see TAP test\n005_same_timeline.pl). I am not sure that you need a separate option\nto control the case where you don't want to wait though. Perhaps we\ncould have a separate switch, but a user could also just set\n--timeout=0 to match that behavior.\n--\nMichael",
"msg_date": "Thu, 11 Apr 2019 22:33:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind vs superuser"
}
] |
[
{
"msg_contents": "Hi, everyone\r\n\r\nI have found a potential memory overflow in the ecpg preproc module.\r\n\r\nHere it is:\r\n\r\nhttps://github.com/postgres/postgres/blob/REL9_5_16/src/interfaces/ecpg/preproc/pgc.l\r\n\r\nIn the parse_include() function\r\n-------------------------------------------------------------------\r\nfor (ip = include_paths; yyin == NULL && ip != NULL; ip = ip->next)\r\n {\r\n if (strlen(ip->path) + strlen(yytext) + 3 > MAXPGPATH) ★1 forgets to count the terminating char '\\0'.\r\n {\r\n fprintf(stderr, _(\"Error: include path \\\"%s/%s\\\" is too long on line %d, skipping\\n\"), ip->path, yytext, yylineno);\r\n continue;\r\n }\r\n snprintf (inc_file, sizeof(inc_file), \"%s/%s\", ip->path, yytext);\r\n yyin = fopen(inc_file, \"r\");\r\n if (!yyin)\r\n {\r\n if (strcmp(inc_file + strlen(inc_file) - 2, \".h\") != 0)\r\n {\r\n strcat(inc_file, \".h\"); ★2\r\n yyin = fopen( inc_file, \"r\" );\r\n }\r\n }\r\n-----------------------------------------------------------------------\r\nFor example:\r\n (1) an ecpg program has the statement below\r\n EXEC SQL INCLUDE “abbbbbbbbcd”\r\nThe filename's length is 11.\r\n (2) the ecpg -I option is used to specify an additional include path\r\n the additional include path's length is 1010\r\n ex:/file1/ssssssss/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n 
/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n /a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a/a\r\n\r\nAfter entering parse_include(), execution proceeds as follows.\r\n1. When executing the code marked ★1, strlen(ip->path) is 1010, and strlen(yytext) is 11.\r\n So the total length (strlen(ip->path) + strlen(yytext) + 3 ) is 1024.\r\n As MAXPGPATH is 1024, the error is not thrown.\r\n 2. When executing the code marked ★2, the string stored in the variable inc_file is as follows.\r\n\r\n inc_file[0]:'f'\r\n inc_file[1]:'i'\r\n ....\r\n inc_file[1022]:'.'\r\n inc_file[1023]:'h' ====>there is no space for the char '\\0'.\r\n\r\nLastly, it is easy to fix; attached is a patch with the solution.\r\n\r\n--\r\nRegards,\r\nLiu Huailing\r\n--------------------------------------------------\r\nLiu Huailing\r\nDevelopment Department III\r\nSoftware Division II\r\nNanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China\r\nTEL : +86+25-86630566-8439\r\nCOINS: 7998-8439\r\nFAX : +86+25-83317685\r\nMAIL : liuhuailing@cn.fujitsu.com\r\n--------------------------------------------------",
"msg_date": "Wed, 3 Apr 2019 09:55:47 +0000",
"msg_from": "\"Liu, Huailing\" <liuhuailing@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "fix memory overflow in ecpg preproc module"
},
{
"msg_contents": "Hi,\n\n> I have found a potential memory overflow in ecpg preproc module.\n> ... \n\nThanks for finding and fixing, committed.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Thu, 11 Apr 2019 21:07:30 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: fix memory overflow in ecpg preproc module"
}
] |
[
{
"msg_contents": "Hi, everyone\r\n\r\nWhile reading the code in src/backend/replication/syncrep.c,\r\nI found two spelling mistakes in comments.\r\n\r\nFirst\r\n---------------------------------------------------------------------------------------------\r\n/*-------------------------------------------------------------------------\r\n*\r\n* syncrep.c\r\n*\r\n* Synchronous replication is new as of PostgreSQL 9.1.\r\n*\r\n* If requested, transaction commits wait until their commit LSN are\r\n* acknowledged by the synchronous standbys.\r\n*\r\n* This module contains the code for waiting and release★ of backends.\r\n* All code in this module executes on the primary. The core streaming\r\n* replication transport remains within WALreceiver/WALsender modules.\r\n----------------------------------------------------------------------------------------------------\r\nI think the word marked★ should be 'releasing'.\r\n\r\n\r\n\r\nSecond\r\n-----------------------------------------------------------------------------------------------------------\r\n/*\r\n* Walk★ the specified queue from head. Set the state of any backends that\r\n* need to be woken, remove them from the queue, and then wake them.\r\n* Pass all = true to wake whole queue; otherwise, just wake up to\r\n* the walsender's LSN.\r\n*\r\n* Must hold SyncRepLock.\r\n*/\r\nstatic int\r\nSyncRepWakeQueue(bool all, int mode)\r\n--------------------------------------------------------------------------------------------------------------\r\nI think the word marked★ should be 'Wake'.\r\n\r\nThe attached patch fixes them.\r\n\r\n--\r\nRegards,\r\nLiu Huailing\r\n--------------------------------------------------\r\nLiu Huailing\r\nDevelopment Department III\r\nSoftware Division II\r\nNanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China\r\nTEL : +86+25-86630566-8439\r\nCOINS: 7998-8439\r\nFAX : +86+25-83317685\r\nMAIL : liuhuailing@cn.fujitsu.com\r\n--------------------------------------------------",
"msg_date": "Wed, 3 Apr 2019 10:55:27 +0000",
"msg_from": "\"Liu, Huailing\" <liuhuailing@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "fix the spelling mistakes of comments"
},
{
"msg_contents": "On Wed, Apr 3, 2019 at 6:55 AM Liu, Huailing <liuhuailing@cn.fujitsu.com> wrote:\n> * This module contains the code for waiting and release★ of backends.\n> * All code in this module executes on the primary. The core streaming\n> * replication transport remains within WALreceiver/WALsender modules.\n>\n> I think the word marked★ should be 'releasing'.\n\nIt could be changed, but I don't think it's really wrong as written.\n\n> * Walk★ the specified queue from head. Set the state of any backends that\n>\n> I think the word marked★ should be 'Wake'.\n\nI think it's correct as written.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Apr 2019 07:57:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix the spelling mistakes of comments"
},
{
"msg_contents": "On Thu, Apr 04, 2019 at 07:57:02AM -0400, Robert Haas wrote:\n> On Wed, Apr 3, 2019 at 6:55 AM Liu, Huailing <liuhuailing@cn.fujitsu.com> wrote:\n>> * This module contains the code for waiting and release★ of backends.\n>> * All code in this module executes on the primary. The core streaming\n>> * replication transport remains within WALreceiver/WALsender modules.\n>>\n>> I think the word marked★ should be 'releasing'.\n> \n> It could be changed, but I don't think it's really wrong as written.\n\nIndeed.\n\n> > * Walk★ the specified queue from head. Set the state of any backends that\n> >\n> > I think the word marked★ should be 'Wake'.\n> \n> I think it's correct as written.\n\nYes, it's correct as-is. The code *walks* through the shmem queue to\n*wake* some of its elements.\n--\nMichael",
"msg_date": "Fri, 5 Apr 2019 10:43:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix the spelling mistakes of comments"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Apr 04, 2019 at 07:57:02AM -0400, Robert Haas wrote:\n>> On Wed, Apr 3, 2019 at 6:55 AM Liu, Huailing <liuhuailing@cn.fujitsu.com> wrote:\n>>>> * This module contains the code for waiting and release★ of backends.\n\n>>> I think the word marked★ should be 'releasing'.\n\n>> It could be changed, but I don't think it's really wrong as written.\n\n> Indeed.\n\nWell, the problem is the lack of grammatical agreement between \"waiting\"\nand \"release\". It's poor English at least.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Apr 2019 23:55:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fix the spelling mistakes of comments"
},
{
"msg_contents": "On Thu, Apr 4, 2019 at 11:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Indeed.\n>\n> Well, the problem is the lack of grammatical agreement between \"waiting\"\n> and \"release\". It's poor English at least.\n\nIt's not correct for formal writing, but I think a lot of people would\nfind it acceptable in casual speech. At any rate, the proposed change\nto \"This module contains the code for waiting and releasing of\nbackends\" is no better, because it's still faulty parallelism. You\ncan release a backend, but you can't \"wait\" a backend. If you want to\nmake it really good English, you're going to have to change more than\none word. (e.g. \"This module contains the code to make backends wait\nfor replication, and to release them from waiting at the appropriate\ntime.\")\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 5 Apr 2019 15:48:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix the spelling mistakes of comments"
}
] |