[
{
"msg_contents": "Hi!\n\nAttached patchset implements jsonpath .datetime() method.\n\n * 0001-datetime-in-JsonbValue-1.patch\nThis patch allows JsonbValue struct to hold datetime values. It\nappears to be convenient since jsonpath execution engine uses\nJsonbValue to store intermediate calculation results. On\nserialization datetime values are converted into strings.\n\n * 0002-datetime-conversion-for-jsonpath-1.patch\nThis patch adds some datetime conversion infrastructure missing\naccording to SQL/JSON standard. It includes FF1-FF6 format patterns,\nruntime identification of datetime type, strict parsing mode.\n\n * 0003-error-suppression-for-datetime-1.patch\nAs jsonpath supports error suppression in general, it's required for\ndatetime functions too. This commit implements it in the same manner\nas we did for numerics before.\n\n * 0004-implement-jsonpath-datetime-1.patch\n.datetime() method itself and additionally comparison of datetime\nvalues. Here goes a trick. Our existing jsonb_path_*() functions are\nimmutable, while comparison of timezoned and non-timezoned types is\nobviously not. This patch makes existing immutable jsonb_path_*()\nfunctions throw error on non-immutable comparison. Additionally it\nimplements stable jsonb_path_*_tz() functions, which support the full set\nof features.\n\nI was going to discuss this patchset among the other SQL/JSON problems\nat the PGCon unconference, but I didn't make it there. I found the most\nquestionable point in this patchset to be the two sets of functions:\nimmutable and stable. However, I don't see a better solution here: we\nneed immutable functions for expression indexes, and we also need\nfunctions with the full set of jsonpath features, which are not all\nimmutable.\n\nSometimes immutability of a jsonpath expression could be determined at\nruntime. When the .datetime() method is used with a template string\nargument we may know the result type in advance. Thus, in some cases we\nmay know in advance that a given jsonpath is immutable. So, we could\nhack contain_mutable_functions_checker() or something to make an\nexclusive heuristic for jsonb_path_*() functions. But I think it's\nbetter to go with jsonb_path_*() and jsonb_path_*_tz() variants for\nnow. We could come back to the idea of heuristics during consideration of\nstandard SQL/JSON clauses.\n\nAny thoughts?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 28 May 2019 08:55:19 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Tue, May 28, 2019 at 8:55 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Attached patchset implements jsonpath .datetime() method.\n\nRevised patchset is attached. Some inconsistencies around\nparse_datetime() function are fixed. Rebased to current master as\nwell.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 1 Jul 2019 19:28:13 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Mon, Jul 1, 2019 at 7:28 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Tue, May 28, 2019 at 8:55 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Attached patchset implements jsonpath .datetime() method.\n>\n> Revised patchset is attached. Some inconsistencies around\n> parse_datetime() function are fixed. Rebased to current master as\n> well.\n\nI found commitfest.cputube.org is unhappy with this patchset because\nof gcc warning. Fixed in attached patchset.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 2 Jul 2019 12:16:14 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHi,\r\n\r\nIn general, the feature looks good. It is consistent with the standard and the code around.\r\nIt definitely needs more documentation - datetime() and new jsonb_path_*_tz() functions are not documented.\r\n\r\nHere are also minor questions on implementation and code style:\r\n\r\n1) + case jbvDatetime:\r\n elog(ERROR, \"unexpected jbvBinary value\");\r\nWe should use separate error message for jvbDatetime here.\r\n\r\n2) + *jentry = JENTRY_ISSTRING | len;\r\nHere we can avoid using JENTRY_ISSTRING since it defined to 0x0. \r\nI propose to do so to be consistent with jbvString case.\r\n\r\n3) \r\n+ * Default time-zone for tz types is specified with 'tzname'. If 'tzname' is\r\n+ * NULL and the input string does not contain zone components then \"missing tz\"\r\n+ * error is thrown.\r\n+ */\r\n+Datum\r\n+parse_datetime(text *date_txt, text *fmt, bool strict, Oid *typid,\r\n+ int32 *typmod, int *tz)\r\n\r\nThe comment about 'tzname' is outdated.\r\n\r\n4) Some typos:\r\n\r\n+ * Convinience macros for error handling\r\n> * Convenience macros for error handling\r\n\r\n+ * Two macros below helps handling errors in functions, which takes\r\n> * Two macros below help to handle errors in functions, which take\r\n\r\n5) + * RETURN_ERROR() macro intended to wrap ereport() calls. When have_error\r\n+ * argument is provided, then instead of ereport'ing we set *have_error flag \r\n\r\nhave_error is not a macro argument, so I suggest to rephrase this comment.\r\n\r\nShouldn't we pass have_error explicitly?\r\nIn case someone will change the name of the variable, this macro will work incorrectly.\r\n\r\n6) * When no argument is supplied, first fitting ISO format is selected.\r\n+ /* Try to recognize one of ISO formats. 
*/\r\n+ static const char *fmt_str[] =\r\n+ {\r\n+ \"yyyy-mm-dd HH24:MI:SS TZH:TZM\",\r\n+ \"yyyy-mm-dd HH24:MI:SS TZH\",\r\n+ \"yyyy-mm-dd HH24:MI:SS\",\r\n+ \"yyyy-mm-dd\",\r\n+ \"HH24:MI:SS TZH:TZM\",\r\n+ \"HH24:MI:SS TZH\",\r\n+ \"HH24:MI:SS\"\r\n+ };\r\n\r\nHow do we choose the order of formats to check? Is it in standard?\r\nAnyway, I think this struct needs a comment that explains that changing of order can affect end-user.\r\n\r\n7) +\t\tif (res == jperNotFound)\r\n+\t\t\tRETURN_ERROR(ereport(ERROR,\r\n+\t\t\t\t\t\t\t\t (errcode(ERRCODE_INVALID_ARGUMENT_FOR_JSON_DATETIME_FUNCTION),\r\n+\t\t\t\t\t\t\t\t errmsg(\"invalid argument for SQL/JSON datetime function\"),\r\n+\t\t\t\t\t\t\t\t errdetail(\"unrecognized datetime format\"),\r\n+\t\t\t\t\t\t\t\t errhint(\"use datetime template argument for explicit format specification\"))));\r\n\r\nThe hint is confusing. If I understand correctly, no-arg datetime function supports all formats,\r\nso if parsing failed, it must be an invalid argument and providing format explicitly won't help.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 15 Jul 2019 12:45:43 +0000",
"msg_from": "Anastasia Lubennikova <lubennikovaav@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "Hi!\n\nThank you for the review!\n\nRevised version of patch is attached.\n\nOn Mon, Jul 15, 2019 at 3:57 PM Anastasia Lubennikova\n<lubennikovaav@gmail.com> wrote:\n> In general, the feature looks good. It is consistent with the standard and the code around.\n> It definitely needs more documentation - datetime() and new jsonb_path_*_tz() functions are not documented.\n\nDocumentation is added for both jsonpath .datetime() method and SQL\njsonb_path_*_tz() functions.\n\n> Here are also minor questions on implementation and code style:\n>\n> 1) + case jbvDatetime:\n> elog(ERROR, \"unexpected jbvBinary value\");\n> We should use separate error message for jvbDatetime here.\n\nFixed.\n\n> 2) + *jentry = JENTRY_ISSTRING | len;\n> Here we can avoid using JENTRY_ISSTRING since it defined to 0x0.\n> I propose to do so to be consistent with jbvString case.\n\nFixed.\n\n> 3)\n> + * Default time-zone for tz types is specified with 'tzname'. If 'tzname' is\n> + * NULL and the input string does not contain zone components then \"missing tz\"\n> + * error is thrown.\n> + */\n> +Datum\n> +parse_datetime(text *date_txt, text *fmt, bool strict, Oid *typid,\n> + int32 *typmod, int *tz)\n>\n> The comment about 'tzname' is outdated.\n\nFixed.\n\n> 4) Some typos:\n>\n> + * Convinience macros for error handling\n> > * Convenience macros for error handling\n>\n> + * Two macros below helps handling errors in functions, which takes\n> > * Two macros below help to handle errors in functions, which take\n\nFixed.\n\n> 5) + * RETURN_ERROR() macro intended to wrap ereport() calls. When have_error\n> + * argument is provided, then instead of ereport'ing we set *have_error flag\n>\n> have_error is not a macro argument, so I suggest to rephrase this comment.\n>\n> Shouldn't we pass have_error explicitly?\n> In case someone will change the name of the variable, this macro will work incorrectly.\n\nComment about RETURN_ERROR() is updated. RETURN_ERROR() is just\nfile-wide macro. 
So I think in this case it's ok to pass *have_error\nflag implicitly for the sake of brevity.\n\n> 6) * When no argument is supplied, first fitting ISO format is selected.\n> + /* Try to recognize one of ISO formats. */\n> + static const char *fmt_str[] =\n> + {\n> + \"yyyy-mm-dd HH24:MI:SS TZH:TZM\",\n> + \"yyyy-mm-dd HH24:MI:SS TZH\",\n> + \"yyyy-mm-dd HH24:MI:SS\",\n> + \"yyyy-mm-dd\",\n> + \"HH24:MI:SS TZH:TZM\",\n> + \"HH24:MI:SS TZH\",\n> + \"HH24:MI:SS\"\n> + };\n>\n> How do we choose the order of formats to check? Is it in standard?\n> Anyway, I think this struct needs a comment that explains that changing of order can affect end-user.\n\nYes, standard defines which order we should try datetime types (and\ncorresponding ISO formats). I've updated respectively array, its\ncomment and docs.\n\n> 7) + if (res == jperNotFound)\n> + RETURN_ERROR(ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_ARGUMENT_FOR_JSON_DATETIME_FUNCTION),\n> + errmsg(\"invalid argument for SQL/JSON datetime function\"),\n> + errdetail(\"unrecognized datetime format\"),\n> + errhint(\"use datetime template argument for explicit format specification\"))));\n>\n> The hint is confusing. If I understand correctly, no-arg datetime function supports all formats,\n> so if parsing failed, it must be an invalid argument and providing format explicitly won't help.\n\nCustom format string may define format not enumerated in fmt_str[].\nFor instance, imagine \"dd.mm.yyyy\". In some cases custom format\nstring can fix the error. So, ISTM hint is OK.\n\nI'm setting this back to \"Needs review\" waiting for either you or\nPeter Eisentraut provide additional review.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 16 Jul 2019 06:41:06 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
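[Editorial note: the ISO-format fallback discussed in review point 6 above (try a fixed, standard-ordered list of formats; the first one that parses the input wins) can be sketched outside PostgreSQL. This is a hypothetical Python analogue: the strptime directives stand in for PostgreSQL's template patterns, the list is abbreviated (Python's %z covers both the TZH:TZM and full-offset cases, and there is no exact analogue for a bare TZH), and none of these names appear in the patch itself.]

```python
from datetime import datetime

# Rough strptime equivalents of the fmt_str[] list quoted above.
# Order matters: more specific formats are tried first, so changing
# the order changes which type an input is recognized as.
ISO_FORMATS = [
    "%Y-%m-%d %H:%M:%S%z",   # yyyy-mm-dd HH24:MI:SS TZH:TZM
    "%Y-%m-%d %H:%M:%S",     # yyyy-mm-dd HH24:MI:SS
    "%Y-%m-%d",              # yyyy-mm-dd
    "%H:%M:%S%z",            # HH24:MI:SS TZH:TZM
    "%H:%M:%S",              # HH24:MI:SS
]

def recognize_datetime(s):
    """Return (format, parsed value) for the first matching ISO format,
    or None if no format in the list fits the input string."""
    for fmt in ISO_FORMATS:
        try:
            return fmt, datetime.strptime(s, fmt)
        except ValueError:
            continue
    return None
```

For example, `"2019-12-31"` falls through the two timestamp formats and is recognized as a plain date, while an input matching none of the listed formats yields None, the analogue of the `jperNotFound` error path discussed in point 7.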
{
"msg_contents": "On 7/16/19 6:41 AM, Alexander Korotkov wrote:\n> Hi!\n>\n> Thank you for the review!\n>\n> Revised version of patch is attached.\n>\n> On Mon, Jul 15, 2019 at 3:57 PM Anastasia Lubennikova\n> <lubennikovaav@gmail.com> wrote:\n>> In general, the feature looks good. It is consistent with the standard and the code around.\n>> It definitely needs more documentation - datetime() and new jsonb_path_*_tz() functions are not documented.\n> Documentation is added for both jsonpath .datetime() method and SQL\n> jsonb_path_*_tz() functions.\n>\n>> Here are also minor questions on implementation and code style:\n>>\n>> 1) + case jbvDatetime:\n>> elog(ERROR, \"unexpected jbvBinary value\");\n>> We should use separate error message for jvbDatetime here.\n> Fixed.\n>\n>> 2) + *jentry = JENTRY_ISSTRING | len;\n>> Here we can avoid using JENTRY_ISSTRING since it defined to 0x0.\n>> I propose to do so to be consistent with jbvString case.\n> Fixed.\n>\n>> 3)\n>> + * Default time-zone for tz types is specified with 'tzname'. If 'tzname' is\n>> + * NULL and the input string does not contain zone components then \"missing tz\"\n>> + * error is thrown.\n>> + */\n>> +Datum\n>> +parse_datetime(text *date_txt, text *fmt, bool strict, Oid *typid,\n>> + int32 *typmod, int *tz)\n>>\n>> The comment about 'tzname' is outdated.\n> Fixed.\n>\n>> 4) Some typos:\n>>\n>> + * Convinience macros for error handling\n>>> * Convenience macros for error handling\n>> + * Two macros below helps handling errors in functions, which takes\n>>> * Two macros below help to handle errors in functions, which take\n> Fixed.\n>\n>> 5) + * RETURN_ERROR() macro intended to wrap ereport() calls. 
When have_error\n>> + * argument is provided, then instead of ereport'ing we set *have_error flag\n>>\n>> have_error is not a macro argument, so I suggest to rephrase this comment.\n>>\n>> Shouldn't we pass have_error explicitly?\n>> In case someone will change the name of the variable, this macro will work incorrectly.\n> Comment about RETURN_ERROR() is updated. RETURN_ERROR() is just\n> file-wide macro. So I think in this case it's ok to pass *have_error\n> flag implicitly for the sake of brevity.\n>\n>> 6) * When no argument is supplied, first fitting ISO format is selected.\n>> + /* Try to recognize one of ISO formats. */\n>> + static const char *fmt_str[] =\n>> + {\n>> + \"yyyy-mm-dd HH24:MI:SS TZH:TZM\",\n>> + \"yyyy-mm-dd HH24:MI:SS TZH\",\n>> + \"yyyy-mm-dd HH24:MI:SS\",\n>> + \"yyyy-mm-dd\",\n>> + \"HH24:MI:SS TZH:TZM\",\n>> + \"HH24:MI:SS TZH\",\n>> + \"HH24:MI:SS\"\n>> + };\n>>\n>> How do we choose the order of formats to check? Is it in standard?\n>> Anyway, I think this struct needs a comment that explains that changing of order can affect end-user.\n> Yes, standard defines which order we should try datetime types (and\n> corresponding ISO formats). I've updated respectively array, its\n> comment and docs.\n>\n>> 7) + if (res == jperNotFound)\n>> + RETURN_ERROR(ereport(ERROR,\n>> + (errcode(ERRCODE_INVALID_ARGUMENT_FOR_JSON_DATETIME_FUNCTION),\n>> + errmsg(\"invalid argument for SQL/JSON datetime function\"),\n>> + errdetail(\"unrecognized datetime format\"),\n>> + errhint(\"use datetime template argument for explicit format specification\"))));\n>>\n>> The hint is confusing. If I understand correctly, no-arg datetime function supports all formats,\n>> so if parsing failed, it must be an invalid argument and providing format explicitly won't help.\n> Custom format string may define format not enumerated in fmt_str[].\n> For instance, imagine \"dd.mm.yyyy\". In some cases custom format\n> string can fix the error. 
So, ISTM hint is OK.\n>\n> I'm setting this back to \"Needs review\" waiting for either you or\n> Peter Eisentraut provide additional review.\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\nHi Alexander,\n\nI had a look at the added docs and would like to suggest a couple of \nchanges. Please see the attached patches with my edits for func.sgml \nand some of the comments.\n\nLooks like we also need to change the following entry in \nfeatures-unsupported.sgml, and probably move it to features-supported.sgml?\n\n <row>\n <entry>T832</entry>\n <entry></entry>\n <entry>SQL/JSON path language: item method</entry>\n <entry>datetime() not yet implemented</entry>\n </row>\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 19 Jul 2019 17:30:09 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "Hi, Liudmila!\n\nOn Fri, Jul 19, 2019 at 5:30 PM Liudmila Mantrova\n<l.mantrova@postgrespro.ru> wrote:\n> I had look at the added docs and would like to suggest a couple of\n> changes. Please see the attached patches with my my edits for func.sgml\n> and some of the comments.\n\nThank you for your edits, they look good to me. Attached patchset\ncontains your edits as well as revised commit messages.\n\n> Looks like we also need to change the following entry in\n> features-unsupported.sgml, and probably move it to features-supported.sgml?\n>\n> <row>\n> <entry>T832</entry>\n> <entry></entry>\n> <entry>SQL/JSON path language: item method</entry>\n> <entry>datetime() not yet implemented</entry>\n> </row>\n\nYes, that's it. Attached patch updates sql_features.txt, which is a\nsource for generation of both features-unsupported.sgml and\nfeatures-supported.sgml.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 21 Jul 2019 01:42:35 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "I think the best way forward here is to focus first on patch 0002 and\nget the additional format templates in, independent of any surrounding\nJSON functionality.\n\nIn particular, remove parse_datetime() and all the related API changes,\nthen it becomes much simpler.\n\nThe codes FF1..FF6 that you added appear to be correct, but reading the\nspec I find there is more missing, specifically\n\n- RRRR and RR\n- SSSSS (currently only SSSS is supported, but that's not standard)\n\nAlso in some cases we allow timestamps with seven digits of fractional\nprecision, so perhaps FF7 should be supported as well. I'm not quite\nsure about the details here. You tests only cover 6 and 9 digits. It\nwould be good to cover 7 and perhaps 8 as well, since those are the\nboundary cases.\n\nSome concrete pieces of review:\n\n+ <row>\n+ <entry><literal>FF1</literal></entry>\n+ <entry>decisecond (0-9)</entry>\n+ </row>\n\nLet's not use such weird terms as \"deciseconds\". We could say\n\"fractional seconds, 1 digit\" etc. or something like that.\n\n+/* Return flags for DCH_from_char() */\n+#define DCH_DATED 0x01\n+#define DCH_TIMED 0x02\n+#define DCH_ZONED 0x04\n\nI think you mean do_to_timestamp() here. These terms \"dated\" etc. are\nfrom the SQL standard text, but they should be explained somewhere for\nthe readers of the code.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 15:44:07 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
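[Editorial note: the FF1..FF6 templates discussed above read a bounded number of fractional-second digits; FF7..FF9 cannot be fully honored because PostgreSQL's timestamp stores only microseconds. A minimal Python sketch of that semantics follows; the helper name is hypothetical, not code from the patch.]

```python
def parse_ff(digits: str, n: int) -> int:
    """Parse an FFn field: up to n fractional-second digits, scaled to
    microseconds. Limiting n to 6 mirrors why FF7..FF9 are left out --
    they would require sub-microsecond precision."""
    if not (1 <= n <= 6):
        raise ValueError("only FF1..FF6 fit in microsecond precision")
    if not digits.isdigit() or len(digits) > n:
        raise ValueError(f"expected 1..{n} digits, got {digits!r}")
    # Right-pad to 6 digits: FF1 "5" means 0.5 s = 500000 us.
    return int(digits.ljust(6, "0"))
```

This also shows why "deciseconds" was a tempting but odd name for FF1: the single digit is simply the first fractional digit, i.e. tenths of a second.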
{
"msg_contents": "On 23.07.2019 16:44, Peter Eisentraut wrote:\n\n> I think the best way forward here is to focus first on patch 0002 and\n> get the additional format templates in, independent of any surrounding\n> JSON functionality.\n>\n> In particular, remove parse_datetime() and all the related API changes,\n> then it becomes much simpler.\n>\n> The codes FF1..FF6 that you added appear to be correct, but reading the\n> spec I find there is more missing, specifically\n>\n> - RRRR and RR\n\nIt seems that our YY works like RR should:\n\nSELECT to_date('69', 'YY');\n to_date\n------------\n 2069-01-01\n(1 row)\n\nSELECT to_date('70', 'YY');\n to_date\n------------\n 1970-01-01\n(1 row)\n\nBut by the standard first two digits of current year should be used in YY.\n\n\nOracle follows the standard but its implementation has the different\nrounding algorithm:\n\nSELECT TO_CHAR(TO_DATE('99', 'YY'), 'YYYY') from dual;\n2099\n\nSELECT TO_CHAR(TO_DATE('49', 'RR'), 'YYYY') from dual;\n2049\n\nSELECT TO_CHAR(TO_DATE('50', 'RR'), 'YYYY') from dual;\n1950\n\n\nSo it's unclear what we should do:\n - implement YY and RR strictly following the standard only in .datetime()\n - fix YY implementation in to_date()/to_timestamp() and implement RR\n - use our non-standard templates in .datetime()\n\n> - SSSSS (currently only SSSS is supported, but that's not standard)\n\nSSSSS template can be easily added as alias to SSSS.\n\n> Also in some cases we allow timestamps with seven digits of fractional\n> precision, so perhaps FF7 should be supported as well. I'm not quite\n> sure about the details here. You tests only cover 6 and 9 digits. 
It\n> would be good to cover 7 and perhaps 8 as well, since those are the\n> boundary cases.\n\nFF7-FF9 were present in earlier versions of the jsonpath patches, but they\nhad been removed (see [1]) because they were not completely supported due\nto the limited precision of timestamp.\n\n> Some concrete pieces of review:\n>\n> + <row>\n> + <entry><literal>FF1</literal></entry>\n> + <entry>decisecond (0-9)</entry>\n> + </row>\n>\n> Let's not use such weird terms as \"deciseconds\". We could say\n> \"fractional seconds, 1 digit\" etc. or something like that.\nAnd what about \"tenths of seconds\", \"hundredths of seconds\"?\n> +/* Return flags for DCH_from_char() */\n> +#define DCH_DATED 0x01\n> +#define DCH_TIMED 0x02\n> +#define DCH_ZONED 0x04\n>\n> I think you mean do_to_timestamp() here. These terms \"dated\" etc. are\n> from the SQL standard text, but they should be explained somewhere for\n> the readers of the code.\n\n[1] \nhttps://www.postgresql.org/message-id/885de241-5a51-29c8-a6b3-f1dda22aba13%40postgrespro.ru\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 24 Jul 2019 01:48:26 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
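[Editorial note: the three two-digit-year behaviours compared above can be stated concretely. The following Python sketch is illustrative only (function names are hypothetical, and the Oracle RR rule is simplified for a current year whose last two digits are below 50, as in the 2019 examples quoted):]

```python
CURRENT_YEAR = 2019  # assumed for illustration

def pg_yy(yy: int) -> int:
    # What PostgreSQL's YY does today (really RR-like semantics):
    # two-digit years land in the 1970..2069 window, pivot at 70.
    return 2000 + yy if yy < 70 else 1900 + yy

def std_yy(yy: int) -> int:
    # SQL-standard YY: prepend the first two digits of the current year.
    return (CURRENT_YEAR // 100) * 100 + yy

def oracle_rr(yy: int) -> int:
    # Oracle-style RR, simplified: pivot at 50 rather than 70.
    return 2000 + yy if yy < 50 else 1900 + yy
```

With these definitions, `pg_yy(69)` gives 2069 and `pg_yy(70)` gives 1970, matching the `to_date` outputs quoted above, while `oracle_rr(49)`/`oracle_rr(50)` reproduce Oracle's 2049/1950 split.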
{
"msg_contents": "On Wed, Jul 24, 2019 at 1:50 AM Nikita Glukhov <n.gluhov@postgrespro.ru> wrote:\n> So it's unclear what we should do:\n> - implement YY and RR strictly following the standard only in .datetime()\n> - fix YY implementation in to_date()/to_timestamp() and implement RR\n> - use our non-standard templates in .datetime()\n\nAlso it appears that according to standard .datetime() should treat\nspaces and delimiters differently than our to_date()/to_timestamp().\nIt requires strict matching of spaces and delimiters in input and\nformat strings. We don't have such behavior in both non-FX and FX\nmodes. Also, standard doesn't define FX mode at all. This rules\ncover jsonpath .datetime() method and CAST(... FORMAT ...) – new cast\nclause defined by standard.\n\nSo, I think due to reasons of compatibility it doesn't worth trying to\nmake behavior of our to_date()/to_timestamp() to fit requirements for\njsonpath .datetime() and CAST(... FORMAT ...). I propose to leave\nthis functions as is (maybe add new patterns), but introduce another\ndatetime parsing mode, which would fit to the standard. Opinions?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 24 Jul 2019 16:45:04 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On 2019-07-24 00:48, Nikita Glukhov wrote:\n> It seems that our YY works like RR should:\n> \n> SELECT to_date('69', 'YY');\n> to_date \n> ------------\n> 2069-01-01\n> (1 row)\n> \n> SELECT to_date('70', 'YY');\n> to_date \n> ------------\n> 1970-01-01\n> (1 row)\n> \n> But by the standard first two digits of current year should be used in YY.\n\nIs this behavior even documented anywhere in our documentation? I\ncouldn't find it. What's the exact specification of what it does in\nthese cases?\n\n> So it's unclear what we should do: \n> - implement YY and RR strictly following the standard only in .datetime()\n> - fix YY implementation in to_date()/to_timestamp() and implement RR\n> - use our non-standard templates in .datetime()\n\nI think we definitely should try to use the same template system in both\nthe general functions and in .datetime(). This might involve some\ncompromises between existing behavior, Oracle behavior, SQL standard.\nSo far I'm not worried: If you're using two-digit years like above,\nyou're playing with fire anyway. Also some of the other cases like\ndealing with trailing spaces are probably acceptable as slight\nincompatibilities or extensions.\n\nWe should collect a list of test cases that illustrate the differences\nand then work out how to deal with them.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jul 2019 22:25:08 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "\nOn 7/24/19 4:25 PM, Peter Eisentraut wrote:\n> On 2019-07-24 00:48, Nikita Glukhov wrote:\n>> It seems that our YY works like RR should:\n>>\n>> SELECT to_date('69', 'YY');\n>> to_date \n>> ------------\n>> 2069-01-01\n>> (1 row)\n>>\n>> SELECT to_date('70', 'YY');\n>> to_date \n>> ------------\n>> 1970-01-01\n>> (1 row)\n>>\n>> But by the standard first two digits of current year should be used in YY.\n> Is this behavior even documented anywhere in our documentation? I\n> couldn't find it. What's the exact specification of what it does in\n> these cases?\n>\n>> So it's unclear what we should do: \n>> - implement YY and RR strictly following the standard only in .datetime()\n>> - fix YY implementation in to_date()/to_timestamp() and implement RR\n>> - use our non-standard templates in .datetime()\n> I think we definitely should try to use the same template system in both\n> the general functions and in .datetime().\n\n\n\nAgreed. It's too hard to maintain otherwise.\n\n\n> This might involve some\n> compromises between existing behavior, Oracle behavior, SQL standard.\n> So far I'm not worried: If you're using two-digit years like above,\n> you're playing with fire anyway. Also some of the other cases like\n> dealing with trailing spaces are probably acceptable as slight\n> incompatibilities or extensions.\n\n\nMy instinct would be to move as close as possible to the standard,\nespecially if the current behaviour isn't documented.\n\n\n>\n> We should collect a list of test cases that illustrate the differences\n> and then work out how to deal with them.\n>\n\n\nAgreed.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 26 Jul 2019 10:41:50 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "\nOn 7/23/19 6:48 PM, Nikita Glukhov wrote:\n> Some concrete pieces of review:\n>> + <row>\n>> + <entry><literal>FF1</literal></entry>\n>> + <entry>decisecond (0-9)</entry>\n>> + </row>\n>>\n>> Let's not use such weird terms as \"deciseconds\". We could say\n>> \"fractional seconds, 1 digit\" etc. or something like that.\n> And what about \"tenths of seconds\", \"hundredths of seconds\"?\n\n\n\nYes, those are much better.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 26 Jul 2019 10:43:07 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 2:43 AM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> On 7/23/19 6:48 PM, Nikita Glukhov wrote:\n> > Some concrete pieces of review:\n> >> + <row>\n> >> + <entry><literal>FF1</literal></entry>\n> >> + <entry>decisecond (0-9)</entry>\n> >> + </row>\n> >>\n> >> Let's not use such weird terms as \"deciseconds\". We could say\n> >> \"fractional seconds, 1 digit\" etc. or something like that.\n> > And what about \"tenths of seconds\", \"hundredths of seconds\"?\n>\n> Yes, those are much better.\n\nI've moved this to the September CF, still in \"Waiting on Author\" state.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 22:30:01 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 1:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Jul 27, 2019 at 2:43 AM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n> > On 7/23/19 6:48 PM, Nikita Glukhov wrote:\n> > > Some concrete pieces of review:\n> > >> + <row>\n> > >> + <entry><literal>FF1</literal></entry>\n> > >> + <entry>decisecond (0-9)</entry>\n> > >> + </row>\n> > >>\n> > >> Let's not use such weird terms as \"deciseconds\". We could say\n> > >> \"fractional seconds, 1 digit\" etc. or something like that.\n> > > And what about \"tenths of seconds\", \"hundredths of seconds\"?\n> >\n> > Yes, those are much better.\n>\n> I've moved this to the September CF, still in \"Waiting on Author\" state.\n\nI'd like to summarize differences between standard datetime parsing\nand our to_timestamp()/to_date().\n\n1) Standard defines much less datetime template parts. Namely it defines:\nYYYY | YYY | YY | Y\nRRRR | RR\nMM\nDD\nDDD\nHH | HH12\nHH24\nMI\nSS\nSSSSS\nFF1 | FF2 | FF3 | FF4 | FF5 | FF6 | FF7 | FF8 | FF9\nA.M. | P.M.\nTZH\nTZM\n\nWe support majority of them and much more. Incompatibilities are:\n * SSSS (our name is SSSSS),\n * We don't support RRRR | RR,\n * Our handling of YYYY | YYY | YY | Y is different. What we have\nhere is more like RRRR | RR in standard (Nikita explained that\nupthread [1]),\n * We don't support FF[1-9]. FF[1-6] are implemented in patch. We\ncan't support FF[7-9], because our binary representation of timestamp\ndatatype don't have enough of precision.\n\n2) Standard defines only following delimiters: <minus sign>, <period>,\n<solidus>, <comma>, <apostrophe>, <semicolon>, <colon>, <space>. And\nit requires strict matching of separators between template and input\nstrings. We don't do so either in FX or non-FX mode.\n\nFor instance, we allow both to_date('2019/12/31', 'YYYY-MM-DD') and\nto_date('2019/12/31', 'FXYYYY-MM-DD'). 
But according to the standard, this\ndate should be written only as '2019-12-31' to match the given template\nstring.\n\n3) Standard prescribes recognition of digits according to the \\p{Nd}\nregex. \\p{Nd} matches \"a digit zero through nine in any script\nexcept ideographic scripts\". As far as I remember, we currently\nrecognize only ASCII digits.\n\n4) For non-delimited template parts the standard requires matching to\ndigit sequences of lengths between 1 and the maximum number of characters\nof that template part. We don't always do so. For instance, we allow\nmore than 4 digits to correspond to YYYY, more than 3 digits to\ncorrespond to YYY and so on.\n\n# select to_date('2019-12-31', 'YYY-MM-DD');\n to_date\n------------\n 2019-12-31\n(1 row)\n\nLinks.\n\n1. https://www.postgresql.org/message-id/d6efab15-f3a4-40d6-8ddb-6fd8f64cbc08%40postgrespro.ru\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 13 Aug 2019 00:08:07 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Tue, Aug 13, 2019 at 12:08 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Thu, Aug 1, 2019 at 1:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Sat, Jul 27, 2019 at 2:43 AM Andrew Dunstan\n> > <andrew.dunstan@2ndquadrant.com> wrote:\n> > > On 7/23/19 6:48 PM, Nikita Glukhov wrote:\n> > > > Some concrete pieces of review:\n> > > >> + <row>\n> > > >> + <entry><literal>FF1</literal></entry>\n> > > >> + <entry>decisecond (0-9)</entry>\n> > > >> + </row>\n> > > >>\n> > > >> Let's not use such weird terms as \"deciseconds\". We could say\n> > > >> \"fractional seconds, 1 digit\" etc. or something like that.\n> > > > And what about \"tenths of seconds\", \"hundredths of seconds\"?\n> > >\n> > > Yes, those are much better.\n> >\n> > I've moved this to the September CF, still in \"Waiting on Author\" state.\n>\n> I'd like to summarize differences between standard datetime parsing\n> and our to_timestamp()/to_date().\n\nLet me describe my proposal to overcome these differences.\n\n> 1) Standard defines much less datetime template parts. Namely it defines:\n> YYYY | YYY | YY | Y\n> RRRR | RR\n> MM\n> DD\n> DDD\n> HH | HH12\n> HH24\n> MI\n> SS\n> SSSSS\n> FF1 | FF2 | FF3 | FF4 | FF5 | FF6 | FF7 | FF8 | FF9\n> A.M. | P.M.\n> TZH\n> TZM\n>\n> We support majority of them and much more.\n\nRegarding non-contradicting template parts we can support them in\n.datetime() method too. That would be our extension to standard. See\nno problem here.\n\n> Incompatibilities are:\n> * SSSS (our name is SSSSS),\n\nSince SSSS is not reserved, I'd propose to make SSSS an alias for SSSSS.\n\n> * We don't support RRRR | RR,\n> * Our handling of YYYY | YYY | YY | Y is different. What we have\n> here is more like RRRR | RR in standard (Nikita explained that\n> upthread [1]),\n\nI'd like to make YYYY | YYY | YY | Y and RRRR | RR behavior standard\nconforming in both to_timestamp()/to_date() and .datetime(). 
Handling\nthese template parts differently in different functions would be\nconfusing for users.\n\n> * We don't support FF[1-9]. FF[1-6] are implemented in patch. We\n> can't support FF[7-9], because our binary representation of timestamp\n> datatype don't have enough of precision.\n\nI propose to postpone implementation of FF[7-9]. We can support them\nlater once we have precise enough datatypes.\n\n> 2) Standard defines only following delimiters: <minus sign>, <period>,\n> <solidus>, <comma>, <apostrophe>, <semicolon>, <colon>, <space>. And\n> it requires strict matching of separators between template and input\n> strings. We don't do so either in FX or non-FX mode.\n>\n> For instance, we allow both to_date('2019/12/31', 'YYYY-MM-DD') and\n> to_date('2019/12/31', 'FXYYYY-MM-DD'). But according to standard this\n> date should be written only as '2019-12-31' to match given template\n> string.\n>\n> 4) For non-delimited template parts standard requires matching to\n> digit sequences of lengths between 1 and maximum number of characters\n> of that template part. We don't always do so. For instance, we allow\n> more than 4 digits to correspond to YYYY, more than 3 digits to\n> correspond to YYY and so on.\n>\n> # select to_date('2019-12-31', 'YYY-MM-DD');\n> to_date\n> ------------\n> 2019-12-31\n> (1 row)\n\nIn order to implement these I'd like to propose introduction of\nspecial do_to_timestamp() flag, which would define standard conforming\nparsing. This flag would be used in .datetime() jsonpath method.\nLater we also should use it for CAST(... FORMAT ...) expression, which\nshould also do standard conforming parsing\n\n> 3) Standard prescribes recognition of digits according to \\p{Nd}\n> regex. \\p{Nd} matches to \"a digit zero through nine in any script\n> except ideographic scripts\". As far as I remember, we currently do\n> recognize only ASCII digits.\n\nSupport all unicode digit scripts would be cool for both\nto_timestamp()/to_date() and standard parsing. 
However, I think this\ncould be postponed. Personally, I haven't encountered non-ASCII digits in\ndatabases yet. If needed, one can implement this later; it shouldn't be\nhard.\n\nIf no objections, Nikita and I will work on a revised patchset based on\nthis proposal.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 19 Aug 2019 01:29:53 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Mon, Aug 19, 2019 at 1:29 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> If no objections, Nikita and me will work on revised patchset based on\n> this proposal.\n\nRevised patchset is attached. It still requires some polishing. But\nthe most doubtful part is the handling of RR, YYY, YY and Y.\n\nThe standard requires us to complete YYY, YY and Y with the high digits of\nthe current year. So, if YY matches 99, then the year should be 2099, not\n1999.\n\nFor RR, the standard's requirements are relaxed. An implementation may choose\nthe matching year from the range [current_year - 100; current_year + 100]. It\nlooks reasonable to handle RR in the same way we currently handle YY:\nselect the appropriate year in the [1970; 2069] range. It seems like we\nselected this range to start at the same point as the unix timestamp. But\nnowadays it still looks reasonable: it's about +- 50 years from the current\nyear. So, years close to the current one are likely completed\ncorrectly. In Oracle, RR returns a year in the [1950; 2049] range. So, it\nseems to have been designed near 2000 :). I don't think we need to copy this\nbehavior.\n\nHandling YYY and YY in the standard way seems quite easy. We can complete\nthem as 2YYY and 20YY. This should be standard conforming till 2100.\n\nBut handling Y looks problematic. An immutable way of handling this\nwould work only for a decade. The current code completes Y as 200Y and it\nlooks pretty \"outdated\" now in 2019. Using the current real year would\nmake the conversion timestamp-dependent. This property doesn't look favorable\nfor to_date()/to_timestamp() and is unacceptable for immutable jsonpath\nfunctions (but we can forbid using the Y pattern there). The current patch\ncompletes Y as 202Y, assuming v13 will be released in 2020. But I'm not\nsure what the better solution is here. The bright side is that I haven't\nseen anybody use the Y pattern in real life :)\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 27 Aug 2019 05:19:00 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Tue, Aug 27, 2019 at 5:19 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> Revised patchset is attached. It still requires some polishing. But\n> the most doubtful part is handling of RR, YYY, YY and Y.\n>\n> Standard requires us to complete YYY, YY and Y with high digits from\n> current year. So, if YY matches 99, then year should be 2099, not\n> 1999.\n>\n> For RR, standard requirements are relaxed. Implementation may choose\n> matching year from range [current_year - 100; current_year + 100]. It\n> looks reasonable to handle RR in the same way we currently handle YY:\n> select appropriate year in [1970; 2069] range. It seems like we\n> select this range to start in the same point as unix timestamp. But\n> nowadays it still looks reasonable: it's about +- 50 from current\n> year. So, years close to the current one are likely completed\n> correctly. In Oracle RR returns year in [1950; 1949] range. So, it\n> seems to be designed near 2000 :). I don't think we need to copy this\n> behavior.\n>\n> Handling YYY and YY in standard way seems quite easy. We can complete\n> them as 2YYY and 20YY. This should be standard conforming till 2100.\n>\n> But handling Y looks problematic. Immutable way of handling this\n> would work only for decade. Current code completes Y as 200Y and it\n> looks pretty \"outdated\" now in 2019. Using current real year would\n> make conversion timestamp-dependent. This property doesn't look favor\n> for to_date()/to_timestamp() and unacceptable for immutable jsonpath\n> functions (but we can forbid using Y pattern there). Current patch\n> complete Y as 202Y assuming v13 will be released in 2020. But I'm not\n> sure what is better solution here. The bright side is that I haven't\n> seen anybody use Y patten in real life :)\n\nRevised patchset is attached. It adds and adjusts commit messages,\ncomments and does other cosmetic improvements.\n\nI think 0001 and 0002 are well reviewed already. 
And these patches\nare usable not only for jsonpath .datetime(), but contain improvements\nfor existing to_date()/to_timestamp() SQL functions. I'm going to\npush these two if no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 14 Sep 2019 22:18:29 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Sat, Sep 14, 2019 at 10:18 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Tue, Aug 27, 2019 at 5:19 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > Revised patchset is attached. It still requires some polishing. But\n> > the most doubtful part is handling of RR, YYY, YY and Y.\n> >\n> > Standard requires us to complete YYY, YY and Y with high digits from\n> > current year. So, if YY matches 99, then year should be 2099, not\n> > 1999.\n> >\n> > For RR, standard requirements are relaxed. Implementation may choose\n> > matching year from range [current_year - 100; current_year + 100]. It\n> > looks reasonable to handle RR in the same way we currently handle YY:\n> > select appropriate year in [1970; 2069] range. It seems like we\n> > select this range to start in the same point as unix timestamp. But\n> > nowadays it still looks reasonable: it's about +- 50 from current\n> > year. So, years close to the current one are likely completed\n> > correctly. In Oracle RR returns year in [1950; 1949] range. So, it\n> > seems to be designed near 2000 :). I don't think we need to copy this\n> > behavior.\n> >\n> > Handling YYY and YY in standard way seems quite easy. We can complete\n> > them as 2YYY and 20YY. This should be standard conforming till 2100.\n> >\n> > But handling Y looks problematic. Immutable way of handling this\n> > would work only for decade. Current code completes Y as 200Y and it\n> > looks pretty \"outdated\" now in 2019. Using current real year would\n> > make conversion timestamp-dependent. This property doesn't look favor\n> > for to_date()/to_timestamp() and unacceptable for immutable jsonpath\n> > functions (but we can forbid using Y pattern there). Current patch\n> > complete Y as 202Y assuming v13 will be released in 2020. But I'm not\n> > sure what is better solution here. The bright side is that I haven't\n> > seen anybody use Y patten in real life :)\n>\n> Revised patchset is attached. 
It adds and adjusts commit messages,\n> comments and does other cosmetic improvements.\n>\n> I think 0001 and 0002 are well reviewed already. And these patches\n> are usable not only for jsonpath .datetime(), but contain improvements\n> for existing to_date()/to_timestamp() SQL functions. I'm going to\n> push these two if no objections.\n\nThose two patches are pushed. Just before commit I've renamed\ndeciseconds to \"tenths of seconds\", centiseconds to \"hundredths of\nseconds\" as discussed before [1].\n\nThe rest of the patchset is attached.\n\nLinks\n1. https://www.postgresql.org/message-id/0409fb42-18d3-bdb7-37ab-d742d5313a40%402ndQuadrant.com\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 16 Sep 2019 22:05:03 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Mon, Sep 16, 2019 at 10:05 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Sat, Sep 14, 2019 at 10:18 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Tue, Aug 27, 2019 at 5:19 AM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > Revised patchset is attached. It still requires some polishing. But\n> > > the most doubtful part is handling of RR, YYY, YY and Y.\n> > >\n> > > Standard requires us to complete YYY, YY and Y with high digits from\n> > > current year. So, if YY matches 99, then year should be 2099, not\n> > > 1999.\n> > >\n> > > For RR, standard requirements are relaxed. Implementation may choose\n> > > matching year from range [current_year - 100; current_year + 100]. It\n> > > looks reasonable to handle RR in the same way we currently handle YY:\n> > > select appropriate year in [1970; 2069] range. It seems like we\n> > > select this range to start in the same point as unix timestamp. But\n> > > nowadays it still looks reasonable: it's about +- 50 from current\n> > > year. So, years close to the current one are likely completed\n> > > correctly. In Oracle RR returns year in [1950; 1949] range. So, it\n> > > seems to be designed near 2000 :). I don't think we need to copy this\n> > > behavior.\n> > >\n> > > Handling YYY and YY in standard way seems quite easy. We can complete\n> > > them as 2YYY and 20YY. This should be standard conforming till 2100.\n> > >\n> > > But handling Y looks problematic. Immutable way of handling this\n> > > would work only for decade. Current code completes Y as 200Y and it\n> > > looks pretty \"outdated\" now in 2019. Using current real year would\n> > > make conversion timestamp-dependent. This property doesn't look favor\n> > > for to_date()/to_timestamp() and unacceptable for immutable jsonpath\n> > > functions (but we can forbid using Y pattern there). Current patch\n> > > complete Y as 202Y assuming v13 will be released in 2020. 
But I'm not\n> > > sure what is better solution here. The bright side is that I haven't\n> > > seen anybody use Y patten in real life :)\n> >\n> > Revised patchset is attached. It adds and adjusts commit messages,\n> > comments and does other cosmetic improvements.\n> >\n> > I think 0001 and 0002 are well reviewed already. And these patches\n> > are usable not only for jsonpath .datetime(), but contain improvements\n> > for existing to_date()/to_timestamp() SQL functions. I'm going to\n> > push these two if no objections.\n>\n> Those two patches are pushed. Just before commit I've renamed\n> deciseconds to \"tenths of seconds\", sentiseconds to \"hundredths of\n> seconds\" as discussed before [1].\n>\n> The rest of patchset is attached.\n\nI've reordered the patchset. I moved the most debatable patch, which\nintroduces RRRR and RR and changes the parsing of YYY, YY and Y, to the\nend. I think we have enough time in this release cycle to decide\nwhether we want this.\n\nPatches 0001-0005 look quite mature to me. I'm going to push them\nif no objections. After pushing them, I'm going to start a discussion\nrelated to RR, YY and friends in a separate thread.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 23 Sep 2019 22:05:01 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On Mon, Sep 23, 2019 at 10:05 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> I've reordered the patchset. I moved the most debatable patch, which\n> introduces RRRR and RR and changes parsing of YYY, YY and Y to the\n> end. I think we have enough of time in this release cycle to decide\n> whether we want this.\n>\n> Patches 0001-0005 looks quite mature for me. I'm going to push this\n> if no objections. After pushing them, I'm going to start discussion\n> related to RR, YY and friends in separate thread.\n\nPushed. Remaining patch is attached. I'm going to start the separate\nthread with its detailed explanation.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 25 Sep 2019 22:55:07 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Support for jsonpath .datetime() method"
},
{
"msg_contents": "On 25.09.2019 22:55, Alexander Korotkov wrote:\n\n> On Mon, Sep 23, 2019 at 10:05 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n>> I've reordered the patchset. I moved the most debatable patch, which\n>> introduces RRRR and RR and changes parsing of YYY, YY and Y to the\n>> end. I think we have enough of time in this release cycle to decide\n>> whether we want this.\n>>\n>> Patches 0001-0005 looks quite mature for me. I'm going to push this\n>> if no objections. After pushing them, I'm going to start discussion\n>> related to RR, YY and friends in separate thread.\n> Pushed. Remaining patch is attached. I'm going to start the separate\n> thread with its detailed explanation.\n\nAttached patch with refactoring of compareDatetime() according\nto the complaints of Tom Lane in [1]:\n * extracted four subroutines for type conversions\n * extracted subroutine for error reporting\n * added default cases to all switches\n * have_error flag is expected to be not-NULL always\n * fixed errhint() message style\n\n[1] https://www.postgresql.org/message-id/32308.1569455803%40sss.pgh.pa.us\n\n-- \nNikita Glukhov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 27 Sep 2019 17:25:54 +0300",
"msg_from": "Nikita Glukhov <n.gluhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Support for jsonpath .datetime() method"
}
] |
[
{
"msg_contents": "--> Function ' filter_id ' filters the ID's based on some conditions.\n--> Input is set of ID's. (Not directly taking the input since there is no\nprovision to pass multiple rows to a function)\n\ncreate function filter_id()\nreturn table (id bigint)\nbegin\n\n--> Assuming input table is already created #temp_input_id\n\nretun query as select id\nfrom tbl a\ninner join\n#temp_input_id b on (a.id = b.id)\nwhere a.<conditions>;\n\nend;\n\n\n--> Calling Function:\n\ncreate function caller()\nreturn table (id bigint,col1 bigint, col2 bigint)\nbegin\n\n--> do some processing\n\n--> Find out the input id's for filtering.\n\n--> Create temp table for providing input for the filtering function\n\ncreate temp table #TEMP1\nas select id from tbla........;\n(Cannot move the input id logic to filter_function)\n\n--> calling the filter function\ncreate temp table #TEMP2\nas select * from filter_id(); --> This is a generic function used in many\nfunctions.\n\n\nreturn query\nas select a.*\nfrom tb3 a inner join tb4 inner join tb 5 inner join #TEMP2;\nend;\n\n\nIs there any alternate way of achieving this? Passing multiple records to a\nfunction im creating a temp table before invoking the function.\nFor receiving an output of multiple rows i'm creating a temp table to reuse\nfurther in the code.\n\nCan this be done using Refcursor? Is it possible to convert refcursor to a\ntemp table and use it as normal table in query?",
"msg_date": "Tue, 28 May 2019 20:06:16 +0530",
"msg_from": "RAJIN RAJ K <rajin89@gmail.com>",
"msg_from_op": true,
"msg_subject": "Alternate methods for multiple rows input/output to a function."
},
{
"msg_contents": "On 5/28/19 7:36 AM, RAJIN RAJ K wrote:\n> --> Function ' filter_id ' filters the ID's based on some conditions.\n> --> Input is set of ID's. (Not directly taking the input since there is \n> no provision to pass multiple rows to a function)\n\nTo be honest I cannot follow what you are trying to achieve below. I do \nhave one suggestion as to creating temp tables.\n\nWhy not use a CTE:\n\nhttps://www.postgresql.org/docs/11/queries-with.html\n\nin the function to build a 'temp' table on the fly?\n\n> \n> create function filter_id()\n> return table (id bigint)\n> begin\n> \n> --> Assuming input table is already created #temp_input_id\n> \n> retun query as select id\n> from tbl a\n> inner join\n> #temp_input_id b on (a.id <http://a.id> = b.id <http://b.id>)\n> where a.<conditions>;\n> \n> end;\n> \n> \n> --> Calling Function:\n> \n> create function caller()\n> return table (id bigint,col1 bigint, col2 bigint)\n> begin\n> \n> --> do some processing\n> \n> --> Find out the input id's for filtering.\n> \n> --> Create temp table for providing input for the filtering function\n> \n> create temp table #TEMP1\n> as select id from tbla........;\n> (Cannot move the input id logic to filter_function)\n> \n> --> calling the filter function\n> create temp table #TEMP2\n> as select * from filter_id(); --> This is a generic function used in \n> many functions.\n> \n> \n> return query\n> as select a.*\n> from tb3 a inner join tb4 inner join tb 5 inner join #TEMP2;\n> end;\n> \n> \n> Is there any alternate way of achieving this? Passing multiple records \n> to a function im creating a temp table before invoking the function.\n> For receiving an output of multiple rows i'm creating a temp table to \n> reuse further in the code.\n> \n> Can this be done using Refcursor? Is it possible to convert refcursor to \n> a temp table and use it as normal table in query?\n> \n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n",
"msg_date": "Tue, 28 May 2019 07:59:30 -0700",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: Alternate methods for multiple rows input/output to a function."
},
{
"msg_contents": "On 5/28/19 8:06 AM, RAJIN RAJ K wrote:\n\nPlease reply to list also.\nCcing list.\n\n> Thanks for the response.\n> \n> CTE is not useful in my case. Here i want to pass the table to a \n> function and get the filtered results back from the function.\n> I tried few but not use full.\n> 1. Pass table input --> Ref cursor is the only option but which again \n> require loop to fetch the records. (FETCH ALL results cannot be stored \n> in a variable)\n> Here im creating temp table withe required input data before the \n> function call.\n\nI'm going to take a stab at this though I do not entirely follow the \nlogic. Definitely not tested:\n\n1) create function filter_id(tbl_name varchar)\nreturn table (id bigint)\nbegin\n\n--> Assuming input table is already created #temp_input_id\n\nreturn query EXECUTE format('select id '\n'from tbl a '\n'inner join'\n'%I b on (a.id = b.id)'\n'where a.<conditions>', tbl_name);\n\nend;\n\n2) In calling function:\n\nWITH temp_tbl AS (select id from tbla...\n), filter_tbl AS (select * from filter_id(temp_bl))\nselect a.*\nfrom tb3 a inner join tb4 inner join tb 5 inner join filter_tbl;\n\n\n\n> \n> \n> On Tue, May 28, 2019 at 8:29 PM Adrian Klaver <adrian.klaver@aklaver.com \n> <mailto:adrian.klaver@aklaver.com>> wrote:\n> \n> On 5/28/19 7:36 AM, RAJIN RAJ K wrote:\n> > --> Function ' filter_id ' filters the ID's based on some conditions.\n> > --> Input is set of ID's. (Not directly taking the input since\n> there is\n> > no provision to pass multiple rows to a function)\n> \n> To be honest I cannot follow what you are trying to achieve below. 
I do\n> have one suggestion as to creating temp tables.\n> \n> Why not use a CTE:\n> \n> https://www.postgresql.org/docs/11/queries-with.html\n> \n> in the function to build a 'temp' table on the fly?\n> \n> >\n> > create function filter_id()\n> > return table (id bigint)\n> > begin\n> >\n> > --> Assuming input table is already created #temp_input_id\n> >\n> > retun query as select id\n> > from tbl a\n> > inner join\n> > #temp_input_id b on (a.id <http://a.id> <http://a.id> = b.id\n> <http://b.id> <http://b.id>)\n> > where a.<conditions>;\n> >\n> > end;\n> >\n> >\n> > --> Calling Function:\n> >\n> > create function caller()\n> > return table (id bigint,col1 bigint, col2 bigint)\n> > begin\n> >\n> > --> do some processing\n> >\n> > --> Find out the input id's for filtering.\n> >\n> > --> Create temp table for providing input for the filtering function\n> >\n> > create temp table #TEMP1\n> > as select id from tbla........;\n> > (Cannot move the input id logic to filter_function)\n> >\n> > --> calling the filter function\n> > create temp table #TEMP2\n> > as select * from filter_id(); --> This is a generic function used in\n> > many functions.\n> >\n> >\n> > return query\n> > as select a.*\n> > from tb3 a inner join tb4 inner join tb 5 inner join #TEMP2;\n> > end;\n> >\n> >\n> > Is there any alternate way of achieving this? Passing multiple\n> records\n> > to a function im creating a temp table before invoking the function.\n> > For receiving an output of multiple rows i'm creating a temp\n> table to\n> > reuse further in the code.\n> >\n> > Can this be done using Refcursor? Is it possible to convert\n> refcursor to\n> > a temp table and use it as normal table in query?\n> >\n> >\n> \n> \n> -- \n> Adrian Klaver\n> adrian.klaver@aklaver.com <mailto:adrian.klaver@aklaver.com>\n> \n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n",
"msg_date": "Tue, 28 May 2019 10:26:41 -0700",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: Alternate methods for multiple rows input/output to a function."
}
] |
[
{
"msg_contents": "Issue found while translating the v12 manual. I also fixed something that\nwas missing, as far as I understand it (first fix, the typo is the second\nfix).\n\nSee patch attached.\n\nThanks.\n\n\n-- \nGuillaume.",
"msg_date": "Tue, 28 May 2019 17:05:10 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Quick doc typo fix"
},
{
"msg_contents": "On Tue, May 28, 2019 at 05:05:10PM +0200, Guillaume Lelarge wrote:\n> <row>\n> <entry><link linkend=\"catalog-pg-am\"><structname>pg_am</structname></link></entry>\n> - <entry>index access methods</entry>\n> + <entry>table and index access methods</entry>\n> </row>\n\nPerhaps we could just say \"relation\" here? That's the term used on\nthe paragraph describing pg_am.\n--\nMichael",
"msg_date": "Tue, 28 May 2019 15:27:57 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Le mar. 28 mai 2019 à 21:28, Michael Paquier <michael@paquier.xyz> a écrit :\n\n> On Tue, May 28, 2019 at 05:05:10PM +0200, Guillaume Lelarge wrote:\n> > <row>\n> > <entry><link\n> linkend=\"catalog-pg-am\"><structname>pg_am</structname></link></entry>\n> > - <entry>index access methods</entry>\n> > + <entry>table and index access methods</entry>\n> > </row>\n>\n> Perhaps we could just say \"relation\" here? That's the term used on\n> the paragraph describing pg_am.\n>\n\nHehe, that was the first thing I wrote :) but went with \"table and index\"\nas it was also used a bit later in the chapter. Both are fine with me.\n\n\n-- \nGuillaume.",
"msg_date": "Tue, 28 May 2019 21:46:48 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Le mar. 28 mai 2019 à 21:46, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Le mar. 28 mai 2019 à 21:28, Michael Paquier <michael@paquier.xyz> a\n> écrit :\n>\n>> On Tue, May 28, 2019 at 05:05:10PM +0200, Guillaume Lelarge wrote:\n>> > <row>\n>> > <entry><link\n>> linkend=\"catalog-pg-am\"><structname>pg_am</structname></link></entry>\n>> > - <entry>index access methods</entry>\n>> > + <entry>table and index access methods</entry>\n>> > </row>\n>>\n>> Perhaps we could just say \"relation\" here? That's the term used on\n>> the paragraph describing pg_am.\n>>\n>\n> Hehe, that was the first thing I wrote :) but went with \"table and index\"\n> as it was also used a bit later in the chapter. Both are fine with me.\n>\n>\nAnd here is another one. See patch attached.\n\n\n-- \nGuillaume.",
"msg_date": "Wed, 29 May 2019 17:30:33 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "On Tue, May 28, 2019 at 09:46:48PM +0200, Guillaume Lelarge wrote:\n> Hehe, that was the first thing I wrote :) but went with \"table and index\"\n> as it was also used a bit later in the chapter. Both are fine with me.\n\nOkay, done this way. Thanks for the report.\n--\nMichael",
"msg_date": "Wed, 29 May 2019 11:40:26 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "On Wed, May 29, 2019 at 05:30:33PM +0200, Guillaume Lelarge wrote:\n> And here is another one. See patch attached.\n\nAre you still going through some parts of the documentation? Perhaps\nyou may notice something else? I am wondering if it would be better\nto wait a bit more so as we can group all issues you are finding at\nonce.\n--\nMichael",
"msg_date": "Wed, 29 May 2019 13:45:08 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Le mer. 29 mai 2019 19:45, Michael Paquier <michael@paquier.xyz> a écrit :\n\n> On Wed, May 29, 2019 at 05:30:33PM +0200, Guillaume Lelarge wrote:\n> > And here is another one. See patch attached.\n>\n> Are you still going through some parts of the documentation? Perhaps\n> you may notice something else? I am wondering if it would be better\n> to wait a bit more so as we can group all issues you are finding at\n> once.\n>\n\nYeah, I still have quite a lot to process. That might be better to do it\nall in once.",
"msg_date": "Wed, 29 May 2019 19:47:12 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "On Wed, May 29, 2019 at 07:47:12PM +0200, Guillaume Lelarge wrote:\n> Yeah, I still have quite a lot to process. That might be better to do it\n> all in once.\n\nOK, thanks! Could you ping me on this thread once you think you are\ndone?\n--\nMichael",
"msg_date": "Wed, 29 May 2019 13:52:21 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Le mer. 29 mai 2019 19:52, Michael Paquier <michael@paquier.xyz> a écrit :\n\n> On Wed, May 29, 2019 at 07:47:12PM +0200, Guillaume Lelarge wrote:\n> > Yeah, I still have quite a lot to process. That might be better to do it\n> > all in once.\n>\n> OK, thanks! Could you ping me on this thread once you think you are\n> done?\n>\n\nSure.",
"msg_date": "Wed, 29 May 2019 20:28:59 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Hi Guillaume,\n\nOn Wed, May 29, 2019 at 08:28:59PM +0200, Guillaume Lelarge wrote:\n> Sure.\n\nI have noticed your message on the French list about the completion of\nthe traduction, and congrats for that, it is a huge amount of work.\nDid you find anything else after your last report?\n--\nMichael",
"msg_date": "Sat, 8 Jun 2019 22:32:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "On Sat, Jun 8, 2019 at 3:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi Guillaume,\n>\n> On Wed, May 29, 2019 at 08:28:59PM +0200, Guillaume Lelarge wrote:\n> > Sure.\n>\n> I have noticed your message on the French list about the completion of\n> the traduction, and congrats for that, it is a huge amount of work.\n> Did you find anything else after your last report?\n\nIt was the merge of upstream documentation completion in the french\nrepo, so the actual translation work can now begin. Probably there\nwill be more typo finding in the next weeks.\n\n\n",
"msg_date": "Sat, 8 Jun 2019 15:44:04 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Le sam. 8 juin 2019 à 15:44, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> On Sat, Jun 8, 2019 at 3:33 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > Hi Guillaume,\n> >\n> > On Wed, May 29, 2019 at 08:28:59PM +0200, Guillaume Lelarge wrote:\n> > > Sure.\n> >\n> > I have noticed your message on the French list about the completion of\n> > the traduction, and congrats for that, it is a huge amount of work.\n> > Did you find anything else after your last report?\n>\n>\nI have two more fixes. See attached patch.\n\nIt was the merge of upstream documentation completion in the french\n> repo, so the actual translation work can now begin. Probably there\n> will be more typo finding in the next weeks.\n>\n\nYeah, only the merge is done. We now need to work on the actual\ntranslation. But this is way more fun than the merge :)\n\nWe might find more typos, but it will take time. Applying this patch now\n(if it fits you) is probably better.\n\n\n-- \nGuillaume.",
"msg_date": "Sat, 8 Jun 2019 16:23:55 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "On Sat, Jun 08, 2019 at 04:23:55PM +0200, Guillaume Lelarge wrote:\n> We might find more typos, but it will take time. Applying this patch now\n> (if it fits you) is probably better.\n\nI can imagine that it is a daunting task... Ok, for now I have\napplied what you sent. Thanks!\n--\nMichael",
"msg_date": "Sun, 9 Jun 2019 11:27:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quick doc typo fix"
},
{
"msg_contents": "Le dim. 9 juin 2019 à 04:27, Michael Paquier <michael@paquier.xyz> a écrit :\n\n> On Sat, Jun 08, 2019 at 04:23:55PM +0200, Guillaume Lelarge wrote:\n> > We might find more typos, but it will take time. Applying this patch now\n> > (if it fits you) is probably better.\n>\n> I can imagine that it is a daunting task... Ok, for now I have\n> applied what you sent. Thanks!\n>\n\nThank you.\n\n\n-- \nGuillaume.",
"msg_date": "Mon, 10 Jun 2019 11:48:36 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Quick doc typo fix"
}
]
[
{
"msg_contents": "Hi everybody,\n thanks a lot for your work.\n\n This is just a stupid patch to fix some typos.\n Thanks a lot to Magnus for the review.\n\n You can see it also on GitHub,¹ if you prefer, or\n apply it on top of today latest GIT.²\n\n It passed \"make check\" on Linux.\n\nCiao,\nGelma\n\n---\n\n ¹ https://github.com/Gelma/postgres/commit/6c59961f91b89f55b103c57fffa94308dc29546a\n ² commit: d5ec46bf224d2ea1b010b2bc10a65e44d4456553",
"msg_date": "Tue, 28 May 2019 20:17:18 +0200",
"msg_from": "Andrea Gelmini <andrea.gelmini@linux.it>",
"msg_from_op": true,
"msg_subject": "[PATCH] Simple typos fix"
},
{
"msg_contents": "Thanks for finding these ; I think a few hunks are false positives and should\nbe removed. A few more are debatable and could be correct either way:\n\nKazakstan\nalloced - not an English word, but a technical one;\ndelink - \"unlink\" is better for the filesystem operation, but I don't think \"delink\" is wrong for a list operation.\ndependees (?)\nThis'd\ndefine'd\n\nOn Tue, May 28, 2019 at 08:17:18PM +0200, Andrea Gelmini wrote:\n> diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\n> index de0a98f6d9..ff13b0c9e7 100644\n> --- a/contrib/amcheck/verify_nbtree.c\n> +++ b/contrib/amcheck/verify_nbtree.c\n> @@ -1278,7 +1278,7 @@ bt_right_page_check_scankey(BtreeCheckState *state)\n> \t * Routines like _bt_search() don't require *any* page split interlock\n> \t * when descending the tree, including something very light like a buffer\n> \t * pin. That's why it's okay that we don't either. This avoidance of any\n> -\t * need to \"couple\" buffer locks is the raison d' etre of the Lehman & Yao\n> +\t * need to \"couple\" buffer locks is the reason d'etre of the Lehman & Yao\n\nI think this is wrong. The French phase is \"raison d'etre\".\n\n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index e7c32f2a13..20bb928016 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -2279,7 +2279,7 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb, ReorderBufferTXN *txn)\n> \n> \t\t/*\n> \t\t * store in segment in which it belongs by start lsn, don't split over\n> -\t\t * multiple segments tho\n> +\t\t * multiple segments to\n\nI think this is wrong. It should say \"though\". 
Or perhaps:\n * store at segment to which its start lsn belongs, but don't split over\n * multiple segments\n\n> diff --git a/src/backend/utils/cache/relmapper.c b/src/backend/utils/cache/relmapper.c\n> index 3b4f21bc54..403435df52 100644\n> --- a/src/backend/utils/cache/relmapper.c\n> +++ b/src/backend/utils/cache/relmapper.c\n> @@ -146,7 +146,7 @@ static void perform_relmap_update(bool shared, const RelMapFile *updates);\n> /*\n> * RelationMapOidToFilenode\n> *\n> - * The raison d' etre ... given a relation OID, look up its filenode.\n> + * The reason d'etre... given a relation OID, look up its filenode.\n\nWrong\n\n> @@ -907,7 +907,7 @@ write_relmap_file(bool shared, RelMapFile *newmap,\n> \t * Make sure that the files listed in the map are not deleted if the outer\n> \t * transaction aborts. This had better be within the critical section\n> \t * too: it's not likely to fail, but if it did, we'd arrive at transaction\n> -\t * abort with the files still vulnerable. PANICing will leave things in a\n> +\t * abort with the files still vulnerable. Panicking will leave things in a\n\nWrong ?\n\nThanks,\nJustin\n\n\n",
"msg_date": "Sun, 2 Jun 2019 16:42:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple typos fix"
},
{
"msg_contents": "On Sun, Jun 02, 2019 at 04:42:57PM -0500, Justin Pryzby wrote:\n> Thanks for finding these ; I think a few hunks are false positives and should\n> be removed.\n\nYes, some of them are the changes in imath.c and snowball/, which we\ninclude in Postgres but in reality are independent projects, so these\nshould be fixed in upstream instead, and Postgres will include those\nfixes when merging with newer versions. If we were to fix those\nissues ourselves, then we would likely create conflicts when merging\nwith newer versions of the upstream modules.\n\n> A few more are debatable and could be correct either way:\n> \n> alloced - not an English word, but a technical one;\n\nIndeed. The current wording is fine by me.\n\n> delink - \"unlink\" is better for the filesystem operation, but I\n> don't think \"delink\" is wrong for a list operation.\n> dependees (?)\n\nThese terms could be used in programming.\n\n> This'd\n> define'd\n\nDon't think it is much of a big deal to keep these as well.\n\"invokable\" can be used in programming, and \"cachable\" is an alternate\nspelling of \"cacheable\" based on some research.\n\n> On Tue, May 28, 2019 at 08:17:18PM +0200, Andrea Gelmini wrote:\n>> diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\n>> index de0a98f6d9..ff13b0c9e7 100644\n>> --- a/contrib/amcheck/verify_nbtree.c\n>> +++ b/contrib/amcheck/verify_nbtree.c\n>> @@ -1278,7 +1278,7 @@ bt_right_page_check_scankey(BtreeCheckState *state)\n>> \t * Routines like _bt_search() don't require *any* page split interlock\n>> \t * when descending the tree, including something very light like a buffer\n>> \t * pin. That's why it's okay that we don't either. This avoidance of any\n>> -\t * need to \"couple\" buffer locks is the raison d' etre of the Lehman & Yao\n>> +\t * need to \"couple\" buffer locks is the reason d'etre of the Lehman & Yao\n> \n> I think this is wrong. The French phase is \"raison d'etre\".\n\nFrench here. 
Note that an accent is missing on the first 'e' (être)\nbut we don't want non-ASCII characters in the code. So the current\nwording is fine in my opinion.\n\n> I think this is wrong. It should say \"though\". Or perhaps:\n> * store at segment to which its start lsn belongs, but don't split over\n> * multiple segments\n\nI would replace it by \"though\", \"tho\" is not incorrect tho ;)\n\n>> @@ -907,7 +907,7 @@ write_relmap_file(bool shared, RelMapFile *newmap,\n>> \t * Make sure that the files listed in the map are not deleted if the outer\n>> \t * transaction aborts. This had better be within the critical section\n>> \t * too: it's not likely to fail, but if it did, we'd arrive at transaction\n>> -\t * abort with the files still vulnerable. PANICing will leave things in a\n>> +\t * abort with the files still vulnerable. Panicking will leave things in a\n> \n> Wrong ?\n\nYes, the suggestion is wrong. The comment refers to the elog level.\n\nThe original patch proposed 63 diffs. After the false positives are\nremoved, 21 remain, which I have now committed. You have done good\nwork in catching all these, by the way. Thanks for taking the time to\ndo so.\n--\nMichael",
"msg_date": "Mon, 3 Jun 2019 13:47:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Simple typos fix"
}
]
[
{
"msg_contents": "Hello,\n\nA recent revision of PG-Strom has its columnar-storage using Apache\nArrow format files on\nFDW infrastructure. Because of the columnar nature, it allows to load\nthe values which are\nreferenced by the query, thus, maximizes efficiency of the storage bandwidth.\nhttp://heterodb.github.io/pg-strom/arrow_fdw/\n\nApache Arrow defines various primitive types that can be mapped on\nPostgreSQL data types.\nFor example, FloatingPoint (precision=Single) on Arrow is equivalent\nto float4 of PostgreSQL.\nOne interesting data type in Apache Arrow is \"Struct\" data type. It is\nequivalent to composite\ntype in PostgreSQL. The \"Struct\" type has sub-fields, and individual\nsub-fields have its own\nvalues array for each.\n\nIt means we can skip to load the sub-fields unreferenced, if\nquery-planner can handle\nreferenced and unreferenced sub-fields correctly.\nOn the other hands, it looks to me RelOptInfo or other optimizer\nrelated structure don't have\nthis kind of information. RelOptInfo->attr_needed tells extension\nwhich attributes are referenced\nby other relation, however, its granularity is not sufficient for sub-fields.\n\nProbably, all we can do right now is walk-on the RelOptInfo list to\nlookup FieldSelect node\nto see the referenced sub-fields. Do we have a good idea instead of\nthis expensive way?\n# Right now, PG-Strom loads all the sub-fields of Struct column from\narrow_fdw foreign-table\n# regardless of referenced / unreferenced sub-fields. Just a second best.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 29 May 2019 12:13:42 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "How to know referenced sub-fields of a composite type?"
},
{
"msg_contents": "Kaigai-san,\n\nOn 2019/05/29 12:13, Kohei KaiGai wrote:\n> One interesting data type in Apache Arrow is \"Struct\" data type. It is\n> equivalent to composite\n> type in PostgreSQL. The \"Struct\" type has sub-fields, and individual\n> sub-fields have its own\n> values array for each.\n> \n> It means we can skip to load the sub-fields unreferenced, if\n> query-planner can handle\n> referenced and unreferenced sub-fields correctly.\n> On the other hands, it looks to me RelOptInfo or other optimizer\n> related structure don't have\n> this kind of information. RelOptInfo->attr_needed tells extension\n> which attributes are referenced\n> by other relation, however, its granularity is not sufficient for sub-fields.\n\nIsn't that true for some other cases as well, like when a query accesses\nonly some sub-fields of a json(b) column? In that case too, planner\nitself can't optimize away access to other sub-fields. What it can do\nthough is match a suitable index to the operator used to access the\nindividual sub-fields, so that the index (if one is matched and chosen)\ncan optimize away accessing unnecessary sub-fields. IOW, it seems to me\nthat the optimizer leaves it up to the indexes (and plan nodes) to further\noptimize access to within a field. How is this case any different?\n\n> Probably, all we can do right now is walk-on the RelOptInfo list to\n> lookup FieldSelect node\n> to see the referenced sub-fields. Do we have a good idea instead of\n> this expensive way?\n> # Right now, PG-Strom loads all the sub-fields of Struct column from\n> arrow_fdw foreign-table\n> # regardless of referenced / unreferenced sub-fields. Just a second best.\n\nI'm missing something, but if PG-Strom/arrow_fdw does look at the\nFieldSelect nodes to see which sub-fields are referenced, why doesn't it\ngenerate a plan that will only access those sub-fields or why can't it?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 29 May 2019 13:26:19 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: How to know referenced sub-fields of a composite type?"
},
{
"msg_contents": "Hi Amit,\n\n2019年5月29日(水) 13:26 Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>:\n>\n> Kaigai-san,\n>\n> On 2019/05/29 12:13, Kohei KaiGai wrote:\n> > One interesting data type in Apache Arrow is \"Struct\" data type. It is\n> > equivalent to composite\n> > type in PostgreSQL. The \"Struct\" type has sub-fields, and individual\n> > sub-fields have its own\n> > values array for each.\n> >\n> > It means we can skip to load the sub-fields unreferenced, if\n> > query-planner can handle\n> > referenced and unreferenced sub-fields correctly.\n> > On the other hands, it looks to me RelOptInfo or other optimizer\n> > related structure don't have\n> > this kind of information. RelOptInfo->attr_needed tells extension\n> > which attributes are referenced\n> > by other relation, however, its granularity is not sufficient for sub-fields.\n>\n> Isn't that true for some other cases as well, like when a query accesses\n> only some sub-fields of a json(b) column? In that case too, planner\n> itself can't optimize away access to other sub-fields. What it can do\n> though is match a suitable index to the operator used to access the\n> individual sub-fields, so that the index (if one is matched and chosen)\n> can optimize away accessing unnecessary sub-fields. IOW, it seems to me\n> that the optimizer leaves it up to the indexes (and plan nodes) to further\n> optimize access to within a field. How is this case any different?\n>\nI think it is a little bit different scenario.\nEven if an index on sub-fields can indicate the tuples to be fetched,\nthe fetched tuple contains all the sub-fields because heaptuple is\nrow-oriented data.\nFor example, if WHERE-clause checks a sub-field: \"x\" then aggregate\nfunction references other sub-field \"y\", Scan/Join node has to return\na tuple that contains both \"x\" and \"y\". 
IndexScan also pops up a tuple\nwith a full composite type, so here is no problem if we cannot know\nwhich sub-fields are referenced in the later stage.\nMaybe, if IndexOnlyScan supports to return a partial composite type,\nit needs similar infrastructure that can be used for a better composite\ntype support on columnar storage.\n\n> > Probably, all we can do right now is walk-on the RelOptInfo list to\n> > lookup FieldSelect node\n> > to see the referenced sub-fields. Do we have a good idea instead of\n> > this expensive way?\n> > # Right now, PG-Strom loads all the sub-fields of Struct column from\n> > arrow_fdw foreign-table\n> > # regardless of referenced / unreferenced sub-fields. Just a second best.\n>\n> I'm missing something, but if PG-Strom/arrow_fdw does look at the\n> FieldSelect nodes to see which sub-fields are referenced, why doesn't it\n> generate a plan that will only access those sub-fields or why can't it?\n>\nLikely, it is not a technical problem but not a smart implementation.\nIf I missed some existing infrastructure we can apply, it may be more suitable\nthan query/expression tree walking.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 29 May 2019 15:50:30 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to know referenced sub-fields of a composite type?"
},
{
"msg_contents": "On Wed, May 29, 2019 at 4:51 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n\n> Hi Amit,\n>\n> 2019年5月29日(水) 13:26 Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>:\n> >\n> > Kaigai-san,\n> >\n> > On 2019/05/29 12:13, Kohei KaiGai wrote:\n> > > One interesting data type in Apache Arrow is \"Struct\" data type. It is\n> > > equivalent to composite\n> > > type in PostgreSQL. The \"Struct\" type has sub-fields, and individual\n> > > sub-fields have its own\n> > > values array for each.\n> > >\n> > > It means we can skip to load the sub-fields unreferenced, if\n> > > query-planner can handle\n> > > referenced and unreferenced sub-fields correctly.\n> > > On the other hands, it looks to me RelOptInfo or other optimizer\n> > > related structure don't have\n> > > this kind of information. RelOptInfo->attr_needed tells extension\n> > > which attributes are referenced\n> > > by other relation, however, its granularity is not sufficient for\n> sub-fields.\n> >\n> > Isn't that true for some other cases as well, like when a query accesses\n> > only some sub-fields of a json(b) column? In that case too, planner\n> > itself can't optimize away access to other sub-fields. What it can do\n> > though is match a suitable index to the operator used to access the\n> > individual sub-fields, so that the index (if one is matched and chosen)\n> > can optimize away accessing unnecessary sub-fields. IOW, it seems to me\n> > that the optimizer leaves it up to the indexes (and plan nodes) to\n> further\n> > optimize access to within a field. How is this case any different?\n> >\n> I think it is a little bit different scenario.\n> Even if an index on sub-fields can indicate the tuples to be fetched,\n> the fetched tuple contains all the sub-fields because heaptuple is\n> row-oriented data.\n> For example, if WHERE-clause checks a sub-field: \"x\" then aggregate\n> function references other sub-field \"y\", Scan/Join node has to return\n> a tuple that contains both \"x\" and \"y\". 
IndexScan also pops up a tuple\n> with a full composite type, so here is no problem if we cannot know\n> which sub-fields are referenced in the later stage.\n> Maybe, if IndexOnlyScan supports to return a partial composite type,\n> it needs similar infrastructure that can be used for a better composite\n> type support on columnar storage.\n>\n\nThere is another issue related to the columnar store that needs targeted\ncolumns for projection from the scan is discussed in zedstore [1].\nProjecting all columns from a columnar store is quite expensive than\nthe row store.\n\n[1] -\nhttps://www.postgresql.org/message-id/CALfoeivu-n5o8Juz9wW%2BkTjnis6_%2BrfMf%2BzOTky1LiTVk-ZFjA%40mail.gmail.com\n\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Wed, 29 May 2019 18:44:42 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to know referenced sub-fields of a composite type?"
},
{
"msg_contents": "On 2019/05/29 15:50, Kohei KaiGai wrote:\n> 2019年5月29日(水) 13:26 Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>:\n>>> It means we can skip to load the sub-fields unreferenced, if\n>>> query-planner can handle\n>>> referenced and unreferenced sub-fields correctly.\n>>> On the other hands, it looks to me RelOptInfo or other optimizer\n>>> related structure don't have\n>>> this kind of information. RelOptInfo->attr_needed tells extension\n>>> which attributes are referenced\n>>> by other relation, however, its granularity is not sufficient for sub-fields.\n>>\n>> Isn't that true for some other cases as well, like when a query accesses\n>> only some sub-fields of a json(b) column? In that case too, planner\n>> itself can't optimize away access to other sub-fields. What it can do\n>> though is match a suitable index to the operator used to access the\n>> individual sub-fields, so that the index (if one is matched and chosen)\n>> can optimize away accessing unnecessary sub-fields. IOW, it seems to me\n>> that the optimizer leaves it up to the indexes (and plan nodes) to further\n>> optimize access to within a field. How is this case any different?\n>\n> I think it is a little bit different scenario.\n> Even if an index on sub-fields can indicate the tuples to be fetched,\n> the fetched tuple contains all the sub-fields because heaptuple is\n> row-oriented data.\n>\n> For example, if WHERE-clause checks a sub-field: \"x\" then aggregate\n> function references other sub-field \"y\", Scan/Join node has to return\n> a tuple that contains both \"x\" and \"y\". IndexScan also pops up a tuple\n> with a full composite type, so here is no problem if we cannot know\n> which sub-fields are referenced in the later stage.\n> Maybe, if IndexOnlyScan supports to return a partial composite type,\n> it needs similar infrastructure that can be used for a better composite\n> type support on columnar storage.\n\nAh indeed. I think I had misunderstood your intent. 
Indexes have to do\nwith optimizing the \"filtering\" of complex/nested type (json, Arrow\nStruct, etc.) values, where unnecessary sub-fields need not be read before\nfiltering, whereas you're interested in optimizing \"projections\" of\ncomplex types, where sub-fields that are not used anywhere in the query\nneed not be read from the stored values.\n\n>>> Probably, all we can do right now is walk-on the RelOptInfo list to\n>>> lookup FieldSelect node\n>>> to see the referenced sub-fields. Do we have a good idea instead of\n>>> this expensive way?\n>>> # Right now, PG-Strom loads all the sub-fields of Struct column from\n>>> arrow_fdw foreign-table\n>>> # regardless of referenced / unreferenced sub-fields. Just a second best.\n>>\n>> I'm missing something, but if PG-Strom/arrow_fdw does look at the\n>> FieldSelect nodes to see which sub-fields are referenced, why doesn't it\n>> generate a plan that will only access those sub-fields or why can't it?\n>>\n> Likely, it is not a technical problem but not a smart implementation.\n> If I missed some existing infrastructure we can apply, it may be more suitable\n> than query/expression tree walking.\n\nThere is no infrastructure for this as far as I know. Maybe, some will be\nbuilt in the future now that storage format is pluggable.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Thu, 30 May 2019 16:33:17 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: How to know referenced sub-fields of a composite type?"
},
{
"msg_contents": "2019/05/30 16:33、Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>のメール:\n\n>> On 2019/05/29 15:50, Kohei KaiGai wrote:\n>> 2019年5月29日(水) 13:26 Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>:\n>>>> It means we can skip to load the sub-fields unreferenced, if\n>>>> query-planner can handle\n>>>> referenced and unreferenced sub-fields correctly.\n>>>> On the other hands, it looks to me RelOptInfo or other optimizer\n>>>> related structure don't have\n>>>> this kind of information. RelOptInfo->attr_needed tells extension\n>>>> which attributes are referenced\n>>>> by other relation, however, its granularity is not sufficient for sub-fields.\n>>> \n>>> Isn't that true for some other cases as well, like when a query accesses\n>>> only some sub-fields of a json(b) column? In that case too, planner\n>>> itself can't optimize away access to other sub-fields. What it can do\n>>> though is match a suitable index to the operator used to access the\n>>> individual sub-fields, so that the index (if one is matched and chosen)\n>>> can optimize away accessing unnecessary sub-fields. IOW, it seems to me\n>>> that the optimizer leaves it up to the indexes (and plan nodes) to further\n>>> optimize access to within a field. How is this case any different?\n>> \n>> I think it is a little bit different scenario.\n>> Even if an index on sub-fields can indicate the tuples to be fetched,\n>> the fetched tuple contains all the sub-fields because heaptuple is\n>> row-oriented data.\n>> \n>> For example, if WHERE-clause checks a sub-field: \"x\" then aggregate\n>> function references other sub-field \"y\", Scan/Join node has to return\n>> a tuple that contains both \"x\" and \"y\". 
IndexScan also pops up a tuple\n>> with a full composite type, so here is no problem if we cannot know\n>> which sub-fields are referenced in the later stage.\n>> Maybe, if IndexOnlyScan supports to return a partial composite type,\n>> it needs similar infrastructure that can be used for a better composite\n>> type support on columnar storage.\n> \n> Ah indeed. I think I had misunderstood your intent. Indexes have to do\n> with optimizing the \"filtering\" of complex/nested type (json, Arrow\n> Struct, etc.) values, where unnecessary sub-fields need not be read before\n> filtering, whereas you're interested in optimizing \"projections\" of\n> complex types, where sub-fields that are not used anywhere in the query\n> need not be read from the stored values.\n> \n>>>> Probably, all we can do right now is walk-on the RelOptInfo list to\n>>>> lookup FieldSelect node\n>>>> to see the referenced sub-fields. Do we have a good idea instead of\n>>>> this expensive way?\n>>>> # Right now, PG-Strom loads all the sub-fields of Struct column from\n>>>> arrow_fdw foreign-table\n>>>> # regardless of referenced / unreferenced sub-fields. Just a second best.\n>>> \n>>> I'm missing something, but if PG-Strom/arrow_fdw does look at the\n>>> FieldSelect nodes to see which sub-fields are referenced, why doesn't it\n>>> generate a plan that will only access those sub-fields or why can't it?\n>>> \n>> Likely, it is not a technical problem but not a smart implementation.\n>> If I missed some existing infrastructure we can apply, it may be more suitable\n>> than query/expression tree walking.\n> \n> There is no infrastructure for this as far as I know. Maybe, some will be\n> built in the future now that storage format is pluggable.\n\nIf we design a common infrastructure for both of built-in and extension features, it makes sense for the kinds of storage system.\nIndexOnlyScan is one of the built-in feature that is beneficial by the information of projection. 
Currently, we never choose IndexOnlyScan if the index is on a sub-field of a composite type.\n\n Best regards,\n\n\n\n\n",
"msg_date": "Fri, 31 May 2019 08:14:21 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: How to know referenced sub-fields of a composite type?"
}
] |
[
{
"msg_contents": "Hi All,\n\nI'm getting a server crash when executing the following test-case:\n\ncreate table t1(a int primary key, b text);\ninsert into t1 values (1, 'aa'), (2, 'bb'), (3, 'aa'), (4, 'bb');\nselect a, b, array_agg(a order by a) from t1 group by grouping sets ((a),\n(b));\n\n*Backtrace:*\n#0 0x00007f37d0630277 in raise () from /lib64/libc.so.6\n#1 0x00007f37d0631968 in abort () from /lib64/libc.so.6\n#2 0x0000000000a5685e in ExceptionalCondition (conditionName=0xc29fd0\n\"!(op->d.fetch.kind == slot->tts_ops)\", errorType=0xc29cc1\n\"FailedAssertion\",\n fileName=0xc29d09 \"execExprInterp.c\", lineNumber=1905) at assert.c:54\n#3 0x00000000006dfa2b in CheckOpSlotCompatibility (op=0x2e84e38,\nslot=0x2e6e268) at execExprInterp.c:1905\n#4 0x00000000006dd446 in ExecInterpExpr (state=0x2e84da0,\necontext=0x2e6d8e8, isnull=0x7ffe53cba4af) at execExprInterp.c:439\n#5 0x00000000007010e5 in ExecEvalExprSwitchContext (state=0x2e84da0,\necontext=0x2e6d8e8, isNull=0x7ffe53cba4af)\n at ../../../src/include/executor/executor.h:307\n#6 0x0000000000701be7 in advance_aggregates (aggstate=0x2e6d6b0) at\nnodeAgg.c:679\n#7 0x0000000000703a5d in agg_retrieve_direct (aggstate=0x2e6d6b0) at\nnodeAgg.c:1847\n#8 0x00000000007034da in ExecAgg (pstate=0x2e6d6b0) at nodeAgg.c:1572\n#9 0x00000000006e797f in ExecProcNode (node=0x2e6d6b0) at\n../../../src/include/executor/executor.h:239\n#10 0x00000000006ea174 in ExecutePlan (estate=0x2e6d458,\nplanstate=0x2e6d6b0, use_parallel_mode=false, operation=CMD_SELECT,\nsendTuples=true,\n numberTuples=0, direction=ForwardScanDirection, dest=0x2e76b30,\nexecute_once=true) at execMain.c:1648\n#11 0x00000000006e7f91 in standard_ExecutorRun (queryDesc=0x2e7b3b8,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:365\n#12 0x00000000006e7dc7 in ExecutorRun (queryDesc=0x2e7b3b8,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:309\n#13 0x00000000008e40c7 in PortalRunSelect (portal=0x2e10bc8, 
forward=true,\ncount=0, dest=0x2e76b30) at pquery.c:929\n#14 0x00000000008e3d66 in PortalRun (portal=0x2e10bc8,\ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2e76b30,\naltdest=0x2e76b30,\n completionTag=0x7ffe53cba850 \"\") at pquery.c:770\n\nThe following Assert statement in *CheckOpSlotCompatibility*() fails.\n\n1905 Assert(op->d.fetch.kind == slot->tts_ops);\n\nAnd the above assert statement was added by you as part of the following git\ncommit.\n\ncommit 15d8f83128e15de97de61430d0b9569f5ebecc26\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Thu Nov 15 22:00:30 2018 -0800\n\n Verify that expected slot types match returned slot types.\n\n This is important so JIT compilation knows what kind of tuple slot the\n deforming routine can expect. There's also optimization potential for\n expression initialization without JIT compilation. It e.g. seems\n plausible to elide EEOP_*_FETCHSOME ops entirely when dealing with\n virtual slots.\n\n Author: Andres Freund\n\n*Analysis:*\nI did some quick investigation on this and found that when the aggregate is\nperformed on the first group, i.e. group by 'a', all the input tuples are\nfetched from the outer plan and stored into the tuplesort object, and for\nthe subsequent groups, i.e. from the second group onwards, the tuples stored\nin the tuplesort object during the 1st phase are used. But then, the tuples stored\nin the tuplesort object are actually minimal tuples, whereas a heap tuple is\nexpected, which results in the assertion\nfailure.\n\nI might be wrong, but it seems to me like the slot fetched from the tuplesort\nobject needs to be converted to a heap tuple. Actually the following\nlines of code in agg_retrieve_direct() get executed only when we have\ncrossed a group boundary. I think, at least, the function call to\nExecCopySlotHeapTuple(outerslot); followed by ExecForceStoreHeapTuple();\nshould always happen irrespective of whether the group boundary is crossed or\nnot... 
Sorry if I'm saying something ...\n\n1871          * If we are grouping, check whether we've crossed a group\n1872          * boundary.\n1873          */\n1874         if (node->aggstrategy != AGG_PLAIN)\n1875         {\n1876             tmpcontext->ecxt_innertuple = firstSlot;\n1877             if (!ExecQual(aggstate->phase->eqfunctions[node->numCols - 1],\n1878                           tmpcontext))\n1879             {\n1880                 aggstate->grp_firstTuple = ExecCopySlotHeapTuple(outerslot);\n1881                 break;\n1882             }\n1883         }\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*",
"msg_date": "Wed, 29 May 2019 17:50:35 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Server crash due to assertion failure in CheckOpSlotCompatibility()"
},
{
"msg_contents": "Hi All,\n\nHere are some more details on the crash reported in my previous e-mail for\nbetter clarity:\n\nThe crash only happens when a *primary key* or *btree index* is created on\nthe test table. For example consider the following two scenarios.\n\n*TC1: With PK*\ncreate table t1(a int *primary key*, b text);\ninsert into t1 values (1, 'aa'), (2, 'bb'), (3, 'aa'), (4, 'bb');\nselect a, b, array_agg(a order by a) from t1 group by grouping sets ((a),\n(b));\n\nThis (TC1) is the problematic case, the explain plan for the query causing\nthe crash is as follows\n\npostgres=# explain select a, b, array_agg(a order by a) from t1 group by\ngrouping sets ((a), (b));\n QUERY PLAN\n\n-----------------------------------------------------------------------------\n GroupAggregate (cost=0.15..166.92 rows=1470 width=68)\n Group Key: a\n Sort Key: b\n Group Key: b\n -> Index Scan using t1_pkey on t1 (cost=0.15..67.20 rows=1270 width=36)\n(5 rows)\n\n*TC2: Without PK/Btree index*\ncreate table t2(a int, b text);\ninsert into t2 values (1, 'aa'), (2, 'bb'), (3, 'aa'), (4, 'bb');\nselect a, b, array_agg(a order by a) from t2 group by grouping sets ((a),\n(b));\n\nAnd here is the explain plan for the query in TC2 that doesn't cause any\ncrash\n\npostgres=# explain select a, b, array_agg(a order by a) from t2 group by\ngrouping sets ((a), (b));\n QUERY PLAN\n-------------------------------------------------------------------\n GroupAggregate (cost=88.17..177.69 rows=400 width=68)\n Group Key: a\n Sort Key: b\n Group Key: b\n -> Sort (cost=88.17..91.35 rows=1270 width=36)\n *Sort Key: a*\n -> Seq Scan on t2 (cost=0.00..22.70 rows=1270 width=36)\n(7 rows)\n\nIf you notice the difference between the two plans, in case of TC1, the\nIndex Scan was performed on the test table and as the data in the index\n(btree index) is already sorted, when grouping aggregate is performed on\nthe column 'a', there is *no* sorting done for it (you would see that \"*Sort\nKey: a*\" is 
missing in the explain plan for TC1)and for that reason it\nexpects the slot to contain the heap tuple but then, as the slots are\nfetched from the tuplesort object, it actually contains minimal tuple. On\nthe other hand, if you see the explain plan for TC2, the sorting is done\nfor both the groups (i.e. both \"Sort Key: b\" && \"Sort Key: a\" exists) and\nhence the expected slot is always the minimal slot so there is no assertion\nfailure in case 2.\n\nThanks,\n\n-- \nWith Regards,\nAshutosh Sharma\nEnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n\nOn Wed, May 29, 2019 at 5:50 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Hi All,\n>\n> I'm getting a server crash when executing the following test-case:\n>\n> create table t1(a int primary key, b text);\n> insert into t1 values (1, 'aa'), (2, 'bb'), (3, 'aa'), (4, 'bb');\n> select a, b, array_agg(a order by a) from t1 group by grouping sets ((a),\n> (b));\n>\n> *Backtrace:*\n> #0 0x00007f37d0630277 in raise () from /lib64/libc.so.6\n> #1 0x00007f37d0631968 in abort () from /lib64/libc.so.6\n> #2 0x0000000000a5685e in ExceptionalCondition (conditionName=0xc29fd0\n> \"!(op->d.fetch.kind == slot->tts_ops)\", errorType=0xc29cc1\n> \"FailedAssertion\",\n> fileName=0xc29d09 \"execExprInterp.c\", lineNumber=1905) at assert.c:54\n> #3 0x00000000006dfa2b in CheckOpSlotCompatibility (op=0x2e84e38,\n> slot=0x2e6e268) at execExprInterp.c:1905\n> #4 0x00000000006dd446 in ExecInterpExpr (state=0x2e84da0,\n> econtext=0x2e6d8e8, isnull=0x7ffe53cba4af) at execExprInterp.c:439\n> #5 0x00000000007010e5 in ExecEvalExprSwitchContext (state=0x2e84da0,\n> econtext=0x2e6d8e8, isNull=0x7ffe53cba4af)\n> at ../../../src/include/executor/executor.h:307\n> #6 0x0000000000701be7 in advance_aggregates (aggstate=0x2e6d6b0) at\n> nodeAgg.c:679\n> #7 0x0000000000703a5d in agg_retrieve_direct (aggstate=0x2e6d6b0) at\n> nodeAgg.c:1847\n> #8 0x00000000007034da in ExecAgg (pstate=0x2e6d6b0) at nodeAgg.c:1572\n> #9 
0x00000000006e797f in ExecProcNode (node=0x2e6d6b0) at\n> ../../../src/include/executor/executor.h:239\n> #10 0x00000000006ea174 in ExecutePlan (estate=0x2e6d458,\n> planstate=0x2e6d6b0, use_parallel_mode=false, operation=CMD_SELECT,\n> sendTuples=true,\n> numberTuples=0, direction=ForwardScanDirection, dest=0x2e76b30,\n> execute_once=true) at execMain.c:1648\n> #11 0x00000000006e7f91 in standard_ExecutorRun (queryDesc=0x2e7b3b8,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:365\n> #12 0x00000000006e7dc7 in ExecutorRun (queryDesc=0x2e7b3b8,\n> direction=ForwardScanDirection, count=0, execute_once=true) at\n> execMain.c:309\n> #13 0x00000000008e40c7 in PortalRunSelect (portal=0x2e10bc8, forward=true,\n> count=0, dest=0x2e76b30) at pquery.c:929\n> #14 0x00000000008e3d66 in PortalRun (portal=0x2e10bc8,\n> count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x2e76b30,\n> altdest=0x2e76b30,\n> completionTag=0x7ffe53cba850 \"\") at pquery.c:770\n>\n> The following Assert statement in *CheckOpSlotCompatibility*() fails.\n>\n> 1905 Assert(op->d.fetch.kind == slot->tts_ops);\n>\n> And above assert statement was added by you as a part of the following git\n> commit.\n>\n> commit 15d8f83128e15de97de61430d0b9569f5ebecc26\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Thu Nov 15 22:00:30 2018 -0800\n>\n> Verify that expected slot types match returned slot types.\n>\n> This is important so JIT compilation knows what kind of tuple slot the\n> deforming routine can expect. There's also optimization potential for\n> expression initialization without JIT compilation. It e.g. seems\n> plausible to elide EEOP_*_FETCHSOME ops entirely when dealing with\n> virtual slots.\n>\n> Author: Andres Freund\n>\n> *Analysis:*\n> I did some quick investigation on this and found that when the aggregate\n> is performed on the first group i.e. 
group by 'a', all the input tuples are\n> fetched from the outer plan and stored into the tuplesort object and for\n> the subsequent groups i.e. from the second group onwards, the tuples stored\n> in tuplessort object during 1st phase is used. But, then, the tuples stored\n> in the tuplesort object are actually the minimal tuples whereas it is\n> expected to be a heap tuple which actually results into the assertion\n> failure.\n>\n> I might be wrong, but it seems to me like the slot fetched from tuplesort\n> object needs to be converted to the heap tuple. Actually the following\n> lines of code in agg_retrieve_direct() gets executed only when we have\n> crossed a group boundary. I think, at least the function call to\n> ExecCopySlotHeapTuple(outerslot); followed by ExecForceStoreHeapTuple();\n> should always happen irrespective of the group boundary limit is crossed or\n> not... Sorry if I'm saying something ...\n>\n> 1871          * If we are grouping, check whether we've crossed a group\n> 1872          * boundary.\n> 1873          */\n> 1874         if (node->aggstrategy != AGG_PLAIN)\n> 1875         {\n> 1876             tmpcontext->ecxt_innertuple = firstSlot;\n> 1877             if (!ExecQual(aggstate->phase->eqfunctions[node->numCols - 1],\n> 1878                           tmpcontext))\n> 1879             {\n> 1880                 aggstate->grp_firstTuple = ExecCopySlotHeapTuple(outerslot);\n> 1881                 break;\n> 1882             }\n> 1883         }\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:*http://www.enterprisedb.com <http://www.enterprisedb.com/>*\n>",
"msg_date": "Thu, 30 May 2019 16:31:39 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Server crash due to assertion failure in\n CheckOpSlotCompatibility()"
},
{
"msg_contents": "Hi,\n\n\nOn 2019-05-30 16:31:39 +0530, Ashutosh Sharma wrote:\n> Here are some more details on the crash reported in my previous e-mail for\n> better clarity:\n\nI'll look into this once pgcon is over... Thanks for finding!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 May 2019 07:18:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Server crash due to assertion failure in\n CheckOpSlotCompatibility()"
},
{
"msg_contents": "On 2019-May-30, Andres Freund wrote:\n\n> Hi,\n> \n> \n> On 2019-05-30 16:31:39 +0530, Ashutosh Sharma wrote:\n> > Here are some more details on the crash reported in my previous e-mail for\n> > better clarity:\n> \n> I'll look into this once pgcon is over... Thanks for finding!\n\nPing?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 22:42:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Server crash due to assertion failure in\n CheckOpSlotCompatibility()"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-30 16:31:39 +0530, Ashutosh Sharma wrote:\n> > *Analysis:*\n> > I did some quick investigation on this and found that when the aggregate\n> > is performed on the first group i.e. group by 'a', all the input tuples are\n> > fetched from the outer plan and stored into the tuplesort object and for\n> > the subsequent groups i.e. from the second group onwards, the tuples stored\n> > in tuplessort object during 1st phase is used. But, then, the tuples stored\n> > in the tuplesort object are actually the minimal tuples whereas it is\n> > expected to be a heap tuple which actually results into the assertion\n> > failure.\n> >\n> > I might be wrong, but it seems to me like the slot fetched from tuplesort\n> > object needs to be converted to the heap tuple. Actually the following\n> > lines of code in agg_retrieve_direct() gets executed only when we have\n> > crossed a group boundary. I think, at least the function call to\n> > ExecCopySlotHeapTuple(outerslot); followed by ExecForceStoreHeapTuple();\n> > should always happen irrespective of the group boundary limit is crossed or\n> > not... Sorry if I'm saying something ...\n\nI think that's mostly the right diagnosis, but I think it's not the\nright fix. We can just flag here that the slot type isn't fixed - we're\nnot using any slot type specific functions, we just are promising that\nthe slot type doesn't change (mostly for the benefit of JIT compiled\ndeforming, which needs to generate different code for different slot\ntypes).\n\nI've pushed a fix for that. As the commit explains:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=af3deff3f2ac79585481181cb198b04c67486c09\n\nwe probably could quite easily optimize this case further by setting the\nslot type separately for each \"phase\" of grouping set processing. As we\nalready generate separate expressions for each phase, that should be\nquite doable. 
But that's something for another day, and not for v12.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Jul 2019 14:38:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Server crash due to assertion failure in\n CheckOpSlotCompatibility()"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-10 22:42:52 -0400, Alvaro Herrera wrote:\n> On 2019-May-30, Andres Freund wrote:\n> > On 2019-05-30 16:31:39 +0530, Ashutosh Sharma wrote:\n> > > Here are some more details on the crash reported in my previous e-mail for\n> > > better clarity:\n> > \n> > I'll look into this once pgcon is over... Thanks for finding!\n> \n> Ping?\n\n:( I've now finally pushed the fix. I was kinda exhausted for a\nwhile...\n\nAshutosh, thanks for the report!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Jul 2019 14:39:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Server crash due to assertion failure in\n CheckOpSlotCompatibility()"
}
] |
[
{
"msg_contents": "Hi,\n\n\nSeems that per the documentation on information_schema.views [1] we do \nnot support check_option (\"Applies to a feature not available in \nPostgreSQL\").\n\n\nAttached is a patch that fixes this description. As CHECK OPTION is \nsupported since 9.4, the patch might be applied on all versions since 9.4.\n\n\n[1] https://www.postgresql.org/docs/current/infoschema-views.html\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Wed, 29 May 2019 14:26:50 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Doc fix on information_schema.views"
},
{
"msg_contents": "On Wed, May 29, 2019 at 02:26:50PM +0200, Gilles Darold wrote:\n> Attached is a patch that fix this description. As CHECK OPTION is supported\n> since 9.4, the patch might be applied on all versions since 9.4.\n\nThanks Gilles! Applied and back-patched.\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:35:35 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc fix on information_schema.views"
}
] |
[
{
"msg_contents": "I ran clang checker and noticed these. It looks like the \nsha2 implementation is trying to zero out state on exit, but\nclang checker finds at least 'a' is a dead store. \n\nShould we fix this?\nIs something like the attached sensible?\nIs there a common/better approach to zeroing out memory in PG?\n\nGarick",
"msg_date": "Wed, 29 May 2019 13:24:19 +0000",
"msg_from": "\"Hamlin, Garick L\" <ghamlin@isc.upenn.edu>",
"msg_from_op": true,
"msg_subject": "Dead stores in src/common/sha2.c"
},
{
"msg_contents": "On Wed, May 29, 2019 at 01:24:19PM +0000, Hamlin, Garick L wrote:\n> I ran clang checker and noticed these. It looks like the \n> sha2 implementation is trying to zero out state on exit, but\n> clang checker finds at least 'a' is a dead store. \n> \n> Should we fix this?\n> Is something like the attached sensible?\n> Is there a common/better approach to zero-out in PG ?\n\nThis code comes from the SHA-2 implementation of OpenBSD, so it is not\nappropriate to touch it directly. What's the current state of this code\nin upstream? Should we perhaps try to sync with the upstream\nimplementation instead?\n\nAfter a quick search I am not seeing that this area has actually\nchanged:\nhttp://fxr.watson.org/fxr/source/crypto/sha2.c?v=OPENBSD\n--\nMichael",
"msg_date": "Wed, 29 May 2019 10:32:09 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Dead stores in src/common/sha2.c"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, May 29, 2019 at 01:24:19PM +0000, Hamlin, Garick L wrote:\n>> I ran clang checker and noticed these. It looks like the \n>> sha2 implementation is trying to zero out state on exit, but\n>> clang checker finds at least 'a' is a dead store. \n>> \n>> Should we fix this?\n>> Is something like the attached sensible?\n>> Is there a common/better approach to zero-out in PG ?\n\n> This code comes from the SHA-2 implementation of OpenBSD, so it is not\n> adapted to directly touch it. What's the current state of this code\n> in upstream? Should we perhaps try to sync with the upstream\n> implementation instead?\n> After a quick search I am not seeing that this area has actually\n> changed:\n> http://fxr.watson.org/fxr/source/crypto/sha2.c?v=OPENBSD\n\nHm ... plastering \"volatile\"s all over it isn't good for readability\nor for quality of the generated code. (In particular, I'm worried\nabout this patch causing all those variables to be forced into memory\ninstead of registers.)\n\nAt the same time, I'm not sure if we should just write this off as an\nignorable warning. If the C compiler concludes these are dead stores\nit'll probably optimize them away, leading to not accomplishing the\ngoal of wiping the values.\n\nOn the third hand, that goal may not be worth much, particularly not\nif the variables do get kept in registers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 May 2019 11:01:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dead stores in src/common/sha2.c"
},
{
"msg_contents": "On Wed, May 29, 2019 at 11:01:05AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Wed, May 29, 2019 at 01:24:19PM +0000, Hamlin, Garick L wrote:\n> >> I ran clang checker and noticed these. It looks like the \n> >> sha2 implementation is trying to zero out state on exit, but\n> >> clang checker finds at least 'a' is a dead store. \n> >> \n> >> Should we fix this?\n> >> Is something like the attached sensible?\n> >> Is there a common/better approach to zero-out in PG ?\n> \n> > This code comes from the SHA-2 implementation of OpenBSD, so it is not\n> > adapted to directly touch it. What's the current state of this code\n> > in upstream? Should we perhaps try to sync with the upstream\n> > implementation instead?\n> > After a quick search I am not seeing that this area has actually\n> > changed:\n> > http://fxr.watson.org/fxr/source/crypto/sha2.c?v=OPENBSD\n> \n> Hm ... plastering \"volatile\"s all over it isn't good for readability\n> or for quality of the generated code. (In particular, I'm worried\n> about this patch causing all those variables to be forced into memory\n> instead of registers.)\n\nYeah, I don't actually think it's a great approach which is why I \nwas wondering what if PG had a right approach. I figured it was\nthe clearest way to start the discussion. Especially, since I wasn't\nsure if people would want to fix it.\n\n> At the same time, I'm not sure if we should just write this off as an\n> ignorable warning. If the C compiler concludes these are dead stores\n> it'll probably optimize them away, leading to not accomplishing the\n> goal of wiping the values.\n\nYeah, I mean it's odd to put code in to zero/hide state knowing it's\nprobably optimized out. \n\nWe could also take it out, but maybe it does help somewhere?\n\n... 
or put in a comment that says: This probably gets optimized away, but\nwe don't consider it much of a risk.\n\n> On the third hand, that goal may not be worth much, particularly not\n> if the variables do get kept in registers.\n\nI haven't looked at the asm. \nMaybe they are in registers...\n\nGarick\n\n\n",
"msg_date": "Wed, 29 May 2019 15:47:07 +0000",
"msg_from": "\"Hamlin, Garick L\" <ghamlin@isc.upenn.edu>",
"msg_from_op": true,
"msg_subject": "Re: Dead stores in src/common/sha2.c"
},
{
"msg_contents": "On 29/05/2019 18:47, Hamlin, Garick L wrote:\n> On Wed, May 29, 2019 at 11:01:05AM -0400, Tom Lane wrote:\n>> At the same time, I'm not sure if we should just write this off as an\n>> ignorable warning. If the C compiler concludes these are dead stores\n>> it'll probably optimize them away, leading to not accomplishing the\n>> goal of wiping the values.\n> \n> Yeah, I mean it's odd to put code in to zero/hide state knowing it's\n> probably optimized out.\n> \n> We could also take it out, but maybe it does help somewhere?\n> \n> ... or put in a comment that says: This probably gets optimized away, but\n> we don't consider it much of a risk.\n\nThere's a function called explicit_bzero() in glibc, for this purpose. \nSee \nhttps://www.gnu.org/software/libc/manual/html_node/Erasing-Sensitive-Data.html. \nIt's not totally portable, but it's also available in some BSDs, at \nleast. That documentation mentions the possibility that it might force \nvariables to be stored in memory that would've otherwise been kept only \nin registers, but says that in most situations it's nevertheless better \nto use explicit_bzero() than not. So I guess we should use that, if it's \navailable.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 29 May 2019 23:57:35 -0400",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Dead stores in src/common/sha2.c"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 29/05/2019 18:47, Hamlin, Garick L wrote:\n>> On Wed, May 29, 2019 at 11:01:05AM -0400, Tom Lane wrote:\n>>> At the same time, I'm not sure if we should just write this off as an\n>>> ignorable warning. If the C compiler concludes these are dead stores\n>>> it'll probably optimize them away, leading to not accomplishing the\n>>> goal of wiping the values.\n\n>> Yeah, I mean it's odd to put code in to zero/hide state knowing it's\n>> probably optimized out.\n>> We could also take it out, but maybe it does help somewhere?\n>> ... or put in a comment that says: This probably gets optimized away, but\n>> we don't consider it much of a risk.\n\n> There's a function called explicit_bzero() in glibc, for this purpose. \n> See \n> https://www.gnu.org/software/libc/manual/html_node/Erasing-Sensitive-Data.html. \n> It's not totally portable, but it's also available in some BSDs, at \n> least. That documentation mentions the possibility that it might force \n> variables to be stored in memory that would've otherwise been kept only \n> in registers, but says that in most situations it's nevertheless better \n> to use explicit_bzero() than not. So I guess we should use that, if it's \n> available.\n\nMeh. After looking at this closer, I'm convinced that doing anything\nthat might force the variables into memory would be utterly stupid.\nAside from any performance penalty we'd take, that would make their\nvalues more observable not less so.\n\nIn any case, though, it's very hard to see why we should care in the\nleast. 
Somebody who can observe the contents of server memory (or\napplication memory, in the case of frontend usage) can surely extract\nthe input or output of the SHA2 transform, which is going to be way\nmore useful than a few intermediate values.\n\nSo I think we should either do nothing, or suppress this warning by\ncommenting out the variable-zeroing.\n\n(Note that an eyeball scan finds several more dead zeroings besides\nthe ones in Garick's patch. Some of them are ifdef'd out ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 11:26:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dead stores in src/common/sha2.c"
}
] |
[
{
"msg_contents": "Pursuant to today's discussion at PGCon about code coverage, I went\nnosing into some of the particularly under-covered subdirectories\nin our tree, and immediately tripped over an interesting factoid:\nthe ASCII<->MIC and ASCII<->UTF8 encoding conversion functions are\nuntested ... not because the regression tests don't try, but because\nthose conversions are unreachable. pg_do_encoding_conversion() and\nits sister functions have hard-wired fast paths for any conversion\nin which the source or target encoding is SQL_ASCII, so that an\nencoding conversion function declared for such a case will never\nbe used.\n\n(The coverage results do show ascii_to_utf8 as being covered, but\nthat's just because alter_table.sql randomly chose to test\nALTER CONVERSION using a user-defined conversion from SQL_ASCII\nto UTF8, rather than any other case. CreateConversionCommand()\nwill invoke the specified function on an empty string just to see\nif it works, so that's where that \"coverage\" comes from.)\n\nThis situation seems kinda silly. My inclination is to delete\nthese functions as useless, but I suppose another approach is\nto suppress the fast paths if there's a declared conversion function.\n(Doing so would likely require added catalog lookups in places we\nmight not want them...)\n\nIf we do delete them as useless, it might also be advisable to change\nCreateConversionCommand() to refuse creation of conversions to/from\nSQL_ASCII, to prevent future confusion.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 May 2019 15:03:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Dead encoding conversion functions"
},
{
"msg_contents": "> On 29 May 2019, at 15:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Pursuant to today's discussion at PGCon about code coverage, I went\n> nosing into some of the particularly under-covered subdirectories\n> in our tree,\n\nOn a similar, but much less important/interesting note. I fat-fingered when\ncompiling isolationtester on the plane over here and happened to compile\nsrc/test/examples, and in there testlo.c and testlo64.c has two dead functions\nfor which the callsites have been commented out since the Postgres95 import\n(and now cause a warning). Is there any (historic?) reason to keep that code?\nIt also seems kind of broken as it doesn’t really handle the open() call\nfailure very well.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 29 May 2019 15:10:55 -0400",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Dead encoding conversion functions"
},
{
"msg_contents": "On Wed, May 29, 2019 at 03:03:13PM -0400, Tom Lane wrote:\n> Pursuant to today's discussion at PGCon about code coverage, I went\n> nosing into some of the particularly under-covered subdirectories\n> in our tree, and immediately tripped over an interesting factoid:\n> the ASCII<->MIC and ASCII<->UTF8 encoding conversion functions are\n> untested ... not because the regression tests don't try, but because\n> those conversions are unreachable. pg_do_encoding_conversion() and\n> its sister functions have hard-wired fast paths for any conversion\n> in which the source or target encoding is SQL_ASCII, so that an\n> encoding conversion function declared for such a case will never\n> be used.\n\n> This situation seems kinda silly. My inclination is to delete\n> these functions as useless, but I suppose another approach is\n> to suppress the fast paths if there's a declared conversion function.\n> (Doing so would likely require added catalog lookups in places we\n> might not want them...)\n\nRemoving the fast paths to make ascii_to_utf8() reachable would cause ERROR\nwhen server_encoding=SQL_ASCII, client_encoding=UTF8, and a query would\notherwise send the client any character outside 7-bit ASCII. That's fairly\ndefensible, but doing it for only UTF8 and MULE_INTERNAL is not. So if we\nlike the ascii_to_utf8() behavior, I think the action would be to replace the\nfast path with an encoding-independent verification that all bytes are 7-bit\nASCII. (The check would not apply when both server_encoding and\nclient_encoding are SQL_ASCII, of course.) Alternately, one might prefer to\nreplace the fast path with an encoding verification; in the SQL_ASCII-to-UTF8\ncase, we'd allow byte sequences that are valid UTF8, even though the validity\nmay be a coincidence and mojibake may ensue. SQL_ASCII is for being casual\nabout encoding, so it's not clear to me whether or not either prospective\nbehavior change would be an improvement. 
However, I do find it clear to\ndelete ascii_to_utf8() and ascii_to_mic().\n\n> If we do delete them as useless, it might also be advisable to change\n> CreateConversionCommand() to refuse creation of conversions to/from\n> SQL_ASCII, to prevent future confusion.\n\nSounds good.\n\n\n",
"msg_date": "Sat, 15 Jun 2019 11:07:32 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Dead encoding conversion functions"
},
{
"msg_contents": "On 2019-05-29 21:03, Tom Lane wrote:\n> If we do delete them as useless, it might also be advisable to change\n> CreateConversionCommand() to refuse creation of conversions to/from\n> SQL_ASCII, to prevent future confusion.\n\nIt seems nonsensical by definition to allow that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 25 Jun 2019 14:33:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Dead encoding conversion functions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-05-29 21:03, Tom Lane wrote:\n>> If we do delete them as useless, it might also be advisable to change\n>> CreateConversionCommand() to refuse creation of conversions to/from\n>> SQL_ASCII, to prevent future confusion.\n\n> It seems nonsensical by definition to allow that.\n\nHere's a completed patch for that. Obviously this is a bit late\nfor v12, but if there aren't objections I'll push this soon after\nv13 opens.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 30 Jun 2019 17:30:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Dead encoding conversion functions"
}
] |
[
{
"msg_contents": "Tom pointed out that coverage for worker_spi is 0%. For a module that\nonly exists to provide coverage, that's pretty stupid. This patch\nincreases coverage to 90.9% line-wise and 100% function-wise, which\nseems like a sufficient starting point.\n\nHow would people feel about me getting this in master at this point in\nthe cycle, it being just some test code? We can easily revert if\nit seems too unstable.\n\n-- \nÁlvaro Herrera",
"msg_date": "Wed, 29 May 2019 15:32:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "coverage increase for worker_spi"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Tom pointed out that coverage for worker_spi is 0%. For a module that\n> only exists to provide coverage, that's pretty stupid. This patch\n> increases coverage to 90.9% line-wise and 100% function-wise, which\n> seems like a sufficient starting point.\n\n> How would people feel about me getting this in master at this point in\n> the cycle, it being just some test code? We can easily revert if\n> it seems too unstable.\n\nI'm not opposed to adding a new test case at this point in the cycle,\nbut as written this one seems more or less guaranteed to fail under\nload. You can't just sleep for worker_spi.naptime and expect that\nthe worker will certainly have run.\n\nPerhaps you could use a plpgsql DO block with a loop to wait up\nto X seconds until the expected state appears, for X around 120\nto 180 seconds (compare poll_query_until in the TAP tests).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 May 2019 18:39:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "On 2019-May-29, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Tom pointed out that coverage for worker_spi is 0%. For a module that\n> > only exists to provide coverage, that's pretty stupid. This patch\n> > increases coverage to 90.9% line-wise and 100% function-wise, which\n> > seems like a sufficient starting point.\n> \n> > How would people feel about me getting this in master at this point in\n> > the cycle, it being just some test code? We can easily revert if\n> > it seems too unstable.\n> \n> I'm not opposed to adding a new test case at this point in the cycle,\n> but as written this one seems more or less guaranteed to fail under\n> load.\n\nTrue. Here's a version that should be more resilient.\n\nOne thing I noticed while writing it, though, is that worker_spi uses\nthe postgres database, instead of the contrib_regression database that\nwas created for it. And we create a schema and a table there. This is\ngoing to get some eyebrows raised, I think, so I'll look into fixing\nthat as a bugfix before getting this commit in.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 30 May 2019 10:22:15 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-29, Tom Lane wrote:\n>> I'm not opposed to adding a new test case at this point in the cycle,\n>> but as written this one seems more or less guaranteed to fail under\n>> load.\n\n> True. Here's a version that should be more resilient.\n\nHm, I don't understand how this works at all:\n\n+\t\t\tPERFORM pg_sleep(CASE WHEN count(*) = 0 THEN 0 ELSE 0.1 END)\n+\t\t\tFROM schema1.counted WHERE type = 'delta';\n+\t\t\tGET DIAGNOSTICS count = ROW_COUNT;\n\nGiven that it uses an aggregate, the ROW_COUNT must always be 1, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 12:51:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "On 2019-May-30, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-May-29, Tom Lane wrote:\n> >> I'm not opposed to adding a new test case at this point in the cycle,\n> >> but as written this one seems more or less guaranteed to fail under\n> >> load.\n> \n> > True. Here's a version that should be more resilient.\n> \n> Hm, I don't understand how this works at all:\n> \n> +\t\t\tPERFORM pg_sleep(CASE WHEN count(*) = 0 THEN 0 ELSE 0.1 END)\n> +\t\t\tFROM schema1.counted WHERE type = 'delta';\n> +\t\t\tGET DIAGNOSTICS count = ROW_COUNT;\n> \n> Given that it uses an aggregate, the ROW_COUNT must always be 1, no?\n\nWell, I was surprised to see the count(*) work as an argument for\npg_sleep there at all, frankly (maybe we are sleeping 0.1s more than we\nreally need, per your observation), but the row_count is concerned with\nrows that have type = 'delta', which are deleted by the bgworker. So\nthe test script job is done when the bgworker has run once through its\nloop.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 May 2019 13:00:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-30, Tom Lane wrote:\n>> Hm, I don't understand how this works at all:\n>> \n>> +\t\t\tPERFORM pg_sleep(CASE WHEN count(*) = 0 THEN 0 ELSE 0.1 END)\n>> +\t\t\tFROM schema1.counted WHERE type = 'delta';\n>> +\t\t\tGET DIAGNOSTICS count = ROW_COUNT;\n>> \n>> Given that it uses an aggregate, the ROW_COUNT must always be 1, no?\n\n> Well, I was surprised to see the count(*) work there as an argument for\n> pg_sleep there at all frankly (maybe we are sleeping 0.1s more than we\n> really need, per your observation), but the row_count is concerned with\n> rows that have type = 'delta', which are deleted by the bgworker. So\n> the test script job is done when the bgworker has run once through its\n> loop.\n\nNo, the row_count is going to report the number of rows returned by\nthe aggregate query, which is going to be one row, independently\nof how many rows went into the aggregate.\n\nregression=# do $$\ndeclare c int;\nbegin\nperform count(*) from tenk1; \nget diagnostics c = row_count;\nraise notice 'c = %', c;\nend$$;\npsql: NOTICE: c = 1\nDO\nregression=# do $$\ndeclare c int;\nbegin\nperform count(*) from tenk1 where false;\nget diagnostics c = row_count;\nraise notice 'c = %', c;\nend$$;\npsql: NOTICE: c = 1\nDO\n\nI think you want to capture the actual aggregate output rather than\nrelying on row_count:\n\nregression=# do $$\ndeclare c int;\nbegin\nc := count(*) from tenk1;\nraise notice 'c = %', c;\nend$$;\npsql: NOTICE: c = 10000\nDO\nregression=# do $$\ndeclare c int;\nbegin\nc := count(*) from tenk1 where false;\nraise notice 'c = %', c;\nend$$;\npsql: NOTICE: c = 0\nDO\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 13:46:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "On 2019-May-30, Alvaro Herrera wrote:\n\n> One thing I noticed while writing it, though, is that worker_spi uses\n> the postgres database, instead of the contrib_regression database that\n> was created for it. And we create a schema and a table there. This is\n> going to get some eyebrows raised, I think, so I'll look into fixing\n> that as a bugfix before getting this commit in.\n\nAnother thing I noticed when fixing *this*, in turn, is that if you load\nworker_spi in shared_preload_libraries then the contrib_regression\ndatabase doesn't exist by the point that runs, so those workers fail to\nstart. The dynamic one does start in the configured database.\nI guess we could just ignore the failures and just rely on the dynamic\nworker.\n\nI ended up with these two patches. I'm not sure about pushing\nseparately. It seems pointless to backport the \"fix\" to back branches\nanyway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 31 May 2019 15:17:52 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I ended up with these two patches. I'm not sure about pushing\n> separately. It seems pointless to backport the \"fix\" to back branches\n> anyway.\n\nPatch passes the eyeball test, though I did not try to run it.\nI concur with squashing into one commit and applying to HEAD only.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 01 Jun 2019 13:23:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coverage increase for worker_spi"
},
{
"msg_contents": "On 2019-Jun-01, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I ended up with these two patches. I'm not sure about pushing\n> > separately. It seems pointless to backport the \"fix\" to back branches\n> > anyway.\n> \n> Patch passes the eyeball test, though I did not try to run it.\n> I concur with squashing into one commit and applying to HEAD only.\n\nOkay, pushed. Let's see how it does, now.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 2 Jun 2019 00:35:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage increase for worker_spi"
}
] |
[
{
"msg_contents": "SPI_connect() changes the memory context to a newly-created one, and\nthen SPI_finish() restores it. That seems a bit dangerous because the\ncaller might not be expecting it. Is there a reason it doesn't just\nchange to the new memory context as-needed?\n\nspi.c:161:\n\n /* ... and switch to procedure's context */\n _SPI_current->savedcxt = MemoryContextSwitchTo(_SPI_current-\n>procCxt);\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 29 May 2019 14:20:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Why does SPI_connect change the memory context?"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> SPI_connect() changes the memory context to a newly-created one, and\n> then SPI_finish() restores it. That seems a bit dangerous because the\n> caller might not be expecting it. Is there a reason it doesn't just\n> change to the new memory context as-needed?\n\nBecause the expectation is that palloc inside the SPI procedure will\nallocate in a procedure-specific context. If the caller isn't expecting\nthat, they haven't read the documentation, specifically\n\nhttps://www.postgresql.org/docs/devel/spi-memory.html\n\nwhich says\n\n <para>\n <function>SPI_connect</function> creates a new memory context and\n makes it current. <function>SPI_finish</function> restores the\n previous current memory context and destroys the context created by\n <function>SPI_connect</function>. These actions ensure that\n transient memory allocations made inside your C function are\n reclaimed at C function exit, avoiding memory leakage.\n </para>\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 May 2019 18:25:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does SPI_connect change the memory context?"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that debug_print_rel outputs \"unknown expr\" when the fields\nin baserestrictinfo are typed as varchar.\n\ncreate table tbl_a(id int, info varchar(32));\n\nRELOPTINFO (tbl_a): rows=4 width=86\n baserestrictinfo: unknown expr = pattern\n\nMy approach is to handle the RelabelType case in print_expr. After\nthe patch, I get:\n\nRELOPTINFO (tbl_a): rows=4 width=86\n baserestrictinfo: tbl_a.info = pattern\n\nI wonder if this is a proper way of fixing it?\n\nThank you,\nDonald Dong",
"msg_date": "Wed, 29 May 2019 17:33:59 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Print baserestrictinfo for varchar fields"
},
{
"msg_contents": "Donald Dong <xdong@csumb.edu> writes:\n> I noticed that debug_print_rel outputs \"unknown expr\" when the fields\n> in baserestrictinfo are typed as varchar.\n> ...\n> I wonder if this is a proper way of fixing it?\n\nIt's hard to muster much enthusiasm for extending print_expr(),\nconsidering how incomplete and little-used it is. I'd rather\nspend effort on ripping it out in favor of using the far more\ncomplete, and better-tested, code in ruleutils.c.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 01:37:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Print baserestrictinfo for varchar fields"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 01:37:22AM -0400, Tom Lane wrote:\n> It's hard to muster much enthusiasm for extending print_expr(),\n> considering how incomplete and little-used it is. I'd rather\n> spend effort on ripping it out in favor of using the far more\n> complete, and better-tested, code in ruleutils.c.\n\nIf it is possible to get the same amount of coverage when debugging\nthe planner, count me in. Now it seems to me that we'd still require\nsome work to get the same level of information as for range table\nentry kinds..\n--\nMichael",
"msg_date": "Tue, 4 Jun 2019 10:37:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Print baserestrictinfo for varchar fields"
}
] |
[
{
"msg_contents": "Hi,\n\nAfter I make temp-install on HEAD with a clean build, I fail to start\npsql (tmp_install/usr/local/pgsql/bin/psql) and get an error message:\n\n./psql: symbol lookup error: ./psql: undefined symbol: PQgssEncInUse\n\nHowever, make check and other tests still work. For me, it is fine\nuntil commit b0b39f72b9904bcb80f97b35837ccff1578aa4b8. I wonder if\nthis only occurs to me?\n\nThank you,\nDonald Dong\n\n\n",
"msg_date": "Wed, 29 May 2019 18:21:46 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "undefined symbol: PQgssEncInUse"
},
{
"msg_contents": "Have you used the correct libpq library? If yes, you might want to check\nthe build logs and related files to see where is wrong. In my environment,\nit's ok with both gssapi enabled or disabled.\n\nOn Thu, May 30, 2019 at 9:22 AM Donald Dong <xdong@csumb.edu> wrote:\n\n> Hi,\n>\n> After I make temp-install on HEAD with a clean build, I fail to start\n> psql (tmp_install/usr/local/pgsql/bin/psql) and get an error message:\n>\n> ./psql: symbol lookup error: ./psql: undefined symbol: PQgssEncInUse\n>\n> However, make check and other tests still work. For me, it is fine\n> until commit b0b39f72b9904bcb80f97b35837ccff1578aa4b8. I wonder if\n> this only occurs to me?\n>\n> Thank you,\n> Donald Dong\n>\n>\n>\n",
"msg_date": "Thu, 30 May 2019 11:23:56 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: undefined symbol: PQgssEncInUse"
},
{
"msg_contents": "On May 29, 2019, at 8:23 PM, Paul Guo <pguo@pivotal.io> wrote:\n> Have you used the correct libpq library? If yes, you might want to check the build logs and related files to see where is wrong. In my environment, it's ok with both gssapi enabled or disabled.\n\nThank you! Resetting libpq's path fixes it.\n\nRegards,\nDonald Dong\n\n",
"msg_date": "Wed, 29 May 2019 21:26:27 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: undefined symbol: PQgssEncInUse"
}
] |
[
{
"msg_contents": "Dear all,\n\nI'm working on development of a PL/pgSQL plugin.\nThe smaller part of my code is written in C.\nIt's standard extension code for integration with fmgr (_PG_init ...)\n\nBut the bigger part of the code is written in C++. \nAnd here I need declarations of internal PL/pgSQL structs from plpgsql.h\n\nDirectly including this file in my C++ code results in the following errors:\n\n\n/opt/pgsql-11/include/server/plpgsql.h:1201:45: error: expected ',' or '...' before 'new'\n extern void plpgsql_adddatum(PLpgSQL_datum *new);\n ^\n/opt/pgsql-11/include/server/plpgsql.h:1228:15: error: expected ',' or '...' before 'typeid'\n Oid *typeid, int32 *typmod, Oid *collation);\n ^\n\nObviously this code can't be compiled as C++ because\nC++ keywords are used as identifiers. I modified plpgsql.h to rename them.\nSo, please advise: is renaming the right step in this situation?\n\nAll my modifications are in the attached patch.\nCorrections are also made in the C files (pl_comp.c and pl_exec.c), where the function definitions are\nlocated, but this is not strictly necessary.\n\nGeorge",
"msg_date": "Thu, 30 May 2019 15:14:01 +0000",
"msg_from": "Тарасов Георгий Витальевич <Tarasov-G@gaz-is.ru>",
"msg_from_op": true,
"msg_subject": "compiling PL/pgSQL plugin with C++"
},
{
"msg_contents": "[ redirecting to -hackers ]\n\nТарасов Георгий Витальевич <Tarasov-G@gaz-is.ru> writes:\n> I'm working on development of some PL/pgSQL plugin.\n> The smaller part of my code is written on C.\n> It's a standard extension code for integration with fmgr (_PG_init ...)\n> But bigger part of the code is written on C++. \n> And here I need declarations of internal PL/pgSQL structs from plpgsql.h\n\nSo ... that's supposed to work, because we have a test script that\nverifies that all our headers compile as C++.\n\nOr I thought it was \"all\", anyway. Closer inspection shows that it's\nnot checking src/pl. Nor contrib.\n\nI propose that we change src/tools/pginclude/cpluspluscheck so that\nit searches basically everywhere:\n \n-for f in `find src/include src/interfaces/libpq/libpq-fe.h src/interfaces/libpq/libpq-events.h -name '*.h' -print | \\\n+for f in `find src contrib -name '*.h' -print | \\\n\nHowever, trying to run that, I find that plpython and plperl are both\nseriously in violation of the project convention that headers should\ncompile standalone. It looks like most of their headers rely on an\nassumption that the calling .c file already included the Python or\nPerl headers respectively.\n\nAnybody object to me reshuffling the #include's to make this pass?\nI propose doing that for HEAD only, although we should back-patch\nGeorge's fix (and any other actual C++ problems we find).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 11:54:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling PL/pgSQL plugin with C++"
},
{
"msg_contents": "I wrote:\n> I propose that we change src/tools/pginclude/cpluspluscheck so that\n> it searches basically everywhere:\n \n> -for f in `find src/include src/interfaces/libpq/libpq-fe.h src/interfaces/libpq/libpq-events.h -name '*.h' -print | \\\n> +for f in `find src contrib -name '*.h' -print | \\\n\nAfter further experimentation with that, it seems like we'll have\nto continue to exclude src/bin/pg_dump/*.h from the C++ check.\npg_dump uses \"public\" and \"namespace\" as field names in various\nstructs, both of which are C++ keywords. Changing these names\nwould be quite invasive, and at least in the short run I see no\npayoff for doing so.\n\necpg/preproc/type.h is also using \"new\" as a field name, but it\nlooks like there are few enough references that renaming that\nfield isn't unreasonable.\n\nThere are various other minor issues, but they generally look\nfixable with little consequence.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 17:46:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling PL/pgSQL plugin with C++"
},
{
"msg_contents": "I wrote:\n> There are various other minor issues, but they generally look\n> fixable with little consequence.\n\nI've now pushed your patch and additional minor fixes, and\nwe've expanded cpluspluscheck's coverage so we don't miss\nsuch issues in future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 May 2019 17:39:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiling PL/pgSQL plugin with C++"
}
] |
[
{
"msg_contents": "Hi!\n\nFor those of you that have not read the minutes from the developer meeting\nahead of pgcon (can be found at\nhttps://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like\nto announce here as well that David Rowley has joined the ranks of\nPostgreSQL committers.\n\nCongratulations to David, may the buildfarm be gentle to him, and his first\nrevert far away!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Thu, 30 May 2019 11:39:23 -0400",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "New committer: David Rowley"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-30 11:39:23 -0400, Magnus Hagander wrote:\n> For those of you that have not read the minutes from the developer meeting\n> ahead of pgcon (can be found at\n> https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like\n> to announce here as well that David Rowley has joined the ranks of\n> PostgreSQL committers.\n> \n> Congratulations to David, may the buildfarm be gentle to him, and his first\n> revert far away!\n\nCongrats!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 May 2019 08:43:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On Thu, May 30, 2019 at 6:39 PM Magnus Hagander <magnus@hagander.net> wrote:\n> For those of you that have not read the minutes from the developer meeting\n> ahead of pgcon (can be found at\n> https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like\n> to announce here as well that David Rowley has joined the ranks of\n> PostgreSQL committers.\n>\n> Congratulations to David, may the buildfarm be gentle to him, and his first\n> revert far away!\n\nYee.",
"msg_date": "Thu, 30 May 2019 18:54:18 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On 5/30/19 11:39 AM, Magnus Hagander wrote:\n> \n> For those of you that have not read the minutes from the developer \n> meeting ahead of pgcon (can be found at \n> https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd \n> like to announce here as well that David Rowley has joined the ranks of \n> PostgreSQL committers.\n> \n> Congratulations to David, may the buildfarm be gentle to him, and his \n> first revert far away!\n\nCongratulations! Very well deserved!\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 30 May 2019 12:45:12 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On Thu, May 30, 2019 at 11:39:23AM -0400, Magnus Hagander wrote:\n> Hi!\n> \n> For those of you that have not read the minutes from the developer meeting\n> ahead of pgcon (can be found at\n> https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like\n> to announce here as well that David Rowley has joined the ranks of\n> PostgreSQL committers.\n> \n> Congratulations to David, may the buildfarm be gentle to him, and his first\n> revert far away!\n\nKudos, David!\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Thu, 30 May 2019 19:24:31 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On Thu, May 30, 2019 at 9:09 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> Hi!\n>\n> For those of you that have not read the minutes from the developer meeting ahead of pgcon (can be found at https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like to announce here as well that David Rowley has joined the ranks of PostgreSQL committers.\n>\n> Congratulations to David, may the buildfarm be gentle to him, and his first revert far away!\n>\n\nCongratulation David!\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 May 2019 23:40:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On 05/30/2019 10:39 am, Magnus Hagander wrote:\n\n> Hi! \n> \n> For those of you that have not read the minutes from the developer meeting ahead of pgcon (can be found at https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like to announce here as well that David Rowley has joined the ranks of PostgreSQL committers. \n> \n> Congratulations to David, may the buildfarm be gentle to him, and his first revert far away!\n\nCongrats! \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106",
"msg_date": "Thu, 30 May 2019 13:39:30 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On 5/30/19 11:43 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-30 11:39:23 -0400, Magnus Hagander wrote:\n>> For those of you that have not read the minutes from the developer meeting\n>> ahead of pgcon (can be found at\n>> https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like\n>> to announce here as well that David Rowley has joined the ranks of\n>> PostgreSQL committers.\n>>\n>> Congratulations to David, may the buildfarm be gentle to him, and his first\n>> revert far away!\n> \n> Congrats!\n\n+1\n\nCongratulations David!\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Thu, 30 May 2019 16:26:37 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On Thu, May 30, 2019 at 11:39:23AM -0400, Magnus Hagander wrote:\n> Congratulations to David, may the buildfarm be gentle to him, and his first\n> revert far away!\n\nCongrats!\n--\nMichael",
"msg_date": "Thu, 30 May 2019 17:55:47 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On 2019/05/31 0:39, Magnus Hagander wrote:\n> Hi!\n> \n> For those of you that have not read the minutes from the developer meeting\n> ahead of pgcon (can be found at\n> https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like\n> to announce here as well that David Rowley has joined the ranks of\n> PostgreSQL committers.\n> \n> Congratulations to David, may the buildfarm be gentle to him, and his first\n> revert far away!\n\nVery well deserved, congratulations!\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 31 May 2019 09:26:32 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On Thu, May 30, 2019 at 6:39 PM Magnus Hagander <magnus@hagander.net> wrote:\n> For those of you that have not read the minutes from the developer meeting ahead of pgcon (can be found at https://wiki.postgresql.org/wiki/PgCon_2019_Developer_Meeting), we'd like to announce here as well that David Rowley has joined the ranks of PostgreSQL committers.\n>\n> Congratulations to David, may the buildfarm be gentle to him, and his first revert far away!\n\n+1\n\nCongratulations to David! Very much deserved!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 31 May 2019 07:01:15 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On 5/30/19 11:39 AM, Magnus Hagander wrote:\n> Congratulations to David, may the buildfarm be gentle to him, and his first\n> revert far away!\n> \n\nCongrats !\n\nBest regards,\n Jesper\n\n\n\n",
"msg_date": "Fri, 31 May 2019 08:07:18 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
},
{
"msg_contents": "On Thu, 30 May 2019 at 11:39, Magnus Hagander <magnus@hagander.net> wrote:\n> Congratulations to David, may the buildfarm be gentle to him, and his first revert far away!\n\nThank you, all. I will do my best not to anger the build gods and\nturn the farm red ;-)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 31 May 2019 08:14:44 -0400",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: New committer: David Rowley"
}
] |
[
{
"msg_contents": "Hi,\n\nI, Devansh Gupta, have just completed my sophomore year in B.Tech. in\nComputer Science and Engineering from International Institute of\nInformation Technology Hyderabad, India, and am planning to contribute to\nthe documentation of the PostgreSQL project.\n\nI have already used NumPy for many projects and have gone through its\ndocumentation to implement the same. I am also well versed with the\nlanguages used for developing the project and also have experience in\ndocumentation as part of different assignments and internships.\n\nSince GSoD is relatively new, I need guidance on where to start.\nIs there any task that I can perform to get to know the project better as\nwell as to develop and showcase the necessary skills?\n\nThanks and regards",
"msg_date": "Thu, 30 May 2019 23:09:18 +0530",
"msg_from": "Devansh Gupta <devansh.gupta@students.iiit.ac.in>",
"msg_from_op": true,
"msg_subject": "Applicant for Google Season of Documentation"
}
] |
[
{
"msg_contents": "I just enabled --enable-llvm on the coverage reporting machine, which\nmade src/backend/jit/jit.c go from 60/71 % (line/function wise) to 78/85 % ...\nand src/backend/jit/llvm from not appearing at all in the report to\n78/94 %. That's a good improvement.\n\nIf there are other obvious improvements to be had, please let me know.\n(We have PG_TEST_EXTRA=\"ssl ldap\" currently, do we have any more extra\ntests now?)\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Thu, 30 May 2019 13:52:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "coverage additions"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I just enabled --enable-llvm on the coverage reporting machine, which\n> made src/backend/jit/jit.c go from 60/71 % (line/function wise) to 78/85 % ...\n> and src/backend/jit/llvm from not appearing at all in the report to\n> 78/94 %. That's a good improvement.\n\n> If there are other obvious improvements to be had, please let me know.\n\nI was going to suggest that adding some or all of\n\n-DCOPY_PARSE_PLAN_TREES\n-DWRITE_READ_PARSE_PLAN_TREES\n-DRAW_EXPRESSION_COVERAGE_TEST\n\nto your CPPFLAGS might improve the reported coverage in backend/nodes/,\nand perhaps other places.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 14:05:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On 2019-May-30, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I just enabled --enable-llvm on the coverage reporting machine, which\n> > made src/backend/jit/jit.c go from 60/71 % (line/function wise) to 78/85 % ...\n> > and src/backend/jit/llvm from not appearing at all in the report to\n> > 78/94 %. That's a good improvement.\n> \n> > If there are other obvious improvements to be had, please let me know.\n> \n> I was going to suggest that adding some or all of\n> \n> -DCOPY_PARSE_PLAN_TREES\n> -DWRITE_READ_PARSE_PLAN_TREES\n> -DRAW_EXPRESSION_COVERAGE_TEST\n> \n> to your CPPFLAGS might improve the reported coverage in backend/nodes/,\n> and perhaps other places.\n\nI did that, and it certainly increased backend/nodes numbers\nconsiderably. Thanks.\n\n(extensible.c remains at 0% though, as does its companion nodeCustom.c).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 May 2019 15:28:05 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "Apparently, for ecpg you have to do \"make checktcp\" in order for some of\nthe tests to run, and \"make check-world\" doesn't do that. Not sure\nwhat's a good fix for this; do we want to add \"make -C\nsrc/interfaces/ecpg/test checktcp\" to what \"make check-world\" does,\nor do we rather want to add checktcp as a dependency of \"make check\" in\nsrc/interfaces/ecpg?\n\nOr do we just not want this test to be run by default, and thus I should\nadd \"make -C src/interfaces/ecpg/test checktcp\" to coverage.pg.org's\nshell script? Maybe all we need is a way to have it run using\nthe PG_TEST_EXTRA thingy, but I'm not sure how that works ...?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 May 2019 16:23:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Apparently, for ecpg you have to do \"make checktcp\" in order for some of\n> the tests to run, and \"make check-world\" doesn't do that. Not sure\n> what's a good fix for this; do we want to add \"make -C\n> src/interfaces/ecpg/test checktcp\" to what \"make check-world\" does,\n> or do we rather want to add checktcp as a dependency of \"make check\" in\n> src/interfaces/ecpg?\n\n> Or do we just not want this test to be run by default, and thus I should\n> add \"make -C src/interfaces/ecpg/test checktcp\" to coverage.pg.org's\n> shell script?\n\nI believe it's intentionally not run by default because it opens up\nan externally-accessible server port.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 17:54:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On Thu, May 30, 2019 at 01:52:20PM -0400, Alvaro Herrera wrote:\n> If there are other obvious improvements to be had, please let me know.\n> (We have PG_TEST_EXTRA=\"ssl ldap\" currently, do we have any more extra\n> tests now?)\n\nYou can add kerberos to this list, to give:\nPG_TEST_EXTRA='ssl ldap kerberos'\n--\nMichael",
"msg_date": "Thu, 30 May 2019 17:54:36 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On 2019-May-30, Michael Paquier wrote:\n\n> On Thu, May 30, 2019 at 01:52:20PM -0400, Alvaro Herrera wrote:\n> > If there are other obvious improvements to be had, please let me know.\n> > (We have PG_TEST_EXTRA=\"ssl ldap\" currently, do we have any more extra\n> > tests now?)\n> \n> You can add kerberos to this list, to give:\n> PG_TEST_EXTRA='ssl ldap kerberos'\n\nAh, now I remember that I tried this before, but it requires some extra\npackages installed in the machine I think, and those create running\nservices. Did you note that src/backend/libpq does not even list the\ngssapi file?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Jun 2019 00:55:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On Thu, 2019-05-30 at 17:54 -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Apparently, for ecpg you have to do \"make checktcp\" in order for\n> > some of\n> > the tests to run, and \"make check-world\" doesn't do that. Not sure\n> > what's a good fix for this; do we want to add \"make -C\n> > src/interfaces/ecpg/test checktcp\" to what \"make check-world\" does,\n> > or do we rather what to add checktcp as a dependency of \"make\n> > check\" in\n> > src/interfaces/ecpg?\n> > Or do we just not want this test to be run by default, and thus I\n> > should\n> > add \"make -C src/interfaces/ecpg/test checktcp\" to\n> > coverage.pg.org's\n> > shell script?\n> \n> I believe it's intentionally not run by default because it opens up\n> an externally-accessible server port.\n\nCorrect, iirc.\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Sun, 02 Jun 2019 00:37:08 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On 2019-Jun-02, Michael Meskes wrote:\n\n> On Thu, 2019-05-30 at 17:54 -0400, Tom Lane wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > > Or do we just not want this test to be run by default, and thus I\n> > > should add \"make -C src/interfaces/ecpg/test checktcp\" to\n> > > coverage.pg.org's shell script?\n> > \n> > I believe it's intentionally not run by default because it opens up\n> > an externally-accessible server port.\n> \n> Correct, iirc.\n\nOkay ... I added a \"make -C src/interfaces/ecpg/test checktcp\". Now\nfunction-wise ecpg seems reasonable almost everywhere except compatlib\n(though line-wise things are not so great).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 1 Jun 2019 23:07:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On Sat, Jun 01, 2019 at 12:55:47AM -0400, Alvaro Herrera wrote:\n> Ah, now I remember that I tried this before, but it requires some extra\n> packages installed in the machine I think, and those create running\n> services. Did you note that src/backend/libpq does not even list the\n> gssapi file?\n\nDo you mean the header file be-gssapi-common.h? It is stored in\nsrc/backend/libpq/ which is obviously incorrect. I think that it\nshould be moved to src/include/libpq/be-gssapi-common.h. Its\nidentification marker even says that. Perhaps that's because of MSVC?\nStephen?\n--\nMichael",
"msg_date": "Tue, 4 Jun 2019 10:46:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On 2019-Jun-04, Michael Paquier wrote:\n\n> On Sat, Jun 01, 2019 at 12:55:47AM -0400, Alvaro Herrera wrote:\n> > Ah, now I remember that I tried this before, but it requires some extra\n> > packages installed in the machine I think, and those create running\n> > services. Did you note that src/backend/libpq does not even list the\n> > gssapi file?\n> \n> Do you mean the header file be-gssapi-common.h?\n\nActually, I meant be-gssapi-common.c, but I suppose having the file\nappear at all would be dependent on whether the GSSAPI stuff is compiled\nin, which seems to require yet another configure switch that we don't\nhave in the coverage machine.\n\nBut yeah, I think be-gssapi-common.h being in src/backend/libpq is against\nour established practice and we should put it in src/include/libpq.\n\nWhich in turn makes me think that perhaps src/include/libpq/libpq.h\nneeds some splitting or something, because the be-openssl-common.c file\ndoes not seem to have a corresponding header ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Jun 2019 16:07:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 04:07:17PM -0400, Alvaro Herrera wrote:\n> On 2019-Jun-04, Michael Paquier wrote:\n>> On Sat, Jun 01, 2019 at 12:55:47AM -0400, Alvaro Herrera wrote:\n>>> Ah, now I remember that I tried this before, but it requires some extra\n>>> packages installed in the machine I think, and those create running\n>>> services. Did you note that src/backend/libpq does not even list the\n>>> gssapi file?\n>> \n>> Do you mean the header file be-gssapi-common.h?\n> \n> Actually, I meant be-gssapi-common.c, but I suppose having the file\n> appear at all would be dependent on whether the GSSAPI stuff is compiled\n> in, which seems to require yet another configure switch that we don't\n> have in the coverage machine.\n\nNot sure I still follow.. In src/backend/libpq we have\nbe-gssapi-common.c and be-secure-gssapi.c, both getting added only if \nwith_gssapi is enabled.\n\n> Which in turn makes me think that perhaps src/include/libpq/libpq.h\n> needs some splitting or something, because the be-openssl-common.c file\n> does not seem to have a corresponding header ...\n\nYeah, it seems that there could be ways to split that in a smarter\nway.\n--\nMichael",
"msg_date": "Thu, 6 Jun 2019 18:14:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 06:14:45PM +0900, Michael Paquier wrote:\n> Not sure I still follow.. In src/backend/libpq we have\n> be-gssapi-common.c and be-gssapi-common.c, both getting added only if \n> with_gssapi is enabled.\n\nI am going to spawn a new thread with a patch for the header file. I\nthink that we had better fix that before v12 ships.\n--\nMichael",
"msg_date": "Fri, 7 Jun 2019 13:23:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: coverage additions"
}
] |
[
{
"msg_contents": "Using Apple's clang as c++ compiler:\n\nIn file included from /tmp/cpluspluscheck.KejiIw/test.cpp:3:\n./src/include/access/tableam.h:144:16: error: typedef redefinition with different types ('void (*)(Relation, HeapTuple, Datum *, bool *, bool, void *)' (aka 'void (*)(RelationData *, HeapTupleData *, unsigned long *, bool *, bool, void *)') vs 'IndexBuildCallback')\ntypedef void (*IndexBuildCallback) (Relation index,\n ^\n./src/include/access/tableam.h:36:8: note: previous definition is here\nstruct IndexBuildCallback;\n ^\n\n(there are some cascading errors, but this is the important one)\n\nKinda looks like you can't get away with using \"struct\" on a forward\ndeclaration of something that is not actually a struct type.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 May 2019 14:01:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "tableam.h fails cpluspluscheck"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-30 14:01:00 -0400, Tom Lane wrote:\n> Using Apple's clang as c++ compiler:\n> \n> In file included from /tmp/cpluspluscheck.KejiIw/test.cpp:3:\n> ./src/include/access/tableam.h:144:16: error: typedef redefinition with different types ('void (*)(Relation, HeapTuple, Datum *, bool *, bool, void *)' (aka 'void (*)(RelationData *, HeapTupleData *, unsigned long *, bool *, bool, void *)') vs 'IndexBuildCallback')\n> typedef void (*IndexBuildCallback) (Relation index,\n> ^\n> ./src/include/access/tableam.h:36:8: note: previous definition is here\n> struct IndexBuildCallback;\n> ^\n> \n> (there are some cascading errors, but this is the important one)\n> \n> Kinda looks like you can't get away with using \"struct\" on a forward\n> declaration of something that is not actually a struct type.\n\nUgh. Odd that only C++ compilers complain. I just removed the typedef,\nit's not needed anymore (it used to be necessary before moving\nIndexBuildCallback's definition to tableam.h - but was wrong then too,\njust cpluspluscheck didn't notice).\n\nPushed the obvious fix.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 May 2019 13:47:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam.h fails cpluspluscheck"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-30 14:01:00 -0400, Tom Lane wrote:\n>> Kinda looks like you can't get away with using \"struct\" on a forward\n>> declaration of something that is not actually a struct type.\n\n> Ugh. Odd that only C++ compilers complain. I just removed the typedef,\n> it's not needed anymore (it used to be neccessary before moving\n> IndexBuildCallback's definition to tableam.h - but was wrong then too,\n> just cpluspluscheck didn't notice).\n\nCool, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 May 2019 09:57:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: tableam.h fails cpluspluscheck"
}
] |
[
{
"msg_contents": "Hi,\n\nright now cpluspluscheck doesn't work with vpath builds. That's pretty\nannoying, because it does require cloning the git tree into a separate\ndirectory + doing configure there just to run cpluspluscheck.\n\nAttached is a small patch allowing cpluspluscheck to run from different\ndirectories. It needs the src and build directories for that,\nunsurprisingly.\n\nAs that makes it more complicated to invoke, I added a makefile target\n(in the top level) for it.\n\nSeems we could round the edges a good bit further than what's done in\nthe attached (argument checking, for example). But I think this would\nalready be an improvement?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 May 2019 15:02:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "cpluspluscheck vs vpath"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-30 15:02:44 -0700, Andres Freund wrote:\n> right now cpluspluscheck doesn't work with vpath builds. That's pretty\n> annoying, because it does require cloning the git tree into a separate\n> directory + doing configure there just to run cpluspluscheck.\n> \n> Attached is a small patch allowing cpluspluscheck to run from different\n> directories. It needs the src and build directories for that,\n> unsurprisingly.\n> \n> As that makes it more complicated to invoke, I added a makefile target\n> (in the top level) for it.\n> \n> Seems we could round the edges a good bit further than what's done in\n> the attached (argument checking, for example, but also using the C++\n> compiler from configure). But I think this would already be an\n> improvement?\n\nUgh, sent the previous email too early.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 30 May 2019 15:04:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs vpath"
},
{
"msg_contents": "On 2019-May-30, Andres Freund wrote:\n\n> On 2019-05-30 15:02:44 -0700, Andres Freund wrote:\n>\n> > Seems we could round the edges a good bit further than what's done in\n> > the attached (argument checking, for example, but also using the C++\n> > compiler from configure). But I think this would already be an\n> > improvement?\n\n+1 I've stumbled upon this too.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 May 2019 18:08:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck vs vpath"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Attached is a small patch allowing cpluspluscheck to run from different\n> directories. It needs the src and build directories for that,\n> unsurprisingly.\n\nNo objection to changing this, but you could reduce the surprise\nfactor for existing workflows with a couple of defaults for the\narguments --- allow srcdir to default to \".\" and builddir to default\nto the same as srcdir.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 May 2019 09:56:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cpluspluscheck vs vpath"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-31 09:56:45 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Attached is a small patch allowing cpluspluscheck to run from different\n> > directories. I needs the src and build directories for that,\n> > unsurprisingly.\n> \n> No objection to changing this, but you could reduce the surprise\n> factor for existing workflows with a couple of defaults for the\n> arguments --- allow srcdir to default to \".\" and builddir to default\n> to the same as srcdir.\n\nPushed, with that modification.\n\nWould be kinda nice to do the check in parallel...\n\n- Andres\n\n\n",
"msg_date": "Fri, 31 May 2019 13:05:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: cpluspluscheck vs vpath"
}
] |
[
{
"msg_contents": "Hi,\n\nI was wondering why there is not a type Range of time without time zone, I\nthink it may be useful for someone, Is good if i do PR.\n\nSorry if I've worte in the wrong place\n\nHi,I was wondering why there is not a type Range of time without time zone, I think it may be useful for someone, Is good if i do PR.Sorry if I've worte in the wrong place",
"msg_date": "Fri, 31 May 2019 08:35:31 +0200",
"msg_from": "Donald Shtjefni <dnld.sht@gmail.com>",
"msg_from_op": true,
"msg_subject": "Time range"
},
{
"msg_contents": "On Fri, May 31, 2019 at 08:35:31AM +0200, Donald Shtjefni wrote:\n>Hi,\n>\n>I was wondering why there is not a type Range of time without time zone, I\n>think it may be useful for someone, Is good if i do PR.\n>\n>Sorry if I've worte in the wrong place\n\nDoesn't tsrange already do that? That's a timestamp without timezone range\ntype.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 31 May 2019 16:45:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Time range"
},
{
"msg_contents": "\n\nDonald Shtjefni schrieb am 31.05.2019 um 13:35:\n> I was wondering why there is not a type Range of time without time zone, I think it may be useful for someone, Is good if i do PR.\n\nyou can easily create one: \n\n create type timerange as range (subtype = time);\n\nThomas\n\n \n\n\n",
"msg_date": "Fri, 31 May 2019 22:39:50 +0700",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Time range"
},
{
"msg_contents": "timetzrange is also missing. In my database I have:\n\nCREATE TYPE timerange AS RANGE (SUBTYPE = time);\nCOMMENT ON TYPE timerange IS 'range of times without time zone';\nGRANT USAGE ON TYPE timerange TO PUBLIC;\n\nCREATE TYPE timetzrange AS RANGE (SUBTYPE = timetz);\nCOMMENT ON TYPE timetzrange IS 'range of times with time zone';\nGRANT USAGE ON TYPE timetzrange TO PUBLIC;\n\nThe intent is that these range types are the same as if they were built in.\nI don't believe I have ever used timetzrange but I did it for completeness.\n\nGiven that other built-in types have built-in range types, I think that the\ntime and timetz types should also have built-in range types.\n\nOn Fri, 31 May 2019 at 11:40, Thomas Kellerer <shammat@gmx.net> wrote:\n\n>\n>\n> Donald Shtjefni schrieb am 31.05.2019 um 13:35:\n> > I was wondering why there is not a type Range of time without time zone,\n> I think it may be useful for someone, Is good if i do PR.\n>\n> you can easily create one:\n>\n> create type timerange as range (subtype = time);\n>\n> Thomas\n>\n>\n>\n>\n>\n\ntimetzrange is also missing. In my database I have:CREATE TYPE timerange AS RANGE (SUBTYPE = time);COMMENT ON TYPE timerange IS 'range of times without time zone';GRANT USAGE ON TYPE timerange TO PUBLIC;CREATE TYPE timetzrange AS RANGE (SUBTYPE = timetz);COMMENT ON TYPE timetzrange IS 'range of times with time zone';GRANT USAGE ON TYPE timetzrange TO PUBLIC;The intent is that these range types are the same as if they were built in. 
I don't believe I have ever used timetzrange but I did it for completeness.Given that other built-in types have built-in range types, I think that the time and timetz types should also have built-in range types.On Fri, 31 May 2019 at 11:40, Thomas Kellerer <shammat@gmx.net> wrote:\n\nDonald Shtjefni schrieb am 31.05.2019 um 13:35:\n> I was wondering why there is not a type Range of time without time zone, I think it may be useful for someone, Is good if i do PR.\n\nyou can easily create one: \n\n create type timerange as range (subtype = time);\n\nThomas",
"msg_date": "Fri, 31 May 2019 14:09:04 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time range"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> Given that other built-in types have built-in range types, I think that the\n> time and timetz types should also have built-in range types.\n\nThere's only a very small number of built-in range types:\n\npostgres=# select typname from pg_type where typtype = 'r' order by 1;\n typname \n-----------\n daterange\n int4range\n int8range\n numrange\n tsrange\n tstzrange\n(6 rows)\n\nI don't think there's any appetite for creating built-in range types\nacross-the-board. The time and timetz types are pretty little used\n(with good reason), so leaving them out of this list seems fine to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 May 2019 15:00:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Time range"
}
] |
[
{
"msg_contents": "Please see the diff attached.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Fri, 31 May 2019 11:02:37 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Comment typo in tableam.h"
},
{
"msg_contents": "On Fri, 31 May 2019 at 05:02, Antonin Houska <ah@cybertec.at> wrote:\n> Please see the diff attached.\n\nPushed. Thanks.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 31 May 2019 13:34:17 -0400",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "There were few more minor typos I had collected for table am, passing them\nalong here.\n\nSome of the required callback functions are missing Assert checking (minor\nthing), adding them in separate patch.",
"msg_date": "Mon, 3 Jun 2019 17:24:15 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "Hi,\n\nThanks for these!\n\nOn 2019-06-03 17:24:15 -0700, Ashwin Agrawal wrote:\n> \t/*\n> \t * Estimate the size of shared memory needed for a parallel scan of this\n> -\t * relation. The snapshot does not need to be accounted for.\n> +\t * relation.\n> \t */\n> \tSize\t\t(*parallelscan_estimate) (Relation rel);\n\nThat's not a typo?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Jun 2019 17:26:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> Thanks for these!\n>\n> On 2019-06-03 17:24:15 -0700, Ashwin Agrawal wrote:\n> > /*\n> > * Estimate the size of shared memory needed for a parallel scan\n> of this\n> > - * relation. The snapshot does not need to be accounted for.\n> > + * relation.\n> > */\n> > Size (*parallelscan_estimate) (Relation rel);\n>\n> That's not a typo?\n>\n\nThe snapshot is not passed as argument to that function hence seems weird\nto refer to snapshot in the comment, as anyways callback function can't\naccount for it. Seems stale piece of comment and hence that piece of text\nshould be removed. I should have refereed to changes as general comment\nfixes instead of explicit typo fixes :-)\n\nOn Mon, Jun 3, 2019 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nThanks for these!\n\nOn 2019-06-03 17:24:15 -0700, Ashwin Agrawal wrote:\n> /*\n> * Estimate the size of shared memory needed for a parallel scan of this\n> - * relation. The snapshot does not need to be accounted for.\n> + * relation.\n> */\n> Size (*parallelscan_estimate) (Relation rel);\n\nThat's not a typo?The snapshot is not passed as argument to that function hence seems weird to refer to snapshot in the comment, as anyways callback function can't account for it. Seems stale piece of comment and hence that piece of text should be removed. I should have refereed to changes as general comment fixes instead of explicit typo fixes :-)",
"msg_date": "Mon, 3 Jun 2019 18:21:56 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-03 18:21:56 -0700, Ashwin Agrawal wrote:\n> On Mon, Jun 3, 2019 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > Thanks for these!\n> >\n> > On 2019-06-03 17:24:15 -0700, Ashwin Agrawal wrote:\n> > > /*\n> > > * Estimate the size of shared memory needed for a parallel scan\n> > of this\n> > > - * relation. The snapshot does not need to be accounted for.\n> > > + * relation.\n> > > */\n> > > Size (*parallelscan_estimate) (Relation rel);\n> >\n> > That's not a typo?\n> >\n> \n> The snapshot is not passed as argument to that function hence seems weird\n> to refer to snapshot in the comment, as anyways callback function can't\n> account for it.\n\nIt's part of the parallel scan struct, and it used to be accounted for\nby pre tableam function...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Jun 2019 18:24:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 6:24 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-06-03 18:21:56 -0700, Ashwin Agrawal wrote:\n> > On Mon, Jun 3, 2019 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > Hi,\n> > >\n> > > Thanks for these!\n> > >\n> > > On 2019-06-03 17:24:15 -0700, Ashwin Agrawal wrote:\n> > > > /*\n> > > > * Estimate the size of shared memory needed for a parallel\n> scan\n> > > of this\n> > > > - * relation. The snapshot does not need to be accounted for.\n> > > > + * relation.\n> > > > */\n> > > > Size (*parallelscan_estimate) (Relation rel);\n> > >\n> > > That's not a typo?\n> > >\n> >\n> > The snapshot is not passed as argument to that function hence seems weird\n> > to refer to snapshot in the comment, as anyways callback function can't\n> > account for it.\n>\n> It's part of the parallel scan struct, and it used to be accounted for\n> by pre tableam function...\n>\n\nReads like the comment written from past context then, and doesn't have\nmuch value now. Its confusing than helping, to state not to account for\nsnapshot and not any other field.\ntable_parallelscan_estimate() has snapshot argument and it accounts for it,\nbut callback doesn't. I am not sure how a callback would explicitly use\nthat comment and avoid accounting for snapshot if its using generic\nParallelTableScanDescData. But if you feel is helpful, please feel free to\nkeep that text.\n\nOn Mon, Jun 3, 2019 at 6:24 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-06-03 18:21:56 -0700, Ashwin Agrawal wrote:\n> On Mon, Jun 3, 2019 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > Thanks for these!\n> >\n> > On 2019-06-03 17:24:15 -0700, Ashwin Agrawal wrote:\n> > > /*\n> > > * Estimate the size of shared memory needed for a parallel scan\n> > of this\n> > > - * relation. 
The snapshot does not need to be accounted for.\n> > > + * relation.\n> > > */\n> > > Size (*parallelscan_estimate) (Relation rel);\n> >\n> > That's not a typo?\n> >\n> \n> The snapshot is not passed as argument to that function hence seems weird\n> to refer to snapshot in the comment, as anyways callback function can't\n> account for it.\n\nIt's part of the parallel scan struct, and it used to be accounted for\nby pre tableam function...Reads like the comment written from past context then, and doesn't have much value now. Its confusing than helping, to state not to account for snapshot and not any other field.table_parallelscan_estimate() has snapshot argument and it accounts for it, but callback doesn't. I am not sure how a callback would explicitly use that comment and avoid accounting for snapshot if its using generic ParallelTableScanDescData. But if you feel is helpful, please feel free to keep that text.",
"msg_date": "Mon, 3 Jun 2019 18:41:35 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 5:24 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n> There were few more minor typos I had collected for table am, passing them\n> along here.\n>\n> Some of the required callback functions are missing Assert checking (minor\n> thing), adding them in separate patch.\n>\n\nCurious to know if need to register such small typo fixing and assertion\nadding patchs to commit-fest as well ?\n\nOn Mon, Jun 3, 2019 at 5:24 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:There were few more minor typos I had collected for table am, passing them along here.Some of the required callback functions are missing Assert checking (minor thing), adding them in separate patch.Curious to know if need to register such small typo fixing and assertion adding patchs to commit-fest as well ?",
"msg_date": "Mon, 24 Jun 2019 10:55:43 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Mon, Jun 24, 2019 at 11:26 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>\n> On Mon, Jun 3, 2019 at 5:24 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>>\n>> There were few more minor typos I had collected for table am, passing them along here.\n>>\n>> Some of the required callback functions are missing Assert checking (minor thing), adding them in separate patch.\n>\n>\n> Curious to know if need to register such small typo fixing and assertion adding patchs to commit-fest as well ?\n>\n\nNormally, such things are handled out of CF, but to avoid forgetting,\nwe can register it. However, this particular item suits more to 'Open\nItems'[1]. You can remove the objectionable part of the comment,\nother things in your patch look good to me. If nobody else picks it\nup, I will take care of it.\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 29 Jun 2019 02:17:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jun 24, 2019 at 11:26 PM Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote:\n> >\n> > On Mon, Jun 3, 2019 at 5:24 PM Ashwin Agrawal <aagrawal@pivotal.io>\n> wrote:\n> >>\n> >> There were few more minor typos I had collected for table am, passing\n> them along here.\n> >>\n> >> Some of the required callback functions are missing Assert checking\n> (minor thing), adding them in separate patch.\n> >\n> >\n> > Curious to know if need to register such small typo fixing and assertion\n> adding patchs to commit-fest as well ?\n> >\n>\n> Normally, such things are handled out of CF, but to avoid forgetting,\n> we can register it. However, this particular item suits more to 'Open\n> Items'[1]. You can remove the objectionable part of the comment,\n> other things in your patch look good to me. If nobody else picks it\n> up, I will take care of it.\n>\n\nThank you, I thought Committer would remove the objectionable part of\ncomment change and commit the patch if seems fine. I don't mind changing,\njust feel adds extra back and forth cycle.\n\nPlease find attached v2 of patch 1 without objectionable comment change. v1\nof patch 2 attaching here just for convenience, no modifications made to it.",
"msg_date": "Mon, 1 Jul 2019 12:30:04 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 1:00 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Please find attached v2 of patch 1 without objectionable comment change. v1 of patch 2 attaching here just for convenience, no modifications made to it.\n>\n\n0001*\n * See table_index_fetch_tuple's comment about what the difference between\n- * these functions is. This function is the correct to use outside of\n- * index entry->table tuple lookups.\n+ * these functions is. This function is correct to use outside of index\n+ * entry->table tuple lookups.\n\nHow about if we write the last line of comment as \"It is correct to\nuse this function outside of index entry->table tuple lookups.\"? I am\nnot an expert on this matter, but I find the way I am suggesting\neasier to read.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 6 Jul 2019 12:35:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Sat, Jul 6, 2019 at 12:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Jul 2, 2019 at 1:00 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> > Please find attached v2 of patch 1 without objectionable comment change.\n> v1 of patch 2 attaching here just for convenience, no modifications made to\n> it.\n> >\n>\n> 0001*\n> * See table_index_fetch_tuple's comment about what the difference between\n> - * these functions is. This function is the correct to use outside of\n> - * index entry->table tuple lookups.\n> + * these functions is. This function is correct to use outside of index\n> + * entry->table tuple lookups.\n>\n> How about if we write the last line of comment as \"It is correct to\n> use this function outside of index entry->table tuple lookups.\"? I am\n> not an expert on this matter, but I find the way I am suggesting\n> easier to read.\n>\n\nI am fine with the way you have suggested.\n\nOn Sat, Jul 6, 2019 at 12:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Tue, Jul 2, 2019 at 1:00 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:> Please find attached v2 of patch 1 without objectionable comment change. v1 of patch 2 attaching here just for convenience, no modifications made to it.>\n0001* * See table_index_fetch_tuple's comment about what the difference between- * these functions is. This function is the correct to use outside of- * index entry->table tuple lookups.+ * these functions is. This function is correct to use outside of index+ * entry->table tuple lookups.\nHow about if we write the last line of comment as \"It is correct touse this function outside of index entry->table tuple lookups.\"? I amnot an expert on this matter, but I find the way I am suggestingeasier to read.I am fine with the way you have suggested.",
"msg_date": "Mon, 8 Jul 2019 09:51:34 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 10:21 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>\n>\n> On Sat, Jul 6, 2019 at 12:05 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Jul 2, 2019 at 1:00 AM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>> > Please find attached v2 of patch 1 without objectionable comment change. v1 of patch 2 attaching here just for convenience, no modifications made to it.\n>> >\n>>\n>> 0001*\n>> * See table_index_fetch_tuple's comment about what the difference between\n>> - * these functions is. This function is the correct to use outside of\n>> - * index entry->table tuple lookups.\n>> + * these functions is. This function is correct to use outside of index\n>> + * entry->table tuple lookups.\n>>\n>> How about if we write the last line of comment as \"It is correct to\n>> use this function outside of index entry->table tuple lookups.\"? I am\n>> not an expert on this matter, but I find the way I am suggesting\n>> easier to read.\n>\n>\n> I am fine with the way you have suggested.\n>\n\nPushed. I have already pushed your other patch a few days back. So,\nas per my knowledge, we are done here. Do, let me know if anything\nproposed in this thread is pending?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Jul 2019 17:20:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "More typos in tableam.h along with a few grammar changes.",
"msg_date": "Thu, 11 Jul 2019 20:44:02 -0500",
"msg_from": "Brad DeJong <bpd0018@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-11 20:44:02 -0500, Brad DeJong wrote:\n> More typos in tableam.h along with a few grammar changes.\n\nThanks! Applied.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 19:52:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Comment typo in tableam.h"
}
] |
[
{
"msg_contents": "Speaking with Robert today at pgcon, I happily discovered that REFRESH\nMATERIALIZED VIEW CONCURRENTLY actually only updates rows that have changed\nsince the last refresh, rather than rewriting every row. In my curiosity,\nI went to the docs, and found that this detail is not mentioned anywhere.\n\nThis is a great feature that is being undersold, and it should be made\nclear in the docs.\n\nIn my experience, there can be tons of WAL generated from large\nmaterialized views and the normal REFRESH (without CONCURRENTLY). I had\nassumed the only benefit of CONCURRENTLY was to allow concurrent access to\nthe table. But actually the incremental refresh is a much bigger win for\nus in reducing WAL overhead drastically.\n\nI've not submitted a patch before, and have a few suggestions I'd like\nfeedback on before I write one (for the docs only).\n\n1.\n\nFirst, even this summary looks untrue:\n\nREFRESH MATERIALIZED VIEW — replace the contents of a materialized view.\n\n\"replace\" is not really accurate with the CONCURRENTLY option, because in\nfact it only updates changed rows.\n\nPerhaps instead of \"replace\":\n\n - \"replace or incrementally update the contents of a materialized view\".\n\nAlso, the Description part has the same inaccuracy:\n\n\"completely replaces the contents of a materialized view.....The old\ncontents are discarded.\"\n\nThat is not true with CONCURRENTLY, correct? Only the old contents *which\nhave changed* are discarded.\n\n2.\n\nLastly, I would suggest adding something like the following to the first\nparagraph under CONCURRENTLY:\n\n - With this option, only actual changed rows are updated in the\n materialized view, which can significantly reduce the amount of write churn\n and WAL traffic from a refresh if only a small number of rows will change\n with each refresh. 
It is recommended to have a unique index on the\n materialized view if possible, which will improve the performance of a\n concurrent refresh.\n\nPlease correct me if my understanding of this is not right.\n\n3.\n\nOn a different note, none of the documentation on materialized views notes\nthat they can only be LOGGED. This should be noted, or at least it should\nbe noted that one cannot create an UNLOGGED materialized view in the same\nplace it says that one cannot create a temporary one (under Description in\nCREATE MATERIALIZED VIEW).\n\n\nThanks!\nJeremy Finzel\n\nSpeaking with Robert today at pgcon, I happily discovered that REFRESH MATERIALIZED VIEW CONCURRENTLY actually only updates rows that have changed since the last refresh, rather than rewriting every row. In my curiosity, I went to the docs, and found that this detail is not mentioned anywhere.This is a great feature that is being undersold, and it should be made clear in the docs.In my experience, there can be tons of WAL generated from large materialized views and the normal REFRESH (without CONCURRENTLY). I had assumed the only benefit of CONCURRENTLY was to allow concurrent access to the table. But actually the incremental refresh is a much bigger win for us in reducing WAL overhead drastically.I've not submitted a patch before, and have a few suggestions I'd like feedback on before I write one (for the docs only).1.First, even this summary looks untrue:REFRESH MATERIALIZED VIEW — replace the contents of a materialized view.\"replace\" is not really accurate with the CONCURRENTLY option, because in fact it only updates changed rows.Perhaps instead of \"replace\":\"replace or incrementally update the contents of a materialized view\".Also, the Description part has the same inaccuracy:\"completely replaces the contents of a materialized view.....The old contents are discarded.\"That is not true with CONCURRENTLY, correct? 
Only the old contents *which have changed* are discarded.2.Lastly, I would suggest adding something like the following to the first paragraph under CONCURRENTLY:With this option, only actual changed rows are updated in the materialized view, which can significantly reduce the amount of write churn and WAL traffic from a refresh if only a small number of rows will change with each refresh. It is recommended to have a unique index on the materialized view if possible, which will improve the performance of a concurrent refresh.Please correct me if my understanding of this is not right.3.On a different note, none of the documentation on materialized views notes that they can only be LOGGED. This should be noted, or at least it should be noted that one cannot create an UNLOGGED materialized view in the same place it says that one cannot create a temporary one (under Description in CREATE MATERIALIZED VIEW).Thanks!Jeremy Finzel",
"msg_date": "Fri, 31 May 2019 16:41:14 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Docs for refresh materialized view concurrently"
},
{
"msg_contents": "Jeremy Finzel <finzelj@gmail.com> writes:\n> I've not submitted a patch before, and have a few suggestions I'd like\n> feedback on before I write one (for the docs only).\n\nOK ...\n\n> First, even this summary looks untrue:\n> REFRESH MATERIALIZED VIEW — replace the contents of a materialized view.\n\nAgreed. I'd just make it say \"update the contents...\", personally.\nMore words are not better in command summaries.\n\n> Also, the Description part has the same inaccuracy:\n> \"completely replaces the contents of a materialized view.....The old\n> contents are discarded.\"\n\nYeah, that just wasn't updated :-(\n\n> On a different note, none of the documentation on materialized views notes\n> that they can only be LOGGED. This should be noted,\n\nAlso agreed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 01 Jun 2019 13:30:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Docs for refresh materialized view concurrently"
}
] |
[
{
"msg_contents": "Hi,\n\nWe had a short conversation about this on Friday but I didn't have time\nto think of a constructive suggestion, and now I've had more time to\nthink about it.\n\nRegarding the proposed PG 13 jsonpath extensions (array, map, and\nsequence constructors, lambdas, map/fold/reduce, user-defined\nfunctions), literally all this stuff is in XPath/XQuery 3.1, and\nclearly the SQL committee is imitating XPath/XQuery in the design\nof jsonpath.\n\nTherefore it would not be surprising at all if the committee eventually\nadds those features in jsonpath. At that point, if the syntax matches\nwhat we've added, we are happy, and if not, we have a multi-year,\nmulti-release, standard_conforming_strings-style headache.\n\nSo, a few ideas fall out....\n\nFirst, with Peter being a participant, if there are any rumblings in the\nSQL committee about adding those features, we should know the proposed\nsyntax as soon as we can and try to follow that.\n\nIf such rumblings are entirely absent, we should see what we can do to\nstart some, proposing the syntax we've got.\n\nIn either case, perhaps we should immediately add a way to identify a\njsonpath as being PostgreSQL-extended. Maybe a keyword 'pg' that can\nbe accepted at the start in addition to any lax/strict, so you could\nhave 'pg lax $.map(x => x + 10)'.\n\nIf we initially /require/ 'pg' for the extensions to be recognized, then\nwe can relax the requirement for whichever ones later appear in the spec\nusing the same syntax. If they appear in the spec with a different\nsyntax, then by requiring 'pg' already for our variant, we already have\navoided the standard_conforming_strings kind of multi-release\nreconciliation effort.\n\nIn the near term, there is already one such potential conflict in\n12beta: the like_regex using POSIX REs instead of XQuery ones as the\nspec requires. 
Of course we don't currently have an XQuery regex\nengine, but if we ever have one, we then face a headache if we want to\nmove jsonpath toward using it. (Ties in to conversation [1].)\n\nMaybe we could avoid that by recognizing now an extra P in flags, to\nspecify a POSIX re. Or, as like_regex has a named-parameter-like\nsyntax--like_regex(\"abc\" flag \"i\")--perhaps 'posix' should just be\nan extra keyword in that grammar: like-regex(\"abc\" posix). That would\nbe safe from the committee adding a P flag that means something else.\n\nThe conservative approach would be to simply require the 'posix' keyword\nin all cases now, simply because we don't have the XQuery regex engine.\n\nAlternatively, if there's a way to analyze a regex for the use of any\nconstructs with different meanings in POSIX and XQuery REs (and if\nthat's appreciably easier than writing an XQuery regex engine), then\nthe 'posix' keyword could be required only when it matters. But the\nconservative approach sounds easier, and sufficient. The finer-grained\nanalysis would have to catch not just constructs that are in one RE\nstyle and not the other, but any subtleties in semantics, and I\ncertainly wouldn't trust myself to write that.\n\n-Chap\n\n\n[1]\nhttps://www.postgresql.org/message-id/5CF2754F.7000702%40anastigmatix.net\n\n\n",
"msg_date": "Sat, 01 Jun 2019 10:41:36 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "On 2019-Jun-01, Chapman Flack wrote:\n\n> In either case, perhaps we should immediately add a way to identify a\n> jsonpath as being PostgreSQL-extended. Maybe a keyword 'pg' that can\n> be accepted at the start in addition to any lax/strict, so you could\n> have 'pg lax $.map(x => x + 10)'.\n> \n> If we initially /require/ 'pg' for the extensions to be recognized, then\n> we can relax the requirement for whichever ones later appear in the spec\n> using the same syntax. If they appear in the spec with a different\n> syntax, then by requiring 'pg' already for our variant, we already have\n> avoided the standard_conforming_strings kind of multi-release\n> reconciliation effort.\n\nI agree we should do this (or something similar) now, to avoid future\npain. It seems a similar problem to E'' strings vs. SQL-standard\n''-ones, which was a painful transition. We have an opportunity to do\nbetter this time.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 18 Jun 2019 11:49:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "On Sat, 1 Jun 2019, 16:41 Chapman Flack, <chap@anastigmatix.net> wrote:\n\n> Hi,\n>\n> We had a short conversation about this on Friday but I didn't have time\n> to think of a constructive suggestion, and now I've had more time to\n> think about it.\n>\n> Regarding the proposed PG 13 jsonpath extensions (array, map, and\n> sequence constructors, lambdas, map/fold/reduce, user-defined\n> functions), literally all this stuff is in XPath/XQuery 3.1, and\n> clearly the SQL committee is imitating XPath/XQuery in the design\n> of jsonpath.\n>\n> Therefore it would not be surprising at all if the committee eventually\n> adds those features in jsonpath. At that point, if the syntax matches\n> what we've added, we are happy, and if not, we have a multi-year,\n> multi-release, standard_conforming_strings-style headache.\n>\n> So, a few ideas fall out....\n>\n> First, with Peter being a participant, if there are any rumblings in the\n> SQL committee about adding those features, we should know the proposed\n> syntax as soon as we can and try to follow that.\n>\n\nAFAIK, there is rumour about 'native json data type' and 'dot style syntax'\nfor json, but not about jsonpath.\n\n\n> If such rumblings are entirely absent, we should see what we can do to\n> start some, proposing the syntax we've got.\n>\n> In either case, perhaps we should immediately add a way to identify a\n> jsonpath as being PostgreSQL-extended. Maybe a keyword 'pg' that can\n> be accepted at the start in addition to any lax/strict, so you could\n> have 'pg lax $.map(x => x + 10)'.\n>\n\nThis is exactly what we were thinking about !\n\n>\n> If we initially /require/ 'pg' for the extensions to be recognized, then\n> we can relax the requirement for whichever ones later appear in the spec\n> using the same syntax. 
If they appear in the spec with a different\n> syntax, then by requiring 'pg' already for our variant, we already have\n> avoided the standard_conforming_strings kind of multi-release\n> reconciliation effort.\n>\n> In the near term, there is already one such potential conflict in\n> 12beta: the like_regex using POSIX REs instead of XQuery ones as the\n> spec requires. Of course we don't currently have an XQuery regex\n> engine, but if we ever have one, we then face a headache if we want to\n> move jsonpath toward using it. (Ties in to conversation [1].)\n>\n> Maybe we could avoid that by recognizing now an extra P in flags, to\n> specify a POSIX re. Or, as like_regex has a named-parameter-like\n> syntax--like_regex(\"abc\" flag \"i\")--perhaps 'posix' should just be\n> an extra keyword in that grammar: like-regex(\"abc\" posix). That would\n> be safe from the committee adding a P flag that means something else.\n>\n> The conservative approach would be to simply require the 'posix' keyword\n> in all cases now, simply because we don't have the XQuery regex engine.\n>\n> Alternatively, if there's a way to analyze a regex for the use of any\n> constructs with different meanings in POSIX and XQuery REs (and if\n> that's appreciably easier than writing an XQuery regex engine), then\n> the 'posix' keyword could be required only when it matters. But the\n> conservative approach sounds easier, and sufficient. 
The finer-grained\n> analysis would have to catch not just constructs that are in one RE\n> style and not the other, but any subtleties in semantics, and I\n> certainly wouldn't trust myself to write that.\n>\n\nWe didn't think about regex, I don't know anybody working on xquery.\n\n\n> -Chap\n>\n>\n> [1]\n> https://www.postgresql.org/message-id/5CF2754F.7000702%40anastigmatix.net",
"msg_date": "Tue, 18 Jun 2019 18:51:10 +0200",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "On 6/18/19 12:51 PM, Oleg Bartunov wrote:\n>> have 'pg lax $.map(x => x + 10)'.\n> \n> This is exactly what we were thinking about !\n\nPerfect!\n\n>> specify a POSIX re. Or, as like_regex has a named-parameter-like\n>> syntax--like_regex(\"abc\" flag \"i\")--perhaps 'posix' should just be\n>> an extra keyword in that grammar: like-regex(\"abc\" posix). That would\n>> be safe from the committee adding a P flag that means something else.\n> \n> We didn't think about regex, I don't know anybody working on xquery.\n\nI do. :)\n\nBut is that even the point? It's already noted in [1] that the standard\ncalls for one style of regexps and we're providing another.\n\nIt's relatively uncomplicated now to add some kind of distinguishing\nsyntax for our posix flavor of like_regex. Yes, it would be a change\nbetween beta1 and final release, but that doesn't seem unheard-of.\n\nIn contrast, if such a distinction is not added now, we know that will\nbe a headache for any future effort to more closely conform to the\nstandard. Whether such a future effort seems near-term or far off, it\ndoesn't seem strategic to make current choices that avoidably make it\nharder.\n\nAside: I just looked over the 12 doco to see if the note in [1] is\nin there, and all I see is that 'like_regex' is documented as \"Tests\npattern matching with POSIX regular expressions.\"\n\nIn my opinion, that ought to have a note flagging that as different\nfrom the standard. The user experience is not so good if someone comes\nassuming we conform to the standard, writes code, then has to learn\nwhy it didn't work. The whole doc section [2] about XML is intended\nto spare people from unwelcome discoveries of that sort, but it was\nwritten after the fact. I think it's better to have it from the start.\n\n[1]\nhttps://github.com/obartunov/sqljsondoc/blob/master/jsonpath.md#sqljson-conformance\n\n[2] https://www.postgresql.org/docs/12/xml-limits-conformance.html\n\n\n",
"msg_date": "Tue, 18 Jun 2019 13:26:31 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> But is that even the point? It's already noted in [1] that the standard\n> calls for one style of regexps and we're providing another.\n\n> It's relatively uncomplicated now to add some kind of distinguishing\n> syntax for our posix flavor of like_regex. Yes, it would be a change\n> between beta1 and final release, but that doesn't seem unheard-of.\n\n> In contrast, if such a distinction is not added now, we know that will\n> be a headache for any future effort to more closely conform to the\n> standard. Whether such a future effort seems near-term or far off, it\n> doesn't seem strategic to make current choices that avoidably make it\n> harder.\n\nJust to not leave this thread hanging --- the discussion was picked up\nin this other thread:\n\nhttps://www.postgresql.org/message-id/flat/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH%2B_Rv2rNRqfg%40mail.gmail.com\n\nand I think we've come to the conclusion that the only really awful regex\ncompatibility problem is differing interpretations of the 'x' flag, which\nwe solved temporarily by treating that as unimplemented in jsonpath.\nThere are some other unimplemented features that we can consider adding\nlater, too. (Fortunately, Spencer's engine throws errors for all of\nthose, so adding them won't create new compatibility issues.) And we did\nadd some documentation:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0a97edb12ec44f8d2d8828cbca6dd7639408ac88\n\nThere remains the question of whether we should do something like\nrequiring a \"pg\" prefix to allow access to the other nonstandard\nfeatures we added to jsonpath. I see the point that the SQL committee\nmight well add something pretty similar in future. 
But I'm not too\nconcerned about that, on two grounds: (1) the same argument could be\nraised against *every* non-spec feature we have or ever will have;\n(2) now that Peter's in on SQL committee deliberations, we have a\nchance to push for any future spec changes to not be unnecessarily\nincompatible. So my inclination is to close this open item as\nsufficiently done, once the minor lexer issues raised in the other\nthread are done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 18:35:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "On 09/19/19 18:35, Tom Lane wrote:\n\n> There remains the question of whether we should do something like\n> requiring a \"pg\" prefix to allow access to the other nonstandard\n> features we added to jsonpath. I see the point that the SQL committee\n> might well add something pretty similar in future. But I'm not too\n> concerned about that, on two grounds: (1) the same argument could be\n> raised against *every* non-spec feature we have or ever will have;\n\nThis should not be read as a violent objection, but I do think that\npoint (1) glosses over a, well, possibly salient difference in likelihood:\n\nSure, against *every* non-spec feature we have or ever will have, someone\n/could/ raise a generic \"what if SQL committee might add something pretty\nsimilar in future\".\n\nBut what we have in this case are specific non-spec features (array, map,\nand sequence constructors, lambdas, map/fold/reduce, user-defined\nfunctions) that are flat-out already present in the current version of\nthe language that the SQL committee is clearly modeling jsonpath on.\n\nThat might raise the likelihood of collision in this case above its\nusual, universal cosmic background level.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 19 Sep 2019 18:57:32 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> Sure, against *every* non-spec feature we have or ever will have, someone\n> /could/ raise a generic \"what if SQL committee might add something pretty\n> similar in future\".\n> But what we have in this case are specific non-spec features (array, map,\n> and sequence constructors, lambdas, map/fold/reduce, user-defined\n> functions) that are flat-out already present in the current version of\n> the language that the SQL committee is clearly modeling jsonpath on.\n\nSure. But we also modeled those features on the same language that the\ncommittee is looking at (or at least I sure hope we did). So it's\nreasonable to assume that they would come out at the same spot without\nany prompting. And we can prompt them ;-).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Sep 2019 19:14:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "On 2019-09-20 01:14, Tom Lane wrote:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> Sure, against *every* non-spec feature we have or ever will have, someone\n>> /could/ raise a generic \"what if SQL committee might add something pretty\n>> similar in future\".\n>> But what we have in this case are specific non-spec features (array, map,\n>> and sequence constructors, lambdas, map/fold/reduce, user-defined\n>> functions) that are flat-out already present in the current version of\n>> the language that the SQL committee is clearly modeling jsonpath on.\n> \n> Sure. But we also modeled those features on the same language that the\n> committee is looking at (or at least I sure hope we did). So it's\n> reasonable to assume that they would come out at the same spot without\n> any prompting. And we can prompt them ;-).\n\nAlso, I understand these are features proposed for PG13, not in PG12.\nSo while this is an important discussion, it's not relevant to the PG12\nrelease, right?\n\n(If so, I'm content to close these open items.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Sep 2019 13:07:41 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-09-20 01:14, Tom Lane wrote:\n>> Sure. But we also modeled those features on the same language that the\n>> committee is looking at (or at least I sure hope we did). So it's\n>> reasonable to assume that they would come out at the same spot without\n>> any prompting. And we can prompt them ;-).\n\n> Also, I understand these are features proposed for PG13, not in PG12.\n> So while this is an important discussion, it's not relevant to the PG12\n> release, right?\n> (If so, I'm content to close these open items.)\n\nI took a quick look to compare our jsonpath documentation with\nISO/IEC TR_19075-6_2017 (I did *not* try to see if the code agrees\nwith the docs ;-)). As far as I can see, everything described in\nour docs appears in the TR, with the exception of two things\nthat are already documented as Postgres extensions:\n\n1. Recursive (multilevel) wildcards, ie .** and .**{level [to level]}\naccessors, per table 8.25.\n\n2. We allow a path expression to be a Boolean predicate, although the TR\nallows predicates only in filters, per example in 9.16.1:\n\t'$.track.segments[*].HR < 70'\n(It's not exactly clear to me why this syntax is necessary; what's\nit do that you can't do more verbosely with a filter?)\n\nI have no opinion on whether we're opening ourselves to significant\nspec-compliance risks through these two features. I am, however,\nunexcited about adding some kind of \"PG only\" marker to the language,\nfor a couple of reasons. First, I really doubt that a single boolean\nflag would get us far in terms of dealing with future compliance\nissues. As soon as we have two extension features (i.e., already)\nwe have the question of what happens if one gets standardized and\nthe other doesn't; and that risk gets bigger if we're going to add\nhalf a dozen more things. Second, we've procrastinated too long\nand thereby effectively made a decision already. 
At this point\nI don't see how we could push in any such change without delaying\nthe release.\n\nSo my vote at this point is \"ship it as-is\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Sep 2019 18:03:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding possible future conformance headaches in JSON work"
}
]
[
{
"msg_contents": "Hi all,\n\nI have been playing lately with the table AM API to do some stuff, and\nI got surprised that in the minimum set of headers which needs to be\nincluded for a table AM we have a hard dependency with heapam.h for\nBulkInsertState and vacuum.h for VacuumParams.\n\nI am fine to live with the dependency with vacuum.h as it is not that\nstrange. However for BulkInsertState we get a hard dependency with a\nheap-related area and it seems to me that we had better move that part\nout of heapam.c, as we want a clear dependency cut with the heap AM\nfor any new custom table AM.\n\nI'd like to think that the best way to deal with that and reduce the\nconfusion would be to move anything related to bulk inserts into their\nown header/file, meaning the following set:\n- ReleaseBulkInsertStatePin\n- GetBulkInsertState\n- FreeBulkInsertState\nThere is the argument that we could also move that part into tableam.h\nitself though as some of the rather generic table-related callbacks,\nbut that seems grotty. So I think that we could just move that stuff\nas backend/access/common/bulkinsert.c.\n\nThoughts?\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:09:24 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Custom table AMs need to include heapam.h because of BulkInsertState"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-01 15:09:24 -0400, Michael Paquier wrote:\n> I have been playing lately with the table AM API to do some stuff, and\n> I got surprised that in the minimum set of headers which needs to be\n> included for a table AM we have a hard dependency with heapam.h for\n> BulkInsertState and vacuum.h for VacuumParams.\n\nI've noted this before as a future todo.\n\n\n> I'd like to think that the best way to deal with that and reduce the\n> confusion would be to move anything related to bulk inserts into their\n> own header/file, meaning the following set:\n> - ReleaseBulkInsertStatePin\n> - GetBulkInsertState\n> - FreeBulkInsertState\n> There is the argument that we could also move that part into tableam.h\n> itself though as some of the rather generic table-related callbacks,\n> but that seems grotty. So I think that we could just move that stuff\n> as backend/access/common/bulkinsert.c.\n\nYea, I think we should do that at some point. But I'm not sure this is\nthe right design. Bulk insert probably needs to rather be something\nthat's allocated inside the AM.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Jun 2019 12:19:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Sat, Jun 01, 2019 at 12:19:43PM -0700, Andres Freund wrote:\n> Yea, I think we should do that at some point. But I'm not sure this is\n> the right design. Bulk insert probably needs to rather be something\n> that's allocated inside the AM.\n\nYeah, actually you may be right that I am not taking the correct path\nhere. At quick glance it looks that there is a strong relationship\nbetween the finish_bulk_insert callback and the bistate free already,\nso we could do much better than moving the code around. Perhaps we\ncould just have a TODO? As one of the likely-doable items.\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:55:05 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Sat, Jun 1, 2019 at 3:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I am fine to live with the dependency with vacuum.h as it is not that\n> strange. However for BulkInsertState we get a hard dependency with a\n> heap-related area and it seems to me that we had better move that part\n> out of heapam.c, as we want a clear dependency cut with the heap AM\n> for any new custom table AM.\n\nYeah, I noticed this, too. +1 for doing something about it. Not sure\nexactly what is the best approach.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jun 2019 10:18:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 10:18:03AM -0400, Robert Haas wrote:\n> On Sat, Jun 1, 2019 at 3:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> I am fine to live with the dependency with vacuum.h as it is not that\n>> strange. However for BulkInsertState we get a hard dependency with a\n>> heap-related area and it seems to me that we had better move that part\n>> out of heapam.c, as we want a clear dependency cut with the heap AM\n>> for any new custom table AM.\n> \n> Yeah, I noticed this, too. +1 for doing something about it. Not sure\n> exactly what is the best approach.\n\nOne thing which is a bit tricky is that for example with COPY FROM we\nhave a code path which is able to release a buffer held by the bulk\ninsert state. So I think that we could get easily out by combining\nthe bistate free path with finish_bulk_insert, create the bistate\nwithin the AM when doing a single or multi tuple insert, and having\none extra callback to release a buffer held. Still this last bit does\nnot completely feel right in terms of flexibility and readability.\n\nNote as well that we never actually use bistate when calling\ntable_tuple_insert_speculative() on HEAD. I guess that the argument\nis here for consistency with the tuple_insert callback. Could we do\nsomething separately about that?\n--\nMichael",
"msg_date": "Fri, 7 Jun 2019 11:29:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 10:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n> One thing which is a bit tricky is that for example with COPY FROM we\n> have a code path which is able to release a buffer held by the bulk\n> insert state.\n\nAre you talking about the call to ReleaseBulkInsertStatePin, or something else?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 08:55:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Sat, Jun 1, 2019 at 3:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'd like to think that the best way to deal with that and reduce the\n> > confusion would be to move anything related to bulk inserts into their\n> > own header/file, meaning the following set:\n> > - ReleaseBulkInsertStatePin\n> > - GetBulkInsertState\n> > - FreeBulkInsertState\n> > There is the argument that we could also move that part into tableam.h\n> > itself though as some of the rather generic table-related callbacks,\n> > but that seems grotty. So I think that we could just move that stuff\n> > as backend/access/common/bulkinsert.c.\n>\n> Yea, I think we should do that at some point. But I'm not sure this is\n> the right design. Bulk insert probably needs to rather be something\n> that's allocated inside the AM.\n\nAs far as I can see, any on-disk, row-oriented, block-based AM is\nlikely to want the same implementation as the heap. Column stores\nmight want to pin multiple buffers, and an in-memory AM might have a\ncompletely different set of requirements, but something like zheap\nreally has no reason to depart from what the heap does. I think it's\nreally important that new table AMs not only have the option to do\nsomething different than the heap in any particular area, but that\nthey also have the option to do the SAME thing as the heap without\nhaving to duplicate a bunch of code. So I think it would be\nreasonable to start by doing some pure code movement here, along the\nlines proposed by Michael -- not sure if src/backend/access/common is\nright or if it should be src/backend/access/table -- and then add the\nabstraction afterwards. Worth noting is ReadBufferBI() also needs\nmoving and is a actually a bigger problem than the functions that\nMichael mentioned, because the other functions are accessible if\nyou're willing to stoop to including heap-specific headers, but that\nfunction is static and you'll have to just copy-and-paste it. 
Uggh.\n\nHere's a draft design for adding some abstraction, roughly modeled on\nthe abstraction Andres added for TupleTableSlots:\n\n1. a BulkInsertState becomes a struct whose only member is a pointer\nto const BulkInsertStateOps *const ops\n\n2. that structure has a member for each defined operation on a BulkInsertState:\n\nvoid (*free)(BulkInsertState *);\nvoid (*release_pin)(BulkInsertState *); // maybe rename to make it more generic\n\n3. table AM gets a new member BulkInsertState\n*(*create_bistate)(Relation Rel) and a corresponding function\ntable_create_bistate(), analogous to table_create_slot(), which can\ncall the constructor function for the appropriate type of\nBulkInsertState and return the result\n\n4. each type of BulkInsertState has its own functions for making use\nof it, akin to ReadBufferBI. That particular function signature is\nonly likely to be correct for something that does more-or-less what\nthe existing type of BulkInsertState does; if you're using a\ncolumn-store that pins multiple buffers or something, you'll need your\nown code path. But that's OK, because ReadBufferBI or whatever other\nfunctions you have are only going to get called from AM-specific code,\nwhich will know what type of BulkInsertState they have got, because\nthey are in control of which kind of BulkInsertState gets created for\ntheir relations as per point #4, so they can just call the right\nfunctions.\n\n5. The current implementation of BulkInsertState gets renamed to\nBlockBulkInsertState (or something else) and is used by heap and any\nAMs that like it.\n\nI see Michael's point about the relationship between\nfinish_bulk_insert() and the BulkInsertState, and maybe if we could\nfigure that out we could avoid the need for a BulkInsertState to have\na free method (or maybe any methods at all, in which case it could\njust be an opaque struct, like a Node). 
However, it looks to me as\nthough copy.c can create a bunch of BulkInsertStates but only call\nfinish_bulk_insert() once, so unless that's a bug in need of fixing I\ndon't quite see how to make that approach work.\n\nComments?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 09:48:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "Hi,\n\n(David, see bottom if you're otherwise not interested).\n\nOn 2019-06-07 09:48:29 -0400, Robert Haas wrote:\n> On Sat, Jun 1, 2019 at 3:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I'd like to think that the best way to deal with that and reduce the\n> > > confusion would be to move anything related to bulk inserts into their\n> > > own header/file, meaning the following set:\n> > > - ReleaseBulkInsertStatePin\n> > > - GetBulkInsertState\n> > > - FreeBulkInsertState\n> > > There is the argument that we could also move that part into tableam.h\n> > > itself though as some of the rather generic table-related callbacks,\n> > > but that seems grotty. So I think that we could just move that stuff\n> > > as backend/access/common/bulkinsert.c.\n> >\n> > Yea, I think we should do that at some point. But I'm not sure this is\n> > the right design. Bulk insert probably needs to rather be something\n> > that's allocated inside the AM.\n> \n> As far as I can see, any on-disk, row-oriented, block-based AM is\n> likely to want the same implementation as the heap.\n\nI'm pretty doubtful about that. It'd e.g. would make a ton of sense to\nkeep the VM pinned, even for heap. You could also do a lot better with\ntoast. And for zheap we'd - unless we change the design - quite\npossibly benefit from keeping the last needed tpd buffer around.\n\n\n> Here's a draft design for adding some abstraction, roughly modeled on\n> the abstraction Andres added for TupleTableSlots:\n\nHm, I'm not sure I see the need for a vtable based approach here. Won't\nevery AM know exactly what they need / have? I'm not convinced it's\nworthwhile to treat that separately from the tableam. I.e. have a\nBulkInsertState struct with *no* members, and then, as you suggest:\n\n> \n> 3. 
table AM gets a new member BulkInsertState\n> *(*create_bistate)(Relation Rel) and a corresponding function\n> table_create_bistate(), analogous to table_create_slot(), which can\n> call the constructor function for the appropriate type of\n> BulkInsertState and return the result\n\nbut also route the following through the AM:\n\n> 2. that structure has a member for each defined operation on a BulkInsertState:\n> \n> void (*free)(BulkInsertState *);\n> void (*release_pin)(BulkInsertState *); // maybe rename to make it more generic\n\nWhere free would just be part of finish_bulk_insert, and release_pin a\nnew callback.\n\n\n> 4. each type of BulkInsertState has its own functions for making use\n> of it, akin to ReadBufferBI.\n\nRight, I don't think that's avoidable unfortunately.\n\n\n\n> I see Michael's point about the relationship between\n> finish_bulk_insert() and the BulkInsertState, and maybe if we could\n> figure that out we could avoid the need for a BulkInsertState to have\n> a free method (or maybe any methods at all, in which case it could\n> just be an opaque struct, like a Node).\n\nRight, so we actually ended up at the same place. And you found a bug:\n\n> However, it looks to me as though copy.c can create a bunch of\n> BulkInsertStates but only call finish_bulk_insert() once, so unless\n> that's a bug in need of fixing I don't quite see how to make that\n> approach work.\n\nThat is a bug. Not a currently \"active\" one with in-core AMs (no\ndangerous bulk insert flags ever get set for partitioned tables), but we\nobviously need to fix it nevertheless.\n\nRobert, seems we'll have to - regardless of where we come down on fixing\nthis bug - have to make copy use multiple BulkInsertState's, even in the\nCIM_SINGLE (with proute) case. Or do you have a better idea?\n\nDavid, any opinions on how to best fix this? It's not extremely obvious\nhow to do so best in the current setup of the partition actually being\nhidden somewhere a few calls away (i.e. the table_open happening in\nExecInitPartitionInfo()).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 09:51:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 08:55:36AM -0400, Robert Haas wrote:\n> Are you talking about the call to ReleaseBulkInsertStatePin, or something else?\n\nYes, I was referring to ReleaseBulkInsertStatePin()\n--\nMichael",
"msg_date": "Sat, 8 Jun 2019 09:03:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Sat, 8 Jun 2019 at 04:51, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-06-07 09:48:29 -0400, Robert Haas wrote:\n> > However, it looks to me as though copy.c can create a bunch of\n> > BulkInsertStates but only call finish_bulk_insert() once, so unless\n> > that's a bug in need of fixing I don't quite see how to make that\n> > approach work.\n>\n> That is a bug. Not a currently \"active\" one with in-core AMs (no\n> dangerous bulk insert flags ever get set for partitioned tables), but we\n> obviously need to fix it nevertheless.\n>\n> Robert, seems we'll have to - regardless of where we come down on fixing\n> this bug - have to make copy use multiple BulkInsertState's, even in the\n> CIM_SINGLE (with proute) case. Or do you have a better idea?\n>\n> David, any opinions on how to best fix this? It's not extremely obvious\n> how to do so best in the current setup of the partition actually being\n> hidden somewhere a few calls away (i.e. the table_open happening in\n> ExecInitPartitionInfo()).\n\nThat's been overlooked. I agree it's not a bug with heap, since\nheapam_finish_bulk_insert() only does anything there when we're\nskipping WAL, which we don't do in copy.c for partitioned tables.\nHowever, who knows what other AMs will need, so we'd better fix that.\n\nMy proposed patch is attached.\n\nI ended up moving the call to CopyMultiInsertInfoCleanup() down to\nafter we call table_finish_bulk_insert for the main table. This might\nnot be required but I lack imagination right now to what AMs might put\nin the finish_bulk_insert call, so doing this seems safer.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 10 Jun 2019 11:45:17 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 12:51 PM Andres Freund <andres@anarazel.de> wrote:\n> > As far as I can see, any on-disk, row-oriented, block-based AM is\n> > likely to want the same implementation as the heap.\n>\n> I'm pretty doubtful about that. It'd e.g. would make a ton of sense to\n> keep the VM pinned, even for heap. You could also do a lot better with\n> toast. And for zheap we'd - unless we change the design - quite\n> possibly benefit from keeping the last needed tpd buffer around.\n\nThat's fair enough to a point, but I'm not trying to enforce code\nreuse; I'm trying to make it possible. If it's good enough for the\nheap, which is really the gold standard for AMs until somebody manages\nto do better, it's entirely reasonable for somebody else to want to\njust do it the way the heap does. We gain nothing by making that\ndifficult.\n\n> > Here's a draft design for adding some abstraction, roughly modeled on\n> > the abstraction Andres added for TupleTableSlots:\n>\n> Hm, I'm not sure I see the need for a vtable based approach here. Won't\n> every AM know exactly what they need / have? I'm not convinced it's\n> worthwhile to treat that separately from the tableam. I.e. have a\n> BulkInsertState struct with *no* members, and then, as you suggest:\n\nHmm, so what would we do here? Just 'struct BulkInsertState;\ntypedef struct BulkInsertState BulkInsertState;' ... and then never\nactually define the struct anywhere?\n\n> > 3. table AM gets a new member BulkInsertState\n> > *(*create_bistate)(Relation Rel) and a corresponding function\n> > table_create_bistate(), analogous to table_create_slot(), which can\n> > call the constructor function for the appropriate type of\n> > BulkInsertState and return the result\n>\n> but also route the following through the AM:\n>\n> > 2. that structure has a member for each defined operation on a BulkInsertState:\n> >\n> > void (*free)(BulkInsertState *);\n> > void (*release_pin)(BulkInsertState *); // maybe rename to make it more generic\n>\n> Where free would just be part of finish_bulk_insert, and release_pin a\n> new callback.\n\nOK, that's an option. I guess we'd change free_bulk_insert to take\nthe BulkInsertState as an additional option?\n\n> Robert, seems we'll have to - regardless of where we come down on fixing\n> this bug - have to make copy use multiple BulkInsertState's, even in the\n> CIM_SINGLE (with proute) case. Or do you have a better idea?\n\nNope.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Jun 2019 08:27:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Mon, 10 Jun 2019 at 11:45, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>\n> On Sat, 8 Jun 2019 at 04:51, Andres Freund <andres@anarazel.de> wrote:\n> > David, any opinions on how to best fix this? It's not extremely obvious\n> > how to do so best in the current setup of the partition actually being\n> > hidden somewhere a few calls away (i.e. the table_open happening in\n> > ExecInitPartitionInfo()).\n>\n> That's been overlooked. I agree it's not a bug with heap, since\n> heapam_finish_bulk_insert() only does anything there when we're\n> skipping WAL, which we don't do in copy.c for partitioned tables.\n> However, who knows what other AMs will need, so we'd better fix that.\n>\n> My proposed patch is attached.\n>\n> I ended up moving the call to CopyMultiInsertInfoCleanup() down to\n> after we call table_finish_bulk_insert for the main table. This might\n> not be required but I lack imagination right now to what AMs might put\n> in the finish_bulk_insert call, so doing this seems safer.\n\nAndres, do you want to look at this before I look again?\n\nDo you see any issue with calling table_finish_bulk_insert() when the\npartition's CopyMultiInsertBuffer is evicted from the\nCopyMultiInsertInfo rather than at the end of the copy? It can mean\nthat we call the function multiple times per partition. I assume the\nfunction is only really intended to flush bulk inserted tuple to the\nstorage, so calling it more than once would just mean an inefficiency\nrather than a bug.\n\nLet me know your thoughts.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:42:11 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "Hi,\n\nOn June 12, 2019 6:42:11 PM PDT, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>On Mon, 10 Jun 2019 at 11:45, David Rowley\n><david.rowley@2ndquadrant.com> wrote:\n>>\n>> On Sat, 8 Jun 2019 at 04:51, Andres Freund <andres@anarazel.de>\n>wrote:\n>> > David, any opinions on how to best fix this? It's not extremely\n>obvious\n>> > how to do so best in the current setup of the partition actually\n>being\n>> > hidden somewhere a few calls away (i.e. the table_open happening in\n>> > ExecInitPartitionInfo()).\n>>\n>> That's been overlooked. I agree it's not a bug with heap, since\n>> heapam_finish_bulk_insert() only does anything there when we're\n>> skipping WAL, which we don't do in copy.c for partitioned tables.\n>> However, who knows what other AMs will need, so we'd better fix that.\n>>\n>> My proposed patch is attached.\n>>\n>> I ended up moving the call to CopyMultiInsertInfoCleanup() down to\n>> after we call table_finish_bulk_insert for the main table. This might\n>> not be required but I lack imagination right now to what AMs might\n>put\n>> in the finish_bulk_insert call, so doing this seems safer.\n>\n>Andres, do you want to look at this before I look again?\n>\n>Do you see any issue with calling table_finish_bulk_insert() when the\n>partition's CopyMultiInsertBuffer is evicted from the\n>CopyMultiInsertInfo rather than at the end of the copy? It can mean\n>that we call the function multiple times per partition. I assume the\n>function is only really intended to flush bulk inserted tuple to the\n>storage, so calling it more than once would just mean an inefficiency\n>rather than a bug.\n>\n>Let me know your thoughts.\n\nI'm out on vacation until Monday (very needed, pretty damn exhausted). So I can't really give you a in depth answer right now.\n\nOff the cuff, I'd say it's worthwhile to try somewhat hard to avoid superfluous finish calls. 
They can be quite expensive (fsync), and we ought to have nearly all the state for doing it only as much as necessary. Possibly we need one bool per partition to track whether any rows were inserted, but that's peanuts in comparison to all the other state.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 13 Jun 2019 12:53:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 07:53, Andres Freund <andres@anarazel.de> wrote:\n>\n> On June 12, 2019 6:42:11 PM PDT, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> >Do you see any issue with calling table_finish_bulk_insert() when the\n> >partition's CopyMultiInsertBuffer is evicted from the\n> >CopyMultiInsertInfo rather than at the end of the copy? It can mean\n> >that we call the function multiple times per partition. I assume the\n> >function is only really intended to flush bulk inserted tuple to the\n> >storage, so calling it more than once would just mean an inefficiency\n> >rather than a bug.\n> >\n> >Let me know your thoughts.\n>\n> I'm out on vacation until Monday (very needed, pretty damn exhausted). So I can't really give you a in depth answer right now.\n>\n> Off the cuff, I'd say it's worthwhile to try somewhat hard to avoid superfluous finish calls. They can be quite expensive (fsync), and we ought to have nearly all the state for doing it only as much as necessary. Possibly we need one bool per partition to track whether any rows where inserted, but thats peanuts in comparison to all the other state.\n\nNo worries. I'll just park this patch here until you're ready to give it a look.\n\nWith the attached version I'm just calling table_finish_bulk_insert()\nonce per partition at the end of CopyFrom(). We've got an array with\nall the ResultRelInfos we touched in the proute, so it's mostly a\nmatter of looping over that array and calling the function on each\nResultRelInfo's ri_RelationDesc. However, to make it more complex,\nPartitionTupleRouting is private to execPartition.c so we can't do\nthis directly... 
After staring at my screen for a while, I decided to\nwrite a function that calls a callback function on each ResultRelInfo\nin the PartitionTupleRouting.\n\nThe three alternative ways I thought of were:\n\n1) Put PartitionTupleRouting back into execPartition.h and write the\nloop over each ResultRelInfo in copy.c.\n2) Write a specific function in execPartition.c that calls\ntable_finish_bulk_insert()\n3) Modify ExecCleanupTupleRouting to pass in the ti_options and a bool\nto say if it should call table_finish_bulk_insert() or not.\n\nI didn't really like either of those. For #1, I'd rather keep it\nprivate. For #2, it just seems a bit too specific a function to go\ninto execPartition.c. For #3 I really don't want to slow down\nExecCleanupTupleRouting() any. I designed those to be as fast as\npossible since they're called for single-row INSERTs into partitioned\ntables. Quite a bit of work went into PG12 to make those fast.\n\nOf course, someone might see one of the alternatives as better than\nwhat the patch does, so comments welcome.\n\nThe other thing I noticed is that we call\ntable_finish_bulk_insert(cstate->rel, ti_options); in copy.c\nregardless of if we've done any bulk inserts or not. Perhaps that\nshould be under an if (insertMethod != CIM_SINGLE)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sat, 15 Jun 2019 12:25:12 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 12:25:12PM +1200, David Rowley wrote:\n> With the attached version I'm just calling table_finish_bulk_insert()\n> once per partition at the end of CopyFrom(). We've got an array with\n> all the ResultRelInfos we touched in the proute, so it's mostly a\n> matter of looping over that array and calling the function on each\n> ResultRelInfo's ri_RelationDesc. However, to make it more complex,\n> PartitionTupleRouting is private to execPartition.c so we can't do\n> this directly... After staring at my screen for a while, I decided to\n> write a function that calls a callback function on each ResultRelInfo\n> in the PartitionTupleRouting.\n\nDon't take me bad, but I find the solution of defining and using a new\ncallback to call the table AM callback not really elegant, and keeping\nall table AM callbacks called at a higher level than the executor\nmakes the code easier to follow. Shouldn't we try to keep any calls\nto table_finish_bulk_insert() within copy.c for each partition\ninstead?\n\n> The other thing I noticed is that we call\n> table_finish_bulk_insert(cstate->rel, ti_options); in copy.c\n> regardless of if we've done any bulk inserts or not. Perhaps that\n> should be under an if (insertMethod != CIM_SINGLE)\n\nYeah, good point.\n--\nMichael",
"msg_date": "Mon, 24 Jun 2019 19:16:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Mon, 24 Jun 2019 at 22:16, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jun 15, 2019 at 12:25:12PM +1200, David Rowley wrote:\n> > With the attached version I'm just calling table_finish_bulk_insert()\n> > once per partition at the end of CopyFrom(). We've got an array with\n> > all the ResultRelInfos we touched in the proute, so it's mostly a\n> > matter of looping over that array and calling the function on each\n> > ResultRelInfo's ri_RelationDesc. However, to make it more complex,\n> > PartitionTupleRouting is private to execPartition.c so we can't do\n> > this directly... After staring at my screen for a while, I decided to\n> > write a function that calls a callback function on each ResultRelInfo\n> > in the PartitionTupleRouting.\n>\n> Don't take me bad, but I find the solution of defining and using a new\n> callback to call the table AM callback not really elegant, and keeping\n> all table AM callbacks called at a higher level than the executor\n> makes the code easier to follow. Shouldn't we try to keep any calls\n> to table_finish_bulk_insert() within copy.c for each partition\n> instead?\n\nI'm not quite sure if I follow you since the call to\ntable_finish_bulk_insert() is within copy.c still.\n\nThe problem was that PartitionTupleRouting is private to\nexecPartition.c, and we need a way to determine which of the\npartitions we routed tuples to. It seems inefficient to flush all of\nthem if only a small number had tuples inserted into them and to me,\nit seems inefficient to add some additional tracking in CopyFrom(),\nlike a hash table to store partition Oids that we inserted into. Using\nPartitionTupleRouting makes sense. It's just a question of how to\naccess it, which is not so easy due to it being private.\n\nI did suggest a few other ways that we could solve this. 
I'm not so\nclear on which one of those you're suggesting or if you're thinking of\nsomething new.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 24 Jun 2019 23:12:49 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Mon, 24 Jun 2019 at 23:12, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>\n> On Mon, 24 Jun 2019 at 22:16, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > Don't take me bad, but I find the solution of defining and using a new\n> > callback to call the table AM callback not really elegant, and keeping\n> > all table AM callbacks called at a higher level than the executor\n> > makes the code easier to follow. Shouldn't we try to keep any calls\n> > to table_finish_bulk_insert() within copy.c for each partition\n> > instead?\n>\n> I'm not quite sure if I follow you since the call to\n> table_finish_bulk_insert() is within copy.c still.\n>\n> The problem was that PartitionTupleRouting is private to\n> execPartition.c, and we need a way to determine which of the\n> partitions we routed tuples to. It seems inefficient to flush all of\n> them if only a small number had tuples inserted into them and to me,\n> it seems inefficient to add some additional tracking in CopyFrom(),\n> like a hash table to store partition Oids that we inserted into. Using\n> PartitionTupleRouting makes sense. It's just a question of how to\n> access it, which is not so easy due to it being private.\n>\n> I did suggest a few other ways that we could solve this. I'm not so\n> clear on which one of those you're suggesting or if you're thinking of\n> something new.\n\nAny further thoughts on this Michael?\n\nOr Andres? Do you have a preference to which of the approaches\n(mentioned upthread) I use for the fix?\n\nIf I don't hear anything I'll probably just push the first fix. The\ninefficiency does not affect heap, so likely the people with the most\ninterest in improving that will be authors of other table AMs that\nactually do something during table_finish_bulk_insert() for\npartitions. 
We could revisit this in PG13 if someone comes up with a\nneed to improve things here.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sun, 30 Jun 2019 17:54:29 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Sun, 30 Jun 2019 at 17:54, David Rowley <david.rowley@2ndquadrant.com> wrote:\n\n> Any further thoughts on this Michael?\n>\n> Or Andres? Do you have a preference to which of the approaches\n> (mentioned upthread) I use for the fix?\n>\n> If I don't hear anything I'll probably just push the first fix. The\n> inefficiency does not affect heap, so likely the people with the most\n> interest in improving that will be authors of other table AMs that\n> actually do something during table_finish_bulk_insert() for\n> partitions. We could revisit this in PG13 if someone comes up with a\n> need to improve things here.\n\nI've pushed the original patch plus a small change to only call\ntable_finish_bulk_insert() for the target of the copy when we're using\nbulk inserts.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 2 Jul 2019 01:26:26 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Tue, Jul 02, 2019 at 01:26:26AM +1200, David Rowley wrote:\n> I've pushed the original patch plus a small change to only call\n> table_finish_bulk_insert() for the target of the copy when we're using\n> bulk inserts.\n\nYes, sorry for coming late to the party here. What I meant previously\nis that I did not find the version published at [1] to be natural with\nits structure to define an executor callback which then calls a\ncallback for table AMs, still I get your point that it would be better\nto try to avoid unnecessary fsync calls on partitions where no tuples\nhave been redirected with a COPY. The version 1 of the patch attached\nat [2] felt much more straight-forward and cleaner by keeping all the\ntable AM callbacks within copy.c.\n\nThis has been reverted as of f5db56f, still it seems to me that this\nwas moving in the right direction.\n\n[1]: https://postgr.es/m/CAKJS1f95sB21LBF=1MCsEV+XLtA_JC3mtXx5kgDuHDsOGoWhKg@mail.gmail.com\n[2]: https://postgr.es/m/CAKJS1f_0t-K0_3xe+erXPQ-jgaOb6tRZayErCXF2RpGdUVMt9g@mail.gmail.com \n--\nMichael",
"msg_date": "Wed, 3 Jul 2019 16:34:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Wed, 3 Jul 2019 at 19:35, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jul 02, 2019 at 01:26:26AM +1200, David Rowley wrote:\n> > I've pushed the original patch plus a small change to only call\n> > table_finish_bulk_insert() for the target of the copy when we're using\n> > bulk inserts.\n>\n> Yes, sorry for coming late at the party here. What I meant previously\n> is that I did not find the version published at [1] to be natural with\n> its structure to define an executor callback which then calls a\n> callback for table AMs, still I get your point that it would be better\n> to try to avoid unnecessary fsync calls on partitions where no tuples\n> have been redirected with a COPY. The version 1 of the patch attached\n> at [2] felt much more straight-forward and cleaner by keeping all the\n> table AM callbacks within copy.c.\n>\n> This has been reverted as of f5db56f, still it seems to me that this\n> was moving in the right direction.\n\nI think the only objection to doing it the way [2] did was, if there\nare more than MAX_PARTITION_BUFFERS partitions then we may end up\nevicting the CopyMultiInsertBuffer out of the CopyMultiInsertInfo and\nthus cause a call to table_finish_bulk_insert() before we're done with\nthe copy. It's not impossible that this could happen many times for a\ngiven partition. I agree that a working version of [2] is cleaner\nthan [1] but it's just the thought of those needless calls.\n\nFor [1], I wasn't very happy with the way it turned out which is why I\nended up suggesting a few other ideas. I just don't really like either\nof them any better than [1], so I didn't chase those up, and that's\nwhy I ended up going for [2]. If you think any of the other ideas I\nsuggested are better (apart from [2]) then let me know and I can see\nabout writing a patch. 
Otherwise, I plan to just fix [2] and push.\n\n> [1]: https://postgr.es/m/CAKJS1f95sB21LBF=1MCsEV+XLtA_JC3mtXx5kgDuHDsOGoWhKg@mail.gmail.com\n> [2]: https://postgr.es/m/CAKJS1f_0t-K0_3xe+erXPQ-jgaOb6tRZayErCXF2RpGdUVMt9g@mail.gmail.com\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 3 Jul 2019 19:46:06 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Wed, 3 Jul 2019 at 19:35, Michael Paquier <michael@paquier.xyz> wrote:\n> This has been reverted as of f5db56f, still it seems to me that this\n> was moving in the right direction.\n\nI've pushed this again, this time with the cleanup code done in the right order.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 10 Jul 2019 21:40:59 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "Hi David,\n\nOn Wed, Jul 10, 2019 at 09:40:59PM +1200, David Rowley wrote:\n> On Wed, 3 Jul 2019 at 19:35, Michael Paquier <michael@paquier.xyz> wrote:\n>> This has been reverted as of f5db56f, still it seems to me that this\n>> was moving in the right direction.\n> \n> I've pushed this again, this time with the cleanup code done in the\n> right order. \n\nI have spent some time lately analyzing f7c830f as I was curious about\nthe logic behind it, and FWIW the result looks good. Thanks!\n--\nMichael",
"msg_date": "Tue, 16 Jul 2019 18:44:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "Hi,\n\nSorry for not chiming in again earlier, I was a bit exhausted...\n\n\nOn 2019-07-03 19:46:06 +1200, David Rowley wrote:\n> I think the only objection to doing it the way [2] did was, if there\n> are more than MAX_PARTITION_BUFFERS partitions then we may end up\n> evicting the CopyMultiInsertBuffer out of the CopyMultiInsertInfo and\n> thus cause a call to table_finish_bulk_insert() before we're done with\n> the copy.\n\nRight.\n\n\n> It's not impossible that this could happen many times for a\n> given partition. I agree that a working version of [2] is cleaner\n> than [1] but it's just the thought of those needless calls.\n\nI think it's fairly important to optimize this. E.g. emitting\nunnecessary fsyncs as it'd happen for heap is a pretty huge constant to\nadd to bulk loading.\n\n\n> For [1], I wasn't very happy with the way it turned out which is why I\n> ended up suggesting a few other ideas. I just don't really like either\n> of them any better than [1], so I didn't chase those up, and that's\n> why I ended up going for [2].\n\nYea, I don't like [1] either - they all seems too tied to copy.c's\nusage. Ideas:\n\n1) Have ExecFindPartition() return via a bool* whether the partition is\n being accessed for the first time. In copy.c push the partition onto\n a list of to-be-bulk-finished tables.\n2) Add a execPartition.c function that returns all the used tables from\n a PartitionTupleRouting*.\n\nboth seem cleaner to me than your proposals in [1], albeit not perfect\neither. I think knowing which partitions are referenced is a reasonable\nthing to want from the partition machinery. But using bulk-insert etc\nseems outside of execPartition.c's remit, so doing that in copy.c seems\nto make sense.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 11:46:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Wed, 17 Jul 2019 at 06:46, Andres Freund <andres@anarazel.de> wrote:\n> 1) Have ExecFindPartition() return via a bool* whether the partition is\n> being accessed for the first time. In copy.c push the partition onto\n> a list of to-be-bulk-finished tables.\n> 2) Add a execPartition.c function that returns all the used tables from\n> a PartitionTupleRouting*.\n\n#2 seems better than #1 as it does not add overhead to ExecFindPartition().\n\nAre you thinking this should go back into v12, or just for v13 only?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jul 2019 11:29:37 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-18 11:29:37 +1200, David Rowley wrote:\n> On Wed, 17 Jul 2019 at 06:46, Andres Freund <andres@anarazel.de> wrote:\n> > 1) Have ExecFindPartition() return via a bool* whether the partition is\n> > being accessed for the first time. In copy.c push the partition onto\n> > a list of to-be-bulk-finished tables.\n> > 2) Add a execPartition.c function that returns all the used tables from\n> > a PartitionTupleRouting*.\n> \n> #2 seems better than #1 as it does not add overhead to ExecFindPartition().\n\nI don't see how #1 would add meaningful overhead compared to the other\ncosts of that function. Wouldn't it just be adding if (isnew) *isnew =\nfalse; to the \"/* ResultRelInfo already built */\" branch, and the\nreverse to the else? That got to be several orders of magnitude cheaper\nthan e.g. FormPartitionKeyDatum() which is unconditionally executed?\n\n\n> Are you thinking this should go back into v12, or just for v13 only?\n\nNot sure, tbh. Probably depends a bit on how complicated it'd look?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:36:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Thu, 18 Jul 2019 at 11:36, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-07-18 11:29:37 +1200, David Rowley wrote:\n> > On Wed, 17 Jul 2019 at 06:46, Andres Freund <andres@anarazel.de> wrote:\n> > > 1) Have ExecFindPartition() return via a bool* whether the partition is\n> > > being accessed for the first time. In copy.c push the partition onto\n> > > a list of to-be-bulk-finished tables.\n> > > 2) Add a execPartition.c function that returns all the used tables from\n> > > a PartitionTupleRouting*.\n> >\n> > #2 seems better than #1 as it does not add overhead to ExecFindPartition().\n>\n> I don't see how #1 would add meaningful overhead compared to the other\n> costs of that function. Wouldn't it just be adding if (isnew) *isnew =\n> false; to the \"/* ResultRelInfo already built */\" branch, and the\n> reverse to the else?\n\nYes\n\n> That got to be several orders of magnitude cheaper\n> than e.g. FormPartitionKeyDatum() which is unconditionally executed?\n\nProbably.\n\nHowever, I spent quite a bit of time trying to make that function as\nfast as possible in v12, and since #2 seems like a perfectly good\nalternative, I'd rather go with that than to add pollution to\nExecFindPartition's signature. Also, #2 seems better since it keeps\nCopyFrom() from having to maintain a list. I think we all agreed\nsomewhere that that code is more complex than we'd all like it to be.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 18 Jul 2019 11:57:44 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-18 11:57:44 +1200, David Rowley wrote:\n> However, I spent quite a bit of time trying to make that function as\n> fast as possible in v12, and since #2 seems like a perfectly good\n> alternative, I'd rather go with that than to add pollution to\n> ExecFindPartition's signature. Also, #2 seems better since it keeps\n> CopyFrom() from having to maintain a list. I think we all agreed\n> somewhere that that code is more complex than we'd all like it to be.\n\nFair enough.\n\nOne last thought for #1: I was wondering whether the bool *\napproach might be useful for nodeModifyTable.c too? I thought that maybe\nthat could be used to avoid some checks for setting up per partition\nstate, but it seems not to be the case ATM.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 18:20:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
},
{
"msg_contents": "On Wed, 17 Jul 2019 at 06:46, Andres Freund <andres@anarazel.de> wrote:\n> 2) Add a execPartition.c function that returns all the used tables from\n> a PartitionTupleRouting*.\n\nHere's a patch which implements it that way.\n\nI struggled a bit to think of a good name for the execPartition.c\nfunction. I ended up with ExecGetRoutedToRelations. I'm open to better\nideas.\n\nI also chose to leave the change of function signatures done in\nf7c830f1a in place. I don't think the additional now unused parameter\nis that out of place. Also, the function is inlined, so removing it\nwouldn't help performance any.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Wed, 14 Aug 2019 18:11:06 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Custom table AMs need to include heapam.h because of\n BulkInsertState"
}
] |
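Option #1 in the thread above - having ExecFindPartition() report via a `bool *` out-parameter whether the partition is being accessed for the first time - is an instance of the common find-or-create idiom. A minimal standalone sketch of that pattern (all names here are invented for illustration; this is not the actual executor code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy find-or-create lookup with a "bool *isnew" out-parameter, as in
 * option #1: the caller learns whether the entry was built on this call,
 * so it can queue the table for bulk-insert finishing.  Purely
 * illustrative; not ExecFindPartition() itself.
 */
#define MAX_PARTS 8

typedef struct PartEntry
{
	int			key;
	bool		used;
} PartEntry;

static PartEntry parts[MAX_PARTS];

static PartEntry *
find_partition(int key, bool *isnew)
{
	PartEntry  *e = &parts[key % MAX_PARTS];

	if (e->used)
	{
		/* entry already built on a previous call */
		if (isnew)
			*isnew = false;
	}
	else
	{
		/* first access: build the entry and tell the caller */
		e->key = key;
		e->used = true;
		if (isnew)
			*isnew = true;
	}
	return e;
}
```

The caller simply checks the flag after each lookup and, only when it is true, pushes the entry onto its to-be-finished list - which is exactly the overhead question discussed above: one flag store per lookup, versus a separate function that walks the routing structure afterwards.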
[
{
"msg_contents": "Hi all,\n\nWhile going through the table AM callbacks, I have bumped into a\ncouple of references to heap. I think that we should make that more\ngeneric by using the term \"table\" as done when opening relations and\nsuch. Attached is a cleanup patch.\n\nWhile on it, I found a set of typos which looked like a copy-pasto\nwhich got spread => \"index_nfo\". I know, these are nits, but I think\nthat this also reduces the confusion with the way table AM callbacks\nare presented to extension developers.\n\nThanks,\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:09:46 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Table AM callbacks referring to heap in declarations (+typos)"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-01 15:09:46 -0400, Michael Paquier wrote:\n> While going through the table AM callbacks, I have bumped into a\n> couple of references to heap. I think that we should make that more\n> generic by using the term \"table\" as done when opening relations and\n> such. Attached is a cleanup patch.\n\nI'm unbothered by this, but I'm also not opposed to changing this. It's\nlargely just keeping the previous code / comment.\n\n\n> While on it, I found a set of typos which looked like a copy-pasto\n> which got spread => \"index_nfo\".\n\nYea, we should fix this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Jun 2019 12:22:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Table AM callbacks referring to heap in declarations (+typos)"
},
{
"msg_contents": "On Sat, Jun 01, 2019 at 12:22:10PM -0700, Andres Freund wrote:\n> On 2019-06-01 15:09:46 -0400, Michael Paquier wrote:\n>> While going through the table AM callbacks, I have bumped into a\n>> couple of references to heap. I think that we should make that more\n>> generic by using the term \"table\" as done when opening relations and\n>> such. Attached is a cleanup patch.\n> \n> I'm unbothered by this, but I'm also not opposed to changing this. It's\n> largely just keeping the previous code / comment.\n> \n>> While on it, I found a set of typos which looked like a copy-pasto\n>> which got spread => \"index_nfo\".\n> \n> Yea, we should fix this.\n\nThanks. Do you mind if I fix both then?\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:37:43 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Table AM callbacks referring to heap in declarations (+typos)"
},
{
"msg_contents": "\nOn 2019-06-01 15:37:43 -0400, Michael Paquier wrote:\n> On Sat, Jun 01, 2019 at 12:22:10PM -0700, Andres Freund wrote:\n> > On 2019-06-01 15:09:46 -0400, Michael Paquier wrote:\n> >> While going through the table AM callbacks, I have bumped into a\n> >> couple of references to heap. I think that we should make that more\n> >> generic by using the term \"table\" as done when opening relations and\n> >> such. Attached is a cleanup patch.\n> > \n> > I'm unbothered by this, but I'm also not opposed to changing this. It's\n> > largely just keeping the previous code / comment.\n> > \n> >> While on it, I found a set of typos which looked like a copy-pasto\n> >> which got spread => \"index_nfo\".\n> > \n> > Yea, we should fix this.\n> \n> Thanks. Do you mind if I fix both then?\n\nI don't mind at all (although it's imo not a fix for the s/heap/table)!\n\n- Andres\n\n\n",
"msg_date": "Sat, 1 Jun 2019 12:43:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Table AM callbacks referring to heap in declarations (+typos)"
},
{
"msg_contents": "On Sat, Jun 01, 2019 at 12:43:11PM -0700, Andres Freund wrote:\n> I don't mind at all (although it's imo not a fix for the s/heap/table)!\n\nThanks, committed what I had.\n--\nMichael",
"msg_date": "Tue, 4 Jun 2019 09:50:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Table AM callbacks referring to heap in declarations (+typos)"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have bumped into a couple of issues with psql completion for access\nmethods:\n1) CREATE INDEX USING suggests both index and table AMs.\n2) CREATE TABLE USING has no completion support, USING not being\nincluded in the completion, and the follow-up table AMs are missing as\nwell.\n3) CREATE ACCESS METHOD TYPE suggests only INDEX.\n\nAttached is a patch to close the gap. Thoughts?\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:10:07 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "psql completion bugs with access methods"
},
{
"msg_contents": "Hi,\n\nI'm not sure I understand starting 10 threads about approximately the\nsame topic. That seems purely confusing.\n\nOn 2019-06-01 15:10:07 -0400, Michael Paquier wrote:\n> I have bumped into a couple of issues with psql completion for access\n> methods:\n> 1) CREATE INDEX USING suggests both index and table AMs.\n\nLet's fix that.\n\n\n> 2) CREATE TABLE USING has no completion support, USING not being\n> included in the completion, and the follow-up table AMs are missing as\n> well.\n> 3) CREATE ACCESS METHOD TYPE suggests only INDEX.\n\nI don't think these are bugs. I'm fine with adding those for 12, but I\ndon't think it's needed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Jun 2019 12:25:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: psql completion bugs with access methods"
},
{
"msg_contents": "On Sat, Jun 01, 2019 at 12:25:29PM -0700, Andres Freund wrote:\n> I'm not sure I understand starting 10 threads about approximately the\n> same topic. That seems purely confusing.\n\nWell, each topic is separated IMO and has a separate patch, so I just\nwanted to keep the discussion of each issue clear.\n\n> On 2019-06-01 15:10:07 -0400, Michael Paquier wrote:\n>> I have bumped into a couple of issues with psql completion for access\n>> methods:\n>> 1) CREATE INDEX USING suggests both index and table AMs.\n> \n> Let's fix that.\n> \n>> 2) CREATE TABLE USING has no completion support, USING not being\n>> included in the completion, and the follow-up table AMs are missing as\n>> well.\n>> 3) CREATE ACCESS METHOD TYPE suggests only INDEX.\n> \n> I don't think these are bugs. I'm fine with adding those for 12, but I\n> don't think it's needed.\n\nI would just fix both. Once you apply the filtering of access AMs for\nindexes, the rest just makes sense to get done as well. If you are\nstrongly opposed to that, I am fine not to fix it, but as we're on\nit.\n--\nMichael",
"msg_date": "Sat, 1 Jun 2019 15:41:29 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: psql completion bugs with access methods"
},
{
"msg_contents": "On 2019-06-01 15:41:29 -0400, Michael Paquier wrote:\n> On Sat, Jun 01, 2019 at 12:25:29PM -0700, Andres Freund wrote:\n> > I don't think these are bugs. I'm fine with adding those for 12, but I\n> > don't think it's needed.\n> \n> I would just fix both. Once you apply the filtering of access AMs for\n> indexes, the rest just makes sense to get done as well. If you are\n> strongly opposed to that, I am fine not to fix it, but as we're on\n> it.\n\n\"I'm fine with adding those for 12\", so no, I'm not strongly opposed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Jun 2019 12:44:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: psql completion bugs with access methods"
},
{
"msg_contents": "On Sat, Jun 01, 2019 at 12:44:05PM -0700, Andres Freund wrote:\n> \"I'm fine with adding those for 12\", so no, I'm not strongly opposed.\n\nOK, fixed this one for now.\n--\nMichael",
"msg_date": "Mon, 3 Jun 2019 11:04:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: psql completion bugs with access methods"
}
] |
[
{
"msg_contents": "Hi, hackers!\nI'm a student participating in GSoC 2019 and my project is related to TOAST\nslices.\nWhile getting familiar with the PostgreSQL codebase, I found that\nPG_DETOAST_DATUM_SLICE, when run on a compressed TOAST entry, will fetch\nall compressed data chunks and then extract the relevant slice. Obviously, this\nis unnecessary; we only need to fetch the data chunks we need.\n\nThe patch optimizes partial TOAST decompression.\nFor an example of the improvement possible, consider this trivial case:\n---------------------------------------------------------------------\ncreate table slicingtest (\nid serial primary key,\na text\n);\n\ninsert into slicingtest (a) select\nrepeat('1234567890-=abcdefghijklmnopqrstuvwxyz', 1000000) as a from\ngenerate_series(1,100);\n\\timing\nselect sum(length(substr(a, 0, 20))) from slicingtest;\n---------------------------------------------------------------------\nenvironment: Linux 4.15.0-33-generic #36~16.04.1-Ubuntu x86_64 GNU/Linux\nOn master, I get\nTime: 28.123 ms (ten-run average)\nWith the patch, I get\nTime: 2.306 ms (ten-run average)\n\nThis seems to have a 10x improvement. If the number of toast data chunks is\nlarger, I believe the patch can play a greater role; there are about 200\nrelated TOAST data chunks for each entry in this case.\n\nRelated discussion:\nhttps://www.postgresql.org/message-id/flat/CACowWR07EDm7Y4m2kbhN_jnys%3DBBf9A6768RyQdKm_%3DNpkcaWg%40mail.gmail.com\n\nBest regards, Binguo Bao.",
"msg_date": "Sun, 2 Jun 2019 22:48:34 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Optimize partial TOAST decompression"
},
{
"msg_contents": "Hi, Binguo!\n\n> 2 июня 2019 г., в 19:48, Binguo Bao <djydewang@gmail.com> написал(а):\n> \n> Hi, hackers!\n....\n> This seems to have a 10x improvement. If the number of toast data chunks is larger, I believe the patch can play a greater role; there are about 200 related TOAST data chunks for each entry in this case.\n\nThat's really cool that you could produce a meaningful patch long before the end of GSoC!\n\nI'll describe what is going on a little:\n1. We have a compressed value, which resides in the TOAST table.\n2. We want only some fraction of this value. We want some prefix with length L.\n3. Previously Paul Ramsey submitted a patch that omits decompression of the value beyond the desired L bytes.\n4. Binguo's patch tries not to fetch compressed data which will not be needed by the decompressor. In fact it fetches L bytes from the TOAST table.\n\nThis is not correct: L bytes of compressed data cannot always be decoded into at least L bytes of data. At worst we have one control byte per 8 literal bytes. This means at most we need (L*9 + 8) / 8 bytes with the current pglz format.\n\nAlso, I'm not sure you use SET_VARSIZE_COMPRESSED correctly...\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 23 Jun 2019 14:23:54 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
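The worst-case bound Andrey describes above (at most one control byte per 8 literal bytes, so up to (L*9 + 8) / 8 compressed bytes may be needed to decode L raw bytes) can be captured in a small helper. The sketch below is only an illustration of that arithmetic, using 64-bit math to avoid overflow and capping the result at the total compressed size, as suggested later in the thread; the name follows the pglz_maximum_compressed_size() function the patch eventually adds, but the body here is not the committed implementation:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Worst-case number of compressed bytes needed to decode raw_slice_size
 * bytes of output with the pglz format: one control byte per 8 literal
 * bytes, i.e. (raw_slice_size * 9 + 8) / 8, and never more than the
 * whole compressed datum.  Illustrative sketch, not the committed code.
 */
static int32_t
pglz_maximum_compressed_size(int32_t raw_slice_size,
							 int32_t total_compressed_size)
{
	/* 64-bit arithmetic so the multiply cannot overflow int32 */
	int64_t		max_size = ((int64_t) raw_slice_size * 9 + 8) / 8;

	/* we can never need more than the whole compressed value */
	if (max_size > total_compressed_size)
		max_size = total_compressed_size;

	return (int32_t) max_size;
}
```

For example, decoding a 20-byte prefix needs at most (20*9 + 8) / 8 = 23 compressed bytes, however large the full value is.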
{
"msg_contents": "> This is not correct: L bytes of compressed data cannot always be\ndecoded into at least L bytes of data. At worst we have one control byte\nper 8 literal bytes. This means at most we need (L*9 + 8) / 8\nbytes with the current pglz format.\n\nGood catch! I've corrected the related code in the patch.\n\n> Also, I'm not sure you use SET_VARSIZE_COMPRESSED correctly...\nI followed the code in the toast_fetch_datum function[1], and I didn't see\nanything wrong with it.\n\nBest regards, Binguo Bao\n\n[1]\nhttps://github.com/postgres/postgres/blob/master/src/backend/access/heap/tuptoaster.c#L1898\n\nAndrey Borodin <x4mmm@yandex-team.ru> 于2019年6月23日周日 下午5:23写道:\n\n> Hi, Binguo!\n>\n> > 2 июня 2019 г., в 19:48, Binguo Bao <djydewang@gmail.com> написал(а):\n> >\n> > Hi, hackers!\n> ....\n> > This seems to have a 10x improvement. If the number of toast data chunks\n> is larger, I believe the patch can play a greater role; there are about 200\n> related TOAST data chunks for each entry in this case.\n>\n> That's really cool that you could produce a meaningful patch long before\n> the end of GSoC!\n>\n> I'll describe what is going on a little:\n> 1. We have a compressed value, which resides in the TOAST table.\n> 2. We want only some fraction of this value. We want some prefix with\n> length L.\n> 3. Previously Paul Ramsey submitted a patch that omits decompression of\n> the value beyond the desired L bytes.\n> 4. Binguo's patch tries not to fetch compressed data which will not be\n> needed by the decompressor. In fact it fetches L bytes from the TOAST table.\n>\n> This is not correct: L bytes of compressed data cannot always be\n> decoded into at least L bytes of data. At worst we have one control byte\n> per 8 literal bytes. This means at most we need (L*9 + 8) / 8\n> bytes with the current pglz format.\n>\n> Also, I'm not sure you use SET_VARSIZE_COMPRESSED correctly...\n>\n> Best regards, Andrey Borodin.",
"msg_date": "Mon, 24 Jun 2019 10:53:49 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Hi!\nPlease, do not use top-posting, i.e. the reply style where you quote the whole message under your response. It makes reading the archives tedious.\n\n> 24 июня 2019 г., в 7:53, Binguo Bao <djydewang@gmail.com> написал(а):\n> \n>> This is not correct: L bytes of compressed data cannot always be decoded into at least L bytes of data. At worst we have one control byte per 8 literal bytes. This means at most we need (L*9 + 8) / 8 bytes with the current pglz format.\n> \n> Good catch! I've corrected the related code in the patch.\n> ...\n> <0001-Optimize-partial-TOAST-decompression-2.patch>\n\nI've taken a look at the code.\nI think we should extract a function for the computation of max_compressed_size and put it somewhere along with the pglz code, just in case someone changes something about pglz later, so that they do not forget about the compression algorithm's assumptions.\n\nAlso I suggest just using 64-bit computation to avoid overflows. And I think it is worth checking whether max_compressed_size covers the whole data, and using the min of (max_compressed_size, uncompressed_data_size).\n\nAlso you declared needsize and max_compressed_size too far from their use. But this will be solved by the function extraction anyway.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 29 Jun 2019 15:48:02 +0200",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Hi!\n\n> Andrey Borodin <x4mmm@yandex-team.ru> 于2019年6月29日周六 下午9:48写道:\n\n> Hi!\n> Please, do not use top-posting, i.e. reply style where you quote whole\n> message under your response. It makes reading of archives terse.\n>\n> > 24 июня 2019 г., в 7:53, Binguo Bao <djydewang@gmail.com> написал(а):\n> >\n> >> This is not correct: L bytes of compressed data do not always can be\n> decoded into at least L bytes of data. At worst we have one control byte\n> per 8 bytes of literal bytes. This means at most we need (L*9 + 8) / 8\n> bytes with current pglz format.\n> >\n> > Good catch! I've corrected the related code in the patch.\n> > ...\n> > <0001-Optimize-partial-TOAST-decompression-2.patch>\n>\n> I've took a look into the code.\n> I think we should extract function for computation of max_compressed_size\n> and put it somewhere along with pglz code. Just in case something will\n> change something about pglz so that they would not forget about compression\n> algorithm assumption.\n>\n> Also I suggest just using 64 bit computation to avoid overflows. And I\n> think it worth to check if max_compressed_size is whole data and use min of\n> (max_compressed_size, uncompressed_data_size).\n>\n> Also you declared needsize and max_compressed_size too far from use. But\n> this will be solved by function extraction anyway.\n>\n> Thanks!\n>\n> Best regards, Andrey Borodin.\n\n\nThanks for the suggestion.\nI've extracted function for computation for max_compressed_size and put the\nfunction into pg_lzcompress.c.\n\nBest regards, Binguo Bao.",
"msg_date": "Mon, 1 Jul 2019 21:46:28 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Mon, Jul 1, 2019 at 6:46 AM Binguo Bao <djydewang@gmail.com> wrote:\n> > Andrey Borodin <x4mmm@yandex-team.ru> 于2019年6月29日周六 下午9:48写道:\n>> I've took a look into the code.\n>> I think we should extract function for computation of max_compressed_size and put it somewhere along with pglz code. Just in case something will change something about pglz so that they would not forget about compression algorithm assumption.\n>>\n>> Also I suggest just using 64 bit computation to avoid overflows. And I think it worth to check if max_compressed_size is whole data and use min of (max_compressed_size, uncompressed_data_size).\n>>\n>> Also you declared needsize and max_compressed_size too far from use. But this will be solved by function extraction anyway.\n>>\n> Thanks for the suggestion.\n> I've extracted function for computation for max_compressed_size and put the function into pg_lzcompress.c.\n\nThis looks good to me. A little commentary around why\npglz_maximum_compressed_size() returns a universally correct answer\n(there's no way the compressed size can ever be larger than this\nbecause...) would be nice for peasants like myself.\n\nIf you're looking to continue down this code line in your next patch,\nthe next TODO item is a little more involved: a user-land (ala\nPG_DETOAST_DATUM) iterator API for access of TOAST datums would allow\nthe optimization of searching of large objects like JSONB types, and\nso on, where the thing you are looking for is not at a known location\nin the object. So, things like looking for a particular substring in a\nstring, or looking for a particular key in a JSONB. \"Iterate until you\nfind the thing.\" would allow optimization of some code lines that\ncurrently require full decompression of the objects.\n\nP.\n\n\n",
"msg_date": "Tue, 2 Jul 2019 07:46:13 -0700",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
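Paul's "iterate until you find the thing" idea above can be modeled as a pull-style chunk iterator: the consumer fetches (and would decompress) one chunk at a time and stops as soon as the search succeeds, instead of detoasting the whole value up front. A toy sketch with invented names - the real proposal is the de-TOAST iterator patch linked in the follow-up message:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Toy model of incremental de-TOASTing: pull fixed-size chunks from a
 * value and stop early once the search target is found.  All names are
 * invented for illustration; this is not the actual iterator API.
 */
#define CHUNK_SIZE 4

typedef struct DetoastIter
{
	const char *data;			/* stands in for the full detoasted value */
	int			len;
	int			pos;			/* how much we have "fetched" so far */
} DetoastIter;

/* Fetch the next chunk; returns false when the value is exhausted. */
static bool
detoast_iterate(DetoastIter *it, const char **chunk, int *chunk_len)
{
	if (it->pos >= it->len)
		return false;
	*chunk = it->data + it->pos;
	*chunk_len = (it->len - it->pos < CHUNK_SIZE) ? it->len - it->pos : CHUNK_SIZE;
	it->pos += *chunk_len;
	return true;
}

/* Number of chunks fetched before ch was found, or -1 if absent. */
static int
chunks_until_char(const char *s, char ch)
{
	DetoastIter it = {s, (int) strlen(s), 0};
	const char *chunk;
	int			chunk_len;
	int			nfetched = 0;

	while (detoast_iterate(&it, &chunk, &chunk_len))
	{
		nfetched++;
		if (memchr(chunk, ch, chunk_len) != NULL)
			return nfetched;	/* found: no need to fetch the rest */
	}
	return -1;
}
```

When the target sits near the front of a large value, the caller touches only the first few chunks - the same saving the substr() optimization gets for known offsets, generalized to searches at unknown positions.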
{
"msg_contents": "Paul Ramsey <pramsey@cleverelephant.ca> 于2019年7月2日周二 下午10:46写道:\n\n> This looks good to me. A little commentary around why\n> pglz_maximum_compressed_size() returns a universally correct answer\n> (there's no way the compressed size can ever be larger than this\n> because...) would be nice for peasants like myself.\n>\n> If you're looking to continue down this code line in your next patch,\n> the next TODO item is a little more involved: a user-land (ala\n> PG_DETOAST_DATUM) iterator API for access of TOAST datums would allow\n> the optimization of searching of large objects like JSONB types, and\n> so on, where the thing you are looking for is not at a known location\n> in the object. So, things like looking for a particular substring in a\n> string, or looking for a particular key in a JSONB. \"Iterate until you\n> find the thing.\" would allow optimization of some code lines that\n> currently require full decompression of the objects.\n>\n> P.\n>\n\nThanks for your comment. I've updated the patch.\nAs for the iterator API, I've implemented a de-TOAST iterator actually[0].\nAnd I’m looking for more of its application scenarios and perfecting it.\nAny comments would be much appreciated.\n\nBest Regards, Binguo Bao.\n\n[0]\nhttps://www.postgresql.org/message-id/flat/CAL-OGks_onzpc9M9bXPCztMofWULcFkyeCeKiAgXzwRL8kXiag@mail.gmail.com",
"msg_date": "Thu, 4 Jul 2019 00:06:13 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "\n> 3 июля 2019 г., в 18:06, Binguo Bao <djydewang@gmail.com> написал(а):\n> \n> Paul Ramsey <pramsey@cleverelephant.ca> 于2019年7月2日周二 下午10:46写道:\n> This looks good to me. A little commentary around why\n> pglz_maximum_compressed_size() returns a universally correct answer\n> (there's no way the compressed size can ever be larger than this\n> because...) would be nice for peasants like myself.\n> ...\n> \n> Thanks for your comment. I've updated the patch.\n\n\nThanks Binguo and Paul! From my POV the patch looks ready for committer, so I switched the state on CF.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 4 Jul 2019 11:10:24 +0200",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Thu, Jul 04, 2019 at 11:10:24AM +0200, Andrey Borodin wrote:\n>\n>> 3 июля 2019 г., в 18:06, Binguo Bao <djydewang@gmail.com> написал(а):\n>>\n>> Paul Ramsey <pramsey@cleverelephant.ca> 于2019年7月2日周二 下午10:46写道:\n>> This looks good to me. A little commentary around why\n>> pglz_maximum_compressed_size() returns a universally correct answer\n>> (there's no way the compressed size can ever be larger than this\n>> because...) would be nice for peasants like myself.\n>> ...\n>>\n>> Thanks for your comment. I've updated the patch.\n>\n>\n> Thanks Binguo and Paul! From my POV the patch looks ready for committer, so I switched the state on CF.\n>\n\nI've done a bit of testing and benchmarking on this patch today, and\nthere's a bug somewhere, making it look like the data is corrupted.\n\nWhat I'm seeing is this:\n\nCREATE TABLE t (a text);\n\n-- attached is data for one row\nCOPY t FROM '/tmp/t.data';\n\n\nSELECT length(substr(a,1000)) from t;\npsql: ERROR: compressed data is corrupted\n\nSELECT length(substr(a,0,1000)) from t;\n length \n--------\n 999\n(1 row)\n\nSELECT length(substr(a,1000)) from t;\npsql: ERROR: invalid memory alloc request size 2018785106\n\nThat's quite bizarre behavior - it does work with a prefix, but not with\nsuffix. And the exact ERROR changes after the prefix query. (Of course,\non master it works in all cases.)\n\nThe backtrace (with the patch applied) looks like this:\n\n#0 toast_decompress_datum (attr=0x12572e0) at tuptoaster.c:2291\n#1 toast_decompress_datum (attr=0x12572e0) at tuptoaster.c:2277\n#2 0x00000000004c3b08 in heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=0, slicelength=-1) at tuptoaster.c:315\n#3 0x000000000085c1e5 in pg_detoast_datum_slice (datum=<optimized out>, first=<optimized out>, count=<optimized out>) at fmgr.c:1767\n#4 0x0000000000833b7a in text_substring (str=133761519127512, start=0, length=<optimized out>, length_not_specified=<optimized out>) at varlena.c:956\n...\n\nI've only observed this with a very small number of rows (the data is\ngenerated randomly with different compressibility etc.), so I'm only\nattaching one row that exhibits this issue.\n\nMy guess is toast_fetch_datum_slice() gets confused by the headers or\nsomething, or something like that. FWIW the new code added to this\nfunction does not adhere to our code style, and would deserve some\nadditional explanation of what it's doing/why. Same for the\nheap_tuple_untoast_attr_slice, BTW.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 4 Jul 2019 19:46:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Of course, I forgot to attach the files, so here they are.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 4 Jul 2019 20:02:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> 于2019年7月5日周五 上午1:46写道:\n\n> I've done a bit of testing and benchmarking on this patch today, and\n> there's a bug somewhere, making it look like the data is corrupted.\n>\n> What I'm seeing is this:\n>\n> CREATE TABLE t (a text);\n>\n> -- attached is data for one row\n> COPY t FROM '/tmp/t.data';\n>\n>\n> SELECT length(substr(a,1000)) from t;\n> psql: ERROR: compressed data is corrupted\n>\n> SELECT length(substr(a,0,1000)) from t;\n> length\n> --------\n> 999\n> (1 row)\n>\n> SELECT length(substr(a,1000)) from t;\n> psql: ERROR: invalid memory alloc request size 2018785106\n>\n> That's quite bizarre behavior - it does work with a prefix, but not with\n> suffix. And the exact ERROR changes after the prefix query. (Of course,\n> on master it works in all cases.)\n>\n> The backtrace (with the patch applied) looks like this:\n>\n> #0 toast_decompress_datum (attr=0x12572e0) at tuptoaster.c:2291\n> #1 toast_decompress_datum (attr=0x12572e0) at tuptoaster.c:2277\n> #2 0x00000000004c3b08 in heap_tuple_untoast_attr_slice (attr=<optimized\n> out>, sliceoffset=0, slicelength=-1) at tuptoaster.c:315\n> #3 0x000000000085c1e5 in pg_detoast_datum_slice (datum=<optimized out>,\n> first=<optimized out>, count=<optimized out>) at fmgr.c:1767\n> #4 0x0000000000833b7a in text_substring (str=133761519127512, start=0,\n> length=<optimized out>, length_not_specified=<optimized out>) at\n> varlena.c:956\n> ...\n>\n> I've only observed this with a very small number of rows (the data is\n> generated randomly with different compressibility etc.), so I'm only\n> attaching one row that exhibits this issue.\n>\n> My guess is toast_fetch_datum_slice() gets confused by the headers or\n> something, or something like that. FWIW the new code added to this\n> function does not adhere to our code style, and would deserve some\n> additional explanation of what it's doing/why. Same for the\n> heap_tuple_untoast_attr_slice, BTW.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHi, Tomas!\nThanks for your testing and the suggestion.\n\nThat's quite bizarre behavior - it does work with a prefix, but not with\n> suffix. And the exact ERROR changes after the prefix query.\n\n\nI think the bug is caused by \"#2 0x00000000004c3b08 in\nheap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=0,\nslicelength=-1) at tuptoaster.c:315\",\nsince I ignore the case where slicelength is negative, and I've appended\nsome comments for heap_tuple_untoast_attr_slice for the case.\n\nFWIW the new code added to this\n> function does not adhere to our code style, and would deserve some\n> additional explanation of what it's doing/why. Same for the\n> heap_tuple_untoast_attr_slice, BTW.\n\n\nI've added more comments to explain the code's behavior.\nBesides, I also modified the macro \"TOAST_COMPRESS_RAWDATA\" to\n\"TOAST_COMPRESS_DATA\" since\nit is used to get toast compressed data rather than raw data.\n\nBest Regards, Binguo Bao.",
"msg_date": "Sat, 6 Jul 2019 02:27:56 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Sat, Jul 06, 2019 at 02:27:56AM +0800, Binguo Bao wrote:\n>Hi, Tomas!\n>Thanks for your testing and the suggestion.\n>\n>That's quite bizarre behavior - it does work with a prefix, but not with\n>> suffix. And the exact ERROR changes after the prefix query.\n>\n>\n>I think the bug is caused by \"#2 0x00000000004c3b08 in\n>heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=0,\n>slicelength=-1) at tuptoaster.c:315\",\n>since I ignore the case where slicelength is negative, and I've appended\n>some comments for heap_tuple_untoast_attr_slice for the case.\n>\n>FWIW the new code added to this\n>> function does not adhere to our code style, and would deserve some\n>> additional explanation of what it's doing/why. Same for the\n>> heap_tuple_untoast_attr_slice, BTW.\n>\n>\n>I've added more comments to explain the code's behavior.\n>Besides, I also modified the macro \"TOAST_COMPRESS_RAWDATA\" to\n>\"TOAST_COMPRESS_DATA\" since\n>it is used to get toast compressed data rather than raw data.\n>\n\nThanks, this seems to address the issue - I can no longer reproduce the\ncrashes, allowing the benchmark to complete. I'm attaching the script I\nused and spreadsheet with a summary of results.\n\nFor the cases I've tested (100k - 10M values, different compressibility,\ndifferent prefix/length values), the results are kinda mixed - many\ncases got much faster (~2x), but other cases got slower too. We're\nhowever talking about queries taking a couple of milliseconds, so in\nabsolute numbers the differences are small.\n\nThat does not mean the optimization is useless - but the example shared\nat the beginning of this thread is quite extreme, as the values are\nextremely compressible. Each value is ~38MB (in plaintext), but a table\nwith 100 such values has only ~40MB. That's 100:1 compression ratio,\nwhich I think is not typical for real-world data sets.\n\nThe data I've used are less extreme, depending on the fraction of random\ndata in values.\n\nI went through the code too. I've reworded a couple of comments and code\nstyle issues, but there are a couple of more serious issues.\n\n\n1) Why rename TOAST_COMPRESS_RAWDATA to TOAST_COMPRESS_DATA?\n\nThis seems unnecessary, and it discards the clear hint that it's about\naccessing the *raw* data, and the relation to TOAST_COMPRESS_RAWSIZE.\nIMHO we should keep the original naming.\n\n\n2) pglz_maximum_compressed_size signatures are confusing\n\nThere are two places with pglz_maximum_compressed_size signature, and\nthose places are kinda out of sync when it comes to parameter names:\n\n int32\n pglz_maximum_compressed_size(int32 raw_slice_size,\n int32 total_compressed_size)\n\n extern\n int32 pglz_maximum_compressed_size(int32 raw_slice_size,\n int32 raw_size);\n\nAlso, pg_lzcompress.c has no concept of a \"slice\" because it only deals\nwith simple compression, slicing is responsibility of the tuptoaster. So\nwe should not mix those two, not even in comments.\n\n\nI propose tweaks per the attached patch - I think it makes the code\nclearer, and it's mostly cosmetic stuff. But I haven't tested the\nchanges beyond \"it compiles\".\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 6 Jul 2019 17:23:37 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Sat, Jul 06, 2019 at 05:23:37PM +0200, Tomas Vondra wrote:\n>On Sat, Jul 06, 2019 at 02:27:56AM +0800, Binguo Bao wrote:\n>>Hi, Tomas!\n>>Thanks for your testing and the suggestion.\n>>\n>>That's quite bizarre behavior - it does work with a prefix, but not with\n>>>suffix. And the exact ERROR changes after the prefix query.\n>>\n>>\n>>I think bug is caused by \"#2 0x00000000004c3b08 in\n>>heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=0,\n>>slicelength=-1) at tuptoaster.c:315\",\n>>since I ignore the case where slicelength is negative, and I've appended\n>>some comments for heap_tuple_untoast_attr_slice for the case.\n>>\n>>FWIW the new code added to this\n>>>function does not adhere to our code style, and would deserve some\n>>>additional explanation of what it's doing/why. Same for the\n>>>heap_tuple_untoast_attr_slice, BTW.\n>>\n>>\n>>I've added more comments to explain the code's behavior.\n>>Besides, I also modified the macro \"TOAST_COMPRESS_RAWDATA\" to\n>>\"TOAST_COMPRESS_DATA\" since\n>>it is used to get toast compressed data rather than raw data.\n>>\n>\n>Thanks, this seems to address the issue - I can no longer reproduce the\n>crashes, allowing the benchmark to complete. I'm attaching the script I\n>used and spreadsheet with a summary of results.\n>\n>For the cases I've tested (100k - 10M values, different compressibility,\n>different prefix/length values), the results are kinda mixed - many\n>cases got much faster (~2x), but other cases got slower too. We're\n>however talking about queries taking a couple of miliseconds, so in\n>absolute numbers the differences are small.\n>\n>That does not mean the optimization is useless - but the example shared\n>at the beginning of this thread is quite extreme, as the values are\n>extremely compressible. Each value is ~38MB (in plaintext), but a table\n>with 100 such values has only ~40MB. 
That's 100:1 compression ratio,\n>which I think is not typical for real-world data sets.\n>\n>The data I've used are less extreme, depending on the fraction of random\n>data in values.\n>\n>I went through the code too. I've reworded a couple of comments and code\n>style issues, but there are a couple of more serious issues.\n>\n>\n>1) Why rename TOAST_COMPRESS_RAWDATA to TOAST_COMPRESS_DATA?\n>\n>This seems unnecessary, and it discards the clear hint that it's about\n>accessing the *raw* data, and the relation to TOAST_COMPRESS_RAWSIZE.\n>IMHO we should keep the original naming.\n>\n>\n>2) pglz_maximum_compressed_size signatures are confusing\n>\n>There are two places with pglz_maximum_compressed_size signature, and\n>those places are kinda out of sync when it comes to parameter names:\n>\n> int32\n> pglz_maximum_compressed_size(int32 raw_slice_size,\n> int32 total_compressed_size)\n>\n> extern\n> int32 pglz_maximum_compressed_size(int32 raw_slice_size,\n> int32 raw_size);\n>\n>Also, pg_lzcompress.c has no concept of a \"slice\" because it only deals\n>with simple compression, slicing is responsibility of the tuptoaster. So\n>we should not mix those two, not even in comments.\n>\n>\n>I propose tweaks per the attached patch - I think it makes the code\n>clearer, and it's mostly cosmetic stuff. But I haven't tested the\n>changes beyond \"it compiles\".\n>\n>\n>regards\n>\n\nFWIW I've done another round of tests with larger values (up to ~10MB)\nand the larger the values the better the speedup (a bit as expected).\nAttached is the script I used and a spreadsheet with a summary.\n\nWe still need a patch addressing the review comments, though.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 9 Jul 2019 23:12:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> 于2019年7月10日周三 上午5:12写道:\n\n> On Sat, Jul 06, 2019 at 05:23:37PM +0200, Tomas Vondra wrote:\n> >On Sat, Jul 06, 2019 at 02:27:56AM +0800, Binguo Bao wrote:\n> >>Hi, Tomas!\n> >>Thanks for your testing and the suggestion.\n> >>\n> >>That's quite bizarre behavior - it does work with a prefix, but not with\n> >>>suffix. And the exact ERROR changes after the prefix query.\n> >>\n> >>\n> >>I think bug is caused by \"#2 0x00000000004c3b08 in\n> >>heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=0,\n> >>slicelength=-1) at tuptoaster.c:315\",\n> >>since I ignore the case where slicelength is negative, and I've appended\n> >>some comments for heap_tuple_untoast_attr_slice for the case.\n> >>\n> >>FWIW the new code added to this\n> >>>function does not adhere to our code style, and would deserve some\n> >>>additional explanation of what it's doing/why. Same for the\n> >>>heap_tuple_untoast_attr_slice, BTW.\n> >>\n> >>\n> >>I've added more comments to explain the code's behavior.\n> >>Besides, I also modified the macro \"TOAST_COMPRESS_RAWDATA\" to\n> >>\"TOAST_COMPRESS_DATA\" since\n> >>it is used to get toast compressed data rather than raw data.\n> >>\n> >\n> >Thanks, this seems to address the issue - I can no longer reproduce the\n> >crashes, allowing the benchmark to complete. I'm attaching the script I\n> >used and spreadsheet with a summary of results.\n> >\n> >For the cases I've tested (100k - 10M values, different compressibility,\n> >different prefix/length values), the results are kinda mixed - many\n> >cases got much faster (~2x), but other cases got slower too. We're\n> >however talking about queries taking a couple of miliseconds, so in\n> >absolute numbers the differences are small.\n> >\n> >That does not mean the optimization is useless - but the example shared\n> >at the beginning of this thread is quite extreme, as the values are\n> >extremely compressible. 
Each value is ~38MB (in plaintext), but a table\n> >with 100 such values has only ~40MB. Tha's 100:1 compression ratio,\n> >which I think is not typical for real-world data sets.\n> >\n> >The data I've used are less extreme, depending on the fraction of random\n> >data in values.\n> >\n> >I went through the code too. I've reworder a couple of comments and code\n> >style issues, but there are a couple of more serious issues.\n> >\n> >\n> >1) Why rename TOAST_COMPRESS_RAWDATA to TOAST_COMPRESS_DATA?\n> >\n> >This seems unnecessary, and it discards the clear hint that it's about\n> >accessing the *raw* data, and the relation to TOAST_COMPRESS_RAWSIZE.\n> >IMHO we should keep the original naming.\n> >\n> >\n> >2) pglz_maximum_compressed_size signatures are confusing\n> >\n> >There are two places with pglz_maximum_compressed_size signature, and\n> >those places are kinda out of sync when it comes to parameter names:\n> >\n> > int32\n> > pglz_maximum_compressed_size(int32 raw_slice_size,\n> > int32 total_compressed_size)\n> >\n> > extern\n> > int32 pglz_maximum_compressed_size(int32 raw_slice_size,\n> > int32 raw_size);\n> >\n> >Also, pg_lzcompress.c has no concept of a \"slice\" because it only deals\n> >with simple compression, slicing is responsibility of the tuptoaster. So\n> >we should not mix those two, not even in comments.\n> >\n> >\n> >I propose tweaks per the attached patch - I think it makes the code\n> >clearer, and it's mostly cosmetic stuff. 
But I haven't tested the\n> >changes beyond \"it compiles\".\n> >\n> >\n> >regards\n> >\n>\n> FWIW I've done another round of tests with larger values (up to ~10MB)\n> and the larger the values the better the speedup (a bit as expected).\n> Attached is the script I used and a spreasheet with a summary.\n>\n> We still need a patch addressing the review comments, though.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nHi, Tomas!\nSorry for the late reply.\nThank you for further testing, I am trying to reproduce your first test\nsummary,\nsince I think the performance of the patch will not drop in almost all\ncases.\n\nBesides, If a value is composed of random characters,\nit will be hard to be compressed, because pglz requires a 25% compression\nratio by default or not worth it.\nThis means that querying the value will not trigger the patch. But the\nfirst test results show that the patch\nis slower than the master when the value is composed of random characters,\nwhich is confusing.\n\n From the second test result, we can infer that the first test result\nwas indeed affected by a random disturbance in the case of a small\ntime-consuming.\n\n> We still need a patch addressing the review comments, though.\ndone:)",
"msg_date": "Wed, 10 Jul 2019 13:35:25 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Wed, Jul 10, 2019 at 01:35:25PM +0800, Binguo Bao wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> 于2019年7月10日周三 上午5:12写道:\n>\n>> On Sat, Jul 06, 2019 at 05:23:37PM +0200, Tomas Vondra wrote:\n>> >On Sat, Jul 06, 2019 at 02:27:56AM +0800, Binguo Bao wrote:\n>> >>Hi, Tomas!\n>> >>Thanks for your testing and the suggestion.\n>> >>\n>> >>That's quite bizarre behavior - it does work with a prefix, but not with\n>> >>>suffix. And the exact ERROR changes after the prefix query.\n>> >>\n>> >>\n>> >>I think bug is caused by \"#2 0x00000000004c3b08 in\n>> >>heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=0,\n>> >>slicelength=-1) at tuptoaster.c:315\",\n>> >>since I ignore the case where slicelength is negative, and I've appended\n>> >>some comments for heap_tuple_untoast_attr_slice for the case.\n>> >>\n>> >>FWIW the new code added to this\n>> >>>function does not adhere to our code style, and would deserve some\n>> >>>additional explanation of what it's doing/why. Same for the\n>> >>>heap_tuple_untoast_attr_slice, BTW.\n>> >>\n>> >>\n>> >>I've added more comments to explain the code's behavior.\n>> >>Besides, I also modified the macro \"TOAST_COMPRESS_RAWDATA\" to\n>> >>\"TOAST_COMPRESS_DATA\" since\n>> >>it is used to get toast compressed data rather than raw data.\n>> >>\n>> >\n>> >Thanks, this seems to address the issue - I can no longer reproduce the\n>> >crashes, allowing the benchmark to complete. I'm attaching the script I\n>> >used and spreadsheet with a summary of results.\n>> >\n>> >For the cases I've tested (100k - 10M values, different compressibility,\n>> >different prefix/length values), the results are kinda mixed - many\n>> >cases got much faster (~2x), but other cases got slower too. 
We're\n>> >however talking about queries taking a couple of miliseconds, so in\n>> >absolute numbers the differences are small.\n>> >\n>> >That does not mean the optimization is useless - but the example shared\n>> >at the beginning of this thread is quite extreme, as the values are\n>> >extremely compressible. Each value is ~38MB (in plaintext), but a table\n>> >with 100 such values has only ~40MB. Tha's 100:1 compression ratio,\n>> >which I think is not typical for real-world data sets.\n>> >\n>> >The data I've used are less extreme, depending on the fraction of random\n>> >data in values.\n>> >\n>> >I went through the code too. I've reworder a couple of comments and code\n>> >style issues, but there are a couple of more serious issues.\n>> >\n>> >\n>> >1) Why rename TOAST_COMPRESS_RAWDATA to TOAST_COMPRESS_DATA?\n>> >\n>> >This seems unnecessary, and it discards the clear hint that it's about\n>> >accessing the *raw* data, and the relation to TOAST_COMPRESS_RAWSIZE.\n>> >IMHO we should keep the original naming.\n>> >\n>> >\n>> >2) pglz_maximum_compressed_size signatures are confusing\n>> >\n>> >There are two places with pglz_maximum_compressed_size signature, and\n>> >those places are kinda out of sync when it comes to parameter names:\n>> >\n>> > int32\n>> > pglz_maximum_compressed_size(int32 raw_slice_size,\n>> > int32 total_compressed_size)\n>> >\n>> > extern\n>> > int32 pglz_maximum_compressed_size(int32 raw_slice_size,\n>> > int32 raw_size);\n>> >\n>> >Also, pg_lzcompress.c has no concept of a \"slice\" because it only deals\n>> >with simple compression, slicing is responsibility of the tuptoaster. So\n>> >we should not mix those two, not even in comments.\n>> >\n>> >\n>> >I propose tweaks per the attached patch - I think it makes the code\n>> >clearer, and it's mostly cosmetic stuff. 
But I haven't tested the\n>> >changes beyond \"it compiles\".\n>> >\n>> >\n>> >regards\n>> >\n>>\n>> FWIW I've done another round of tests with larger values (up to ~10MB)\n>> and the larger the values the better the speedup (a bit as expected).\n>> Attached is the script I used and a spreasheet with a summary.\n>>\n>> We still need a patch addressing the review comments, though.\n>>\n>>\n>> regards\n>>\n>> --\n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>>\n>\n\n>Hi, Tomas! Sorry for the late reply. Thank you for further testing, I\n>am trying to reproduce your first test summary, since I think the\n>performance of the patch will not drop in almost all cases.\n>\n\nI've done some changes to the test script since the first benchmark,\naiming to make the results more stable\n\n1) uses larger amount of data (10x more)\n\n2) the data set recreated for each run (to rule out random differences in\nthe random data affecting the runs differently)\n\n3) minor configuration changes (more shared buffers etc.)\n\nI don't think we need to worry about small differences (within ~5%) which\ncan easily be due to changes to binary layout. And otherwise results from\nthe second benchmark round seem much more stable.\n\n>Besides, If a value is composed of random characters,\n>it will be hard to be compressed, because pglz requires a 25% compression\n>ratio by default or not worth it.\n>This means that querying the value will not trigger the patch. But the\n>first test results show that the patch\n>is slower than the master when the value is composed of random characters,\n>which is confusing.\n>\n\nYes, I know. But the values have compressible and incompressible (random)\npart, so in most cases the value should be compressible, although with\nvarious compression ratio. I have not tracked the size of the loaded data\nso I don't know which cases happened to be compressed or not. 
I'll rerun\nthe test and I'll include this information.\n\n>From the second test result, we can infer that the first test result\n>was indeed affected by a random disturbance in the case of a small\n>time-consuming.\n>\n\nYes, I agree.\n\n>> We still need a patch addressing the review comments, though.\n>done:)\n\nThanks.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 10 Jul 2019 16:47:25 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Hello, can you please update this patch?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:38:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 05:38:34PM -0300, Alvaro Herrera wrote:\n>Hello, can you please update this patch?\n>\n\nI'm not the patch author, but I've been looking at the patch recently\nand I have a rebased version at hand - so attached.\n\nFWIW I believe the patch is solid and in good shape, and it got stuck\nafter I reported some benchmarks showing somewhat flaky performance. I\nknow Binguo Bao was trying to reproduce the benchmark, and I assume the\nsilence means he was not successful :-(\n\nOn the larger data set the patch however performed very nicely, so maybe\nI just did something stupid while running the smaller one, or maybe it's\njust noise (the queries were just a couple of ms in that test). I do\nplan to rerun the benchmarks and investigate a bit - if I find the patch\nis fine, I'd like to commit it shortly.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Sep 2019 01:00:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Fri, Sep 27, 2019 at 01:00:36AM +0200, Tomas Vondra wrote:\n>On Wed, Sep 25, 2019 at 05:38:34PM -0300, Alvaro Herrera wrote:\n>>Hello, can you please update this patch?\n>>\n>\n>I'm not the patch author, but I've been looking at the patch recently\n>and I have a rebased version at hand - so attached.\n>\n>FWIW I believe the patch is solid and in good shape, and it got stuck\n>after I reported some benchmarks showing somewhat flaky performance. I\n>know Binguo Bao was trying to reproduce the benchmark, and I assume the\n>silence means he was not successful :-(\n>\n>On the larger data set the patch however performed very nicely, so maybe\n>I just did something stupid while running the smaller one, or maybe it's\n>just noise (the queries were just a couple of ms in that test). I do\n>plan to rerun the benchmarks and investigate a bit - if I find the patch\n>is fine, I'd like to commit it shortly.\n>\n\nOK, I was just about to push this after polishing it a bit, but then I\nnoticed the patch does not address one of the points from Paul's review,\nasking for a comment explaining the pglz_maximum_compressed_size formula.\n\nI mean this:\n\n /*\n * Use int64 to prevent overflow during calculation.\n */\n compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n\nI'm not very familiar with pglz internals, but I'm a bit puzzled by\nthis. My first instinct was to compare it to this:\n\n #define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n\nbut clearly that's a very different (much simpler) formula. So why\nshouldn't pglz_maximum_compressed_size simply use this macro?\n\nRegarding benchmarks - as I mentioned, I've repeated the tests and\neverything seems fine. The results from the two usual machines are\navailable in [1]. There are only very few noise-level regressions and\nmany very significant speedups.\n\nI have a theory what went wrong in the first run that showed some\nregressions - it's possible the machine had CPU power management\nenabled. 
I can't check this retroactively, but it'd explain variability\nfor short queries, and smooth behavior on longer queries.\n\n[1] https://github.com/tvondra/toast-optimize\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Sep 2019 17:56:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "\n\n> On 30 Sept 2019, at 20:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> \n> I mean this:\n> \n> /*\n> * Use int64 to prevent overflow during calculation.\n> */\n> compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n> \n> I'm not very familiar with pglz internals, but I'm a bit puzzled by\n> this. My first instinct was to compare it to this:\n> \n> #define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n> \n> but clearly that's a very different (much simpler) formula. So why\n> shouldn't pglz_maximum_compressed_size simply use this macro?\n\ncompressed_size accounts for a possible increase of size during compression. pglz can consume up to 1 control byte for each 8 bytes of data in the worst case.\nEven if the data as a whole compresses well, a prefix can still be compressed extremely ineffectively. Thus, if you are going to decompress rawsize bytes, you need at most compressed_size bytes of compressed input.\n\n",
"msg_date": "Mon, 30 Sep 2019 21:20:22 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:\n>\n>\n>> 30 сент. 2019 г., в 20:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>> I mean this:\n>>\n>> /*\n>> * Use int64 to prevent overflow during calculation.\n>> */\n>> compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n>>\n>> I'm not very familiar with pglz internals, but I'm a bit puzzled by\n>> this. My first instinct was to compare it to this:\n>>\n>> #define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n>>\n>> but clearly that's a very different (much simpler) formula. So why\n>> shouldn't pglz_maximum_compressed_size simply use this macro?\n\n>\n>compressed_size accounts for possible increase of size during\n>compression. pglz can consume up to 1 control byte for each 8 bytes of\n>data in worst case.\n\nOK, but does that actually translate in to the formula? We essentially\nneed to count 8-byte chunks in raw data, and multiply that by 9. Which\ngives us something like\n\n nchunks = ((rawsize + 7) / 8) * 9;\n\nwhich is not quite what the patch does.\n\n>Even if whole data is compressed well - there can be prefix compressed\n>extremely ineffectively. Thus, if you are going to decompress rawsize\n>bytes, you need at most compressed_size bytes of compressed input.\n\nOK, that explains why we can't use the PGLZ_MAX_OUTPUT macro.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 30 Sep 2019 19:29:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "\n\n> 30 сент. 2019 г., в 22:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n> \n> On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:\n>> \n>> \n>>> 30 сент. 2019 г., в 20:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>> \n>>> I mean this:\n>>> \n>>> /*\n>>> * Use int64 to prevent overflow during calculation.\n>>> */\n>>> compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n>>> \n>>> I'm not very familiar with pglz internals, but I'm a bit puzzled by\n>>> this. My first instinct was to compare it to this:\n>>> \n>>> #define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n>>> \n>>> but clearly that's a very different (much simpler) formula. So why\n>>> shouldn't pglz_maximum_compressed_size simply use this macro?\n> \n>> \n>> compressed_size accounts for possible increase of size during\n>> compression. pglz can consume up to 1 control byte for each 8 bytes of\n>> data in worst case.\n> \n> OK, but does that actually translate in to the formula? We essentially\n> need to count 8-byte chunks in raw data, and multiply that by 9. Which\n> gives us something like\n> \n> nchunks = ((rawsize + 7) / 8) * 9;\n> \n> which is not quite what the patch does.\n\nI'm afraid neither formula is correct, but all this is hair-splitting differences.\n\nYour formula does not account for the fact that we may not need all bytes from last chunk.\nConsider desired decompressed size of 3 bytes. We may need 1 control byte and 3 literals, 4 bytes total\nBut nchunks = 9.\n\nBinguo's formula is appending 1 control bit per data byte and one extra control byte.\nConsider size = 8 bytes. We need 1 control byte, 8 literals, 9 total.\nBut compressed_size = 10.\n\nMathematically correct formula is\ncompressed_size = (int32) ((int64) rawsize * 9 + 7) / 8;\nHere we take one bit for each data byte, and 7 control bits for overflow.\n\nBut this equations make no big difference, each formula is safe. 
I'd pick one which is easier to understand and document (IMO, it's nchunks = ((rawsize + 7) / 8) * 9).\n\nThanks!\n\n--\nAndrey Borodin\nOpen source RDBMS development team leader\nYandex.Cloud\n\n\n\n",
"msg_date": "Tue, 1 Oct 2019 11:20:39 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Tue, Oct 01, 2019 at 11:20:39AM +0500, Andrey Borodin wrote:\n>\n>\n>> 30 сент. 2019 г., в 22:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>\n>> On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:\n>>>\n>>>\n>>>> 30 сент. 2019 г., в 20:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>>>\n>>>> I mean this:\n>>>>\n>>>> /*\n>>>> * Use int64 to prevent overflow during calculation.\n>>>> */\n>>>> compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n>>>>\n>>>> I'm not very familiar with pglz internals, but I'm a bit puzzled by\n>>>> this. My first instinct was to compare it to this:\n>>>>\n>>>> #define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n>>>>\n>>>> but clearly that's a very different (much simpler) formula. So why\n>>>> shouldn't pglz_maximum_compressed_size simply use this macro?\n>>\n>>>\n>>> compressed_size accounts for possible increase of size during\n>>> compression. pglz can consume up to 1 control byte for each 8 bytes of\n>>> data in worst case.\n>>\n>> OK, but does that actually translate in to the formula? We essentially\n>> need to count 8-byte chunks in raw data, and multiply that by 9. Which\n>> gives us something like\n>>\n>> nchunks = ((rawsize + 7) / 8) * 9;\n>>\n>> which is not quite what the patch does.\n>\n>I'm afraid neither formula is correct, but all this is hair-splitting differences.\n>\n\nSure. I just want to be sure the formula is safe and we won't end up\nusing too low value in some corner case.\n\n>Your formula does not account for the fact that we may not need all bytes from last chunk.\n>Consider desired decompressed size of 3 bytes. We may need 1 control byte and 3 literals, 4 bytes total\n>But nchunks = 9.\n>\n\nOK, so essentially this means my formula works with whole chunks, i.e.\nif we happen to need just a part of a decompressed chunk, we still\nrequest enough data to decompress it whole. 
This way we may request up\nto 7 extra bytes, which seems fine.\n\n>Binguo's formula is appending 1 control bit per data byte and one extra\n>control byte. Consider size = 8 bytes. We need 1 control byte, 8\n>literals, 9 total. But compressed_size = 10.\n>\n>Mathematically correct formula is compressed_size = (int32) ((int64)\n>rawsize * 9 + 7) / 8; Here we take one bit for each data byte, and 7\n>control bits for overflow.\n>\n>But this equations make no big difference, each formula is safe. I'd\n>pick one which is easier to understand and document (IMO, its nchunks =\n>((rawsize + 7) / 8) * 9).\n>\n\nI'd use the *mathematically correct* formula, it doesn't seem to be any\nmore complex, and the \"one bit per byte, complete bytes\" explanation\nseems quite understandable.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 1 Oct 2019 12:08:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Tue, Oct 01, 2019 at 12:08:05PM +0200, Tomas Vondra wrote:\n>On Tue, Oct 01, 2019 at 11:20:39AM +0500, Andrey Borodin wrote:\n>>\n>>\n>>>30 сент. 2019 г., в 22:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>>\n>>>On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:\n>>>>\n>>>>\n>>>>>30 сент. 2019 г., в 20:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>>>>\n>>>>>I mean this:\n>>>>>\n>>>>> /*\n>>>>> * Use int64 to prevent overflow during calculation.\n>>>>> */\n>>>>> compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n>>>>>\n>>>>>I'm not very familiar with pglz internals, but I'm a bit puzzled by\n>>>>>this. My first instinct was to compare it to this:\n>>>>>\n>>>>> #define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n>>>>>\n>>>>>but clearly that's a very different (much simpler) formula. So why\n>>>>>shouldn't pglz_maximum_compressed_size simply use this macro?\n>>>\n>>>>\n>>>>compressed_size accounts for possible increase of size during\n>>>>compression. pglz can consume up to 1 control byte for each 8 bytes of\n>>>>data in worst case.\n>>>\n>>>OK, but does that actually translate in to the formula? We essentially\n>>>need to count 8-byte chunks in raw data, and multiply that by 9. Which\n>>>gives us something like\n>>>\n>>> nchunks = ((rawsize + 7) / 8) * 9;\n>>>\n>>>which is not quite what the patch does.\n>>\n>>I'm afraid neither formula is correct, but all this is hair-splitting differences.\n>>\n>\n>Sure. I just want to be sure the formula is safe and we won't end up\n>using too low value in some corner case.\n>\n>>Your formula does not account for the fact that we may not need all bytes from last chunk.\n>>Consider desired decompressed size of 3 bytes. 
We may need 1 control byte and 3 literals, 4 bytes total\n>>But nchunks = 9.\n>>\n>\n>OK, so essentially this means my formula works with whole chunks, i.e.\n>if we happen to need just a part of a decompressed chunk, we still\n>request enough data to decompress it whole. This way we may request up\n>to 7 extra bytes, which seems fine.\n>\n>>Binguo's formula is appending 1 control bit per data byte and one extra\n>>control byte. Consider size = 8 bytes. We need 1 control byte, 8\n>>literals, 9 total. But compressed_size = 10.\n>>\n>>Mathematically correct formula is compressed_size = (int32) ((int64)\n>>rawsize * 9 + 7) / 8; Here we take one bit for each data byte, and 7\n>>control bits for overflow.\n>>\n>>But this equations make no big difference, each formula is safe. I'd\n>>pick one which is easier to understand and document (IMO, its nchunks =\n>>((rawsize + 7) / 8) * 9).\n>>\n>\n>I'd use the *mathematically correct* formula, it doesn't seem to be any\n>more complex, and the \"one bit per byte, complete bytes\" explanation\n>seems quite understandable.\n>\n\nPushed.\n\nI've ended up using the *mathematically correct* formula, hopefully\nwith sufficient explanation why it's correct. I've also polished a\ncouple more comments, and pushed like that.\n\nThanks to Binguo Bao for this improvement, and all the reviewers in this\nthread.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 1 Oct 2019 14:34:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Tue, Oct 01, 2019 at 02:34:20PM +0200, Tomas Vondra wrote:\n>On Tue, Oct 01, 2019 at 12:08:05PM +0200, Tomas Vondra wrote:\n>>On Tue, Oct 01, 2019 at 11:20:39AM +0500, Andrey Borodin wrote:\n>>>\n>>>\n>>>>30 сент. 2019 г., в 22:29, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>>>\n>>>>On Mon, Sep 30, 2019 at 09:20:22PM +0500, Andrey Borodin wrote:\n>>>>>\n>>>>>\n>>>>>>30 сент. 2019 г., в 20:56, Tomas Vondra <tomas.vondra@2ndquadrant.com> написал(а):\n>>>>>>\n>>>>>>I mean this:\n>>>>>>\n>>>>>>/*\n>>>>>> * Use int64 to prevent overflow during calculation.\n>>>>>> */\n>>>>>>compressed_size = (int32) ((int64) rawsize * 9 + 8) / 8;\n>>>>>>\n>>>>>>I'm not very familiar with pglz internals, but I'm a bit puzzled by\n>>>>>>this. My first instinct was to compare it to this:\n>>>>>>\n>>>>>>#define PGLZ_MAX_OUTPUT(_dlen)\t((_dlen) + 4)\n>>>>>>\n>>>>>>but clearly that's a very different (much simpler) formula. So why\n>>>>>>shouldn't pglz_maximum_compressed_size simply use this macro?\n>>>>\n>>>>>\n>>>>>compressed_size accounts for possible increase of size during\n>>>>>compression. pglz can consume up to 1 control byte for each 8 bytes of\n>>>>>data in worst case.\n>>>>\n>>>>OK, but does that actually translate in to the formula? We essentially\n>>>>need to count 8-byte chunks in raw data, and multiply that by 9. Which\n>>>>gives us something like\n>>>>\n>>>>nchunks = ((rawsize + 7) / 8) * 9;\n>>>>\n>>>>which is not quite what the patch does.\n>>>\n>>>I'm afraid neither formula is correct, but all this is hair-splitting differences.\n>>>\n>>\n>>Sure. I just want to be sure the formula is safe and we won't end up\n>>using too low value in some corner case.\n>>\n>>>Your formula does not account for the fact that we may not need all bytes from last chunk.\n>>>Consider desired decompressed size of 3 bytes. 
We may need 1 control byte and 3 literals, 4 bytes total\n>>>But nchunks = 9.\n>>>\n>>\n>>OK, so essentially this means my formula works with whole chunks, i.e.\n>>if we happen to need just a part of a decompressed chunk, we still\n>>request enough data to decompress it whole. This way we may request up\n>>to 7 extra bytes, which seems fine.\n>>\n>>>Binguo's formula is appending 1 control bit per data byte and one extra\n>>>control byte. Consider size = 8 bytes. We need 1 control byte, 8\n>>>literals, 9 total. But compressed_size = 10.\n>>>\n>>>Mathematically correct formula is compressed_size = (int32) ((int64)\n>>>rawsize * 9 + 7) / 8; Here we take one bit for each data byte, and 7\n>>>control bits for overflow.\n>>>\n>>>But this equations make no big difference, each formula is safe. I'd\n>>>pick one which is easier to understand and document (IMO, its nchunks =\n>>>((rawsize + 7) / 8) * 9).\n>>>\n>>\n>>I'd use the *mathematically correct* formula, it doesn't seem to be any\n>>more complex, and the \"one bit per byte, complete bytes\" explanation\n>>seems quite understandable.\n>>\n>\n>Pushed.\n>\n>I've ended up using the *mathematically correct* formula, hopefully\n>with sufficient explanation why it's correct. I've also polished a\n>couple more comments, and pushed like that.\n>\n>Thanks to Binguo Bao for this improvement, and all the reviewers in this\n>thread.\n>\n\nHmmm, this seems to trigger a failure on thorntail, which is a sparc64\nmachine (and it seems to pass on all x86 machines, so far). 
Per the\nbacktrace, it seems to have failed like this:\n\n Core was generated by `postgres: parallel worker for PID 2341 '.\n Program terminated with signal SIGUSR1, User defined signal 1.\n #0 heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=<optimized out>, slicelength=<optimized out>) at /home/nm/farm/sparc64_deb10_gcc_64_ubsan/HEAD/pgsql.build/../pgsql/src/backend/access/common/detoast.c:235\n 235\t\t\t\tmax_size = pglz_maximum_compressed_size(sliceoffset + slicelength,\n #0 heap_tuple_untoast_attr_slice (attr=<optimized out>, sliceoffset=<optimized out>, slicelength=<optimized out>) at /home/nm/farm/sparc64_deb10_gcc_64_ubsan/HEAD/pgsql.build/../pgsql/src/backend/access/common/detoast.c:235\n #1 0x00000100003d4ae8 in ExecInterpExpr (state=0x10000d02298, econtext=0x10000d01510, isnull=0x7feffb2fd1f) at /home/nm/farm/sparc64_deb10_gcc_64_ubsan/HEAD/pgsql.build/../pgsql/src/backend/executor/execExprInterp.c:690\n ...\n\nso likely on this line:\n\n max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,\n TOAST_COMPRESS_SIZE(attr));\n\nthe offset+length is just intereger arithmetics, so I don't see why that\nwould fail. So it has to be TOAST_COMPRESS_SIZE, which is defined like\nthis:\n\n #define TOAST_COMPRESS_SIZE(ptr) ((int32) VARSIZE(ptr) - TOAST_COMPRESS_HDRSZ)\n\nI wonder if that's wrong, somehow ... Maybe it should use VARSIZE_ANY,\nbut then how would it work on any platform and only fail on sparc64?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 1 Oct 2019 15:18:03 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> Hmmm, this seems to trigger a failure on thorntail, which is a sparc64\n> machine (and it seems to pass on all x86 machines, so far).\n\ngharial's not happy either, and I bet if you wait a bit longer you'll\nsee the same on other big-endian machines.\n\n> I wonder if that's wrong, somehow ... Maybe it should use VARSIZE_ANY,\n> but then how would it work on any platform and only fail on sparc64?\n\nMaybe it accidentally seems to work on little-endian, thanks to the\ndifferent definitions of varlena headers?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Oct 2019 10:10:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Tue, Oct 01, 2019 at 10:10:37AM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> Hmmm, this seems to trigger a failure on thorntail, which is a sparc64\n>> machine (and it seems to pass on all x86 machines, so far).\n>\n>gharial's not happy either, and I bet if you wait a bit longer you'll\n>see the same on other big-endian machines.\n>\n>> I wonder if that's wrong, somehow ... Maybe it should use VARSIZE_ANY,\n>> but then how would it work on any platform and only fail on sparc64?\n>\n>Maybe it accidentally seems to work on little-endian, thanks to the\n>different definitions of varlena headers?\n>\n\nMaybe. Let's see if just using VARSIZE_ANY does the trick. If not, I'll\ninvestigate further.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 1 Oct 2019 16:57:58 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Tue, Oct 01, 2019 at 10:10:37AM -0400, Tom Lane wrote:\n>> Maybe it accidentally seems to work on little-endian, thanks to the\n>> different definitions of varlena headers?\n\n> Maybe. Let's see if just using VARSIZE_ANY does the trick. If not, I'll\n> investigate further.\n\nFWIW, prairiedog got past that test, so whatever it is seems specific\nto big-endian 64-bit. Odd.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Oct 2019 11:34:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "Today I noticed strange behaviour, consider the following test:\n\npostgres@126111=#create table foo ( a text );\nCREATE TABLE\npostgres@126111=#insert into foo values ( repeat('PostgreSQL is the\nworld''s best database and leading by an Open Source Community.', 8000));\nINSERT 0 1\n\npostgres@126111=#select substring(a from 639921 for 81) from foo;\n substring\n-----------\n\n(1 row)\n\nBefore below commit:\n\ncommit 540f31680913b4e11f2caa40cafeca269cfcb22f\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: Tue Oct 1 16:53:04 2019 +0200\n\n Blind attempt to fix pglz_maximum_compressed_size\n\n Commit 11a078cf87 triggered failures on big-endian machines, and the\n only plausible place for an issue seems to be that TOAST_COMPRESS_SIZE\n calls VARSIZE instead of VARSIZE_ANY. So try fixing that blindly.\n\n Discussion:\nhttps://www.postgresql.org/message-id/20191001131803.j6uin7nho7t6vxzy%40development\n\npostgres@75761=#select substring(a from 639921 for 81) from foo;\n\n substring\n\n----------------------------------------------------------------------------------\n PostgreSQL is the world's best database and leading by an Open Source\nCommunity.\n(1 row)",
"msg_date": "Thu, 14 Nov 2019 15:27:42 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 03:27:42PM +0530, Rushabh Lathia wrote:\n>Today I noticed strange behaviour, consider the following test:\n>\n>postgres@126111=#create table foo ( a text );\n>CREATE TABLE\n>postgres@126111=#insert into foo values ( repeat('PostgreSQL is the\n>world''s best database and leading by an Open Source Community.', 8000));\n>INSERT 0 1\n>\n>postgres@126111=#select substring(a from 639921 for 81) from foo;\n> substring\n>-----------\n>\n>(1 row)\n>\n\nHmmm. I think the issue is heap_tuple_untoast_attr_slice is using the\nwrong way to determine compressed size in the VARATT_IS_EXTERNAL_ONDISK\nbranch. It does this\n\n max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,\n TOAST_COMPRESS_SIZE(attr));\n\nBut for the example you've posted TOAST_COMPRESS_SIZE(attr) returns 10,\nwhich is obviously bogus because the TOAST table contains ~75kB of data.\n\nI think it should be doing this instead:\n\n max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,\n toast_pointer.va_extsize);\n\nAt least that fixes it for me.\n\nI wonder if this actually explains the crashes 540f3168091 was supposed\nto fix, but it just masked them instead.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 14 Nov 2019 14:00:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
},
{
"msg_contents": "On Thu, Nov 14, 2019 at 6:30 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Nov 14, 2019 at 03:27:42PM +0530, Rushabh Lathia wrote:\n> >Today I noticed strange behaviour, consider the following test:\n> >\n> >postgres@126111=#create table foo ( a text );\n> >CREATE TABLE\n> >postgres@126111=#insert into foo values ( repeat('PostgreSQL is the\n> >world''s best database and leading by an Open Source Community.', 8000));\n> >INSERT 0 1\n> >\n> >postgres@126111=#select substring(a from 639921 for 81) from foo;\n> > substring\n> >-----------\n> >\n> >(1 row)\n> >\n>\n> Hmmm. I think the issue is heap_tuple_untoast_attr_slice is using the\n> wrong way to determine compressed size in the VARATT_IS_EXTERNAL_ONDISK\n> branch. It does this\n>\n> max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,\n> TOAST_COMPRESS_SIZE(attr));\n>\n> But for the example you've posted TOAST_COMPRESS_SIZE(attr) returns 10,\n> which is obviously bogus because the TOAST table contains ~75kB of data.\n>\n> I think it should be doing this instead:\n>\n> max_size = pglz_maximum_compressed_size(sliceoffset + slicelength,\n> toast_pointer.va_extsize);\n>\n> At least that fixes it for me.\n>\n> I wonder if this actually explains the crashes 540f3168091 was supposed\n> to fix, but it just masked them instead.\n>\n\n\nI tested the attached patch and that fixes the issue for me.\n\nThanks,\n\n\n-- \nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Fri, 15 Nov 2019 12:14:53 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize partial TOAST decompression"
}
] |
[
{
"msg_contents": "cpluspluscheck's expanded coverage is now passing cleanly for me on\nthe macOS laptop I was testing it with at PGCon. But on returning\nhome, I find there's still some issues on some other boxes:\n\n* On Linux (at least Fedora and RHEL), I get variants of this:\n\n/usr/include/arpa/inet.h:84: error: declaration of 'char* inet_net_ntop(int, const void*, int, char*, size_t) throw ()' throws different exceptions\n/home/postgres/pgsql/src/include/port.h:506: error: from previous declaration 'char* inet_net_ntop(int, const void*, int, char*, size_t)'\n\nThat's because /usr/include/arpa/inet.h declares it as\n\nextern char *inet_net_ntop (int __af, const void *__cp, int __bits,\n char *__buf, size_t __len) __THROW;\n\nand of course when a C++ compiler reads that, __THROW will expand as\nsomething nonempty.\n\nOne possible fix for that is to teach configure to test whether\narpa/inet.h provides a declaration, and not compile our own declaration\nwhen it does. This would require being sure that we include arpa/inet.h\nanywhere we use that function, but there are few enough callers that\nthat's not much of a hardship.\n\nAlternatively, we could rename our function to pg_inet_net_ntop to\ndodge the conflict. 
This might be a good idea anyway to avoid\nconfusion, since our function doesn't necessarily recognize the same\naddress-family codes that libc would.\n\n* On FreeBSD 12, I get\n\n/home/tgl/pgsql/src/include/utils/hashutils.h:23:23: warning: 'register' storage\n class specifier is deprecated and incompatible with C++17\n [-Wdeprecated-register]\nextern Datum hash_any(register const unsigned char *k, register int keylen);\n ^~~~~~~~~\n/home/tgl/pgsql/src/include/utils/hashutils.h:23:56: warning: 'register' storage\n class specifier is deprecated and incompatible with C++17\n [-Wdeprecated-register]\nextern Datum hash_any(register const unsigned char *k, register int keylen);\n ^~~~~~~~~\n/home/tgl/pgsql/src/include/utils/hashutils.h:24:32: warning: 'register' storage\n class specifier is deprecated and incompatible with C++17\n [-Wdeprecated-register]\nextern Datum hash_any_extended(register const unsigned char *k,\n ^~~~~~~~~\n/home/tgl/pgsql/src/include/utils/hashutils.h:25:11: warning: 'register' storage\n class specifier is deprecated and incompatible with C++17\n [-Wdeprecated-register]\n register int ...\n ^~~~~~~~~\n\nwhich I'm inclined to think means we should drop those register keywords.\n\n(The FreeBSD box shows another couple of complaints too, but I think\nthe fixes for those are uncontroversial.)\n\nComments?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Jun 2019 12:53:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Residual cpluspluscheck issues"
},
{
"msg_contents": "Hi Tom,\n\nSo it's been 17 months since you sent this email, so I'm not sure that\nnothing has happened (off list or in the code base), but...\n\nOn Sun, Jun 2, 2019 at 9:53 AM Tom Lane wrote:\n> * On FreeBSD 12, I get\n>\n> /home/tgl/pgsql/src/include/utils/hashutils.h:23:23: warning: 'register' storage\n> class specifier is deprecated and incompatible with C++17\n> [-Wdeprecated-register]\n> extern Datum hash_any(register const unsigned char *k, register int keylen);\n> ^~~~~~~~~\n> /home/tgl/pgsql/src/include/utils/hashutils.h:23:56: warning: 'register' storage\n> class specifier is deprecated and incompatible with C++17\n> [-Wdeprecated-register]\n> extern Datum hash_any(register const unsigned char *k, register int keylen);\n> ^~~~~~~~~\n> /home/tgl/pgsql/src/include/utils/hashutils.h:24:32: warning: 'register' storage\n> class specifier is deprecated and incompatible with C++17\n> [-Wdeprecated-register]\n> extern Datum hash_any_extended(register const unsigned char *k,\n> ^~~~~~~~~\n> /home/tgl/pgsql/src/include/utils/hashutils.h:25:11: warning: 'register' storage\n> class specifier is deprecated and incompatible with C++17\n> [-Wdeprecated-register]\n> register int ...\n> ^~~~~~~~~\n>\n> which I'm inclined to think means we should drop those register keywords.\n\nI think this is a warning from Clang, right? You can get the same\nwarning on macOS if you use the upstream Clang where the default value\nof -std for clang++ has been gnu++14 since LLVM 6.0 (not AppleClang,\nwhich carries a proprietary patch that simply reverts the bump, but they\ndidn't even bother to patch the manpage).\n\nI'm running into the same (well similar) warnings when running\ncpluspluscheck with GCC 11. Upon closer inspection, this is because In\nGCC 11, the default value of -std has been bumped to gnu++17. IOW, I\nwould've gotten the same warning had I just configured with CXX=\"g++-10\n-std=gnu++17\". 
The g++ warnings look like the following:\n\ngcc> In file included from ./src/include/port/atomics.h:70,\ngcc> from ./src/include/storage/lwlock.h:21,\ngcc> from ./src/include/storage/lock.h:23,\ngcc> from ./src/include/storage/proc.h:21,\ngcc> from ./src/include/storage/shm_mq.h:18,\ngcc> from ./src/test/modules/test_shm_mq/test_shm_mq.h:18,\ngcc> from /tmp/cpluspluscheck.AxICnl/test.cpp:3:\ngcc> ./src/include/port/atomics/arch-x86.h: In function 'bool\npg_atomic_test_set_flag_impl(volatile pg_atomic_flag*)':\ngcc> ./src/include/port/atomics/arch-x86.h:143:16: warning: ISO C++17\ndoes not allow 'register' storage class specifier [-Wregister]\ngcc> 143 | register char _res = 1;\ngcc> | ^~~~\ngcc> In file included from ./src/include/storage/spin.h:54,\ngcc> from ./src/test/modules/test_shm_mq/test_shm_mq.h:19,\ngcc> from /tmp/cpluspluscheck.AxICnl/test.cpp:3:\ngcc> ./src/include/storage/s_lock.h: In function 'int tas(volatile slock_t*)':\ngcc> ./src/include/storage/s_lock.h:226:19: warning: ISO C++17 does\nnot allow 'register' storage class specifier [-Wregister]\ngcc> 226 | register slock_t _res = 1;\ngcc> | ^~~~\n\nI think this is a problem worth solving: C++ 17 is removing the register\nkeyword, and C++ code that includes our headers have the difficult\nchoices of:\n\n1) With a pre-C++ 17 compiler that forewarns the deprecation, find a\ncompiler-specific switch to turn off the warning\n\n2) With a compiler that defaults to C++ 17 or later (major compiler\nvendors are upgrading the default to C++17):\n\n 2a) find a switch to explicitly downgrade to C++ 14 or below (and\n then possibly jump back to solving problem number 1).\n\n 2b) find a compiler-specific switch to stay in the post- C++ 17 mode,\n but somehow \"turn off\" the removal of register keyword. 
This is\n particularly cringe-y because the C++ programs themselves have to be\n non-formant through a header we supply.\n\nWe can either drop the register keywords here (I wonder what the impact\non code generation would be, but it'll be hard to examine, given we'll\nneed to examine _every_ instance of generated code for an inline\nfunction), or maybe consider hiding those sections with \"#ifndef\n__cplusplus\" (provided that we believe there's not much of a reason for\nthe including code to call these functions, just throwing out uneducated\nguesses here).\n\nDo we have a clear decision of what we want to do here? How can I\ncontribute?\n\nCheers,\nJesse\n\n\n",
"msg_date": "Wed, 30 Sep 2020 08:20:45 -0700",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Residual cpluspluscheck issues"
},
{
"msg_contents": "Jesse Zhang <sbjesse@gmail.com> writes:\n> So it's been 17 months since you sent this email, so I'm not sure that\n> nothing has happened (off list or in the code base), but...\n\nWell, we fixed the case that was discussed at the time [1].\n\nI'm not exactly convinced about removing the register keywords in\ns_lock.h. Those are all associated with asm blocks, which are already\nextremely C/GCC specific; complaining that the register declarations\naren't portable seems to be missing the forest for the trees.\n\nBTW, grepping my local tree says that plperl/ppport.h also has some\nregister variables, which is something we have no control over.\n\n\t\t\tregards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=232720be9\n\n\n",
"msg_date": "Wed, 30 Sep 2020 11:47:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Residual cpluspluscheck issues"
}
] |
[
{
"msg_contents": "The unused_oids script has gone from being something of interest to\neverybody that wants to write a patch that creates a new catalog\nentry, to something that patch authors could do without in many cases.\nI think that its output should prominently advertise that patches\nshould use random OIDs in the range 8000 - 9999. Commit\na6417078c4140e51cfd717448430f274b449d687 established that this should\nbe standard practice for patch authors.\n\nActually, maybe it should even suggest a *particular* random OID in\nthat range, so that the choice of OID is reliably random -- why even\nrequire patch authors to pick a number at random?\n\nIt also looks like pg_proc.dat should be updated, since it still\nmentions the old custom of trying to use contiguous OIDs. It also\ndiscourages the practice of picking OIDs at random, which is almost\nthe opposite of what it should say.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 2 Jun 2019 11:37:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "unused_oids script should advertise reserved OID range"
}
] |
[
{
"msg_contents": "Is there a reason why pgoutput sends data in text format? Seems to me that\nsending data in binary would provide a considerable performance improvement.\n\n\nDave Cramer",
"msg_date": "Mon, 3 Jun 2019 10:49:54 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 10:49:54AM -0400, Dave Cramer wrote:\n> Is there a reason why pgoutput sends data in text format? Seems to\n> me that sending data in binary would provide a considerable\n> performance improvement.\n\nAre you seeing something that suggests that the text output is taking\na lot of time or other resources?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 4 Jun 2019 02:54:15 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Mon, 3 Jun 2019 at 20:54, David Fetter <david@fetter.org> wrote:\n\n> On Mon, Jun 03, 2019 at 10:49:54AM -0400, Dave Cramer wrote:\n> > Is there a reason why pgoutput sends data in text format? Seems to\n> > me that sending data in binary would provide a considerable\n> > performance improvement.\n>\n> Are you seeing something that suggests that the text output is taking\n> a lot of time or other resources?\n>\n> Actually it's on the other end that there is improvement. Parsing text\ntakes much longer for almost everything except ironically text.\n\nTo be more transparent there is some desire to use pgoutput for something\nother than logical replication. Change Data Capture clients such as\nDebezium have a requirement for a stable plugin which is shipped with core\nas this is always available in cloud providers offerings. There's no reason\nthat I am aware of that they cannot use pgoutput for this. There's also no\nreason that I am aware that binary outputs can't be supported. The protocol\nwould have to change slightly and I am working on a POC patch.\n\nThing is they aren't all written in C so using binary does provide a pretty\nsubstantial win on the decoding end.\n\nDave",
"msg_date": "Tue, 4 Jun 2019 15:47:04 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 15:47:04 -0400, Dave Cramer wrote:\n> On Mon, 3 Jun 2019 at 20:54, David Fetter <david@fetter.org> wrote:\n> \n> > On Mon, Jun 03, 2019 at 10:49:54AM -0400, Dave Cramer wrote:\n> > > Is there a reason why pgoutput sends data in text format? Seems to\n> > > me that sending data in binary would provide a considerable\n> > > performance improvement.\n> >\n> > Are you seeing something that suggests that the text output is taking\n> > a lot of time or other resources?\n> >\n> > Actually it's on the other end that there is improvement. Parsing text\n> takes much longer for almost everything except ironically text.\n\nIt's on both sides, I'd say. E.g. float (until v12), timestamp, bytea\nare all much more expensive to convert from binary to text.\n\n\n> To be more transparent there is some desire to use pgoutput for something\n> other than logical replication. Change Data Capture clients such as\n> Debezium have a requirement for a stable plugin which is shipped with core\n> as this is always available in cloud providers offerings. There's no reason\n> that I am aware of that they cannot use pgoutput for this.\n\nExcept that that's not pgoutput's purpose, and we shouldn't make it\nmeaningfully more complicated or slower to achieve this. Don't think\nthere's a conflict in this case though.\n\n\n> There's also no reason that I am aware that binary outputs can't be\n> supported.\n\nWell, it *does* increase version dependencies, and does make replication\nmore complicated, because type oids etc cannot be relied to be the same\non source and target side.\n\n\n\n> The protocol would have to change slightly and I am working\n> on a POC patch.\n\nHm, what would have to be changed protocol wise? IIRC that'd just be a\ndifferent datum type? 
Or is that what you mean?\n\t\tpq_sendbyte(out, 't');\t/* 'text' data follows */\n\nIIRC there was code for the binary protocol in a predecessor of\npgoutput.\n\nI think if we were to add binary output - and I think we should - we\nought to only accept a patch if it's also used in core.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 13:38:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Tue, 4 Jun 2019 at 16:30, Andres Freund <andres.freund@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> On 2019-06-04 15:47:04 -0400, Dave Cramer wrote:\n> > On Mon, 3 Jun 2019 at 20:54, David Fetter <david@fetter.org> wrote:\n> >\n> > > On Mon, Jun 03, 2019 at 10:49:54AM -0400, Dave Cramer wrote:\n> > > > Is there a reason why pgoutput sends data in text format? Seems to\n> > > > me that sending data in binary would provide a considerable\n> > > > performance improvement.\n> > >\n> > > Are you seeing something that suggests that the text output is taking\n> > > a lot of time or other resources?\n> > >\n> > > Actually it's on the other end that there is improvement. Parsing text\n> > takes much longer for almost everything except ironically text.\n>\n> It's on both sides, I'd say. E.g. float (until v12), timestamp, bytea\n> are all much more expensive to convert from binary to text.\n>\n>\n> > To be more transparent there is some desire to use pgoutput for something\n> > other than logical replication. Change Data Capture clients such as\n> > Debezium have a requirement for a stable plugin which is shipped with\n> core\n> > as this is always available in cloud providers offerings. There's no\n> reason\n> > that I am aware of that they cannot use pgoutput for this.\n>\n> Except that that's not pgoutput's purpose, and we shouldn't make it\n> meaningfully more complicated or slower to achieve this. Don't think\n> there's a conflict in this case though.\n>\n\nagreed, my intent was to slightly bend it to my will :)\n\n>\n>\n> > There's also no reason that I am aware that binary outputs can't be\n> > supported.\n>\n> Well, it *does* increase version dependencies, and does make replication\n> more complicated, because type oids etc cannot be relied to be the same\n> on source and target side.\n>\n> I was about to agree with this but if the type oids change from source to\ntarget you\nstill can't decode the text version properly. Unless I mis-understand\nsomething here ?\n\n>\n>\n> > The protocol would have to change slightly and I am working\n> > on a POC patch.\n>\n> Hm, what would have to be changed protocol wise? IIRC that'd just be a\n> different datum type? Or is that what you mean?\n> pq_sendbyte(out, 't'); /* 'text' data follows */\n>\n> I haven't really thought this through completely but one place JDBC has\nproblems with binary is with\ntimestamps with timezone as we don't know which timezone to use. Is it safe\nto assume everything is in UTC\nsince the server stores in UTC ? Then there are UDF's. My original thought\nwas to use options to send in the\ntypes that I wanted in binary, everything else could be sent as text.\n\nIIRC there was code for the binary protocol in a predecessor of\n> pgoutput.\n>\n\nHmmm that might be good place to start. I will do some digging through git\nhistory\n\n>\n> I think if we were to add binary output - and I think we should - we\n> ought to only accept a patch if it's also used in core.\n>\n\nCertainly; as not doing so would make my work completely irrelevant for my\npurpose.\n\nThanks,\n\nDave",
"msg_date": "Tue, 4 Jun 2019 16:39:32 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 16:39:32 -0400, Dave Cramer wrote:\n> On Tue, 4 Jun 2019 at 16:30, Andres Freund <andres.freund@enterprisedb.com>\n> wrote:\n> > > There's also no reason that I am aware that binary outputs can't be\n> > > supported.\n> >\n> > Well, it *does* increase version dependencies, and does make replication\n> > more complicated, because type oids etc cannot be relied to be the same\n> > on source and target side.\n> >\n> I was about to agree with this but if the type oids change from source\n> to target you still can't decode the text version properly. Unless I\n> mis-understand something here ?\n\nThe text format doesn't care about oids. I don't see how it'd be a\nproblem? Note that some people *intentionally* use different types from\nsource to target system when logically replicating. So you can't rely on\nthe target table's types under any circumstance.\n\nI think you really have to use the textual type which we already write\nout (cf logicalrep_write_typ()) to call the binary input functions. And\nyou can send only data as binary that's from builtin types - otherwise\nthere's no guarantee at all that the target system has something\ncompatible. And even if you just assumed that all extensions etc are\npresent, you can't transport arrays / composite types in binary: For\nhard to discern reasons we a) embed type oids in them b) verify them. b)\nwon't ever work for non-builtin types, because oids are assigned\ndynamically.\n\n\n> > I think if we were to add binary output - and I think we should - we\n> > ought to only accept a patch if it's also used in core.\n> >\n> \n> Certainly; as not doing so would make my work completely irrelevant for my\n> purpose.\n\nWhat I mean is that the builtin logical replication would have to use\nthis on the receiving side too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 13:46:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, 4 Jun 2019 at 16:46, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-06-04 16:39:32 -0400, Dave Cramer wrote:\n> > On Tue, 4 Jun 2019 at 16:30, Andres Freund <\n> andres.freund@enterprisedb.com>\n> > wrote:\n> > > > There's also no reason that I am aware that binary outputs can't be\n> > > > supported.\n> > >\n> > > Well, it *does* increase version dependencies, and does make\n> replication\n> > > more complicated, because type oids etc cannot be relied to be the same\n> > > on source and target side.\n> > >\n> > I was about to agree with this but if the type oids change from source\n> > to target you still can't decode the text version properly. Unless I\n> > mis-understand something here ?\n>\n> The text format doesn't care about oids. I don't see how it'd be a\n> problem? Note that some people *intentionally* use different types from\n> source to target system when logically replicating. So you can't rely on\n> the target table's types under any circumstance.\n>\n> I think you really have to use the textual type which we already write\n> out (cf logicalrep_write_typ()) to call the binary input functions. And\n> you can send only data as binary that's from builtin types - otherwise\n> there's no guarantee at all that the target system has something\n> compatible. And even if you just assumed that all extensions etc are\n> present, you can't transport arrays / composite types in binary: For\n> hard to discern reasons we a) embed type oids in them b) verify them. b)\n> won't ever work for non-builtin types, because oids are assigned\n> dynamically.\n>\n\nI figured arrays and UDT's would be problematic.\n\n>\n>\n> > > I think if we were to add binary output - and I think we should - we\n> > > ought to only accept a patch if it's also used in core.\n> > >\n> >\n> > Certainly; as not doing so would make my work completely irrelevant for\n> my\n> > purpose.\n>\n> What I mean is that the builtin logical replication would have to use\n> this on the receiving side too.\n>\n> Got it, thanks for validating that the idea isn't nuts. Now I *have* to\nproduce a POC.\n\nThanks,\nDave",
"msg_date": "Tue, 4 Jun 2019 16:55:33 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On 6/4/19 4:39 PM, Dave Cramer wrote:\n> I haven't really thought this through completely but one place JDBC has\n> problems with binary is with\n> timestamps with timezone as we don't know which timezone to use. Is it safe\n> to assume everything is in UTC\n> since the server stores in UTC ?\n\nPL/Java, when converting to the Java 8 java.time types (because those\nare sane), will turn a timestamp with timezone into an OffsetDateTime\nwith explicit offset zero (UTC), no matter what timezone may have been\nused when the value was input (as you've observed, there's no way to\nrecover that). In the return direction, if given an OffsetDateTime\nwith any nonzero offset, it will adjust the value to UTC for postgres.\n\nSo, yes, say I.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 4 Jun 2019 17:33:28 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 04:55:33PM -0400, Dave Cramer wrote:\n> On Tue, 4 Jun 2019 at 16:46, Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > On 2019-06-04 16:39:32 -0400, Dave Cramer wrote:\n> > > On Tue, 4 Jun 2019 at 16:30, Andres Freund <\n> > andres.freund@enterprisedb.com>\n> > > wrote:\n> > > > > There's also no reason that I am aware that binary outputs can't be\n> > > > > supported.\n> > > >\n> > > > Well, it *does* increase version dependencies, and does make\n> > replication\n> > > > more complicated, because type oids etc cannot be relied to be the same\n> > > > on source and target side.\n> > > >\n> > > I was about to agree with this but if the type oids change from source\n> > > to target you still can't decode the text version properly. Unless I\n> > > mis-understand something here ?\n> >\n> > The text format doesn't care about oids. I don't see how it'd be a\n> > problem? Note that some people *intentionally* use different types from\n> > source to target system when logically replicating. So you can't rely on\n> > the target table's types under any circumstance.\n> >\n> > I think you really have to use the textual type which we already write\n> > out (cf logicalrep_write_typ()) to call the binary input functions. And\n> > you can send only data as binary that's from builtin types - otherwise\n> > there's no guarantee at all that the target system has something\n> > compatible. And even if you just assumed that all extensions etc are\n> > present, you can't transport arrays / composite types in binary: For\n> > hard to discern reasons we a) embed type oids in them b) verify them. b)\n> > won't ever work for non-builtin types, because oids are assigned\n> > dynamically.\n> >\n> \n> I figured arrays and UDT's would be problematic.\n\nWould it make sense to work toward a binary format that's not\narchitecture-specific? 
I recall from COPY that our binary format is\nnot standardized across, for example, big- and little-endian machines.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Wed, 5 Jun 2019 00:05:02 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-05 00:05:02 +0200, David Fetter wrote:\n> Would it make sense to work toward a binary format that's not\n> architecture-specific? I recall from COPY that our binary format is\n> not standardized across, for example, big- and little-endian machines.\n\nI think you recall wrongly. It's obviously possible that we have bugs\naround this, but output/input routines are supposed to handle a\nendianess independent format. That usually means that you have to do\nendianess conversions, but that doesn't make it non-standardized.\n\n- Andres\n\n\n",
"msg_date": "Tue, 4 Jun 2019 15:08:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, 4 Jun 2019 at 18:08, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-06-05 00:05:02 +0200, David Fetter wrote:\n> > Would it make sense to work toward a binary format that's not\n> > architecture-specific? I recall from COPY that our binary format is\n> > not standardized across, for example, big- and little-endian machines.\n>\n> I think you recall wrongly. It's obviously possible that we have bugs\n> around this, but output/input routines are supposed to handle a\n> endianess independent format. That usually means that you have to do\n> endianess conversions, but that doesn't make it non-standardized.\n>\n\nAdditionally there are a number of drivers that already know how to handle\nour binary types.\nI don't really think there's a win here. I also want to keep the changes\nsmall .\n\nDave",
"msg_date": "Tue, 4 Jun 2019 18:32:23 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 05/06/2019 00:08, Andres Freund wrote:\n> Hi,\n> \n> On 2019-06-05 00:05:02 +0200, David Fetter wrote:\n>> Would it make sense to work toward a binary format that's not\n>> architecture-specific? I recall from COPY that our binary format is\n>> not standardized across, for example, big- and little-endian machines.\n> \n> I think you recall wrongly. It's obviously possible that we have bugs\n> around this, but output/input routines are supposed to handle a\n> endianess independent format. That usually means that you have to do\n> endianess conversions, but that doesn't make it non-standardized.\n> \n\nYeah, there are really 3 formats of data we have, text protocol, binary\nnetwork protocol and internal on disk format. The internal on disk\nformat will not work across big/little-endian but network binary\nprotocol will.\n\nFWIW I don't think the code for binary format was included in original\nlogical replication patch (I really tried to keep it as minimal as\npossible), but the code and protocol is pretty much ready for adding that.\n\nThat said, pglogical has code which handles this (I guess Andres means\nthat by predecessor of pgoutput) so if you look for example at the\nwrite_tuple/read_tuple/decide_datum_transfer in\nhttps://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c\nthat can help you give some ideas on how to approach this.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Wed, 5 Jun 2019 13:18:42 +0200",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\n\nOn Wed, 5 Jun 2019 at 07:18, Petr Jelinek <petr.jelinek@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> On 05/06/2019 00:08, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2019-06-05 00:05:02 +0200, David Fetter wrote:\n> >> Would it make sense to work toward a binary format that's not\n> >> architecture-specific? I recall from COPY that our binary format is\n> >> not standardized across, for example, big- and little-endian machines.\n> >\n> > I think you recall wrongly. It's obviously possible that we have bugs\n> > around this, but output/input routines are supposed to handle a\n> > endianess independent format. That usually means that you have to do\n> > endianess conversions, but that doesn't make it non-standardized.\n> >\n>\n> Yeah, there are really 3 formats of data we have, text protocol, binary\n> network protocol and internal on disk format. The internal on disk\n> format will not work across big/little-endian but network binary\n> protocol will.\n>\n> FWIW I don't think the code for binary format was included in original\n> logical replication patch (I really tried to keep it as minimal as\n> possible), but the code and protocol is pretty much ready for adding that.\n>\nYes, I looked through the public history and could not find it. Thanks for\nconfirming.\n\n>\n> That said, pglogical has code which handles this (I guess Andres means\n> that by predecessor of pgoutput) so if you look for example at the\n> write_tuple/read_tuple/decide_datum_transfer in\n>\n> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c\n> that can help you give some ideas on how to approach this.\n>\n\nThanks for the tip!\n\n\nDave Cramer",
"msg_date": "Wed, 5 Jun 2019 07:21:28 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Wed, 5 Jun 2019 at 07:21, Dave Cramer <davecramer@gmail.com> wrote:\n\n> Hi,\n>\n>\n> On Wed, 5 Jun 2019 at 07:18, Petr Jelinek <petr.jelinek@2ndquadrant.com>\n> wrote:\n>\n>> Hi,\n>>\n>> On 05/06/2019 00:08, Andres Freund wrote:\n>> > Hi,\n>> >\n>> > On 2019-06-05 00:05:02 +0200, David Fetter wrote:\n>> >> Would it make sense to work toward a binary format that's not\n>> >> architecture-specific? I recall from COPY that our binary format is\n>> >> not standardized across, for example, big- and little-endian machines.\n>> >\n>> > I think you recall wrongly. It's obviously possible that we have bugs\n>> > around this, but output/input routines are supposed to handle a\n>> > endianess independent format. That usually means that you have to do\n>> > endianess conversions, but that doesn't make it non-standardized.\n>> >\n>>\n>> Yeah, there are really 3 formats of data we have, text protocol, binary\n>> network protocol and internal on disk format. The internal on disk\n>> format will not work across big/little-endian but network binary\n>> protocol will.\n>>\n>> FWIW I don't think the code for binary format was included in original\n>> logical replication patch (I really tried to keep it as minimal as\n>> possible), but the code and protocol is pretty much ready for adding that.\n>>\n> Yes, I looked through the public history and could not find it. Thanks for\n> confirming.\n>\n>>\n>> That said, pglogical has code which handles this (I guess Andres means\n>> that by predecessor of pgoutput) so if you look for example at the\n>> write_tuple/read_tuple/decide_datum_transfer in\n>>\n>> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c\n>> that can help you give some ideas on how to approach this.\n>>\n>\n> Thanks for the tip!\n>\n\nLooking at:\nhttps://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/backend/replication/pgoutput/pgoutput.c#L163\n\nthis seems completely ignored. What was the intent?\n\nDave",
"msg_date": "Wed, 5 Jun 2019 11:51:10 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi\n\nOn June 5, 2019 8:51:10 AM PDT, Dave Cramer <davecramer@gmail.com> wrote:\n>On Wed, 5 Jun 2019 at 07:21, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>> Hi,\n>>\n>>\n>> On Wed, 5 Jun 2019 at 07:18, Petr Jelinek\n><petr.jelinek@2ndquadrant.com>\n>> wrote:\n>>\n>>> Hi,\n>>>\n>>> On 05/06/2019 00:08, Andres Freund wrote:\n>>> > Hi,\n>>> >\n>>> > On 2019-06-05 00:05:02 +0200, David Fetter wrote:\n>>> >> Would it make sense to work toward a binary format that's not\n>>> >> architecture-specific? I recall from COPY that our binary format\n>is\n>>> >> not standardized across, for example, big- and little-endian\n>machines.\n>>> >\n>>> > I think you recall wrongly. It's obviously possible that we have\n>bugs\n>>> > around this, but output/input routines are supposed to handle a\n>>> > endianess independent format. That usually means that you have to\n>do\n>>> > endianess conversions, but that doesn't make it non-standardized.\n>>> >\n>>>\n>>> Yeah, there are really 3 formats of data we have, text protocol,\n>binary\n>>> network protocol and internal on disk format. 
The internal on disk\n>>> format will not work across big/little-endian but network binary\n>>> protocol will.\n>>>\n>>> FWIW I don't think the code for binary format was included in\n>original\n>>> logical replication patch (I really tried to keep it as minimal as\n>>> possible), but the code and protocol is pretty much ready for adding\n>that.\n>>>\n>> Yes, I looked through the public history and could not find it.\n>Thanks for\n>> confirming.\n>>\n>>>\n>>> That said, pglogical has code which handles this (I guess Andres\n>means\n>>> that by predecessor of pgoutput) so if you look for example at the\n>>> write_tuple/read_tuple/decide_datum_transfer in\n>>>\n>>>\n>https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c\n>>> that can help you give some ideas on how to approach this.\n>>>\n>>\n>> Thanks for the tip!\n>>\n>\n>Looking at:\n>https://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/backend/replication/pgoutput/pgoutput.c#L163\n>\n>this seems completely ignored. What was the intent?\n\nThat's about the output of the plugin, not the datatypes. And independent of text/binary output, the protocol contains non-printable chars.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 05 Jun 2019 09:01:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn Wed, 5 Jun 2019 at 12:01, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi\n>\n> On June 5, 2019 8:51:10 AM PDT, Dave Cramer <davecramer@gmail.com> wrote:\n> >On Wed, 5 Jun 2019 at 07:21, Dave Cramer <davecramer@gmail.com> wrote:\n> >\n> >> Hi,\n> >>\n> >>\n> >> On Wed, 5 Jun 2019 at 07:18, Petr Jelinek\n> ><petr.jelinek@2ndquadrant.com>\n> >> wrote:\n> >>\n> >>> Hi,\n> >>>\n> >>> On 05/06/2019 00:08, Andres Freund wrote:\n> >>> > Hi,\n> >>> >\n> >>> > On 2019-06-05 00:05:02 +0200, David Fetter wrote:\n> >>> >> Would it make sense to work toward a binary format that's not\n> >>> >> architecture-specific? I recall from COPY that our binary format\n> >is\n> >>> >> not standardized across, for example, big- and little-endian\n> >machines.\n> >>> >\n> >>> > I think you recall wrongly. It's obviously possible that we have\n> >bugs\n> >>> > around this, but output/input routines are supposed to handle a\n> >>> > endianess independent format. That usually means that you have to\n> >do\n> >>> > endianess conversions, but that doesn't make it non-standardized.\n> >>> >\n> >>>\n> >>> Yeah, there are really 3 formats of data we have, text protocol,\n> >binary\n> >>> network protocol and internal on disk format. 
The internal on disk\n> >>> format will not work across big/little-endian but network binary\n> >>> protocol will.\n> >>>\n> >>> FWIW I don't think the code for binary format was included in\n> >original\n> >>> logical replication patch (I really tried to keep it as minimal as\n> >>> possible), but the code and protocol is pretty much ready for adding\n> >that.\n> >>>\n> >> Yes, I looked through the public history and could not find it.\n> >Thanks for\n> >> confirming.\n> >>\n> >>>\n> >>> That said, pglogical has code which handles this (I guess Andres\n> >means\n> >>> that by predecessor of pgoutput) so if you look for example at the\n> >>> write_tuple/read_tuple/decide_datum_transfer in\n> >>>\n> >>>\n> >\n> https://github.com/2ndQuadrant/pglogical/blob/REL2_x_STABLE/pglogical_proto_native.c\n> >>> that can help you give some ideas on how to approach this.\n> >>>\n> >>\n> >> Thanks for the tip!\n> >>\n> >\n> >Looking at:\n> >\n> https://github.com/postgres/postgres/blob/8255c7a5eeba8f1a38b7a431c04909bde4f5e67d/src/backend/replication/pgoutput/pgoutput.c#L163\n> >\n> >this seems completely ignored. What was the intent?\n>\n> That's about the output of the plugin, not the datatypes. And independent\n> of text/binary output, the protocol contains non-printable chars.\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.\n>\n\n\nSo one of the things they would like added is to get not null information\nin the schema record. This is so they can mark the field Optional in Java.\nI presume this would also have some uses in other languages. As I\nunderstand it this would require a protocol bump. 
If this were to be\naccepted are there any outstanding asks that would useful to add if we were\ngoing to bump the protocol?\n\nDave",
"msg_date": "Wed, 5 Jun 2019 18:47:57 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-05 18:47:57 -0400, Dave Cramer wrote:\n> So one of the things they would like added is to get not null information\n> in the schema record. This is so they can mark the field Optional in Java.\n> I presume this would also have some uses in other languages. As I\n> understand it this would require a protocol bump. If this were to be\n> accepted are there any outstanding asks that would useful to add if we were\n> going to bump the protocol?\n\nI'm pretty strongly opposed to this. What's the limiting factor when\nadding such information? I think clients that want something like this\nought to query the database for catalog information when getting schema\ninformation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Jun 2019 15:50:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\n\nOn Wed, 5 Jun 2019 at 18:50, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-06-05 18:47:57 -0400, Dave Cramer wrote:\n> > So one of the things they would like added is to get not null information\n> > in the schema record. This is so they can mark the field Optional in\n> Java.\n> > I presume this would also have some uses in other languages. As I\n> > understand it this would require a protocol bump. If this were to be\n> > accepted are there any outstanding asks that would useful to add if we\n> were\n> > going to bump the protocol?\n>\n> I'm pretty strongly opposed to this. What's the limiting factor when\n> adding such information? I think clients that want something like this\n> ought to query the database for catalog information when getting schema\n> information.\n>\n\nI'm not intimately familiar with their code. I will query them more about\nthe ask.\n\nI am curious why you are \"strongly\" opposed however. We already have the\ninformation. Adding doesn't seem onerous.\n\nDave",
"msg_date": "Wed, 5 Jun 2019 19:05:05 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-05 19:05:05 -0400, Dave Cramer wrote:\n> I am curious why you are \"strongly\" opposed however. We already have the\n> information. Adding doesn't seem onerous.\n\n(thought I'd already replied with this)\n\nThe problem is that I don't recognize a limiting principle:\n\nIf we want NOT NULL information for clients, why don't we include the\nunderlying types for arrays, and the fields in composite types? What\nabout foreign keys? And unique keys?\n\nAnd then we suddenly need tracking for all these, so we don't always\nsend out that information when we previously already did - and in some\nof the cases there's no infrastructure for that.\n\nI just don't quite buy that the output plugin build for pg's logical\nreplication needs is a good place to include a continually increasing\namount of metadata that logical replication doesn't need. That's going\nto add overhead and make the code more complicated.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 16:27:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On 06/07/19 19:27, Andres Freund wrote:\n> The problem is that I don't recognize a limiting principle:\n> \n> If we want NOT NULL information for clients, why don't we include the\n> underlying types for arrays, and the fields in composite types? What\n> about foreign keys? And unique keys?\n\nThis reminds me of an idea I had for a future fe/be protocol version,\nright after a talk by Alyssa Ritchie and Henrietta Dombrovskaya at the\nlast 2Q PGConf. [1]\n\nIt seems they had ended up designing a whole 'nother \"protocol level\"\ninvolving queries wrapping their results as JSON and an app layer that\nunwraps again, after trying a simpler first approach that was foiled by the\ninability to see into arrays and anonymous record types in the 'describe'\nresponse.\n\nI thought, in a new protocol rev, why not let the driver send additional\n'describe' messages after the first one, to drill into structure of\nindividual columns mentioned in the first response, before sending the\n'execute' message?\n\nIf it doesn't want the further detail, it doesn't have to ask.\n\n> And then we suddenly need tracking for all these, so we don't always\n> send out that information when we previously already did\n\nIf it's up to the client driver, it can track what it needs or already has.\n\nI haven't looked too deeply into the replication protocol ... it happens\nunder a kind of copy-both, right?, so maybe there's a way for the receiver\nto send some inquiries back, but maybe in a windowed, full-duplex way where\nit might have to buffer some incoming messages before getting the response\nto an inquiry message it sent.\n\nWould those be thinkable thoughts for a future protocol rev?\n\nRegards,\n-Chap\n\n\n[1]\nhttps://www.2qpgconf.com/schedule/information-exchange-techniques-for-javapostgresql-applications/\n\n\n",
"msg_date": "Fri, 7 Jun 2019 20:52:38 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
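Chapman's proposal above builds on the existing extended-query flow, where Describe is already a small framed request; the nested, per-column follow-up Describe is the new part. A minimal sketch of how the current v3 frontend Describe message is framed (the drill-down variant is hypothetical):

```python
import struct

def describe_message(kind: bytes, name: str = "") -> bytes:
    # Existing v3 frontend Describe: type byte 'D', Int32 length
    # (self-inclusive), 'S' (prepared statement) or 'P' (portal),
    # then a NUL-terminated name.
    assert kind in (b"S", b"P")
    body = kind + name.encode("utf-8") + b"\x00"
    return b"D" + struct.pack("!I", 4 + len(body)) + body

# A driver today sends one Describe per portal; the proposal would let
# it send further Describe-like requests to drill into a column's array
# element type or composite fields before issuing Execute.
msg = describe_message(b"P")  # describe the unnamed portal
```

The framing shown is the documented one; only the idea of repeating it per column is the extension under discussion.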
{
"msg_contents": "Hi,\n\nOn 2019-06-07 20:52:38 -0400, Chapman Flack wrote:\n> It seems they had ended up designing a whole 'nother \"protocol level\"\n> involving queries wrapping their results as JSON and an app layer that\n> unwraps again, after trying a simpler first approach that was foiled by the\n> inability to see into arrays and anonymous record types in the 'describe'\n> response.\n\nI suspect quite a few people would have to have left the projectbefore\nthis would happen.\n\n\n> I thought, in a new protocol rev, why not let the driver send additional\n> 'describe' messages after the first one, to drill into structure of\n> individual columns mentioned in the first response, before sending the\n> 'execute' message?\n> \n> If it doesn't want the further detail, it doesn't have to ask.\n> \n> > And then we suddenly need tracking for all these, so we don't always\n> > send out that information when we previously already did\n> \n> If it's up to the client driver, it can track what it needs or already has.\n\n> I haven't looked too deeply into the replication protocol ... it happens\n> under a kind of copy-both, right?, so maybe there's a way for the receiver\n> to send some inquiries back, but maybe in a windowed, full-duplex way where\n> it might have to buffer some incoming messages before getting the response\n> to an inquiry message it sent.\n\nThat'd be a *lot* of additional complexity, and pretty much prohibitive\nfrom a performance POV. We'd have to not continue decoding on the server\nside *all* the time to give the client a chance to inquire additional\ninformation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 18:01:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On 06/07/19 21:01, Andres Freund wrote:\n> On 2019-06-07 20:52:38 -0400, Chapman Flack wrote:\n>> It seems they had ended up designing a whole 'nother \"protocol level\"\n>> involving queries wrapping their results as JSON and an app layer that\n>> unwraps again, after trying a simpler first approach that was foiled by the\n>> inability to see into arrays and anonymous record types in the 'describe'\n>> response.\n> \n> I suspect quite a few people would have to have left the projectbefore\n> this would happen.\n\nI'm not sure I understand what you're getting at. The \"whole 'nother\nprotocol\" was something they actually implemented, at the application\nlevel, by rewriting their queries to produce JSON and their client to\nunwrap it. It wasn't proposed to go into postgres ... but it was a\nworkaround they were forced into by the current protocol's inability\nto tell them what they were getting.\n\n> That'd be a *lot* of additional complexity, and pretty much prohibitive\n> from a performance POV. We'd have to not continue decoding on the server\n> side *all* the time to give the client a chance to inquire additional\n> information.\n\nDoes anything travel in the client->server direction during replication?\nI thought I saw CopyBoth mentioned. Is there a select()/poll() being done\nso those messages can be received?\n\nIt does seem that the replication protocol would be the tougher problem.\nFor the regular extended-query protocol, it seems like allowing an extra\nDescribe or two before Execute might not be awfully hard.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 7 Jun 2019 21:16:12 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-07 21:16:12 -0400, Chapman Flack wrote:\n> On 06/07/19 21:01, Andres Freund wrote:\n> > On 2019-06-07 20:52:38 -0400, Chapman Flack wrote:\n> > That'd be a *lot* of additional complexity, and pretty much prohibitive\n> > from a performance POV. We'd have to not continue decoding on the server\n> > side *all* the time to give the client a chance to inquire additional\n> > information.\n> \n> Does anything travel in the client->server direction during replication?\n> I thought I saw CopyBoth mentioned. Is there a select()/poll() being done\n> so those messages can be received?\n\nYes, acknowledgements of how far data has been received (and how far\nprocessed), which is then used to release resources (WAL, xid horizon)\nand allow synchronous replication to block until something has been\nreceived.\n\n- Andres\n\n\n",
"msg_date": "Fri, 7 Jun 2019 18:18:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
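The client-to-server acknowledgements Andres describes travel as Standby Status Update messages inside CopyData frames. A sketch of that payload as documented for the streaming replication protocol (the LSN values passed in are placeholders):

```python
import struct
from datetime import datetime, timezone

# PostgreSQL timestamps on the wire count microseconds since 2000-01-01.
PG_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def standby_status_update(written: int, flushed: int, applied: int,
                          reply_requested: bool = False) -> bytes:
    # Standby Status Update ('r'): last WAL position written, flushed,
    # and applied by the receiver, a client timestamp, and a flag asking
    # the server to reply. The server uses the LSNs to release WAL and
    # advance the xid horizon, and for synchronous replication waits.
    now_us = int((datetime.now(timezone.utc) - PG_EPOCH).total_seconds()
                 * 1_000_000)
    return struct.pack("!cQQQQB", b"r", written, flushed, applied,
                       now_us, 1 if reply_requested else 0)
```

This is the existing feedback channel; the point in the thread is that it is narrow and periodic, not a full-duplex inquiry mechanism.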
{
"msg_contents": "On Fri, Jun 07, 2019 at 06:01:12PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-06-07 20:52:38 -0400, Chapman Flack wrote:\n>> It seems they had ended up designing a whole 'nother \"protocol level\"\n>> involving queries wrapping their results as JSON and an app layer that\n>> unwraps again, after trying a simpler first approach that was foiled by the\n>> inability to see into arrays and anonymous record types in the 'describe'\n>> response.\n>\n>I suspect quite a few people would have to have left the projectbefore\n>this would happen.\n>\n>\n>> I thought, in a new protocol rev, why not let the driver send additional\n>> 'describe' messages after the first one, to drill into structure of\n>> individual columns mentioned in the first response, before sending the\n>> 'execute' message?\n>>\n>> If it doesn't want the further detail, it doesn't have to ask.\n>>\n>> > And then we suddenly need tracking for all these, so we don't always\n>> > send out that information when we previously already did\n>>\n>> If it's up to the client driver, it can track what it needs or already has.\n>\n>> I haven't looked too deeply into the replication protocol ... it happens\n>> under a kind of copy-both, right?, so maybe there's a way for the receiver\n>> to send some inquiries back, but maybe in a windowed, full-duplex way where\n>> it might have to buffer some incoming messages before getting the response\n>> to an inquiry message it sent.\n>\n>That'd be a *lot* of additional complexity, and pretty much prohibitive\n>from a performance POV. We'd have to not continue decoding on the server\n>side *all* the time to give the client a chance to inquire additional\n>information.\n>\n\nI kinda agree with this, and I think it's an argument why replication\nsolutions that need such additional metadata (e.g. because they have no\ndatabase to query) should not rely on pgoutput but should invent their own\ndecoding plugin. 
Which is why it's a plugin.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 9 Jun 2019 00:27:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "This should have gone to hackers as well\n\n---------- Forwarded message ---------\nFrom: Dave Cramer <davecramer@gmail.com>\nDate: Sat, Jun 8, 2019, 6:41 PM\nSubject: Re: Binary support for pgoutput plugin\nTo: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n\n\n\n\nOn Sat, Jun 8, 2019, 6:27 PM Tomas Vondra, <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Fri, Jun 07, 2019 at 06:01:12PM -0700, Andres Freund wrote:\n> >Hi,\n> >\n> >On 2019-06-07 20:52:38 -0400, Chapman Flack wrote:\n> >> It seems they had ended up designing a whole 'nother \"protocol level\"\n> >> involving queries wrapping their results as JSON and an app layer that\n> >> unwraps again, after trying a simpler first approach that was foiled by\n> the\n> >> inability to see into arrays and anonymous record types in the\n> 'describe'\n> >> response.\n> >\n> >I suspect quite a few people would have to have left the projectbefore\n> >this would happen.\n> >\n> >\n> >> I thought, in a new protocol rev, why not let the driver send additional\n> >> 'describe' messages after the first one, to drill into structure of\n> >> individual columns mentioned in the first response, before sending the\n> >> 'execute' message?\n> >>\n> >> If it doesn't want the further detail, it doesn't have to ask.\n> >>\n> >> > And then we suddenly need tracking for all these, so we don't always\n> >> > send out that information when we previously already did\n> >>\n> >> If it's up to the client driver, it can track what it needs or already\n> has.\n> >\n> >> I haven't looked too deeply into the replication protocol ... 
it happens\n> >> under a kind of copy-both, right?, so maybe there's a way for the\n> receiver\n> >> to send some inquiries back, but maybe in a windowed, full-duplex way\n> where\n> >> it might have to buffer some incoming messages before getting the\n> response\n> >> to an inquiry message it sent.\n> >\n> >That'd be a *lot* of additional complexity, and pretty much prohibitive\n> >from a performance POV. We'd have to not continue decoding on the server\n> >side *all* the time to give the client a chance to inquire additional\n> >information.\n> >\n>\n> I kinda agree with this, and I think it's an argument why replication\n> solutions that need such additional metadata (e.g. because they have no\n> database to query) should not rely on pgoutput but should invent their own\n> decoding plugin. Which is why it's a plugin.\n>\n\nSo the reason we are discussing using pgoutput plugin is because it is part\nof core and guaranteed to be in cloud providers solutions. I'm trying to\nfind a balance here of using what we have as opposed to burdening core to\ntake on additional code to take care of. 
Not sending the metadata is not a\ndeal breaker but i can see some value in it.\n\n\nDave",
"msg_date": "Sat, 8 Jun 2019 19:41:34 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-08 19:41:34 -0400, Dave Cramer wrote:\n> So the reason we are discussing using pgoutput plugin is because it is part\n> of core and guaranteed to be in cloud providers solutions.\n\nIMO people needing this should then band together and write one that's\nsuitable, rather than trying to coerce test_decoding and now pgoutput\ninto something they're not made for.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Jun 2019 17:09:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sat, 8 Jun 2019 at 20:09, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-06-08 19:41:34 -0400, Dave Cramer wrote:\n> > So the reason we are discussing using pgoutput plugin is because it is\n> part\n> > of core and guaranteed to be in cloud providers solutions.\n>\n> IMO people needing this should then band together and write one that's\n> suitable, rather than trying to coerce test_decoding and now pgoutput\n> into something they're not made for.\n>\n\nAt the moment it would look a lot like pgoutput. For now I'm fine with no\nchanges to pgoutput other than binary\nOnce we have some real use cases we can look at writing a new one. I would\nimagine we would actually start with\npgoutput and just add to it.\n\nThanks,\nDave\n\nOn Sat, 8 Jun 2019 at 20:09, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-06-08 19:41:34 -0400, Dave Cramer wrote:\n> So the reason we are discussing using pgoutput plugin is because it is part\n> of core and guaranteed to be in cloud providers solutions.\n\nIMO people needing this should then band together and write one that's\nsuitable, rather than trying to coerce test_decoding and now pgoutput\ninto something they're not made for.At the moment it would look a lot like pgoutput. For now I'm fine with no changes to pgoutput other than binary Once we have some real use cases we can look at writing a new one. I would imagine we would actually start withpgoutput and just add to it.Thanks,Dave",
"msg_date": "Sat, 8 Jun 2019 20:40:43 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sat, Jun 08, 2019 at 08:40:43PM -0400, Dave Cramer wrote:\n>On Sat, 8 Jun 2019 at 20:09, Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2019-06-08 19:41:34 -0400, Dave Cramer wrote:\n>> > So the reason we are discussing using pgoutput plugin is because it is\n>> part\n>> > of core and guaranteed to be in cloud providers solutions.\n>>\n>> IMO people needing this should then band together and write one that's\n>> suitable, rather than trying to coerce test_decoding and now pgoutput\n>> into something they're not made for.\n>>\n>\n>At the moment it would look a lot like pgoutput. For now I'm fine with no\n>changes to pgoutput other than binary\n>Once we have some real use cases we can look at writing a new one. I would\n>imagine we would actually start with\n>pgoutput and just add to it.\n>\n\nI understand the desire to make this work for managed cloud environments,\nwe support quite a few customers who would benefit from it. But pgoutput\nis meant specifically for built-in replication, and adding complexity that\nis useless for that use case does not seem like a good tradeoff.\n\n From this POV the binary mode is fine, because it'd benefit pgoutput, but\nthe various other stuff mentioned here (e.g. nullability) is not.\n\nAnd if we implement a new plugin for use by out-of-core stuff, I guess\nwe'd probably done it in an extension. But even having it in contrib would\nnot make it automatically installed on managed systems, because AFAIK the\nvarious providers only allow whitelisted extensions. At which point\nthere's there's little difference compared to external extensions.\n\nI think the best party to implement such extension is whoever implements\nsuch replication system (say Debezium), because they are the ones who know\nwhich format / behavior would work for them. And they can also show the\nbenefit to their users, who can then push the cloud providers to install\nthe extension. 
Of course, that'll take a long time (but it's unclear how\nlong), and until then they'll have to provide some fallback.\n\nThis is a bit of a chicken-egg problem, with three parties - our project,\nprojects building on logical replication and cloud providers. And no\nmatter how you slice it, the party implementing it has only limited (if\nany) control over what the other parties allow.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 9 Jun 2019 12:47:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "So back to binary output.\n\n From what I can tell the place to specify binary options would be in the\ncreate publication and or in replication slots?\n\nThe challenge as I see it is that the subscriber would have to be able to\ndecode binary output.\n\nAny thoughts on how to handle this? At the moment I'm assuming that this\nwould only work for subscribers that knew how to handle binary.\n\nRegards,\n\nDave\n\n\n>\n\nSo back to binary output.From what I can tell the place to specify binary options would be in the create publication and or in replication slots?The challenge as I see it is that the subscriber would have to be able to decode binary output. Any thoughts on how to handle this? At the moment I'm assuming that this would only work for subscribers that knew how to handle binary.Regards,Dave",
"msg_date": "Mon, 10 Jun 2019 07:27:41 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 10/06/2019 13:27, Dave Cramer wrote:\n> So back to binary output.\n> \n> From what I can tell the place to specify binary options would be in the\n> create publication and or in replication slots?\n> \n> The challenge as I see it is that the subscriber would have to be able\n> to decode binary output. \n> \n> Any thoughts on how to handle this? At the moment I'm assuming that this\n> would only work for subscribers that knew how to handle binary.\n> \n\nGiven that we don't need to write anything extra to WAL on upstream to\nsupport binary output, why not just have the request for binary data as\nan option for the pgoutput and have it chosen dynamically? Then it's the\nsubscriber who asks for binary output via option(s) to START_REPLICATION.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Mon, 10 Jun 2019 13:49:47 +0200",
"msg_from": "Petr Jelinek <petr.jelinek@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
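Petr's suggestion keeps the choice with the subscriber: nothing changes on disk or in WAL, the receiver simply asks for binary when it starts streaming. A hedged sketch of the walsender command that would carry such a request; `proto_version` and `publication_names` are the pgoutput options that exist today, while the `binary` option name is an assumption of this proposal:

```python
def start_replication_cmd(slot: str, start_lsn: str, options: dict) -> str:
    # Render the walsender START_REPLICATION command a subscriber issues
    # over a replication connection, with pgoutput plugin options in the
    # trailing parenthesized list.
    opts = ", ".join("%s '%s'" % (k, v) for k, v in options.items())
    return 'START_REPLICATION SLOT "%s" LOGICAL %s (%s)' % (slot, start_lsn,
                                                            opts)

cmd = start_replication_cmd("sub1", "0/0", {
    "proto_version": "1",
    "publication_names": '"mypub"',
    "binary": "on",  # hypothetical option per this proposal
})
```

Because the option rides on START_REPLICATION, a text-only subscriber that never asks for it is unaffected, which is the appeal of choosing it dynamically.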
{
"msg_contents": "OK, before I go too much further down this rabbit hole I'd like feedback on\nthe current code. See attached patch\n\nThere is one obvious hack where in binary mode I reset the input cursor to\nallow the binary input to be re-read\n From what I can tell the alternative is to convert the data in\nlogicalrep_read_tuple but that would require moving a lot of the logic\ncurrently in worker.c to proto.c. This seems minimally invasive.\n\nand thanks Petr for the tip to use pglogical for ideas.\n\nThanks,\nDave Cramer\n\n\n\n>\n>",
"msg_date": "Tue, 11 Jun 2019 15:44:02 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
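The data that patch re-reads follows the logical replication TupleData layout. A hedged sketch of a parser for it; the kinds 'n', 'u' and 't' are the documented ones, while the 'b' (binary) column kind is an assumption about what the patch adds, mirroring the wire layout of 't':

```python
import struct

def parse_tuple_data(buf: bytes):
    # TupleData: Int16 column count, then per column one kind byte:
    # 'n' = NULL, 'u' = unchanged TOAST datum, 't' = text format
    # (Int32 length + that many bytes). 'b' is the assumed binary kind.
    (ncols,) = struct.unpack_from("!H", buf, 0)
    pos = 2
    cols = []
    for _ in range(ncols):
        kind = buf[pos:pos + 1]
        pos += 1
        if kind in (b"n", b"u"):
            cols.append((kind, None))
        elif kind in (b"t", b"b"):
            (length,) = struct.unpack_from("!I", buf, pos)
            pos += 4
            cols.append((kind, buf[pos:pos + length]))
            pos += length
        else:
            raise ValueError("unknown column kind %r" % kind)
    return cols

# One text column '42' and one (assumed) binary int4 column:
payload = (struct.pack("!H", 2)
           + b"t" + struct.pack("!I", 2) + b"42"
           + b"b" + struct.pack("!I", 4) + struct.pack("!i", 42))
```

Keeping the kind byte in the stream is what lets the apply worker decide per column whether to hand the bytes to a type's input or receive function, rather than re-reading the buffer.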
{
"msg_contents": "On Mon, 10 Jun 2019 at 07:49, Petr Jelinek <petr.jelinek@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> On 10/06/2019 13:27, Dave Cramer wrote:\n> > So back to binary output.\n> >\n> > From what I can tell the place to specify binary options would be in the\n> > create publication and or in replication slots?\n> >\n> > The challenge as I see it is that the subscriber would have to be able\n> > to decode binary output.\n> >\n> > Any thoughts on how to handle this? At the moment I'm assuming that this\n> > would only work for subscribers that knew how to handle binary.\n> >\n>\n> Given that we don't need to write anything extra to WAL on upstream to\n> support binary output, why not just have the request for binary data as\n> an option for the pgoutput and have it chosen dynamically? Then it's the\n> subscriber who asks for binary output via option(s) to START_REPLICATION.\n>\n\nIf I understand this correctly this would add something to the CREATE/ALTER\nSUBSCRIPTION commands in the WITH clause.\nAdditionally another column would be required for pg_subscription for the\nbinary option.\nDoes it make sense to add an options column which would just be a comma\nseparated string?\nNot that I have future options in mind but seems like something that might\ncome up in the future.\n\n\nDave Cramer\n\n>\n>\n\nOn Mon, 10 Jun 2019 at 07:49, Petr Jelinek <petr.jelinek@2ndquadrant.com> wrote:Hi,\n\nOn 10/06/2019 13:27, Dave Cramer wrote:\n> So back to binary output.\n> \n> From what I can tell the place to specify binary options would be in the\n> create publication and or in replication slots?\n> \n> The challenge as I see it is that the subscriber would have to be able\n> to decode binary output. \n> \n> Any thoughts on how to handle this? 
At the moment I'm assuming that this\n> would only work for subscribers that knew how to handle binary.\n> \n\nGiven that we don't need to write anything extra to WAL on upstream to\nsupport binary output, why not just have the request for binary data as\nan option for the pgoutput and have it chosen dynamically? Then it's the\nsubscriber who asks for binary output via option(s) to START_REPLICATION.If I understand this correctly this would add something to the CREATE/ALTER SUBSCRIPTION commands in the WITH clause.Additionally another column would be required for pg_subscription for the binary option. Does it make sense to add an options column which would just be a comma separated string? Not that I have future options in mind but seems like something that might come up in the future.Dave Cramer",
"msg_date": "Wed, 12 Jun 2019 10:35:48 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 10:35:48AM -0400, Dave Cramer wrote:\n>On Mon, 10 Jun 2019 at 07:49, Petr Jelinek <petr.jelinek@2ndquadrant.com>\n>wrote:\n>\n>> Hi,\n>>\n>> On 10/06/2019 13:27, Dave Cramer wrote:\n>> > So back to binary output.\n>> >\n>> > From what I can tell the place to specify binary options would be in the\n>> > create publication and or in replication slots?\n>> >\n>> > The challenge as I see it is that the subscriber would have to be able\n>> > to decode binary output.\n>> >\n>> > Any thoughts on how to handle this? At the moment I'm assuming that this\n>> > would only work for subscribers that knew how to handle binary.\n>> >\n>>\n>> Given that we don't need to write anything extra to WAL on upstream to\n>> support binary output, why not just have the request for binary data as\n>> an option for the pgoutput and have it chosen dynamically? Then it's the\n>> subscriber who asks for binary output via option(s) to START_REPLICATION.\n>>\n>\n>If I understand this correctly this would add something to the CREATE/ALTER\n>SUBSCRIPTION commands in the WITH clause.\n>Additionally another column would be required for pg_subscription for the\n>binary option.\n>Does it make sense to add an options column which would just be a comma\n>separated string?\n>Not that I have future options in mind but seems like something that might\n>come up in the future.\n>\n\nI'd just add a boolean column to the catalog. That's what I did in the\npatch adding support for streaming in-progress transactions. I don't think\nwe expect many additional parameters, so it makes little sense to optimize\nfor that case.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 14 Jun 2019 20:36:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
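For contrast with the dedicated boolean column Tomas recommends, the comma-separated options column Dave floated would push parsing and type coercion onto every reader of the catalog row. A hypothetical sketch of that cost; the column format and value spellings here are invented for illustration, not anything in pg_subscription:

```python
def parse_options(optstr: str) -> dict:
    # Hypothetical text options column such as "binary=true,stream=off":
    # every consumer must split, trim, and coerce the values itself,
    # whereas a dedicated boolean catalog column is typed once.
    opts = {}
    for item in filter(None, (s.strip() for s in optstr.split(","))):
        key, _, val = item.partition("=")
        opts[key.strip()] = val.strip().lower() in ("true", "on", "1")
    return opts
```

With few parameters expected, the typed-column approach keeps both the catalog and the readers simpler.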
{
"msg_contents": "Dave Cramer\n\n\nOn Fri, 14 Jun 2019 at 14:36, Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Jun 12, 2019 at 10:35:48AM -0400, Dave Cramer wrote:\n> >On Mon, 10 Jun 2019 at 07:49, Petr Jelinek <petr.jelinek@2ndquadrant.com>\n> >wrote:\n> >\n> >> Hi,\n> >>\n> >> On 10/06/2019 13:27, Dave Cramer wrote:\n> >> > So back to binary output.\n> >> >\n> >> > From what I can tell the place to specify binary options would be in\n> the\n> >> > create publication and or in replication slots?\n> >> >\n> >> > The challenge as I see it is that the subscriber would have to be able\n> >> > to decode binary output.\n> >> >\n> >> > Any thoughts on how to handle this? At the moment I'm assuming that\n> this\n> >> > would only work for subscribers that knew how to handle binary.\n> >> >\n> >>\n> >> Given that we don't need to write anything extra to WAL on upstream to\n> >> support binary output, why not just have the request for binary data as\n> >> an option for the pgoutput and have it chosen dynamically? Then it's the\n> >> subscriber who asks for binary output via option(s) to\n> START_REPLICATION.\n> >>\n> >\n> >If I understand this correctly this would add something to the\n> CREATE/ALTER\n> >SUBSCRIPTION commands in the WITH clause.\n> >Additionally another column would be required for pg_subscription for the\n> >binary option.\n> >Does it make sense to add an options column which would just be a comma\n> >separated string?\n> >Not that I have future options in mind but seems like something that might\n> >come up in the future.\n> >\n>\n> I'd just add a boolean column to the catalog. That's what I did in the\n> patch adding support for streaming in-progress transactions. I don't think\n> we expect many additional parameters, so it makes little sense to optimize\n> for that case.\n>\n\nWhich is what I have done. Thanks\n\nI've attached both patches for comments.\nI still have to add documentation.\n\nRegards,\n\nDave",
"msg_date": "Fri, 14 Jun 2019 15:42:47 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 15:42, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> Dave Cramer\n>\n>\n> On Fri, 14 Jun 2019 at 14:36, Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> wrote:\n>\n>> On Wed, Jun 12, 2019 at 10:35:48AM -0400, Dave Cramer wrote:\n>> >On Mon, 10 Jun 2019 at 07:49, Petr Jelinek <petr.jelinek@2ndquadrant.com\n>> >\n>> >wrote:\n>> >\n>> >> Hi,\n>> >>\n>> >> On 10/06/2019 13:27, Dave Cramer wrote:\n>> >> > So back to binary output.\n>> >> >\n>> >> > From what I can tell the place to specify binary options would be in\n>> the\n>> >> > create publication and or in replication slots?\n>> >> >\n>> >> > The challenge as I see it is that the subscriber would have to be\n>> able\n>> >> > to decode binary output.\n>> >> >\n>> >> > Any thoughts on how to handle this? At the moment I'm assuming that\n>> this\n>> >> > would only work for subscribers that knew how to handle binary.\n>> >> >\n>> >>\n>> >> Given that we don't need to write anything extra to WAL on upstream to\n>> >> support binary output, why not just have the request for binary data as\n>> >> an option for the pgoutput and have it chosen dynamically? Then it's\n>> the\n>> >> subscriber who asks for binary output via option(s) to\n>> START_REPLICATION.\n>> >>\n>> >\n>> >If I understand this correctly this would add something to the\n>> CREATE/ALTER\n>> >SUBSCRIPTION commands in the WITH clause.\n>> >Additionally another column would be required for pg_subscription for the\n>> >binary option.\n>> >Does it make sense to add an options column which would just be a comma\n>> >separated string?\n>> >Not that I have future options in mind but seems like something that\n>> might\n>> >come up in the future.\n>> >\n>>\n>> I'd just add a boolean column to the catalog. That's what I did in the\n>> patch adding support for streaming in-progress transactions. 
I don't think\n>> we expect many additional parameters, so it makes little sense to optimize\n>> for that case.\n>>\n>\n> Which is what I have done. Thanks\n>\n> I've attached both patches for comments.\n> I still have to add documentation.\n>\n> Regards,\n>\n> Dave\n>\n\nAdditional patch for documentation.\n\n\nDave Cramer",
"msg_date": "Mon, 17 Jun 2019 10:29:26 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Wed, 5 Jun 2019 at 18:50, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-06-05 18:47:57 -0400, Dave Cramer wrote:\n> > So one of the things they would like added is to get not null information\n> > in the schema record. This is so they can mark the field Optional in\n> Java.\n> > I presume this would also have some uses in other languages. As I\n> > understand it this would require a protocol bump. If this were to be\n> > accepted are there any outstanding asks that would useful to add if we\n> were\n> > going to bump the protocol?\n>\n> I'm pretty strongly opposed to this. What's the limiting factor when\n> adding such information? I think clients that want something like this\n> ought to query the database for catalog information when getting schema\n> information.\n>\n>\nSo talking some more to the guys that want to use this for Change Data\nCapture they pointed out that if the schema changes while they are offline\nthere is no way to query the catalog information\n\nDave",
"msg_date": "Thu, 4 Jul 2019 09:49:30 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On Mon, Jun 17, 2019 at 10:29:26AM -0400, Dave Cramer wrote:\n> > Which is what I have done. Thanks\n> >\n> > I've attached both patches for comments.\n> > I still have to add documentation.\n>\n> Additional patch for documentation.\n\nThank you for the patch! Unfortunately 0002 has some conflicts, could\nyou please send a rebased version? In the meantime I have few questions:\n\n -LogicalRepRelId\n +void\n logicalrep_read_insert(StringInfo in, LogicalRepTupleData *newtup)\n {\n char\t\taction;\n -\tLogicalRepRelId relid;\n -\n -\t/* read the relation id */\n -\trelid = pq_getmsgint(in, 4);\n\n action = pq_getmsgbyte(in);\n if (action != 'N')\n @@ -175,7 +173,6 @@ logicalrep_read_insert(StringInfo in, LogicalRepTupleData *newtup)\n\n logicalrep_read_tuple(in, newtup);\n\n -\treturn relid;\n }\n\n ....\n\n -\trelid = logicalrep_read_insert(s, &newtup);\n +\t/* read the relation id */\n +\trelid = pq_getmsgint(s, 4);\n rel = logicalrep_rel_open(relid, RowExclusiveLock);\n +\n +\tlogicalrep_read_insert(s, &newtup);\n\nMaybe I'm missing something, what is the reason of moving pq_getmsgint\nout of logicalrep_read_*? Just from the patch it seems that the code is\nequivalent.\n\n> There is one obvious hack where in binary mode I reset the input\n> cursor to allow the binary input to be re-read From what I can tell\n> the alternative is to convert the data in logicalrep_read_tuple but\n> that would require moving a lot of the logic currently in worker.c to\n> proto.c. This seems minimally invasive.\n\nWhich logic has to be moved for example? Actually it sounds more natural\nto me, if this functionality would be in e.g. logicalrep_read_tuple\nrather than slot_store_data, since it has something to do with reading\ndata. And it seems that in pglogical it's also located in\npglogical_read_tuple.\n\n\n",
"msg_date": "Sun, 27 Oct 2019 16:02:28 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sun, 27 Oct 2019 at 11:00, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Mon, Jun 17, 2019 at 10:29:26AM -0400, Dave Cramer wrote:\n> > > Which is what I have done. Thanks\n> > >\n> > > I've attached both patches for comments.\n> > > I still have to add documentation.\n> >\n> > Additional patch for documentation.\n>\n> Thank you for the patch! Unfortunately 0002 has some conflicts, could\n> you please send a rebased version? In the meantime I have few questions:\n>\n> -LogicalRepRelId\n> +void\n> logicalrep_read_insert(StringInfo in, LogicalRepTupleData *newtup)\n> {\n> char action;\n> - LogicalRepRelId relid;\n> -\n> - /* read the relation id */\n> - relid = pq_getmsgint(in, 4);\n>\n> action = pq_getmsgbyte(in);\n> if (action != 'N')\n> @@ -175,7 +173,6 @@ logicalrep_read_insert(StringInfo in,\n> LogicalRepTupleData *newtup)\n>\n> logicalrep_read_tuple(in, newtup);\n>\n> - return relid;\n> }\n>\n> ....\n>\n> - relid = logicalrep_read_insert(s, &newtup);\n> + /* read the relation id */\n> + relid = pq_getmsgint(s, 4);\n> rel = logicalrep_rel_open(relid, RowExclusiveLock);\n> +\n> + logicalrep_read_insert(s, &newtup);\n>\n> Maybe I'm missing something, what is the reason of moving pq_getmsgint\n> out of logicalrep_read_*? Just from the patch it seems that the code is\n> equivalent.\n>\n> > There is one obvious hack where in binary mode I reset the input\n> > cursor to allow the binary input to be re-read From what I can tell\n> > the alternative is to convert the data in logicalrep_read_tuple but\n> > that would require moving a lot of the logic currently in worker.c to\n> > proto.c. This seems minimally invasive.\n>\n> Which logic has to be moved for example? Actually it sounds more natural\n> to me, if this functionality would be in e.g. logicalrep_read_tuple\n> rather than slot_store_data, since it has something to do with reading\n> data. 
And it seems that in pglogical it's also located in\n> pglogical_read_tuple.\n>\n\nOk, I've rebased and reverted logicalrep_read_insert\n\nAs to the last comment, honestly it's been so long I can't remember why I\nput that comment in there.\n\nThanks for reviewing\n\nDave",
"msg_date": "Wed, 30 Oct 2019 10:03:01 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Thu, Oct 31, 2019 at 3:03 AM Dave Cramer <davecramer@gmail.com> wrote:\n> Ok, I've rebased and reverted logicalrep_read_insert\n\nHi Dave,\n\n From the code style police (actually just from cfbot, which is set up\nto complain about declarations after statements, a bit of C99 we\naren't ready for):\n\nproto.c:557:6: error: ISO C90 forbids mixed declarations and code\n[-Werror=declaration-after-statement]\n int len = pq_getmsgint(in, 4); /* read length */\n ^\nproto.c:573:6: error: ISO C90 forbids mixed declarations and code\n[-Werror=declaration-after-statement]\n int len = pq_getmsgint(in, 4); /* read length */\n ^\n\n\n",
"msg_date": "Mon, 4 Nov 2019 15:46:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sun, 3 Nov 2019 at 21:47, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Oct 31, 2019 at 3:03 AM Dave Cramer <davecramer@gmail.com> wrote:\n> > Ok, I've rebased and reverted logicalrep_read_insert\n>\n> Hi Dave,\n>\n> From the code style police (actually just from cfbot, which is set up\n> to complain about declarations after statements, a bit of C99 we\n> aren't ready for):\n>\n> proto.c:557:6: error: ISO C90 forbids mixed declarations and code\n> [-Werror=declaration-after-statement]\n> int len = pq_getmsgint(in, 4); /* read length */\n> ^\n> proto.c:573:6: error: ISO C90 forbids mixed declarations and code\n> [-Werror=declaration-after-statement]\n> int len = pq_getmsgint(in, 4); /* read length */\n> ^\n>\n\nThomas,\n\nThanks for the review.\n\nSee attached",
"msg_date": "Tue, 5 Nov 2019 07:16:10 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On Tue, Nov 05, 2019 at 07:16:10AM -0500, Dave Cramer wrote:\n>\n> See attached\n\n --- a/src/backend/replication/logical/worker.c\n +++ b/src/backend/replication/logical/worker.c\n @@ -1779,6 +1779,7 @@ ApplyWorkerMain(Datum main_arg)\n options.slotname = myslotname;\n options.proto.logical.proto_version = LOGICALREP_PROTO_VERSION_NUM;\n options.proto.logical.publication_names = MySubscription->publications;\n +\toptions.proto.logical.binary = MySubscription->binary;\n\nI'm a bit confused, shouldn't be there also\n\n\t--- a/src/backend/catalog/pg_subscription.c\n\t+++ b/src/backend/catalog/pg_subscription.c\n\t@@ -71,6 +71,7 @@ GetSubscription(Oid subid, bool missing_ok)\n\t\t\tsub->name = pstrdup(NameStr(subform->subname));\n\t\t\tsub->owner = subform->subowner;\n\t\t\tsub->enabled = subform->subenabled;\n\t+ sub->binary = subform->subbinary;\n\nin the GetSubscription?\n\n\n",
"msg_date": "Fri, 8 Nov 2019 17:22:32 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 8 Nov 2019 at 11:20, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Tue, Nov 05, 2019 at 07:16:10AM -0500, Dave Cramer wrote:\n> >\n> > See attached\n>\n> --- a/src/backend/replication/logical/worker.c\n> +++ b/src/backend/replication/logical/worker.c\n> @@ -1779,6 +1779,7 @@ ApplyWorkerMain(Datum main_arg)\n> options.slotname = myslotname;\n> options.proto.logical.proto_version = LOGICALREP_PROTO_VERSION_NUM;\n> options.proto.logical.publication_names =\n> MySubscription->publications;\n> + options.proto.logical.binary = MySubscription->binary;\n>\n> I'm a bit confused, shouldn't be there also\n>\n> --- a/src/backend/catalog/pg_subscription.c\n> +++ b/src/backend/catalog/pg_subscription.c\n> @@ -71,6 +71,7 @@ GetSubscription(Oid subid, bool missing_ok)\n> sub->name = pstrdup(NameStr(subform->subname));\n> sub->owner = subform->subowner;\n> sub->enabled = subform->subenabled;\n> + sub->binary = subform->subbinary;\n>\n> in the GetSubscription?\n>\n\nyes, I have added this. I will supply an updated patch later.\n\nNow a bigger question(s).\n\nPreviously someone mentioned that we need to confirm whether the two\nservers are compatible for binary or not.\n\nChecking to make sure the two servers have the same endianness is obvious.\nSizeof int, long, float, double, timestamp (float/int) at a minimum.\n\nthis could be done in libpqrcv_startstreaming. 
The question I have\nremaining is do we fall back to text mode if needed or simply fail ?\n\nDave",
"msg_date": "Mon, 11 Nov 2019 11:15:45 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On 2019-Nov-11, Dave Cramer wrote:\n\n> Previously someone mentioned that we need to confirm whether the two\n> servers are compatible for binary or not.\n> \n> Checking to make sure the two servers have the same endianness is obvious.\n> Sizeof int, long, float, double, timestamp (float/int) at a minimum.\n> \n> this could be done in libpqrcv_startstreaming. The question I have\n> remaining is do we fall back to text mode if needed or simply fail ?\n\nI think it makes more sense to have it fail. If the user wants to retry\nin text mode, they can do that easily enough; but if we make it\nfall back automatically and they set up the receiver wrongly by mistake,\nthey would pay the performance penalty without noticing.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 11 Nov 2019 13:55:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, 11 Nov 2019 at 12:04, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Nov-11, Dave Cramer wrote:\n>\n> > Previously someone mentioned that we need to confirm whether the two\n> > servers are compatible for binary or not.\n> >\n> > Checking to make sure the two servers have the same endianness is\n> obvious.\n> > Sizeof int, long, float, double, timestamp (float/int) at a minimum.\n> >\n> > this could be done in libpqrcv_startstreaming. The question I have\n> > remaining is do we fall back to text mode if needed or simply fail ?\n>\n> I think it makes more sense to have it fail. If the user wants to retry\n> in text mode, they can do that easily enough; but if we make it\n> fall-back automatically and they set up the received wrongly by mistake,\n> they would pay the performance penalty without noticing.\n>\n>\nAlvaro,\n\nthanks, after sending this I pretty much came to the same conclusion.\n\nDave",
"msg_date": "Mon, 11 Nov 2019 12:07:20 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, 11 Nov 2019 at 12:07, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Mon, 11 Nov 2019 at 12:04, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n>\n>> On 2019-Nov-11, Dave Cramer wrote:\n>>\n>> > Previously someone mentioned that we need to confirm whether the two\n>> > servers are compatible for binary or not.\n>> >\n>> > Checking to make sure the two servers have the same endianness is\n>> obvious.\n>> > Sizeof int, long, float, double, timestamp (float/int) at a minimum.\n>> >\n>> > this could be done in libpqrcv_startstreaming. The question I have\n>> > remaining is do we fall back to text mode if needed or simply fail ?\n>>\n>> I think it makes more sense to have it fail. If the user wants to retry\n>> in text mode, they can do that easily enough; but if we make it\n>> fall-back automatically and they set up the received wrongly by mistake,\n>> they would pay the performance penalty without noticing.\n>>\n>>\n> Alvaro,\n>\n> thanks, after sending this I pretty much came to the same conclusion.\n>\n> Dave\n>\n\nFollowing 2 patches address Dmitry's concern and check for compatibility.",
"msg_date": "Mon, 11 Nov 2019 14:42:30 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On Mon, Nov 11, 2019 at 11:15:45AM -0500, Dave Cramer wrote:\n> On Fri, 8 Nov 2019 at 11:20, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > > On Tue, Nov 05, 2019 at 07:16:10AM -0500, Dave Cramer wrote:\n> > >\n> > > See attached\n> >\n> > --- a/src/backend/replication/logical/worker.c\n> > +++ b/src/backend/replication/logical/worker.c\n> > @@ -1779,6 +1779,7 @@ ApplyWorkerMain(Datum main_arg)\n> > options.slotname = myslotname;\n> > options.proto.logical.proto_version = LOGICALREP_PROTO_VERSION_NUM;\n> > options.proto.logical.publication_names =\n> > MySubscription->publications;\n> > + options.proto.logical.binary = MySubscription->binary;\n> >\n> > I'm a bit confused, shouldn't be there also\n> >\n> > --- a/src/backend/catalog/pg_subscription.c\n> > +++ b/src/backend/catalog/pg_subscription.c\n> > @@ -71,6 +71,7 @@ GetSubscription(Oid subid, bool missing_ok)\n> > sub->name = pstrdup(NameStr(subform->subname));\n> > sub->owner = subform->subowner;\n> > sub->enabled = subform->subenabled;\n> > + sub->binary = subform->subbinary;\n> >\n> > in the GetSubscription?\n> >\n>\n> yes, I have added this. I will supply an updated patch later.\n>\n> Now a bigger question(s).\n\nWell, without this change it wasn't working for me at all. Other than\nthat yes, it was a small question :)\n\n\n",
"msg_date": "Mon, 11 Nov 2019 20:46:20 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On 2019-Nov-11, Dave Cramer wrote:\n\n> Following 2 patches address Dmitry's concern and check for compatibility.\n\nPlease resend the whole patchset, so that the patch tester can verify\nthe series. (Doing it helps humans, too).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 11 Nov 2019 17:17:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, 11 Nov 2019 at 15:17, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Nov-11, Dave Cramer wrote:\n>\n> > Following 2 patches address Dmitry's concern and check for compatibility.\n>\n> Please resend the whole patchset, so that the patch tester can verify\n> the series. (Doing it helps humans, too).\n>\n>\nAttached,\n\nThanks,\nDave",
"msg_date": "Mon, 11 Nov 2019 15:24:59 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn Mon, Nov 11, 2019 at 03:24:59PM -0500, Dave Cramer wrote:\n> Attached,\n\nThe latest patch set does not apply correctly. Could you send a\nrebase please? I am moving the patch to next CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 10:47:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Rebased against head\n\nDave Cramer\n\n\nOn Sat, 30 Nov 2019 at 20:48, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi,\n>\n> On Mon, Nov 11, 2019 at 03:24:59PM -0500, Dave Cramer wrote:\n> > Attached,\n>\n> The latest patch set does not apply correctly. Could you send a\n> rebase please? I am moving the patch to next CF, waiting on author.\n> --\n> Michael\n>",
"msg_date": "Mon, 2 Dec 2019 14:35:40 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, 2 Dec 2019 at 14:35, Dave Cramer <davecramer@gmail.com> wrote:\n\n> Rebased against head\n>\n> Dave Cramer\n>\n>\n> On Sat, 30 Nov 2019 at 20:48, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> Hi,\n>>\n>> On Mon, Nov 11, 2019 at 03:24:59PM -0500, Dave Cramer wrote:\n>> > Attached,\n>>\n>> The latest patch set does not apply correctly. Could you send a\n>> rebase please? I am moving the patch to next CF, waiting on author.\n>> --\n>> Michael\n>>\n>\nCan I get someone to review this please ?\nDave Cramer",
"msg_date": "Fri, 17 Jan 2020 12:26:41 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> Rebased against head\n\nThe cfbot's failing to apply this [1]. It looks like the reason is only\nthat you included a catversion bump in what you submitted. Protocol is to\n*not* do that in submitted patches, but rely on the committer to add it at\nthe last minute --- otherwise you'll waste a lot of time rebasing the\npatch, which is what it needs now.\n\n\t\t\tregards, tom lane\n\n[1] http://cfbot.cputube.org/patch_27_2152.log\n\n\n",
"msg_date": "Fri, 28 Feb 2020 18:34:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 28 Feb 2020 at 18:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > Rebased against head\n>\n> The cfbot's failing to apply this [1]. It looks like the reason is only\n> that you included a catversion bump in what you submitted. Protocol is to\n> *not* do that in submitted patches, but rely on the committer to add it at\n> the last minute --- otherwise you'll waste a lot of time rebasing the\n> patch, which is what it needs now.\n>\n> regards, tom lane\n>\n> [1] http://cfbot.cputube.org/patch_27_2152.log\n\n\nrebased and removed the catversion bump.",
"msg_date": "Sat, 29 Feb 2020 10:44:44 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi Dave,\n\nOn 29/02/2020 18:44, Dave Cramer wrote:\n> \n> \n> rebased and removed the catversion bump.\n\nLooked into this and it generally seems okay, but I do have one gripe here:\n\n> +\t\t\t\t\ttuple->values[i].data = palloc(len + 1);\n> +\t\t\t\t\t/* and data */\n> +\n> +\t\t\t\t\tpq_copymsgbytes(in, tuple->values[i].data, len);\n> +\t\t\t\t\ttuple->values[i].len = len;\n> +\t\t\t\t\ttuple->values[i].cursor = 0;\n> +\t\t\t\t\ttuple->values[i].maxlen = len;\n> +\t\t\t\t\t/* not strictly necessary but the docs say it is required */\n> +\t\t\t\t\ttuple->values[i].data[len] = '\\0';\n> +\t\t\t\t\tbreak;\n> +\t\t\t\t}\n> +\t\t\tcase 't':\t\t\t/* text formatted value */\n> +\t\t\t\t{\n> +\t\t\t\t\ttuple->changed[i] = true;\n> +\t\t\t\t\tint len = pq_getmsgint(in, 4);\t/* read length */\n> \n> \t\t\t\t\t/* and data */\n> -\t\t\t\t\ttuple->values[i] = palloc(len + 1);\n> -\t\t\t\t\tpq_copymsgbytes(in, tuple->values[i], len);\n> -\t\t\t\t\ttuple->values[i][len] = '\\0';\n> +\t\t\t\t\ttuple->values[i].data = palloc(len + 1);\n> +\t\t\t\t\tpq_copymsgbytes(in, tuple->values[i].data, len);\n> +\t\t\t\t\ttuple->values[i].data[len] = '\\0';\n> +\t\t\t\t\ttuple->values[i].len = len;\n\nThe cursor should be set to 0 in the text formatted case too if this is \nhow we chose to encode data.\n\nHowever I am not quite convinced I like the StringInfoData usage here. \nWhy not just change the struct to include additional array of lengths \nrather than replacing the existing values array with StringInfoData \narray, that seems generally both simpler and should have smaller memory \nfootprint too, no?\n\nWe could also merge the binary and changed arrays into single char array \nnamed something like format (as data can be either unchanged, binary or \ntext) and just reuse same identifiers we have in protocol.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Fri, 6 Mar 2020 17:54:15 +0100",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 6 Mar 2020 at 08:54, Petr Jelinek <petr@2ndquadrant.com> wrote:\n\n> Hi Dave,\n>\n> On 29/02/2020 18:44, Dave Cramer wrote:\n> >\n> >\n> > rebased and removed the catversion bump.\n>\n> Looked into this and it generally seems okay, but I do have one gripe here:\n>\n> > + tuple->values[i].data = palloc(len\n> + 1);\n> > + /* and data */\n> > +\n> > + pq_copymsgbytes(in,\n> tuple->values[i].data, len);\n> > + tuple->values[i].len = len;\n> > + tuple->values[i].cursor = 0;\n> > + tuple->values[i].maxlen = len;\n> > + /* not strictly necessary but the\n> docs say it is required */\n> > + tuple->values[i].data[len] = '\\0';\n> > + break;\n> > + }\n> > + case 't': /* text formatted\n> value */\n> > + {\n> > + tuple->changed[i] = true;\n> > + int len = pq_getmsgint(in, 4); /*\n> read length */\n> >\n> > /* and data */\n> > - tuple->values[i] = palloc(len + 1);\n> > - pq_copymsgbytes(in,\n> tuple->values[i], len);\n> > - tuple->values[i][len] = '\\0';\n> > + tuple->values[i].data = palloc(len\n> + 1);\n> > + pq_copymsgbytes(in,\n> tuple->values[i].data, len);\n> > + tuple->values[i].data[len] = '\\0';\n> > + tuple->values[i].len = len;\n>\n> The cursor should be set to 0 in the text formatted case too if this is\n> how we chose to encode data.\n>\n> However I am not quite convinced I like the StringInfoData usage here.\n> Why not just change the struct to include additional array of lengths\n> rather than replacing the existing values array with StringInfoData\n> array, that seems generally both simpler and should have smaller memory\n> footprint too, no?\n>\n\nCan you explain this a bit more? 
I don't really see a huge difference in\nmemory usage.\nWe still need length and the data copied into LogicalRepTupleData when\nsending the data in binary, no?\n\n\n\n>\n> We could also merge the binary and changed arrays into single char array\n> named something like format (as data can be either unchanged, binary or\n> text) and just reuse same identifiers we have in protocol.\n>\n\nThis seems like a good idea.\n\nDave Cramer\n\n\n>\n> --\n> Petr Jelinek\n> 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n> https://www.2ndQuadrant.com/\n>",
"msg_date": "Sat, 7 Mar 2020 15:18:38 -0800",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 08/03/2020 00:18, Dave Cramer wrote:\n> On Fri, 6 Mar 2020 at 08:54, Petr Jelinek <petr@2ndquadrant.com \n> <mailto:petr@2ndquadrant.com>> wrote:\n> \n> Hi Dave,\n> \n> On 29/02/2020 18:44, Dave Cramer wrote:\n> >\n> >\n> > rebased and removed the catversion bump.\n> \n> Looked into this and it generally seems okay, but I do have one\n> gripe here:\n> \n> > + tuple->values[i].data =\n> palloc(len + 1);\n> > + /* and data */\n> > +\n> > + pq_copymsgbytes(in,\n> tuple->values[i].data, len);\n> > + tuple->values[i].len = len;\n> > + tuple->values[i].cursor = 0;\n> > + tuple->values[i].maxlen = len;\n> > + /* not strictly necessary\n> but the docs say it is required */\n> > + tuple->values[i].data[len]\n> = '\\0';\n> > + break;\n> > + }\n> > + case 't': /* text\n> formatted value */\n> > + {\n> > + tuple->changed[i] = true;\n> > + int len = pq_getmsgint(in,\n> 4); /* read length */\n> >\n> > /* and data */\n> > - tuple->values[i] =\n> palloc(len + 1);\n> > - pq_copymsgbytes(in,\n> tuple->values[i], len);\n> > - tuple->values[i][len] = '\\0';\n> > + tuple->values[i].data =\n> palloc(len + 1);\n> > + pq_copymsgbytes(in,\n> tuple->values[i].data, len);\n> > + tuple->values[i].data[len]\n> = '\\0';\n> > + tuple->values[i].len = len;\n> \n> The cursor should be set to 0 in the text formatted case too if this is\n> how we chose to encode data.\n> \n> However I am not quite convinced I like the StringInfoData usage here.\n> Why not just change the struct to include additional array of lengths\n> rather than replacing the existing values array with StringInfoData\n> array, that seems generally both simpler and should have smaller memory\n> footprint too, no?\n> \n> \n> Can you explain this a bit more? 
I don't really see a huge difference in \n> memory usage.\n> We still need length and the data copied into LogicalRepTupleData when \n> sending the data in binary, no?\n> \n\nYes but we don't need to have fixed sized array of 1600 elements that \ncontains maxlen and cursor positions of the StringInfoData struct which \nwe don't use for anything AFAICS.\n\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Fri, 3 Apr 2020 09:42:57 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 3 Apr 2020 at 03:43, Petr Jelinek <petr@2ndquadrant.com> wrote:\n\n> Hi,\n>\n> On 08/03/2020 00:18, Dave Cramer wrote:\n> > On Fri, 6 Mar 2020 at 08:54, Petr Jelinek <petr@2ndquadrant.com\n> > <mailto:petr@2ndquadrant.com>> wrote:\n> >\n> > Hi Dave,\n> >\n> > On 29/02/2020 18:44, Dave Cramer wrote:\n> > >\n> > >\n> > > rebased and removed the catversion bump.\n> >\n> > Looked into this and it generally seems okay, but I do have one\n> > gripe here:\n> >\n> > > + tuple->values[i].data =\n> > palloc(len + 1);\n> > > + /* and data */\n> > > +\n> > > + pq_copymsgbytes(in,\n> > tuple->values[i].data, len);\n> > > + tuple->values[i].len = len;\n> > > + tuple->values[i].cursor = 0;\n> > > + tuple->values[i].maxlen =\n> len;\n> > > + /* not strictly necessary\n> > but the docs say it is required */\n> > > + tuple->values[i].data[len]\n> > = '\\0';\n> > > + break;\n> > > + }\n> > > + case 't': /* text\n> > formatted value */\n> > > + {\n> > > + tuple->changed[i] = true;\n> > > + int len = pq_getmsgint(in,\n> > 4); /* read length */\n> > >\n> > > /* and data */\n> > > - tuple->values[i] =\n> > palloc(len + 1);\n> > > - pq_copymsgbytes(in,\n> > tuple->values[i], len);\n> > > - tuple->values[i][len] =\n> '\\0';\n> > > + tuple->values[i].data =\n> > palloc(len + 1);\n> > > + pq_copymsgbytes(in,\n> > tuple->values[i].data, len);\n> > > + tuple->values[i].data[len]\n> > = '\\0';\n> > > + tuple->values[i].len = len;\n> >\n> > The cursor should be set to 0 in the text formatted case too if this\n> is\n> > how we chose to encode data.\n> >\n> > However I am not quite convinced I like the StringInfoData usage\n> here.\n> > Why not just change the struct to include additional array of lengths\n> > rather than replacing the existing values array with StringInfoData\n> > array, that seems generally both simpler and should have smaller\n> memory\n> > footprint too, no?\n> >\n> >\n> > Can you explain this a bit more? 
I don't really see a huge difference in\n> > memory usage.\n> > We still need length and the data copied into LogicalRepTupleData when\n> > sending the data in binary, no?\n> >\n>\n> Yes but we don't need to have fixed sized array of 1600 elements that\n> contains maxlen and cursor positions of the StringInfoData struct which\n> we don't use for anything AFAICS.\n>\n\nOK, I can see an easy way to only allocate the number of elements required\nbut since OidReceiveFunctionCall takes\nStringInfo as one of its arguments it seems like an easy path unless there\nis something I am missing ?\n\nDave",
"msg_date": "Fri, 3 Apr 2020 16:44:11 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 3 Apr 2020 at 16:44, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Fri, 3 Apr 2020 at 03:43, Petr Jelinek <petr@2ndquadrant.com> wrote:\n>\n>> Hi,\n>>\n>> On 08/03/2020 00:18, Dave Cramer wrote:\n>> > On Fri, 6 Mar 2020 at 08:54, Petr Jelinek <petr@2ndquadrant.com\n>> > <mailto:petr@2ndquadrant.com>> wrote:\n>> >\n>> > Hi Dave,\n>> >\n>> > On 29/02/2020 18:44, Dave Cramer wrote:\n>> > >\n>> > >\n>> > > rebased and removed the catversion bump.\n>> >\n>> > Looked into this and it generally seems okay, but I do have one\n>> > gripe here:\n>> >\n>> > > + tuple->values[i].data =\n>> > palloc(len + 1);\n>> > > + /* and data */\n>> > > +\n>> > > + pq_copymsgbytes(in,\n>> > tuple->values[i].data, len);\n>> > > + tuple->values[i].len = len;\n>> > > + tuple->values[i].cursor =\n>> 0;\n>> > > + tuple->values[i].maxlen =\n>> len;\n>> > > + /* not strictly necessary\n>> > but the docs say it is required */\n>> > > + tuple->values[i].data[len]\n>> > = '\\0';\n>> > > + break;\n>> > > + }\n>> > > + case 't': /* text\n>> > formatted value */\n>> > > + {\n>> > > + tuple->changed[i] = true;\n>> > > + int len = pq_getmsgint(in,\n>> > 4); /* read length */\n>> > >\n>> > > /* and data */\n>> > > - tuple->values[i] =\n>> > palloc(len + 1);\n>> > > - pq_copymsgbytes(in,\n>> > tuple->values[i], len);\n>> > > - tuple->values[i][len] =\n>> '\\0';\n>> > > + tuple->values[i].data =\n>> > palloc(len + 1);\n>> > > + pq_copymsgbytes(in,\n>> > tuple->values[i].data, len);\n>> > > + tuple->values[i].data[len]\n>> > = '\\0';\n>> > > + tuple->values[i].len = len;\n>> >\n>> > The cursor should be set to 0 in the text formatted case too if\n>> this is\n>> > how we chose to encode data.\n>> >\n>> > However I am not quite convinced I like the StringInfoData usage\n>> here.\n>> > Why not just change the struct to include additional array of\n>> lengths\n>> > rather than replacing the existing values array with StringInfoData\n>> > array, that seems generally both simpler 
and should have smaller\n>> memory\n>> > footprint too, no?\n>> >\n>> >\n>> > Can you explain this a bit more? I don't really see a huge difference\n>> in\n>> > memory usage.\n>> > We still need length and the data copied into LogicalRepTupleData when\n>> > sending the data in binary, no?\n>> >\n>>\n>> Yes but we don't need to have fixed sized array of 1600 elements that\n>> contains maxlen and cursor positions of the StringInfoData struct which\n>> we don't use for anything AFAICS.\n>>\n>\nNew patch that fixes a number of errors in the check for validity as well\nas reduces the memory usage by\ndynamically allocating the data changed as well as collapsing the changed\nand binary arrays into a format array.\n\nDave Cramer\n\n>",
"msg_date": "Tue, 7 Apr 2020 15:45:57 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 7 Apr 2020, at 21:45, Dave Cramer <davecramer@gmail.com> wrote:\n\n> New patch that fixes a number of errors in the check for validity as well as reduces the memory usage by\n> dynamically allocating the data changed as well as collapsing the changed and binary arrays into a format array.\n\nThe 0001 patch fails to apply, and possibly others in the series. Please submit\na rebased version. Marking the CF entry as Waiting for Author in the meantime.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 1 Jul 2020 10:53:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Honestly I'm getting a little weary of fixing this up only to have the\npatch not get reviewed.\n\nApparently it has no value so unless someone is willing to review it and\nget it committed I'm just going to let it go.\n\nThanks,\n\nDave Cramer\n\n\nOn Wed, 1 Jul 2020 at 04:53, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 7 Apr 2020, at 21:45, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> > New patch that fixes a number of errors in the check for validity as\n> well as reduces the memory usage by\n> > dynamically allocating the data changed as well as collapsing the\n> changed and binary arrays into a format array.\n>\n> The 0001 patch fails to apply, and possibly other in the series. Please\n> submit\n> a rebased version. Marking the CF entry as Waiting for Author in the\n> meantime.\n>\n> cheers ./daniel",
"msg_date": "Wed, 1 Jul 2020 06:43:42 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "rebased\n\nThanks,\n\nDave Cramer\n\n\nOn Wed, 1 Jul 2020 at 06:43, Dave Cramer <davecramer@gmail.com> wrote:\n\n> Honestly I'm getting a little weary of fixing this up only to have the\n> patch not get reviewed.\n>\n> Apparently it has no value so unless someone is willing to review it and\n> get it committed I'm just going to let it go.\n>\n> Thanks,\n>\n> Dave Cramer\n>\n>\n> On Wed, 1 Jul 2020 at 04:53, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n>> > On 7 Apr 2020, at 21:45, Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>> > New patch that fixes a number of errors in the check for validity as\n>> well as reduces the memory usage by\n>> > dynamically allocating the data changed as well as collapsing the\n>> changed and binary arrays into a format array.\n>>\n>> The 0001 patch fails to apply, and possibly other in the series. Please\n>> submit\n>> a rebased version. Marking the CF entry as Waiting for Author in the\n>> meantime.\n>>\n>> cheers ./daniel\n>\n>",
"msg_date": "Thu, 2 Jul 2020 12:41:14 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 2 Jul 2020, at 18:41, Dave Cramer <davecramer@gmail.com> wrote:\n> \n> rebased\n\nThanks! The new version of 0001 patch has a compiler warning due to mixed\ndeclarations and code:\n\nworker.c: In function ‘slot_store_data’:\nworker.c:366:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n int cursor = tupleData->values[remoteattnum]->cursor;\n ^\nworker.c: In function ‘slot_modify_data’:\nworker.c:485:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n int cursor = tupleData->values[remoteattnum]->cursor;\n ^\n\nI didn't investigate to see if it was new, but Travis is running with Werror\nwhich fails this build.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 5 Jul 2020 22:40:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On 2020-Jul-05, Daniel Gustafsson wrote:\n\n> > On 2 Jul 2020, at 18:41, Dave Cramer <davecramer@gmail.com> wrote:\n> > \n> > rebased\n> \n> Thanks! The new version of 0001 patch has a compiler warning due to mixed\n> declarations and code:\n> \n> worker.c: In function ‘slot_store_data’:\n> worker.c:366:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n\nAFAICS this is fixed in 0005. I'm going to suggest to use \"git rebase\n-i\" so that fixes for bugs that earlier patches introduce are applied as\nfix-ups in those patches; we don't need or want to see the submitter's\nintermediate versions. Ideally, each submitted patch should be free of\nsuch problems, so that we can consider each individual patch in the\nseries in isolation. Indeed, evidently the cfbot considers things that\nway.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jul 2020 17:11:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 5 Jul 2020, at 23:11, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Jul-05, Daniel Gustafsson wrote:\n> \n>>> On 2 Jul 2020, at 18:41, Dave Cramer <davecramer@gmail.com> wrote:\n>>> \n>>> rebased\n>> \n>> Thanks! The new version of 0001 patch has a compiler warning due to mixed\n>> declarations and code:\n>> \n>> worker.c: In function ‘slot_store_data’:\n>> worker.c:366:5: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]\n> \n> AFAICS this is fixed in 0005.\n\nYes and no, 0005 fixes one such instance but the one failing the build is\nanother one in worker.c (the below being from 0008 which in turn change the row\nin question from previous patches):\n\n+ int cursor = tupleData->values[remoteattnum]->cursor;\n\n> I'm going to suggest to use \"git rebase -i\"\n\n+1\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 5 Jul 2020 23:28:11 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sun, 5 Jul 2020 at 17:28, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 5 Jul 2020, at 23:11, Alvaro Herrera <alvherre@2ndquadrant.com>\n> wrote:\n> >\n> > On 2020-Jul-05, Daniel Gustafsson wrote:\n> >\n> >>> On 2 Jul 2020, at 18:41, Dave Cramer <davecramer@gmail.com> wrote:\n> >>>\n> >>> rebased\n> >>\n> >> Thanks! The new version of 0001 patch has a compiler warning due to\n> mixed\n> >> declarations and code:\n> >>\n> >> worker.c: In function ‘slot_store_data’:\n> >> worker.c:366:5: error: ISO C90 forbids mixed declarations and code\n> [-Werror=declaration-after-statement]\n> >\n> > AFAICS this is fixed in 0005.\n>\n> Yes and no, 0005 fixes one such instance but the one failing the build is\n> another one in worker.c (the below being from 0008 which in turn change\n> the row\n> in question from previous patches):\n>\n> + int cursor = tupleData->values[remoteattnum]->cursor;\n>\n> > I'm going to suggest to use \"git rebase -i\"\n>\n> +1\n>\n\nStrangely I don't see those errors when I build on my machine, but I will\nfix them\n\nas far as rebase -i do what is advised here for squashing them. Just one\npatch now ?\n\nThanks,",
"msg_date": "Mon, 6 Jul 2020 08:58:01 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 6 Jul 2020, at 14:58, Dave Cramer <davecramer@gmail.com> wrote:\n\n> as far as rebase -i do what is advised here for squashing them. Just one patch now ?\n\nOne patch per logical change, if there are two disjoint changes in the patchset\nwhere one builds on top of the other then multiple patches are of course fine.\nMy personal rule-of-thumb is to split it the way I envision it committed.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 6 Jul 2020 15:03:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, 6 Jul 2020 at 09:03, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 6 Jul 2020, at 14:58, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> > as far as rebase -i do what is advised here for squashing them. Just one\n> patch now ?\n>\n> One patch per logical change, if there are two disjoint changes in the\n> patchset\n> where one builds on top of the other then multiple patches are of course\n> fine.\n> My personal rule-of-thumb is to split it the way I envision it committed.\n>\n\nAt this point it is the result of 3 rebases. I guess I'll have to break it\nup differently..\n\nThanks,\n\nDave",
"msg_date": "Mon, 6 Jul 2020 09:35:36 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Mon, 6 Jul 2020 at 09:35, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Mon, 6 Jul 2020 at 09:03, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n>> > On 6 Jul 2020, at 14:58, Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>> > as far as rebase -i do what is advised here for squashing them. Just\n>> one patch now ?\n>>\n>> One patch per logical change, if there are two disjoint changes in the\n>> patchset\n>> where one builds on top of the other then multiple patches are of course\n>> fine.\n>> My personal rule-of-thumb is to split it the way I envision it committed.\n>>\n>\n> At this point it is the result of 3 rebases. I guess I'll have to break it\n> up differently..\n>\n>\nOK, rebased it down to 2 patches, attached.\n\n\n\n> Thanks,\n>\n> Dave\n>",
"msg_date": "Mon, 6 Jul 2020 20:16:50 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 7 Jul 2020, at 02:16, Dave Cramer <davecramer@gmail.com> wrote:\n\n> OK, rebased it down to 2 patches, attached.\n\nI took a look at this patchset today. The feature clearly seems like something\nwhich we'd benefit from having, especially if it allows for the kind of\nextensions that were discussed at the beginning of this thread. In general I\nthink it's in pretty good shape, there are however a few comments:\n\nThe patch lacks any kind of test, which I think is required for it to be\nconsidered for committing. It also doesn't update the \\dRs view in psql to\ninclude the subbinary column which IMO it should. I took the liberty to write\nthis as well as tests as I was playing with the patch, the attached 0003\ncontains this, while 0001 and 0002 are your patches included to ensure the\nCFBot can do its thing. This was kind of thrown together to have something\nwhile testing, so it definitely needs a once-over or two.\n\nThe comment here implies that unchanged is the default value for format, but\nisn't this actually setting it to text formatted value?\n+ /* default is unchanged */\n+ tuple->format = palloc(natts * sizeof(char));\n+ memset(tuple->format, 't', natts * sizeof(char));\nAlso, since the values member isn't memset() with a default, this seems a bit\nmisleading at best no?\n\nFor the rest of the backend we aren't including the defname in the errmsg like\nwhat is done here. Maybe we should, but I think that should be done\nconsistently if so, and we should stick to just \"conflicting or redundant\noptions\" for now. At the very least, this needs a comma between \"options\" and\nthe defname and ideally the defname wrapped in \\\".\n- errmsg(\"conflicting or redundant options\")));\n+ errmsg(\"conflicting or redundant options %s already provided\", defel->defname)));\n\nThese types of constructs are IMHO quite hard to read:\n+ if(\n+ #ifdef WORDS_BIGENDIAN\n+ true\n+ #else\n+ false\n+ #endif\n+ != bigendian)\nHow about spelling out the statement completely for both cases, or perhaps\nencapsulating the logic in a macro? Something like the below perhaps?\n+ #ifdef WORDS_BIGENDIAN\n+ if (bigendian != true)\n+ #else\n+ if (bigendian != false)\n+ #endif\n\nThis change needs to be removed before a commit, just highlighting that here to\navoid it going unnoticed.\n-/* #define WAL_DEBUG */\n+#define WAL_DEBUG\n\nReading this I'm left wondering if we shouldn't introduce macros for the kinds,\nsince we now compare with 'u', 't' etc in quite a few places and add comments\nexplaining the types everywhere. A descriptive name would make it easier to\ngrep for all occurrences, and avoid the need for the comment lines. That's not\nnecessarily for this patch though, but an observation from reading it.\n\n\nI found a few smaller nitpicks as well, some of these might go away by a\npg_indent run but I've included them all here regardless:\n\nThis change, and the subsequent whitespace removal later in the same function,\nseems a bit pointless:\n- /* read the relation id */\n relid = pq_getmsgint(in, 4);\n-\n\nBraces should go on the next line:\n+ if (options->proto.logical.binary) {\n\nThis should be a C /* ... 
*/ comment, or perhaps just removed since the code\nis quite self explanatory:\n+ // default to false\n+ *binary_basetypes = false;\n\nIndentation here:\n- errmsg(\"conflicting or redundant options\")));\n+ errmsg(\"conflicting or redundant options %s already provided\", defel->defname)));\n\n..as well as here (there are a few like this one):\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"incompatible datum size\")));\n\nCapitalization of \"after\" to make it a proper sentence:\n+ * after we know that the subscriber is requesting binary check to make sure\n\nExcessive whitespace and indentation in a few places, and not enough in some:\n+\t\t\t\tif (binary_given)\n+\t\t\t\t{\n+\t\t\t\tvalues[Anum_pg_subscription_subbinary - 1] =\n...\n+ if ( *binary_basetypes == true )\n...\n+ if (sizeof(int) != int_size)\n...\n+ if( float4_byval !=\n...\n+ if (sizeof(long) != long_size)\n+ ereport(ERROR,\n...\n+\t\tif (tupleData->format[remoteattnum] =='u')\n...\n+ bool binary_basetypes;\n\nThat's all for now.\n\ncheers ./daniel",
"msg_date": "Tue, 7 Jul 2020 16:01:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, 7 Jul 2020 at 10:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 7 Jul 2020, at 02:16, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> > OK, rebased it down to 2 patches, attached.\n>\n> I took a look at this patchset today. The feature clearly seems like\n> something\n> which we'd benefit from having, especially if it allows for the kind of\n> extensions that were discussed at the beginning of this thread. In\n> general I\n> think it's in pretty good shape, there are however a few comments:\n>\n> The patch lacks any kind of test, which I think is required for it to be\n> considered for committing. It also doesn't update the \\dRs view in psql to\n> include the subbinary column which IMO it should. I took the liberty to\n> write\n> this as well as tests as I was playing with the patch, the attached 0003\n> contains this, while 0001 and 0002 are your patches included to ensure the\n> CFBot can do it's thing. This was kind of thrown together to have\n> something\n> while testing, so it definately need a once-over or two.\n>\n\nI have put all your requests other than the indentation as that can be\ndealt with by pg_indent into another patch which I reordered ahead of yours\nThis should make it easier to see that all of your issues have been\naddressed.\n\nI did not do the macro for updated, inserted, deleted, will give that a go\ntomorrow.\n\n\n>\n> The comment here implies that unchanged is the default value for format,\n> but\n> isn't this actually setting it to text formatted value?\n> + /* default is unchanged */\n> + tuple->format = palloc(natts * sizeof(char));\n> + memset(tuple->format, 't', natts * sizeof(char));\n> Also, since the values member isn't memset() with a default, this seems a\n> bit\n> misleading at best no?\n>\n> For the rest of the backend we aren't including the defname in the errmsg\n> like\n> what is done here. 
Maybe we should, but I think that should be done\n> consistently if so, and we should stick to just \"conflicting or redundant\n> options\" for now. At the very least, this needs a comma between \"options\"\n> and\n> the defname and ideally the defname wrapped in \\\".\n> - errmsg(\"conflicting or redundant options\")));\n> + errmsg(\"conflicting or redundant options %s\n> already provided\", defel->defname)));\n>\n\nI added these since this will now be used outside of logical replication\nand getting reasonable error messages when setting up\nreplication is useful. I added the \\\" and ,\n\n\n>\n> These types of constructs are IMHO quite hard to read:\n> + if(\n> + #ifdef WORDS_BIGENDIAN\n> + true\n> + #else\n> + false\n> + #endif\n> + != bigendian)\n> How about spelling out the statement completely for both cases, or perhaps\n> encapsulating the logic in a macro? Something like the below perhaps?\n> + #ifdef WORDS_BIGENDIAN\n> + if (bigendian != true)\n> + #else\n> + if (bigendian != false)\n> + #endif\n>\n> This change needs to be removed before a commit, just highlighting that\n> here to\n> avoid it going unnoticed.\n> -/* #define WAL_DEBUG */\n> +#define WAL_DEBUG\n>\n> Done\n\n\n> Reading this I'm left wondering if we shoulnd't introduce macros for the\n> kinds,\n> since we now compare with 'u', 't' etc in quite a few places and add\n> comments\n> explaining the types everywhere. A descriptive name would make it easier\n> to\n> grep for all occurrences, and avoid the need for the comment lines. 
Thats\n> not\n> necesarily for this patch though, but an observation from reading it.\n>\n\nI'll take a look at adding macros tomorrow.\n\nI've taken care of much of this below\n\n>\n>\n> I found a few smaller nitpicks as well, some of these might go away by a\n> pg_indent run but I've included them all here regardless:\n>\n> This change, and the subsequent whitespace removal later in the same\n> function,\n> seems a bit pointless:\n> - /* read the relation id */\n> relid = pq_getmsgint(in, 4);\n> -\n>\n> Braces should go on the next line:\n> + if (options->proto.logical.binary) {\n>\n> This should be a C /* ... */ comment, or perhaps just removed since the\n> code\n> is quite self explanatory:\n> + // default to false\n> + *binary_basetypes = false;\n>\n> Indentation here:\n> - errmsg(\"conflicting or redundant options\")));\n> + errmsg(\"conflicting or redundant options %s\n> already provided\", defel->defname)));\n>\n> ..as well as here (there are a few like this one):\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"incompatible datum size\")));\n>\n> Capitalization of \"after\" to make it a proper sentence:\n> + * after we know that the subscriber is requesting binary check to make\n> sure\n>\n> Excessive whitespace and indentation in a few places, and not enough in\n> some:\n> + if (binary_given)\n> + {\n> + values[Anum_pg_subscription_subbinary - 1]\n> =\n> ...\n> + if ( *binary_basetypes == true )\n> ...\n> + if (sizeof(int) != int_size)\n> ...\n> + if( float4_byval !=\n> ...\n> + if (sizeof(long) != long_size)\n> + ereport(ERROR,\n> ...\n> + if (tupleData->format[remoteattnum] =='u')\n> ...\n> + bool binary_basetypes;\n>\n> That's all for now.\n>\n> cheers ./daniel\n>\n>",
"msg_date": "Tue, 7 Jul 2020 16:53:53 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 7 Jul 2020, at 22:53, Dave Cramer <davecramer@gmail.com> wrote:\n\n> I have put all your requests other than the indentation as that can be dealt with by pg_indent into another patch which I reordered ahead of yours\n> This should make it easier to see that all of your issues have been addressed.\n\nThanks for the update! Do note that my patch included a new file which is\nmissing from this patchset:\n\n\tsrc/test/subscription/t/014_binary.pl\n\nThis is, IMO, the most interesting test of this feature so it would be good to\nbe included. It's a basic test and can no doubt be extended to be even more\nrelevant, but it's a start.\n\n> I did not do the macro for updated, inserted, deleted, will give that a go tomorrow.\n\nThis might not be a blocker, but personally I think it would make the code more\nreadable. Anyone else have an opinion on this?\n\n> I added these since this will now be used outside of logical replication and getting reasonable error messages when setting up\n> replication is useful. I added the \\\" and ,\n\nI think the \"lack of detail\" in the existing error messages is intentional to\nmake translation easier, but I might be wrong here.\n\nReading through the new patch, and running the tests, I'm marking this as Ready\nfor Committer. It does need some cosmetic TLC, quite possibly just from\npg_indent but I didn't validate if it will take care of everything, and comment\ntouchups (there is still a TODO comment around wording that needs to be\nresolved). However, I think it's in good enough shape for consideration at\nthis point.\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 9 Jul 2020 10:48:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Thanks for the update! Do note that my patch included a new file which is\n> missing from this patchset:\n> \tsrc/test/subscription/t/014_binary.pl\n> This is, IMO, the most interesting test of this feature so it would be good to\n> be included. It's a basic test and can no doubt be extended to be even more\n> relevant, but it's a start.\n\nI was about to complain that the latest patchset includes no meaningful\ntest cases, but I assume that this missing file contains those.\n\n>> I did not do the macro for updated, inserted, deleted, will give that a go tomorrow.\n\n> This might not be a blocker, but personally I think it would make the\n> code more readable. Anyone else have an opinion on this?\n\n+1 for using macros.\n\n> Reading through the new patch, and running the tests, I'm marking this as Ready\n> for Committer. It does need some cosmetic TLC, quite possibly just from\n> pg_indent but I didn't validate if it will take care of everything, and comment\n> touchups (there is still a TODO comment around wording that needs to be\n> resolved). However, I think it's in good enough shape for consideration at\n> this point.\n\nI took a quick look through the patch and had some concerns:\n\n* Please strip out the PG_VERSION_NUM and USE_INTEGER_DATETIMES checks.\nThose are quite dead so far as a patch for HEAD is concerned --- in fact,\nsince USE_INTEGER_DATETIMES hasn't even been defined since v10 or so,\nthe patch is actively doing the wrong thing there. Not that it matters.\nThis code will never appear in any branch where float timestamps could\nbe a thing.\n\n* I doubt that the checks on USE_FLOAT4/8_BYVAL, sizeof(int), endiannness,\netc, make any sense either. Those surely do not affect the on-the-wire\nrepresentation of values --- or if they do, we've blown it somewhere else.\nI'd just take out all those checks and assume that the binary\nrepresentation is sufficiently portable. 
(If it's not, it's more or less\nthe user's problem, just as in binary COPY.)\n\n* Please also remove debugging hacks such as enabling WAL_DEBUG.\n\n* It'd likely be wise for the documentation to point out that binary\nmode only works if all types to be transferred have send/receive\nfunctions.\n\n\nBTW, while it's not the job of this patch to fix it, I find it quite\ndistressing that we're apparently repeating the lookups of the type\nI/O functions for every row transferred.\n\nI'll set this back to WoA, but I concur with Daniel's opinion that\nit doesn't seem that far from committability.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jul 2020 14:20:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Fri, 10 Jul 2020 at 14:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> > Thanks for the update! Do note that my patch included a new file which\n> is\n> > missing from this patchset:\n> > src/test/subscription/t/014_binary.pl\n> > This is, IMO, the most interesting test of this feature so it would be\n> good to\n> > be included. It's a basic test and can no doubt be extended to be even\n> more\n> > relevant, but it's a start.\n>\n> I was about to complain that the latest patchset includes no meaningful\n> test cases, but I assume that this missing file contains those.\n>\n> >> I did not do the macro for updated, inserted, deleted, will give that a\n> go tomorrow.\n>\n> > This might not be a blocker, but personally I think it would make the\n> > code more readable. Anyone else have an opinion on this?\n>\n> +1 for using macros.\n>\n\nGot it, will add.\n\n>\n> > Reading through the new patch, and running the tests, I'm marking this\n> as Ready\n> > for Committer. It does need some cosmetic TLC, quite possibly just from\n> > pg_indent but I didn't validate if it will take care of everything, and\n> comment\n> > touchups (there is still a TODO comment around wording that needs to be\n> > resolved). However, I think it's in good enough shape for consideration\n> at\n> > this point.\n>\n> I took a quick look through the patch and had some concerns:\n>\n> * Please strip out the PG_VERSION_NUM and USE_INTEGER_DATETIMES checks.\n> Those are quite dead so far as a patch for HEAD is concerned --- in fact,\n> since USE_INTEGER_DATETIMES hasn't even been defined since v10 or so,\n> the patch is actively doing the wrong thing there. Not that it matters.\n> This code will never appear in any branch where float timestamps could\n> be a thing.\n>\n> * I doubt that the checks on USE_FLOAT4/8_BYVAL, sizeof(int), endiannness,\n> etc, make any sense either. 
Those surely do not affect the on-the-wire\n> representation of values --- or if they do, we've blown it somewhere else.\n> I'd just take out all those checks and assume that the binary\n> representation is sufficiently portable. (If it's not, it's more or less\n> the user's problem, just as in binary COPY.)\n>\n\nSo is there any point in having them as options then ?\n\n>\n> * Please also remove debugging hacks such as enabling WAL_DEBUG.\n>\n> * It'd likely be wise for the documentation to point out that binary\n> mode only works if all types to be transferred have send/receive\n> functions.\n>\n\nwill do\n\n>\n>\n> BTW, while it's not the job of this patch to fix it, I find it quite\ndistressing that we're apparently repeating the lookups of the type\n> I/O functions for every row transferred.\n>\n> I'll set this back to WoA, but I concur with Daniel's opinion that\n> it doesn't seem that far from committability.\n>\n\nThanks for looking at this\n\nDave Cramer",
"msg_date": "Sat, 11 Jul 2020 08:14:48 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 11/07/2020 14:14, Dave Cramer wrote:\n> \n> \n> On Fri, 10 Jul 2020 at 14:21, Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> > Reading through the new patch, and running the tests, I'm marking\n> this as Ready\n> > for Committer. It does need some cosmetic TLC, quite possibly\n> just from\n> > pg_indent but I didn't validate if it will take care of\n> everything, and comment\n> > touchups (there is still a TODO comment around wording that needs\n> to be\n> > resolved). However, I think it's in good enough shape for\n> consideration at\n> > this point.\n> \n> I took a quick look through the patch and had some concerns:\n> \n> * Please strip out the PG_VERSION_NUM and USE_INTEGER_DATETIMES checks.\n> Those are quite dead so far as a patch for HEAD is concerned --- in\n> fact,\n> since USE_INTEGER_DATETIMES hasn't even been defined since v10 or so,\n> the patch is actively doing the wrong thing there. Not that it matters.\n> This code will never appear in any branch where float timestamps could\n> be a thing.\n> \n> * I doubt that the checks on USE_FLOAT4/8_BYVAL, sizeof(int),\n> endiannness,\n> etc, make any sense either. Those surely do not affect the on-the-wire\n> representation of values --- or if they do, we've blown it somewhere\n> else.\n> I'd just take out all those checks and assume that the binary\n> representation is sufficiently portable. (If it's not, it's more or\n> less\n> the user's problem, just as in binary COPY.)\n> \n> \n> So is there any point in having them as options then ?\n> \n\nI am guessing this is copied from pglogical, right? We have them there \nbecause it can optionally send data in the on-disk format (not the \nnetwork binary format) and there this matters, but for network binary \nformat they do not matter as Tom says.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Sat, 11 Jul 2020 16:08:43 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Petr Jelinek <petr@2ndquadrant.com> writes:\n> On 11/07/2020 14:14, Dave Cramer wrote:\n>> So is there any point in having them as options then ?\n\n> I am guessing this is copied from pglogical, right? We have them there \n> because it can optionally send data in the on-disk format (not the \n> network binary format) and there this matters, but for network binary \n> format they do not matter as Tom says.\n\nAh, I wondered why that was there at all. Yes, you should just delete\nall that logic --- it's irrelevant as long as we use the send/recv\nfunctions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Jul 2020 10:20:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sat, 11 Jul 2020 at 10:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Petr Jelinek <petr@2ndquadrant.com> writes:\n> > On 11/07/2020 14:14, Dave Cramer wrote:\n> >> So is there any point in having them as options then ?\n>\n> > I am guessing this is copied from pglogical, right? We have them there\n> > because it can optionally send data in the on-disk format (not the\n> > network binary format) and there this matters, but for network binary\n> > format they do not matter as Tom says.\n>\n> Ah, I wondered why that was there at all. Yes, you should just delete\n> all that logic --- it's irrelevant as long as we use the send/recv\n> functions.\n>\n> regards, tom lane\n>\n\n\nOk,\n\nremoved all the unnecessary options.\nAdded the test case that Daniel had created.\nAdded a note to the docs.\n\nNote WAL_DEBUG is removed in patch 3. I could rebase that into patch 1 if\nrequired.\n\nThanks,\n\nDave",
"msg_date": "Mon, 13 Jul 2020 09:11:43 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "> On 13 Jul 2020, at 15:11, Dave Cramer <davecramer@gmail.com> wrote:\n\nI took another look at the updated version today. Since there now were some\nunused variables and (I believe) unnecessary checks (int size and endianness\netc) left, I took the liberty to fix those. I also fixed some markup in the\ncatalog docs, did some minor tidying up and ran pgindent on it.\n\nThe attached is a squash of the 4 patches in your email with the above fixes.\nI'm again marking it RfC since I believe all concerns raised so far have been\naddressed.\n\n> Added the test case that Daniel had created.\n\nNope, still missing AFAICT =) But I've included it in the attached.\n\ncheers ./daniel",
"msg_date": "Tue, 14 Jul 2020 15:26:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, 14 Jul 2020 at 09:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 13 Jul 2020, at 15:11, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> I took another look at the updated version today. Since there now were\n> some\n> unused variables and (I believe) unnecessary checks (int size and\n> endianness\n> etc) left, I took the liberty to fix those. I also fixed some markup in\n> the\n> catalog docs, did some minor tidying up and ran pgindent on it.\n>\n> The attached is a squash of the 4 patches in your email with the above\n> fixes.\n> I'm again marking it RfC since I believe all concerns raised so far has\n> been\n> addressed.\n>\n> > Added the test case that Daniel had created.\n>\n> Nope, still missing AFAICT =) But I've included it in the attached.\n>\n>\nThanks!\n\nDave",
"msg_date": "Tue, 14 Jul 2020 09:36:37 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "So I started looking through this seriously, and my first question\nis why do the docs and code keep saying that \"base types\" are sent\nin binary? Why not just \"data\"? Are there any cases where we\ndon't use binary format, if the subscription requests it?\n\nIf there's not a concrete reason to use that terminology,\nI'd rather flush it, because it seems confusing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jul 2020 12:59:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, 14 Jul 2020 at 12:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> So I started looking through this seriously, and my first question\n> is why do the docs and code keep saying that \"base types\" are sent\n> in binary? Why not just \"data\"? Are there any cases where we\n> don't use binary format, if the subscription requests it?\n>\n> If there's not a concrete reason to use that terminology,\n> I'd rather flush it, because it seems confusing.\n>\n\nWell for some reason I thought there were some types that did not have send\nand receive functions.\n\nI've changed the docs to say data and the flag from binary_basetypes to\njust binary\n\nSee attached.\n\nThanks,\n\nDave\n\n>\n>",
"msg_date": "Tue, 14 Jul 2020 14:08:53 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Tue, 14 Jul 2020 at 12:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So I started looking through this seriously, and my first question\n>> is why do the docs and code keep saying that \"base types\" are sent\n>> in binary? Why not just \"data\"? Are there any cases where we\n>> don't use binary format, if the subscription requests it?\n\n> Well for some reason I thought there were some types that did not have send\n> and receive functions.\n\nThere are, but they're all base types, so this terminology is still\nunhelpful ;-).\n\nIt'd be possible for the sender to send binary for columns it has a\ntypsend function for, and otherwise send text. However, this only helps\nif the receiver has receive functions for all those types; in\ncross-version cases they might disagree about which types can be sent\nin binary. (Hm ... maybe we could have the receiver verify that it has\ntypreceive for every column included in its version of the table, before\nasking for binary mode?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jul 2020 14:36:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-14 14:08:53 -0400, Dave Cramer wrote:\n> On Tue, 14 Jul 2020 at 12:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > So I started looking through this seriously, and my first question\n> > is why do the docs and code keep saying that \"base types\" are sent\n> > in binary? Why not just \"data\"? Are there any cases where we\n> > don't use binary format, if the subscription requests it?\n> >\n> > If there's not a concrete reason to use that terminology,\n> > I'd rather flush it, because it seems confusing.\n> >\n> \n> Well for some reason I thought there were some types that did not have send\n> and receive functions.\n\nThere's also send/receive functions that do not work across systems,\nunfortunately :(. In particular record and array send functions embed\ntype oids and their receive functions verify that they match the local\nsystem. Which basically means that if there's any difference in oid\nassignment order between two systems that they will not allow to\nsend/recv such data between them :(.\n\n\nI suspect that is what that comments might have been referring to?\n\n\nI've several times suggested that we should remove those type checks in\nrecv, as they afaict don't provide any actual value. But unfortunately\nthere hasn't been much response to that. See e.g.\n\nhttps://postgr.es/m/20160426001713.hbqdiwvf4mkzkg55%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jul 2020 12:56:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> There's also send/receive functions that do not work across systems,\n> unfortunately :(. In particular record and array send functions embed\n> type oids and their receive functions verify that they match the local\n> system. Which basically means that if there's any difference in oid\n> assignment order between two systems that they will not allow to\n> send/recv such data between them :(.\n\nIt's not a problem particularly for built-in types, but I agree\nthere's an issue for extension types.\n\n> I've several times suggested that we should remove those type checks in\n> recv, as they afaict don't provide any actual value. But unfortunately\n> there hasn't been much response to that. See e.g.\n> https://postgr.es/m/20160426001713.hbqdiwvf4mkzkg55%40alap3.anarazel.de\n\nMaybe we could compromise by omitting the check if both OIDs are\noutside the built-in range?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jul 2020 19:46:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-14 19:46:52 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > There's also send/receive functions that do not work across systems,\n> > unfortunately :(. In particular record and array send functions embed\n> > type oids and their receive functions verify that they match the local\n> > system. Which basically means that if there's any difference in oid\n> > assignment order between two systems that they will not allow to\n> > send/recv such data between them :(.\n> \n> It's not a problem particularly for built-in types, but I agree\n> there's an issue for extension types.\n\nI'm not so sure. That's true for builtin types within a single major\nversion, but not necessarily across major versions. Not that I can\nimmediately recall cases where we renumbered type oids.\n\nIt also assumes that the type specification exactly matches between the\nsource / target system. It's probably not a great idea to try to use\nsend/recv for meaningfully different types, but it doesn't seem too crazy\nto e.g. allow to e.g. change varchar to text while doing a major version\nupgrade over logical rep.\n\n\nWhat is the gain in having these checks? recv functions need to be safe\nagainst arbitrary input, so a type crosscheck doesn't buy additional\nsafety in that regard. Not that a potential attacker couldn't just\nchange the content anyways?\n\n\n> > I've several times suggested that we should remove those type checks in\n> > recv, as they afaict don't provide any actual value. But unfortunately\n> > there hasn't been much response to that. See e.g.\n> > https://postgr.es/m/20160426001713.hbqdiwvf4mkzkg55%40alap3.anarazel.de\n> \n> Maybe we could compromise by omitting the check if both OIDs are\n> outside the built-in range?\n\nHm. That'd be a lot better than the current situation. So I'd definitely\ngo for that if that's what we can agree on.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jul 2020 18:41:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> What is the gain in having these checks? recv functions need to be safe\n> against arbitrary input, so a type crosscheck doesn't buy additional\n> safety in that regard. Not that a potential attacker couldn't just\n> change the content anyways?\n\nYou're confusing security issues with user-friendliness issues.\nDetecting that you sent the wrong type via an OID mismatch error\nis a lot less painful than trying to figure out why you've got\nerrors along the line of \"incorrect binary data format\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jul 2020 22:28:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 2020-07-14 22:28:48 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > What is the gain in having these checks? recv functions need to be safe\n> > against arbitrary input, so a type crosscheck doesn't buy additional\n> > safety in that regard. Not that a potential attacker couldn't just\n> > change the content anyways?\n> \n> You're confusing security issues with user-friendliness issues.\n> Detecting that you sent the wrong type via an OID mismatch error\n> is a lot less painful than trying to figure out why you've got\n> errors along the line of \"incorrect binary data format\".\n\nAn oid mismatch error without knowing what that's about isn't very\nhelpful either.\n\nHow about adding an errcontext that shows the \"source type oid\", the\ntarget type oid & type name and, for records, the column name of the\ntarget table? That'd make this a lot easier to debug.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jul 2020 19:47:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Tue, 14 Jul 2020 at 22:47, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-07-14 22:28:48 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > What is the gain in having these checks? recv functions need to be safe\n> > > against arbitrary input, so a type crosscheck doesn't buy additional\n> > > safety in that regard. Not that a potential attacker couldn't just\n> > > change the content anyways?\n> >\n> > You're confusing security issues with user-friendliness issues.\n> > Detecting that you sent the wrong type via an OID mismatch error\n> > is a lot less painful than trying to figure out why you've got\n> > errors along the line of \"incorrect binary data format\".\n>\n> An oid mismatch error without knowing what that's about isn't very\n> helpful either.\n>\n> How about adding an errcontext that shows the \"source type oid\", the\n> target type oid & type name and, for records, the column name of the\n> target table? That'd make this a lot easier to debug.\n>\n\n\nSo looking at how to confirm that the subscriber has receive functions for\nall of the types.\n\nAFAICT we don't have that information since the publication determines what\nis sent?\n\nThis code line 482 in proto.c attempts to limit what is sent in binary. We\ncould certainly be more restrictive here.\n\n*if* (binary &&\n\nOidIsValid(typclass->typreceive) &&\n\n(att->atttypid < FirstNormalObjectId || typclass->typtype != 'c') &&\n\n(att->atttypid < FirstNormalObjectId || typclass->typelem == InvalidOid))\n\nDave Cramer\n\nOn Tue, 14 Jul 2020 at 22:47, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2020-07-14 22:28:48 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > What is the gain in having these checks? recv functions need to be safe\n> > against arbitrary input, so a type crosscheck doesn't buy additional\n> > safety in that regard. 
Not that a potential attacker couldn't just\n> > change the content anyways?\n> \n> You're confusing security issues with user-friendliness issues.\n> Detecting that you sent the wrong type via an OID mismatch error\n> is a lot less painful than trying to figure out why you've got\n> errors along the line of \"incorrect binary data format\".\n\nAn oid mismatch error without knowing what that's about isn't very\nhelpful either.\n\nHow about adding an errcontext that shows the \"source type oid\", the\ntarget type oid & type name and, for records, the column name of the\ntarget table? That'd make this a lot easier to debug.So looking at how to confirm that the subscriber has receive functions for all of the types. AFAICT we don't have that information since the publication determines what is sent? This code line 482 in proto.c attempts to limit what is sent in binary. We could certainly be more restrictive here.\nif (binary &&\n OidIsValid(typclass->typreceive) &&\n (att->atttypid < FirstNormalObjectId || typclass->typtype != 'c') &&\n (att->atttypid < FirstNormalObjectId || typclass->typelem == InvalidOid))Dave Cramer",
"msg_date": "Thu, 16 Jul 2020 09:58:13 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Working through this ... what is the rationale for having changed\nthe API of logicalrep_read_update? It seems kind of random,\nespecially since no comparable change was made to\nlogicalrep_read_insert. If there's actually a good reason,\nit seems like it'd apply to both. If there's not, I'd be\ninclined to not change the API, because this sort of thing\nis a recipe for bugs when making cross-version patches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Jul 2020 14:55:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "I've pushed this patch, with a number of adjustments, some cosmetic\nand some not so much (no pg_dump support!?). We're not quite\ndone though ...\n\nDave Cramer <davecramer@gmail.com> writes:\n> So looking at how to confirm that the subscriber has receive functions for\n> all of the types.\n> AFAICT we don't have that information since the publication determines what\n> is sent?\n\nYeah, at the point where we need to send the option, we seem not to have a\nlot of info. In practice, if the sender has a typsend function, the only\nway the subscriber doesn't have a matching typreceive function is if it's\nan older PG version. I think it's sufficient to document that you can't\nuse binary mode in that case, so that's what I did. (Note that\ngetTypeBinaryInputInfo will say \"no binary input function available for\ntype %s\" in such a case, so that seemed like adequate error handling.)\n\n> On Tue, 14 Jul 2020 at 22:47, Andres Freund <andres@anarazel.de> wrote:\n>> An oid mismatch error without knowing what that's about isn't very\n>> helpful either.\n>> How about adding an errcontext that shows the \"source type oid\", the\n>> target type oid & type name and, for records, the column name of the\n>> target table? That'd make this a lot easier to debug.\n\n> This code line 482 in proto.c attempts to limit what is sent in binary. We\n> could certainly be more restrictive here.\n\nI think Andres' point is to be *less* restrictive. I left that logic\nas-is in the committed patch, but we could do something like the attached\nto improve the situation.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 18 Jul 2020 12:53:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "On Sat, Jul 18, 2020 at 9:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've pushed this patch, with a number of adjustments, some cosmetic\n> and some not so much (no pg_dump support!?). We're not quite\n> done though ...\n\nSkink's latest run reports a failure that I surmise was caused by this patch:\n\n==722318== VALGRINDERROR-BEGIN\n==722318== Invalid read of size 1\n==722318== at 0x4F4CC9: apply_handle_update (worker.c:834)\n==722318== by 0x4F4F81: apply_dispatch (worker.c:1427)\n==722318== by 0x4F5104: LogicalRepApplyLoop (worker.c:1635)\n==722318== by 0x4F57BF: ApplyWorkerMain (worker.c:2141)\n==722318== by 0x4BD49E: StartBackgroundWorker (bgworker.c:813)\n==722318== by 0x4CBAB4: do_start_bgworker (postmaster.c:5865)\n==722318== by 0x4CBBF5: maybe_start_bgworkers (postmaster.c:6091)\n==722318== by 0x4CC4BF: sigusr1_handler (postmaster.c:5260)\n==722318== by 0x486413F: ??? (in\n/usr/lib/x86_64-linux-gnu/libpthread-2.31.so)\n==722318== by 0x4DC7845: select (select.c:41)\n==722318== by 0x4CCE40: ServerLoop (postmaster.c:1691)\n==722318== by 0x4CE106: PostmasterMain (postmaster.c:1400)\n==722318== Address 0x78cb0ab is 443 bytes inside a recently\nre-allocated block of size 8,192 alloc'd\n==722318== at 0x483877F: malloc (vg_replace_malloc.c:307)\n==722318== by 0x6A55BD: AllocSetContextCreateInternal (aset.c:468)\n==722318== by 0x280262: AtStart_Memory (xact.c:1108)\n==722318== by 0x2806ED: StartTransaction (xact.c:1979)\n==722318== by 0x282128: StartTransactionCommand (xact.c:2829)\n==722318== by 0x4F5514: ApplyWorkerMain (worker.c:2014)\n==722318== by 0x4BD49E: StartBackgroundWorker (bgworker.c:813)\n==722318== by 0x4CBAB4: do_start_bgworker (postmaster.c:5865)\n==722318== by 0x4CBBF5: maybe_start_bgworkers (postmaster.c:6091)\n==722318== by 0x4CC4BF: sigusr1_handler (postmaster.c:5260)\n==722318== by 0x486413F: ??? 
(in\n/usr/lib/x86_64-linux-gnu/libpthread-2.31.so)\n==722318== by 0x4DC7845: select (select.c:41)\n==722318==\n==722318== VALGRINDERROR-END\n\nSee https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2020-07-20%2002%3A37%3A51\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 Jul 2020 08:17:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Skink's latest run reports a failure that I surmise was caused by this patch:\n\nYeah, I've just been digging through that. The patch didn't create\nthe bug, but it allowed valgrind to detect it, because the column\nstatus array is now \"just big enough\" rather than being always\nMaxTupleAttributeNumber entries. To wit, the problem is that the\ncode in apply_handle_update that computes target_rte->updatedCols\nis junk.\n\nThe immediate issue is that it fails to apply the remote-to-local\ncolumn number mapping, so that it's looking at the wrong colstatus\nentries, possibly including entries past the end of the array.\n\nI'm fixing that, but even after that, there's a semantic problem:\nLOGICALREP_COLUMN_UNCHANGED is just a weak optimization, cf the code\nthat sends it, in proto.c around line 480. colstatus will often *not*\nbe that for columns that were in fact not updated on the remote side.\nI wonder whether we need to take steps to improve that.\n\nCC'ing Peter E., as this issue arose with b9c130a1fdf.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jul 2020 11:51:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Hi,\n\nOn 20/07/2020 17:51, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> Skink's latest run reports a failure that I surmise was caused by this patch:\n> \n> Yeah, I've just been digging through that. The patch didn't create\n> the bug, but it allowed valgrind to detect it, because the column\n> status array is now \"just big enough\" rather than being always\n> MaxTupleAttributeNumber entries. To wit, the problem is that the\n> code in apply_handle_update that computes target_rte->updatedCols\n> is junk.\n> \n> The immediate issue is that it fails to apply the remote-to-local\n> column number mapping, so that it's looking at the wrong colstatus\n> entries, possibly including entries past the end of the array.\n> \n> I'm fixing that, but even after that, there's a semantic problem:\n> LOGICALREP_COLUMN_UNCHANGED is just a weak optimization, cf the code\n> that sends it, in proto.c around line 480. colstatus will often *not*\n> be that for columns that were in fact not updated on the remote side.\n> I wonder whether we need to take steps to improve that.\n> \n\nLOGICALREP_COLUMN_UNCHANGED is not trying to optimize anything, there is \ncertainly no effort made to not send columns that were not updated by \nlogical replication itself. It's just something we invented in order to \nhandle the fact that values for TOASTed columns that were not updated \nare simply not visible to logical decoding (unless table has REPLICA \nIDENTITY FULL) as they are not written to WAL nor accessible via \nhistoric snapshot. So the output plugin simply does not see the real value.\n\n-- \nPetr Jelinek\n2ndQuadrant - PostgreSQL Solutions for the Enterprise\nhttps://www.2ndQuadrant.com/\n\n\n",
"msg_date": "Mon, 20 Jul 2020 20:55:07 +0200",
"msg_from": "Petr Jelinek <petr@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
},
{
"msg_contents": "Petr Jelinek <petr@2ndquadrant.com> writes:\n> On 20/07/2020 17:51, Tom Lane wrote:\n>> I'm fixing that, but even after that, there's a semantic problem:\n>> LOGICALREP_COLUMN_UNCHANGED is just a weak optimization, cf the code\n>> that sends it, in proto.c around line 480. colstatus will often *not*\n>> be that for columns that were in fact not updated on the remote side.\n>> I wonder whether we need to take steps to improve that.\n\n> LOGICALREP_COLUMN_UNCHANGED is not trying to optimize anything, there is \n> certainly no effort made to not send columns that were not updated by \n> logical replication itself. It's just something we invented in order to \n> handle the fact that values for TOASTed columns that were not updated \n> are simply not visible to logical decoding (unless table has REPLICA \n> IDENTITY FULL) as they are not written to WAL nor accessible via \n> historic snapshot. So the output plugin simply does not see the real value.\n\nHm. So the comment I added a couple days ago is wrong; can you propose\na better one?\n\nHowever, be that as it may, we do have a provision in the protocol that\ncan handle marking columns unchanged. I'm thinking if we tried a bit\nharder to identify unchanged columns on the sending side, we could both\nfix this semantic deficiency for triggers and improve efficiency by\nreducing transmission of unneeded data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jul 2020 15:02:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Binary support for pgoutput plugin"
}
] |
[
{
"msg_contents": "Currently, WITH a AS NOT MATERIALIZED (INSERT ...) would silently \ndisregard the \"NOT MATERIALIZED\" instruction and execute the data-\nmodifying CTE to completion (as per the long-standing DML CTE rule).\n\nThis seems like an omission to me. Ideally, the presence of an explicit \n\"NOT MATERIALIZED\" clause on a data-modifying CTE should disable the \n\"run to completion\" logic.\n\nIt is understandably late in the 12 cycle, so maybe prohibit NOT \nMATERIALIZED with DML altogether and revisit this in 13?\n\nThoughts?\n\n Elvis\n \n\n\n\n\n\n",
"msg_date": "Mon, 03 Jun 2019 11:45:51 -0400",
"msg_from": "Elvis Pranskevichus <elprans@gmail.com>",
"msg_from_op": true,
"msg_subject": "WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-03 11:45:51 -0400, Elvis Pranskevichus wrote:\n> Currently, WITH a AS NOT MATERIALIZED (INSERT ...) would silently \n> disregard the \"NOT MATERIALIZED\" instruction and execute the data-\n> modifying CTE to completion (as per the long-standing DML CTE rule).\n> \n> This seems like an omission to me. Ideally, the presence of an explicit \n> \"NOT MATERIALIZED\" clause on a data-modifying CTE should disable the \n> \"run to completion\" logic.\n\nI don't see us ever doing that. The result of minor costing and other\nplanner changes would yield different updated data. That'll just create\nendless bug reports.\n\n\n> It is understandably late in the 12 cycle, so maybe prohibit NOT \n> MATERIALIZED with DML altogether and revisit this in 13?\n\nI could see us adding an error, or just continuing to silently ignore\nit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Jun 2019 08:50:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "On Monday, June 3, 2019 11:50:15 A.M. EDT Andres Freund wrote:\n> > This seems like an omission to me. Ideally, the presence of an\n> > explicit \"NOT MATERIALIZED\" clause on a data-modifying CTE should\n> > disable the \"run to completion\" logic.\n> \n> I don't see us ever doing that. The result of minor costing and other\n> planner changes would yield different updated data. That'll just\n> create endless bug reports.\n\nI understand why the rule exists in the first place, but I think that an \nexplicit opt-in signals the assumption of responsibility and opens the \npossibility of using this in a well-defined evaluation context, such as \nCASE WHEN.\n\n Elvis\n\n\n\n\n",
"msg_date": "Mon, 03 Jun 2019 11:56:43 -0400",
"msg_from": "Elvis Pranskevichus <elprans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "Elvis Pranskevichus <elprans@gmail.com> writes:\n> On Monday, June 3, 2019 11:50:15 A.M. EDT Andres Freund wrote:\n>>> This seems like an omission to me. Ideally, the presence of an\n>>> explicit \"NOT MATERIALIZED\" clause on a data-modifying CTE should\n>>> disable the \"run to completion\" logic.\n\n>> I don't see us ever doing that. The result of minor costing and other\n>> planner changes would yield different updated data. That'll just\n>> create endless bug reports.\n\n> I understand why the rule exists in the first place, but I think that an \n> explicit opt-in signals the assumption of responsibility and opens the \n> possibility of using this in a well-defined evaluation context, such as \n> CASE WHEN.\n\nTBH, if you think it's well-defined, you're wrong. I concur with\nAndres that turning off run-to-completion for DMLs would be disastrous.\nFor just one obvious point, what about firing AFTER triggers?\n\nIt's already the case that the planner will silently ignore NOT\nMATERIALIZED for other cases where it can't inline the CTE for semantic\nor implementation reasons -- see comments in SS_process_ctes(). I see\nno good reason to treat the DML exception much differently from other\nexceptions, such as presence of volatile functions or recursion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 12:09:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "On Monday, June 3, 2019 12:09:46 P.M. EDT Tom Lane wrote:\n> > I understand why the rule exists in the first place, but I think\n> > that an explicit opt-in signals the assumption of responsibility\n> > and opens the possibility of using this in a well-defined\n> > evaluation context, such as CASE WHEN.\n> \n> TBH, if you think it's well-defined, you're wrong.\n\nThe documentation seems to strongly suggest otherwise:\n\n\"When it is essential to force evaluation order, a CASE construct (see \nSection 9.17) can be used. ... CASE construct used in this fashion will \ndefeat optimization attempts\"\n\nAre there cases where this is not true outside of the documented \nexceptions (i.e. immutable early-eval and aggregates)?\n\n Elvis\n\n\n\n\n",
"msg_date": "Mon, 03 Jun 2019 12:29:41 -0400",
"msg_from": "Elvis Pranskevichus <elprans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "Elvis Pranskevichus <elprans@gmail.com> writes:\n> On Monday, June 3, 2019 12:09:46 P.M. EDT Tom Lane wrote:\n>>> I understand why the rule exists in the first place, but I think\n>>> that an explicit opt-in signals the assumption of responsibility\n>>> and opens the possibility of using this in a well-defined\n>>> evaluation context, such as CASE WHEN.\n\n>> TBH, if you think it's well-defined, you're wrong.\n\n> The documentation seems to strongly suggest otherwise:\n\n> \"When it is essential to force evaluation order, a CASE construct (see \n> Section 9.17) can be used. ... CASE construct used in this fashion will \n> defeat optimization attempts\"\n\n> Are there cases where this is not true outside of the documented \n> exceptions (i.e. immutable early-eval and aggregates)?\n\nCASE is a scalar-expression construct. It's got little to do with\nthe timing of scan/join operations such as row fetches. We offer\nusers essentially no control over when those happen ... other than\nthe guarantees about CTE materialization, which are exactly what\nyou say you want to give up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 13:03:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "On Monday, June 3, 2019 1:03:44 P.M. EDT Tom Lane wrote:\n> CASE is a scalar-expression construct. It's got little to do with\n> the timing of scan/join operations such as row fetches. We offer\n> users essentially no control over when those happen ... other than\n> the guarantees about CTE materialization, which are exactly what\n> you say you want to give up.\n\nIn the general case, yes, but I *can* use a scalar-returning INSERT CTE \nin a THEN clause as a subquery. Useful for a conditional INSERT, when \nyou can't use ON CONFLICT.\n\nAnyway, I understand that the complications are probably not worth it.\n\nThanks,\n\n Elvis\n\n\n\n\n",
"msg_date": "Mon, 03 Jun 2019 13:44:44 -0400",
"msg_from": "Elvis Pranskevichus <elprans@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 11:45:51AM -0400, Elvis Pranskevichus wrote:\n> Currently, WITH a AS NOT MATERIALIZED (INSERT ...) would silently \n> disregard the \"NOT MATERIALIZED\" instruction and execute the data-\n> modifying CTE to completion (as per the long-standing DML CTE rule).\n> \n> This seems like an omission to me. Ideally, the presence of an explicit \n> \"NOT MATERIALIZED\" clause on a data-modifying CTE should disable the \n> \"run to completion\" logic.\n\nIt might be worth documenting the fact that NOT MATERIALIZED doesn't\naffect DML CTEs, just as it doesn't affect statements with volatile\nfunctions and recursive CTEs.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 4 Jun 2019 01:14:28 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> It might be worth documenting the fact that NOT MATERIALIZED doesn't\n> affect DML CTEs, just as it doesn't affect statements with volatile\n> functions and recursive CTEs.\n\nWe already do:\n\n However, if a WITH query is non-recursive and side-effect-free (that\n is, it is a SELECT containing no volatile functions) then it can be\n folded into the parent query, allowing joint optimization of the two\n query levels. By default, this happens if the parent query references\n the WITH query just once, but not if it references the WITH query more\n than once. You can override that decision by specifying MATERIALIZED\n to force separate calculation of the WITH query, or by specifying NOT\n MATERIALIZED to force it to be merged into the parent query. The\n latter choice risks duplicate computation of the WITH query, but it\n can still give a net savings if each usage of the WITH query needs\n only a small part of the WITH query's full output.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 19:33:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 07:33:35PM -0400, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > It might be worth documenting the fact that NOT MATERIALIZED doesn't\n> > affect DML CTEs, just as it doesn't affect statements with volatile\n> > functions and recursive CTEs.\n> \n> We already do:\n> \n> However, if a WITH query is non-recursive and side-effect-free (that\n> is, it is a SELECT containing no volatile functions) then it can be\n\nI guess this part makes it pretty clear that DML isn't part of the\nparty just yet.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 4 Jun 2019 02:40:02 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
},
{
"msg_contents": "On 2019-Jun-03, Andres Freund wrote:\n\n> On 2019-06-03 11:45:51 -0400, Elvis Pranskevichus wrote:\n> > It is understandably late in the 12 cycle, so maybe prohibit NOT \n> > MATERIALIZED with DML altogether and revisit this in 13?\n> \n> I could see us adding an error, or just continuing to silently ignore\n> it.\n\nHmm, shouldn't we be throwing an error for that case? I'm not sure it's\ndefensible that we don't.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 11:58:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WITH NOT MATERIALIZED and DML CTEs"
}
] |
[
{
"msg_contents": "Hi,\n\nOn Saturday, we had a nice in-person conversation about the\nrequirements that zedstore has for an undo facility vs. the\nrequirements that zheap has vs. the current design of the undo patch\nset. The people present were: Heikki Linnakangas, Amit Kapila, Thomas\nMunro, Kuntal Ghosh, Andres Freund, and me. The remainder of this\nemail consists of the notes that I took during that conversation, with\nsome subsequent editing that I did to try to make them more broadly\nunderstandable.\n\nzedstore needs a very fast way to determine whether an undo pointer is\nold enough that it doesn't matter. Perhaps we should keep a\nbackend-local cache of discard pointers so that we can test an undo\npointer against the discard horizon without needing to touch shared\nmemory.\n\nzedstore needs a very fast way of obtaining the XID when it looks up\nan undo pointer. It seems inefficient to store the full XID\n(especially an 8-byte FullTransactionId) in every undo record, but\nstoring it only in the transaction header makes getting the XID\nprohibitively expensive. Heikki had the idea of storing a flag in\neach undo record saying 'same X as first tuple as on the page', where\nX might be XID, CID, relation OID, block number, etc. That seems like\na good idea. We'd need to decide exactly how many such flags to have\nand which fields they cover; and it's probably best not to depend on\nan undo record for another transaction that might be independently\ndiscarded. Another option is to put a \"default\" for each of these\nvalues in the page header and then store a value in individual undo\nrecords only if it differs from the value in the page header. 
We\ndon't have a way to do similar optimization for whatever individual\nclients of the undo machinery choose to store in the undo record's\npayload, which might be nice if we had a good idea how to do it.\n\nzedstore intends to store an undo pointer per tuple, whereas the\ncurrent zheap code stores an undo pointer per transaction slot.\nTherefore, zheap wants an undo record to point to the previous undo\nrecord for that transaction slot; whereas zedstore wants an undo\nrecord to point to the previous undo record for the same tuple (which\nmight not belong to the same transaction). The undo code that assumes\nper-block chaining (uur_blkprev) needs to be made more generic.\n\nFor either zedstore or zheap, newly-inserted tuples could potentially\npoint to an XID/CID rather than an undo record pointer, because\nvisibility checks don't need to look at the actual undo record. An\nundo record still needs to be written in case we abort, but we don't\nneed to be able to refer to it for visibility purposes.\n\nzedstore and zheap have different batching requirements. Both want to\nprocess undo records in TID order, but zedstore doesn't care about\npages. The current undo patch set calls the RMGR-specific callback\nonce per page; that needs to be generalized. Is it possible that some\nother table AM might want to sort by something other than TID?\n\nRight now, zedstore stores a separate btree for each column, and an\nextra btree for the visibility information, but it could have column\ngroups instead. If you put all of the columns and the visibility\ninformation into a single column group, you'd have a row store. How\nwould the performance of that solution compare with zheap?\n\nMaybe zheap should forget about having transaction slots, and just\nstore an undo pointer in each tuple. That would be worse in cases\nsuch as bulk loading, where all the tuples on the page are modified by\nthe same transaction ID. 
We could optimize that case by using\nsomething like a transaction slot, but then there's a problem if, say,\nevery tuple on the page is updated or deleted by a separate\ntransaction: the tuples need to get bigger to accommodate separate\nundo pointers, and the page might overflow. Perhaps we should design\na solution that allows for the temporary use of overflow space in such\ncases.\n\nTuple locking is complicated for both zheap and zedstore. Storing the\nlocker XIDs in the tuple or page is tempting, but you can run out of\nspace; where do you put the extras? Writing an undo record per lock\nis also tempting, but that could generate a very large amount of undo\nif there are many transactions that are each locking a whole table,\nwhereas the heap doesn't have that problem, because it uses\nMultiXacts. On the other hand, in the current heap, it's easily\npossible for N transactions to burn through O(N^2) MultiXactIds, which\nis worse than an undo record per lock.\n\nA perfect system would avoid permanently bloating the table when many\ntransactions each take many locks, but it's hard to design a system\nthat has that property AND ALSO is maximally compact for a table that\nis bulk-loaded and then never updated again. (Perhaps it's OK for the\ntable to expand a little bit if we find that we need to make space for\na MultiXactId pointer or similar in each tuple? Thomas later dubbed\nthis doubtless-uncontroversial design non-in-place select, and we're\nall looking forward to review comments on noninplace_select.c.)\n\nHeikki proposed an out-of-line btree of tuple locks, with compression\nto represent locks on TID ranges via a single entry, stored in memory\nif small and spilling to disk if large. It could be discarded on\nstartup, except for locks held by prepared transactions. 
Updates might\nnot need to enter their own tuple locks in this btree, but they would\nneed to propagate key share locks forward to new TIDs.\n\nRobert and Thomas came up with the idea of having a special kind of\nundo record that lives outside of a transaction; a backend could\nattach to an additional undo log, insert one of these special records,\nand then detach. These special undo records would have a special rule\nfor when they could be discarded; an rmgr callback would decide when\nit's OK to discard one. This could serve as a replacement for\nMultiXactIds; you could store a collection of XIDs in the payload of\none of these special records. (Andres was not very impressed by this\nidea.) Later, after Heikki left, we talked about perhaps also\nallowing these special undo records to have a special rule for when\nundo actions are executed, so that you could use the undo framework\nfor un-commit actions. It doesn't seem desirable to have to perform\non-commit actions frequently; one of the motivating ideas behind zheap\nis that commits should be as cheap as possible, even if that costs\nsomething in the abort case. But it might be useful to have the\noption available for use in rare or exceptional cases.\n\nAfter this discussion, Heikki is thinking that he might just allow\neach tuple in zedstore to store both an undo log pointer and a\nMultiXactId, elided when not needed. The MultiXact system would need\nto be updated to make MultiXactIds 64-bits in order to avoid needing\nto freeze. No one seemed keen to build a new storage engine that\nstill requires freezing.\n\nWhen subtransactions are used, the undo system intends to use the\ntoplevel XID for everything, rather than requiring additional XIDs for\nsubtransactions. After some discussion, this seems like it should\nwork well for both zheap and zedstore.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jun 2019 11:53:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "undo: zedstore vs. zheap"
}
] |
[
{
"msg_contents": "After many years of trying, it seems the -fsanitize=undefined checking\nin gcc is now working somewhat reliably. Attached is a patch that fixes\nall errors of the kind\n\nruntime error: null pointer passed as argument N, which is declared to\nnever be null\n\nMost of the cases are calls to memcpy(), memcmp(), etc. with a length of\nzero, so one appears to get away with passing a null pointer.\n\nNote that these are runtime errors, not static analysis, so the code in\nquestion is actually reached.\n\nTo reproduce, configure normally and then set\n\nCOPT=-fsanitize=undefined -fno-sanitize=alignment -fno-sanitize-recover=all\n\nand build and run make check-world. Unpatched, this will core dump in\nvarious places.\n\n(-fno-sanitize=alignment should also be fixed but I took it out here to\ndeal with it separately.)\n\nSee https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html for\nfurther documentation.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 3 Jun 2019 21:21:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 3:22 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> After many years of trying, it seems the -fsanitize=undefined checking\n> in gcc is now working somewhat reliably. Attached is a patch that fixes\n> all errors of the kind\n\nIs this as of some particular gcc version?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Jun 2019 15:30:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "On 2019-06-05 21:30, Robert Haas wrote:\n> On Mon, Jun 3, 2019 at 3:22 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> After many years of trying, it seems the -fsanitize=undefined checking\n>> in gcc is now working somewhat reliably. Attached is a patch that fixes\n>> all errors of the kind\n> \n> Is this as of some particular gcc version?\n\nI used gcc-8.\n\nThe option has existed in gcc for quite some time, but in previous\nreleases it always tended to hang or get confused somewhere.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Jun 2019 11:36:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "Hi,\n\nI tested this patch with clang 7 on master.\n- On unpatched master I can't reproduce errors with make check-world in:\nsrc/backend/access/heap/heapam.c\nsrc/backend/utils/cache/relcache.c (IIRC I triggered this one in a\nprevious pg version)\nsrc/backend/utils/misc/guc.c\n\n- I have a hard-to-reproduce one not fixed by this patch:\nsrc/backend/storage/ipc/shm_mq.c line 727\n\nAbout the changes\n- in\nsrc/fe_utils/print.c\nthe line memset(header_done, false, col_count * sizeof(bool));\nis redundant and should be removed, not guarded with if (header_done);\nheader_done is either null or already zeroed, since it's pg_malloc0'ed.\n\nIn all cases but one the patched version merely short-circuits an\nundefined no-op, but in\nsrc/backend/access/transam/clog.c\nmemcmp of 0 bytes returns 0, so the change modifies the code path: before,\nwith nsubxids == 0 the if branch was taken; now it's not.\nIt could wait more often while taking the lock; no idea if that's relevant.\n\nRegards\nDidier\n\nOn Thu, Jun 6, 2019 at 11:36 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-06-05 21:30, Robert Haas wrote:\n> > On Mon, Jun 3, 2019 at 3:22 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> >> After many years of trying, it seems the -fsanitize=undefined checking\n> >> in gcc is now working somewhat reliably. Attached is a patch that fixes\n> >> all errors of the kind\n> >\n> > Is this as of some particular gcc version?\n>\n> I used gcc-8.\n>\n> The option has existed in gcc for quite some time, but in previous\n> releases it always tended to hang or get confused somewhere.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n\n",
"msg_date": "Sat, 29 Jun 2019 18:16:45 +0200",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 09:21:48PM +0200, Peter Eisentraut wrote:\n> After many years of trying, it seems the -fsanitize=undefined checking\n> in gcc is now working somewhat reliably. Attached is a patch that fixes\n> all errors of the kind\n> \n> runtime error: null pointer passed as argument N, which is declared to\n> never be null\n> \n> Most of the cases are calls to memcpy(), memcmp(), etc. with a length of\n> zero, so one appears to get away with passing a null pointer.\n\nI just saw this proposal. The undefined behavior in question is strictly\nacademic. These changes do remove the need for new users to discover\n-fno-sanitize=nonnull-attribute, but they make the code longer and no clearer.\nGiven the variety of code this touches, I expect future commits will\nreintroduce the complained-of usage patterns, prompting yet more commits to\nrestore the invariant achieved here. Hence, I'm -0 for this change.\n\n\n",
"msg_date": "Thu, 4 Jul 2019 23:33:53 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "On 2019-07-05 01:33, Noah Misch wrote:\n> I just saw this proposal. The undefined behavior in question is strictly\n> academic. These changes do remove the need for new users to discover\n> -fno-sanitize=nonnull-attribute, but they make the code longer and no clearer.\n> Given the variety of code this touches, I expect future commits will\n> reintroduce the complained-of usage patterns, prompting yet more commits to\n> restore the invariant achieved here. Hence, I'm -0 for this change.\n\nThis sanitizer has found real problems in the past. By removing these\ntrivial issues we can then set up a build farm animal or similar to\nautomatically check for any new issues.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 5 Jul 2019 18:14:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "> This sanitizer has found real problems in the past. By removing these\n> trivial issues we can then set up a build farm animal or similar to\n> automatically check for any new issues.\n\nWe have done exactly this in postgis with 2 different jobs (gcc and clang)\nand, even though it doesn't happen too often, it's really satisfying when\nit detects these issues automatically.\n\n-- \nRaúl Marín Rodríguez\ncarto.com\n\n\n",
"msg_date": "Fri, 5 Jul 2019 18:38:37 +0200",
"msg_from": "Raúl Marín Rodríguez <rmrodriguez@carto.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "On Fri, Jul 05, 2019 at 06:14:31PM +0200, Peter Eisentraut wrote:\n> On 2019-07-05 01:33, Noah Misch wrote:\n> > I just saw this proposal. The undefined behavior in question is strictly\n> > academic. These changes do remove the need for new users to discover\n> > -fno-sanitize=nonnull-attribute, but they make the code longer and no clearer.\n> > Given the variety of code this touches, I expect future commits will\n> > reintroduce the complained-of usage patterns, prompting yet more commits to\n> > restore the invariant achieved here. Hence, I'm -0 for this change.\n> \n> This sanitizer has found real problems in the past. By removing these\n> trivial issues we can then set up a build farm animal or similar to\n> automatically check for any new issues.\n\nHas it found one real problem that it would not have found given\n\"-fno-sanitize=nonnull-attribute\"? I like UBSan in general, but I haven't\nfound a reason to prefer plain \"-fsanitize=undefined\" over\n\"-fsanitize=undefined -fno-sanitize=nonnull-attribute\".\n\n\n",
"msg_date": "Fri, 5 Jul 2019 09:58:30 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
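{
"msg_contents": "For readers unfamiliar with the check under debate, here is a minimal standalone sketch (hypothetical illustration, not code from the PostgreSQL tree) of the pattern that UBSan's nonnull-attribute check flags: passing a NULL pointer to memcpy() with a zero length, which the C standard makes undefined even though no bytes are copied. The guarded variant shows the style of fix such a cleanup patch would add.\n\n```c\n/* Hypothetical standalone demo -- not from the PostgreSQL tree.\n * C11 makes it undefined to pass a null pointer to memcpy(), even when\n * the byte count is 0, so UBSan's nonnull-attribute check reports it;\n * building with\n *   -fsanitize=undefined -fno-sanitize=nonnull-attribute\n * silences exactly this class of report, as discussed above. */\n#include <assert.h>\n#include <stddef.h>\n#include <string.h>\n\n/* The complained-of pattern: src may legitimately be NULL when n == 0. */\nvoid copy_items(char *dst, const char *src, size_t n)\n{\n    memcpy(dst, src, n);        /* UBSan fires here if src == NULL */\n}\n\n/* The style of guard a cleanup patch would add. */\nvoid copy_items_guarded(char *dst, const char *src, size_t n)\n{\n    if (n > 0)\n        memcpy(dst, src, n);    /* never reached with a NULL src */\n}\n```\n\nThe guarded form is a no-op for n == 0 regardless of src, which is the invariant the withdrawn patch was trying to establish mechanically across the tree.",
"msg_from_op": false
}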
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2019-07-05 01:33, Noah Misch wrote:\n>> I just saw this proposal. The undefined behavior in question is strictly\n>> academic. These changes do remove the need for new users to discover\n>> -fno-sanitize=nonnull-attribute, but they make the code longer and no clearer.\n>> Given the variety of code this touches, I expect future commits will\n>> reintroduce the complained-of usage patterns, prompting yet more commits to\n>> restore the invariant achieved here. Hence, I'm -0 for this change.\n\n> This sanitizer has found real problems in the past. By removing these\n> trivial issues we can then set up a build farm animal or similar to\n> automatically check for any new issues.\n\nI think Noah's point is just that we can do that with the addition of\n-fno-sanitize=nonnull-attribute. I agree with him that it's very\nunclear why we should bother to make the code clean against that\nspecific subset of warnings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jul 2019 13:10:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
},
{
"msg_contents": "On 2019-07-05 19:10, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2019-07-05 01:33, Noah Misch wrote:\n>>> I just saw this proposal. The undefined behavior in question is strictly\n>>> academic. These changes do remove the need for new users to discover\n>>> -fno-sanitize=nonnull-attribute, but they make the code longer and no clearer.\n>>> Given the variety of code this touches, I expect future commits will\n>>> reintroduce the complained-of usage patterns, prompting yet more commits to\n>>> restore the invariant achieved here. Hence, I'm -0 for this change.\n> \n>> This sanitizer has found real problems in the past. By removing these\n>> trivial issues we can then set up a build farm animal or similar to\n>> automatically check for any new issues.\n> \n> I think Noah's point is just that we can do that with the addition of\n> -fno-sanitize=nonnull-attribute. I agree with him that it's very\n> unclear why we should bother to make the code clean against that\n> specific subset of warnings.\n\nOK, I'm withdrawing this patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 13 Aug 2019 20:49:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix runtime errors from -fsanitize=undefined"
}
] |
[
{
"msg_contents": "Peter and I implemented this small (attached) patch to extend\nabbreviated key compare sort to macaddr8 datatype (currently supported\nfor macaddr).\n\nI tried checking to see if there is a performance difference using the\nattached DDL based on src/test/regress/sql/macaddr8.sql. I found\nthat the sort function is only exercised when creating an index (not,\nfor example, when doing some type of aggregation).\n\nWith the patch applied to current master and using the DDL attached,\nthe timing for creating the index hovered around 20 ms for master and\n15 ms for the patched version.\n\nMachine and version specs: PostgreSQL 12beta1 on x86_64-pc-linux-gnu\ncompiled by gcc (Ubuntu 8.3.0-6ubuntu1) 8.3.0, 64-bit\n\nI think that that seems like an improvement. I was thinking of\nregistering this patch for the next commitfest. Is that okay?\n\nI was just wondering what the accepted way to test and share\nperformance numbers is for a small patch like this. Is sharing DDL\nenough? Do I need to use pg_bench?\n\n-- \nMelanie Plageman",
"msg_date": "Mon, 3 Jun 2019 12:23:33 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Sort support for macaddr8"
},
{
"msg_contents": "On 6/3/19 3:23 PM, Melanie Plageman wrote:\n> Peter and I implemented this small (attached) patch to extend\n> abbreviated key compare sort to macaddr8 datatype (currently supported\n> for macaddr).\n\nAm I going cross-eyed, or would the memset be serving more of a purpose\nif it were in the SIZEOF_DATUM != 8 branch?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 3 Jun 2019 17:03:16 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 1:17 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I tried checking to see if there is a performance difference using the\n> attached DDL based on src/test/regress/sql/macaddr8.sql. I found\n> that the sort function is only exercised when creating an index (not,\n> for example, when doing some type of aggregation).\n\nAs you know, it's a bit weird that we're proposing adding sort support\nwith abbreviated keys for a type that is 8 bytes, since you'd expect\nit to also be pass-by-value on most platforms, which largely defeats\nthe purpose of having abbreviated keys (though sort support could\nstill make sense, for the same reason it makes sense to have it for\nint8). However, macaddr8 isn't actually pass-by-value, and it seems\ntoo late to do anything about that now, so abbreviated keys actually\nmake sense.\n\n> With the patch applied to current master and using the DDL attached,\n> the timing for creating the index hovered around 20 ms for master and\n> 15 ms for the patched version.\n\nI would expect a sufficiently large sort to execute in about half the\ntime with abbreviation, based on previous experience. However, I think\nthat this patch can be justified in a relatively straightforward way.\nIt extends sort support for macaddr to macaddr8, since these two types\nare almost identical in every other way. We should still validate the\nperformance out of an abundance of caution, but I would be very\nsurprised if there was much difference between the macaddr and\nmacaddr8 cases.\n\nIn short, users should not be surprised by the big gap in performance\nbetween macaddr and macaddr8. It's worth being consistent there.\n\n> I think that that seems like an improvement. I was thinking of\n> registering this patch for the next commitfest. Is that okay?\n\nDefinitely, yes.\n\n> I was just wondering what the accepted way to test and share\n> performance numbers is for a small patch like this. Is sharing DDL\n> enough? 
Do I need to use pg_bench?\n\nI've always used custom microbenchmarks for stuff like this.\nRepeatedly executing a particular query and taking the median\nexecution time as representative seems to be the best approach.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 3 Jun 2019 14:39:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
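{
"msg_contents": "As a concrete illustration of what an abbreviated-key conversion has to produce, here is a hedged standalone sketch. The names (`datum_t`, `macaddr8_abbrev`, `macaddr8_abbrev_cmp`) are invented stand-ins, not the real `macaddr8_abbrev_convert()` from the patch: the 8 bytes are packed most-significant-first so that a plain unsigned integer comparison of two abbreviated keys orders them the same way memcmp() orders the raw addresses, and since all 8 bytes fit in the key, the abbreviated comparison is exact.\n\n```c\n/* Sketch only -- PostgreSQL's real version uses Datum and\n * DatumBigEndianToNative() in src/backend/utils/adt/mac8.c. */\n#include <assert.h>\n#include <stdint.h>\n#include <string.h>\n\ntypedef uint64_t datum_t;          /* stand-in for an 8-byte Datum */\n\ndatum_t macaddr8_abbrev(const unsigned char mac[8])\n{\n    datum_t d = 0;\n    /* Pack big-endian: mac[0] becomes the most significant byte, so\n     * integer order on d matches memcmp() order on the raw bytes. */\n    for (int i = 0; i < 8; i++)\n        d = (d << 8) | mac[i];\n    return d;                      /* all 8 bytes fit: comparison is exact */\n}\n\nint macaddr8_abbrev_cmp(datum_t a, datum_t b)\n{\n    return (a > b) - (a < b);      /* branch-free 3-way compare */\n}\n```\n\nBecause nothing is truncated here, \"abbreviated equality\" really is equality for macaddr8 on 64-bit Datum builds -- the point raised later in this thread about skipping the authoritative recheck.",
"msg_from_op": false
}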
{
"msg_contents": "On Mon, Jun 3, 2019 at 2:03 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> Am I going cross-eyed, or would the memset be serving more of a purpose\n> if it were in the SIZEOF_DATUM != 8 branch?\n\nNo, it wouldn't -- that's the correct place for it with the macaddr\ntype. However, it isn't actually necessary to memset() at the\nequivalent point for macaddr8, since we cannot \"run out of bytes from\nthe authoritative representation\" that go in the Datum/abbreviated\nkey. I suppose that the memset() should simply be removed, since it is\nsuperfluous here.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 3 Jun 2019 14:48:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On 6/3/19 5:03 PM, Chapman Flack wrote:\n> On 6/3/19 3:23 PM, Melanie Plageman wrote:\n>> Peter and I implemented this small (attached) patch to extend\n>> abbreviated key compare sort to macaddr8 datatype (currently supported\n>> for macaddr).\n> \n> Am I going cross-eyed, or would the memset be serving more of a purpose\n> if it were in the SIZEOF_DATUM != 8 branch?\n\nIt looks like a copy-pasto coming from mac.c, where the size of\nthe thing to be compared isn't itself 8 bytes.\n\nWith sizeof(macaddr) being 6, that original code may have had\nthese cases in mind:\n\n- SIZEOF_DATUM is something smaller than 6 (likely 4). The whole key\n doesn't fit, but that's ok, because abbreviated \"equality\" just means\n to recheck with the authoritative routine.\n- SIZEOF_DATUM is exactly 6. Probably not a thing.\n- SIZEOF_DATUM is anything larger than 6 (likely 8). Needs the memset.\n Also, in this case, abbreviated \"equality\" could be taken as true\n equality, never needing the authoritative fallback.\n\nFor macaddr8, the cases morph into these:\n\n- SIZEOF_DATUM is something smaller than 8 (likely 4). Ok; it's\n just an abbreviation.\n- SIZEOF_DATUM is exactly 8. Now an actual thing, even likely.\n- SIZEOF_DATUM is larger than 8. Our flying cars run postgres, and\n we need the memset to make sure they don't crash.\n\nThis leaves me with a couple of questions:\n\n1. (This one seems like a bug.) In the little-endian case, if\n SIZEOF_DATUM is smaller than the type, I'm not convinced by doing\n the DatumBigEndianToNative() after the memcpy(). Seems like that's\n too late to make sure the most-significant bytes got copied.\n\n2. (This one seems like an API opportunity.) If it becomes common to\n add abbreviation support for smallish types such that (as here,\n when SIZEOF_DATUM >= 8), an abbreviated \"equality\" result is in fact\n authoritative, would it be worthwhile to have some way for the sort\n support routine to announce that fact to the caller? 
That could\n spare the caller the effort of re-checking with the authoritative\n routine. It could also (by making the equality case less costly)\n end up changing the weight assigned to the cardinality estimate in\n deciding whether to abbrev..\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 3 Jun 2019 17:59:13 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
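{
"msg_contents": "The question about where DatumBigEndianToNative() belongs comes down to this: memcpy() of raw key bytes into a native integer on a little-endian machine puts byte 0 in the *least* significant position, reversing the sort order, so the swap is applied to the copied datum afterwards. A standalone sketch under that assumption (`pack_key` and `bswap64` are hypothetical stand-ins for the real Datum macros, and the sketch assumes a little-endian build unless WORDS_BIGENDIAN is defined, mirroring how PostgreSQL's configure sets it):\n\n```c\n#include <assert.h>\n#include <stdint.h>\n#include <string.h>\n\nuint64_t bswap64(uint64_t x)\n{\n    /* Classic 3-step byte reversal: adjacent bytes, then 16-bit halves,\n     * then 32-bit halves. */\n    x = ((x & 0x00ff00ff00ff00ffULL) << 8)  | ((x >> 8)  & 0x00ff00ff00ff00ffULL);\n    x = ((x & 0x0000ffff0000ffffULL) << 16) | ((x >> 16) & 0x0000ffff0000ffffULL);\n    return (x << 32) | (x >> 32);\n}\n\nuint64_t pack_key(const unsigned char bytes[8])\n{\n    uint64_t d;\n    memcpy(&d, bytes, 8);          /* native byte order: wrong on LE */\n#ifdef WORDS_BIGENDIAN             /* assumption: build defines this, as PG does */\n    return d;\n#else\n    return bswap64(d);             /* the DatumBigEndianToNative() step */\n#endif\n}\n```\n\nAfter the swap, unsigned comparison of the packed keys agrees with memcmp() on the original byte arrays, which is the invariant abbreviated keys rely on.",
"msg_from_op": false
}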
{
"msg_contents": "On 6/3/19 5:59 PM, Chapman Flack wrote:\n> On 6/3/19 5:03 PM, Chapman Flack wrote:\n> 1. (This one seems like a bug.) In the little-endian case, if\n> SIZEOF_DATUM is smaller than the type, I'm not convinced by doing\n> the DatumBigEndianToNative() after the memcpy(). Seems like that's\n> too late to make sure the most-significant bytes got copied.\n\nWait, I definitely was cross-eyed for that one. It's the abbreviated\ncopy whose endianness varies. Never mind.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 3 Jun 2019 18:04:29 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 2:59 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> 1. (This one seems like a bug.) In the little-endian case, if\n> SIZEOF_DATUM is smaller than the type, I'm not convinced by doing\n> the DatumBigEndianToNative() after the memcpy(). Seems like that's\n> too late to make sure the most-significant bytes got copied.\n\nUh, when else would you do it? Before the memcpy()?\n\n> 2. (This one seems like an API opportunity.) If it becomes common to\n> add abbreviation support for smallish types such that (as here,\n> when SIZEOF_DATUM >= 8), an abbreviated \"equality\" result is in fact\n> authoritative, would it be worthwhile to have some way for the sort\n> support routine to announce that fact to the caller? That could\n> spare the caller the effort of re-checking with the authoritative\n> routine.\n\nIt's possible that that would make sense, but I don't think that this\npatch needs to do that. There is at least one pre-existing case that\ndoes this -- macaddr.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 3 Jun 2019 15:05:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Greetings,\n\n* Melanie Plageman (melanieplageman@gmail.com) wrote:\n> Peter and I implemented this small (attached) patch to extend\n> abbreviated key compare sort to macaddr8 datatype (currently supported\n> for macaddr).\n\nNice.\n\n> I think that that seems like an improvement. I was thinking of\n> registering this patch for the next commitfest. Is that okay?\n\nSure.\n\n> I was just wondering what the accepted way to test and share\n> performance numbers is for a small patch like this. Is sharing DDL\n> enough? Do I need to use pg_bench?\n\nDetailed (enough... doesn't need to include timing of every individual\nquery, but something like the average timing across 5 runs or similar\nwould be good) results posted to this list, with enough information\nabout how to reproduce the tests, would be the way to go.\n\nThe idea is to let others also test and make sure that they come up with\nsimilar results to yours, and if they don't, ideally have enough\ninformation to narrow down what's happening / what's different.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 4 Jun 2019 13:49:23 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 2:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n>\n> As you know, it's a bit weird that we're proposing adding sort support\n> with abbreviated keys for a type that is 8 bytes, since you'd expect\n> it to also be pass-by-value on most platforms, which largely defeats\n> the purpose of having abbreviated keys (though sort support could\n> still make sense, for the same reason it makes sense to have it for\n> int8). However, macaddr8 isn't actually pass-by-value, and it seems\n> too late to do anything about that now, so abbreviated keys actually\n> make sense.\n>\n>\nSo, if making macaddr8 pass-by-value is either not worth it or\npotentially a breaking change, and adding abbreviated sort support can\navoid \"pointer chasing\" and guarantee equivalent relative performance\nfor macaddr8 and macaddr, then that seems worth it.\n\nWith regard to macaddr8_abbrev_convert() and memset(), I attached a patch\nwith the memset() removed, since it is superfluous here.\n\nmacaddr8_cmp_internal() already existed before this patch and I noticed\nthat it explicitly returns int32 whereas the return type of\nmacaddr_cmp_internal() is just specified as an int. I was wondering why.\n\nI also noticed that the prototype for macaddr8_cmp_internal() was not\nat the top of the file with the other static function prototypes. I\nadded it there, but I wasn't sure if there was some reason that it was\nlike that to begin with.\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 4 Jun 2019 11:33:18 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 01:49:23PM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Melanie Plageman (melanieplageman@gmail.com) wrote:\n>> Peter and I implemented this small (attached) patch to extend\n>> abbreviated key compare sort to macaddr8 datatype (currently supported\n>> for macaddr).\n>\n>Nice.\n>\n>> I think that that seems like an improvement. I was thinking of\n>> registering this patch for the next commitfest. Is that okay?\n>\n>Sure.\n>\n>> I was just wondering what the accepted way to test and share\n>> performance numbers is for a small patch like this. Is sharing DDL\n>> enough? Do I need to use pg_bench?\n>\n>Detailed (enough... doesn't need to include timing of every indivudal\n>query, but something like the average timing across 5 runs or similar\n>would be good) results posted to this list, with enough information\n>about how to reproduce the tests, would be the way to go.\n>\n>The idea is to let others also test and make sure that they come up with\n>similar results to yours, and if they don't, ideally have enough\n>information to narrow down what's happening / what's different.\n>\n\nYeah, there's no \"approved way\" to do performance tests the contributors\nwould have to follow. That applies both to tooling and how detailed the\ndata need/should be. Ultimately, the goal is to convince others (and\nyourself) that the change is an improvement. Does a simple pgbench\nscript achieve that? Cool, use that. Do you need something more complex?\nSure, do a shell script or something like that.\n\nAs long as others can reasonably reproduce your tests, it's fine.\n\nFor me, the most critical part of benchmarking a change is deciding what\nto test - which queries, data sets, what amounts of data, config, etc.\n\nFor example, the data set you used has ~12k rows. Does the behavior\nchange with 10x or 100x that? 
It probably does not make sense to go\nabove available RAM (the I/O costs are likely to make everything else\nmostly irrelevant), but CPU caches may matter a lot. Different work_mem\n(and maintenance_work_mem) values may be useful too.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 4 Jun 2019 23:30:03 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
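{
"msg_contents": "In that spirit, a self-contained microbenchmark can live entirely outside the server: sort the same random 8-byte keys once through a memcmp-on-pointer comparator (the pass-by-reference shape) and once through a packed-uint64 comparator (what abbreviated keys or a by-value representation buy), repeating each run several times and keeping the best timing. The sketch below is purely illustrative -- no PostgreSQL code is involved, and all names are invented:\n\n```c\n#include <assert.h>\n#include <stdint.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n\ntypedef struct { unsigned char b[8]; } mac8;\n\n/* Pass-by-reference comparison: chase the pointer, compare raw bytes. */\nint cmp_bytes(const void *a, const void *b)\n{\n    return memcmp(((const mac8 *) a)->b, ((const mac8 *) b)->b, 8);\n}\n\n/* Pack big-endian so integer order matches memcmp() order. */\nuint64_t pack(const mac8 *m)\n{\n    uint64_t d = 0;\n    for (int i = 0; i < 8; i++)\n        d = (d << 8) | m->b[i];\n    return d;\n}\n\n/* Pass-by-value comparison: two register loads, no memory chasing. */\nint cmp_packed(const void *a, const void *b)\n{\n    uint64_t x = *(const uint64_t *) a;\n    uint64_t y = *(const uint64_t *) b;\n    return (x > y) - (x < y);\n}\n\n/* Best-of-k qsort timing in seconds; caller re-shuffles between runs. */\ndouble best_sort_time(void *base, size_t n, size_t sz,\n                      int (*cmp)(const void *, const void *), int k)\n{\n    double best = -1.0;\n    for (int i = 0; i < k; i++)\n    {\n        clock_t t0 = clock();\n        qsort(base, n, sz, cmp);\n        double s = (double) (clock() - t0) / CLOCKS_PER_SEC;\n        if (best < 0 || s < best)\n            best = s;\n    }\n    return best;\n}\n```\n\nBoth comparators must produce the same ordering -- only the cost differs -- so a correctness cross-check between them is a cheap sanity test to include alongside the timings.",
"msg_from_op": false
}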
{
"msg_contents": "On 2019-Jun-03, Peter Geoghegan wrote:\n\n> As you know, it's a bit weird that we're proposing adding sort support\n> with abbreviated keys for a type that is 8 bytes, since you'd expect\n> it to also be pass-by-value on most platforms, which largely defeats\n> the purpose of having abbreviated keys (though sort support could\n> still make sense, for the same reason it makes sense to have it for\n> int8). However, macaddr8 isn't actually pass-by-value, and it seems\n> too late to do anything about that now, so abbreviated keys actually\n> make sense.\n\nI'm not sure I understand why you say it's too late to change now.\nSurely the on-disk representation doesn't actually change, so it is not\nimpossible to change? And you make it sound like doing that change is\nworthwhile, performance-wise.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Jun 2019 17:37:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 17:37:35 -0400, Alvaro Herrera wrote:\n> On 2019-Jun-03, Peter Geoghegan wrote:\n> > As you know, it's a bit weird that we're proposing adding sort support\n> > with abbreviated keys for a type that is 8 bytes, since you'd expect\n> > it to also be pass-by-value on most platforms, which largely defeats\n> > the purpose of having abbreviated keys (though sort support could\n> > still make sense, for the same reason it makes sense to have it for\n> > int8). However, macaddr8 isn't actually pass-by-value, and it seems\n> > too late to do anything about that now, so abbreviated keys actually\n> > make sense.\n> \n> I'm not sure I understand why you say it's too late to change now.\n> Surely the on-disk representation doesn't actually change, so it is not\n> impossible to change? And you make it sound like doing that change is\n> worthwhile, performance-wise.\n\nYea, I don't immediately see a problem with doing that on a major\nversion boundary. Obviously that'd only be possible for sizeof(Datum) ==\n8 == sizeof(macaddr8) platforms, but that's the vast majority these\ndays. And generally, I think it's just about *always* worth to go for a\npass-by-value for the cases where that doesn't imply space wastage.\n\nI think it might be worthwhile to optimize things so that all typlen > 0\n&& typlen <= sizeof(Datum) are allowed for byval datums.\n\nSELECT typname, typlen FROM pg_type WHERE typlen > 0 AND typlen <= 8 AND NOT typbyval;\n┌──────────┬────────┐\n│ typname │ typlen │\n├──────────┼────────┤\n│ tid │ 6 │\n│ macaddr │ 6 │\n│ macaddr8 │ 8 │\n└──────────┴────────┘\n(3 rows)\n\nSeems like adding byval support for sizes outside of 1/2/4/8 bytes would\nbe worthwhile for tid alone. Not sure whether there's extensions with\nsignificant use that have fixed-width types <= 8 bytes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 14:55:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 14:55:16 -0700, Andres Freund wrote:\n> On 2019-06-04 17:37:35 -0400, Alvaro Herrera wrote:\n> I think it might be worthwhile to optimize things so that all typlen > 0\n> && typlen <= sizeof(Datum) are allowed for byval datums.\n\nMaybe even just deprecate specifying byval at CREATE TYPE time, and\ninstead automatically infer it from the type length. We've had a number\nof blunders around this, and I can't really see any reason for\nspecifying byval = false when we internally could treat it as a byval.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 14:59:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 2:55 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea, I don't immediately see a problem with doing that on a major\n> version boundary. Obviously that'd only be possible for sizeof(Datum) ==\n> 8 == sizeof(macaddr8) platforms, but that's the vast majority these\n> days. And generally, I think it's just about *always* worth to go for a\n> pass-by-value for the cases where that doesn't imply space wastage.\n\nIt would be faster to do it that way, I think. You would need a more\ncomplicated comparator than a classic abbreviated comparator (i.e. a\n3-way unsigned int comparator) that way, but it would very probably be\nfaster on balance.\n\nI'm glad to hear that it isn't *obviously* a problem from a\ncompatibility perspective -- I really wasn't sure about that, since\nretrofitting a type to be pass-by-value like this is something that\nmay never have been attempted before now (at least not since we\nstarted to care about pg_upgrade).\n\n> I think it might be worthwhile to optimize things so that all typlen > 0\n> && typlen <= sizeof(Datum) are allowed for byval datums.\n>\n> SELECT typname, typlen FROM pg_type WHERE typlen > 0 AND typlen <= 8 AND NOT typbyval;\n> ┌──────────┬────────┐\n> │ typname │ typlen │\n> ├──────────┼────────┤\n> │ tid │ 6 │\n> │ macaddr │ 6 │\n> │ macaddr8 │ 8 │\n> └──────────┴────────┘\n> (3 rows)\n\nThis is half the reason why I ended up implementing itemptr_encode()\nto accelerate the TID sort used by CREATE INDEX CONCURRENTLY some\nyears back -- TID is 6 bytes wide, which doesn't have the necessary\nmacro support within postgres.h. There is no reason why that couldn't\nbe added for the benefit of both TID and macaddr types, though it\nprobably wouldn't be worth it. 
And, as long as we're not going to\nthose lengths, there may be some value in keeping the macaddr8 code in\nline with macaddr code -- the two types are currently almost the same\n(the glaring difference is the lack of macaddr8 sort support).\n\nWe'll need to draw the line somewhere, and that is likely to be a bit\narbitrary. This was what I meant by \"weird\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 4 Jun 2019 15:10:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-04 15:10:07 -0700, Peter Geoghegan wrote:\n> On Tue, Jun 4, 2019 at 2:55 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea, I don't immediately see a problem with doing that on a major\n> > version boundary. Obviously that'd only be possible for sizeof(Datum) ==\n> > 8 == sizeof(macaddr8) platforms, but that's the vast majority these\n> > days. And generally, I think it's just about *always* worth to go for a\n> > pass-by-value for the cases where that doesn't imply space wastage.\n> \n> It would be faster to do it that way, I think. You would need a more\n> complicated comparator than a classic abbreviated comparator (i.e. a\n> 3-way unsigned int comparator) that way, but it would very probably be\n> faster on balance.\n\nI'd be surprised if it weren't.\n\n\n> I'm glad to hear that it isn't *obviously* a problem from a\n> compatibility perspective -- I really wasn't sure about that, since\n> retrofitting a type to be pass-by-value like this is something that\n> may never have been attempted before now (at least not since we\n> started to care about pg_upgrade).\n\nObviously we have to test it, but I don't really see any compat\nproblems. Both have the same size on disk, after all. We couldn't make\nsuch a change in a minor version, as DatumGetMacaddr*,\nDatumGetItemPointer obviously need to change, but it ought to otherwise\nbe transparent. It would, I think, be different if we still supported\nv0 calling conventions, but we don't...\n\n\n> This is half the reason why I ended up implementing itemptr_encode()\n> to accelerate the TID sort used by CREATE INDEX CONCURRENTLY some\n> years back -- TID is 6 bytes wide, which doesn't have the necessary\n> macro support within postgres.h. There is no reason why that couldn't\n> be added for the benefit of both TID and macaddr types, though it\n> probably wouldn't be worth it.\n\nI think we should definitely do that. 
It seems not unlikely that other\npeople want to write new fixed width types, and we shouldn't have them\ndeal with all this complexity unnecessarily.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Jun 2019 15:23:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 3:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > This is half the reason why I ended up implementing itemptr_encode()\n> > to accelerate the TID sort used by CREATE INDEX CONCURRENTLY some\n> > years back -- TID is 6 bytes wide, which doesn't have the necessary\n> > macro support within postgres.h. There is no reason why that couldn't\n> > be added for the benefit of both TID and macaddr types, though it\n> > probably wouldn't be worth it.\n>\n> I think we should definitely do that. It seems not unlikely that other\n> people want to write new fixed width types, and we shouldn't have them\n> deal with all this complexity unnecessarily.\n\nOn second thought, maybe there is something to be said for being\nexhaustive here.\n\nIt seems like there is a preference for making macaddr8 pass-by-value\ninstead of adding abbreviated keys support to macaddr8, and possibly\ndoing the same with the original macaddr type.\n\nDo you think that you'll be able to work on the project with this\nexpanded scope, Melanie?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 4 Jun 2019 15:49:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 3:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Jun 4, 2019 at 3:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > > This is half the reason why I ended up implementing itemptr_encode()\n> > > to accelerate the TID sort used by CREATE INDEX CONCURRENTLY some\n> > > years back -- TID is 6 bytes wide, which doesn't have the necessary\n> > > macro support within postgres.h. There is no reason why that couldn't\n> > > be added for the benefit of both TID and macaddr types, though it\n> > > probably wouldn't be worth it.\n> >\n> > I think we should definitely do that. It seems not unlikely that other\n> > people want to write new fixed width types, and we shouldn't have them\n> > deal with all this complexity unnecessarily.\n>\n> On second thought, maybe there is something to be said for being\n> exhaustive here.\n>\n> It seems like there is a preference for making macaddr8 pass-by-value\n> instead of adding abbreviated keys support to macaddr8, and possibly\n> doing the same with the original macaddr type.\n>\n> Do you think that you'll be able to work on the project with this\n> expanded scope, Melanie?\n>\n>\nI can take on making macaddr8 pass-by-value\nI tinkered a bit last night and got in/out mostly working (I think).\nI'm not sure about macaddr, TID, and user-defined types.\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 5 Jun 2019 09:18:34 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On 2019-Jun-05, Melanie Plageman wrote:\n\n> I can take on making macaddr8 pass-by-value\n> I tinkered a bit last night and got in/out mostly working (I think).\n> I'm not sure about macaddr, TID, and user-defined types.\n\nYeah, let's see what macaddr8 looks like, and we can move from there --\nI suppose adapting for macaddr would not be terribly different, but we\ndon't have to do both in a single commit. I don't expect that TID would\nnecessarily be similar since we have lots of bespoke code for that in\nlots of places; it might not affect anything (it should not!) but then\nit might. No reason not to move forward incrementally.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 13:41:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-05 09:18:34 -0700, Melanie Plageman wrote:\n> On Tue, Jun 4, 2019 at 3:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> > On Tue, Jun 4, 2019 at 3:23 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > This is half the reason why I ended up implementing itemptr_encode()\n> > > > to accelerate the TID sort used by CREATE INDEX CONCURRENTLY some\n> > > > years back -- TID is 6 bytes wide, which doesn't have the necessary\n> > > > macro support within postgres.h. There is no reason why that couldn't\n> > > > be added for the benefit of both TID and macaddr types, though it\n> > > > probably wouldn't be worth it.\n> > >\n> > > I think we should definitely do that. It seems not unlikely that other\n> > > people want to write new fixed width types, and we shouldn't have them\n> > > deal with all this complexity unnecessarily.\n> >\n> > On second thought, maybe there is something to be said for being\n> > exhaustive here.\n> >\n> > It seems like there is a preference for making macaddr8 pass-by-value\n> > instead of adding abbreviated keys support to macaddr8, and possibly\n> > doing the same with the original macaddr type.\n> >\n> > Do you think that you'll be able to work on the project with this\n> > expanded scope, Melanie?\n> >\n> >\n> I can take on making macaddr8 pass-by-value\n> I tinkered a bit last night and got in/out mostly working (I think).\n> I'm not sure about macaddr, TID, and user-defined types.\n\nI'd much rather see this tackled in a general way than fiddling with\nindividual datatypes. I think we should:\n\n1) make fetch_att(), store_att_byval() etc support datums of any length\n between 1 and <= sizeof(Datum). Probably also convert them to inline\n functions. 
There's a few more functions to be adjusted, but not many,\n I think.\n\n2) Remove ability to pass PASSEDBYVALUE to CREATE TYPE, but instead\n compute whether attbyval is possible, solely based on INTERNALLENGTH\n (when INTERNALLENGTH > 0 obviously).\n\n3) Fix the fallout, by fixing a few of the Datum<->type conversion\n functions affected by this change. That'll require a bit of work, but\n not too much. We should write those conversion routines in a way\n that'll keep them working for the scenarios where the type is\n actually passable by value, and not (required for > 4 byte datums).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Jun 2019 11:55:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "On 2019-Jun-05, Andres Freund wrote:\n\n> I'd much rather see this tackled in a general way than fiddling with\n> individual datatypes. I think we should:\n> \n> 1) make fetch_att(), store_att_byval() etc support datums of any length\n> between 1 and <= sizeof(Datum). Probably also convert them to inline\n> functions. There's a few more functions to be adjusted, but not many,\n> I think.\n\nDoes this mean that datatypes that are >4 and <=8 bytes need to handle\nboth cases? Is it possible for them to detect the current environment?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 15:14:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Hi,\n\nOn June 5, 2019 12:14:42 PM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>On 2019-Jun-05, Andres Freund wrote:\n>\n>> I'd much rather see this tackled in a general way than fiddling with\n>> individual datatypes. I think we should:\n>> \n>> 1) make fetch_att(), store_att_byval() etc support datums of any\n>length\n>> between 1 and <= sizeof(Datum). Probably also convert them to\n>inline\n>> functions. There's a few more functions to be adjusted, but not\n>many,\n>> I think.\n>\n>Does this mean that datatypes that are >4 and <=8 bytes need to handle\n>both cases? Is it possible for them to detect the current environment?\n\nWell, the conversion macros need to know. You can look at float8 for an example of the difference - it's pretty centralized. We should provide a few helper macros to abstract that away.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Wed, 05 Jun 2019 12:17:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On June 5, 2019 12:14:42 PM PDT, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> Does this mean that datatypes that are >4 and <=8 bytes need to handle\n>> both cases? Is it possible for them to detect the current environment?\n\n> Well, the conversion macros need to know. You can look at float8 for an example of the difference - it's pretty centralized. We should provide a few helper macros to abstract that away.\n\nFWIW, I disagree with Andres on this being a reasonable way to proceed.\n\nThe fact that we support both pass-by-value and pass-by-ref for float8\nand int8 is because those are important data types that are worth taking\nextra pains to optimize. It's a very long way from there to insisting\nthat every datatype between 5 and 8 bytes long must get the same\ntreatment, and even further to Andres' apparent position that we should\nforce third-party types to do it whether they care about\nmicro-optimization or not.\n\nAnd I *entirely* fail to get the point of adding such support for\ndatatypes of 5 or 7 bytes. No such types exist, or are on the horizon\nAFAIK.\n\nLastly, I don't think adding additional allowed widths of pass-by-value\ntypes would be cost-free, because it would add cycles to the inner loops\nof the tuple forming and deforming functions. (No, I don't believe that\nJIT makes that an ignorable concern.)\n\nI'm not really sure that either macaddr or macaddr8 are used widely\nenough to justify expending optimization effort on them. But if they\nare, let's just do that, not move the goal posts into the next stadium.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jun 2019 13:39:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-06 13:39:50 -0400, Tom Lane wrote:\n> Lastly, I don't think adding additional allowed widths of pass-by-value\n> types would be cost-free, because it would add cycles to the inner loops\n> of the tuple forming and deforming functions.\n\nI'm not sure I quite buy that.\n\nI think that we have branches over a fixed number of lengths is largely\nunnecessary. att_addlength_pointer() doesn't care - it just uses the\nlength. And I think we should just consider doing the same for\nfetch_att(). E.g. by using memcpy().\n\nThat'd also have the advantage that we'd not be *forced* to rely\nalignment of byval types. The only reason we actually need that is the\nheaptuple-to-struct mapping for catalogs. Outside of that we don't have\npointers to individual byval tuples, and waste a fair bit of padding due\nto that.\n\nAdditionally we'd get rid of needing separate versions for SIZEOF_DATUM\n!= 8/not.\n\n\n> (No, I don't believe that JIT makes that an ignorable concern.)\n\nObviously not.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jun 2019 12:14:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sort support for macaddr8"
}
] |
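[Editorial note: Andres's suggestion in the thread above — replacing the fixed per-width branches in fetch_att()/store_att_byval() with a length-generic memcpy — can be sketched outside PostgreSQL. This is a hedched illustration only, assuming a 64-bit Datum; the helper names fetch_small_att()/store_small_att() and the plain typedef are inventions for the example, not the real postgres.h macros.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uintptr_t Datum;        /* stand-in for PostgreSQL's Datum */

/* Length-generic fetch: copy any 1..sizeof(Datum)-byte value into a Datum,
 * with no alignment requirement and no switch over supported widths.
 * A 6-byte macaddr or 8-byte macaddr8 both fit on a 64-bit platform. */
static Datum
fetch_small_att(const void *src, int len)
{
    Datum d = 0;
    assert(len > 0 && len <= (int) sizeof(Datum));
    memcpy(&d, src, (size_t) len);
    return d;
}

/* Matching store: write the first 'len' bytes of the Datum's
 * representation back out (byte order matches fetch_small_att, so a
 * fetch/store round trip is endianness-agnostic). */
static void
store_small_att(void *dst, Datum d, int len)
{
    assert(len > 0 && len <= (int) sizeof(Datum));
    memcpy(dst, &d, (size_t) len);
}
```

Any fixed-width value up to sizeof(Datum) round-trips losslessly through a Datum this way, which is the property a pass-by-value macaddr8 would rely on.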
[
{
"msg_contents": "Discovered while looking into issue here: https://github.com/citusdata/citus/pull/2733\n\nFor completeness I'll quote the example code to demonstrate the issue:\n\npostgres=# create table events_table (id integer primary key, user_id integer); CREATE TABLE postgres=# create table users_table_ref (id integer primary key, value_2 integer); CREATE TABLE postgres=# create view asdf as SELECT r FROM\n (SELECT user_id_deep, random() as r -- prevent pulling up the subquery\n FROM (events_table\n INNER JOIN\n users_table_ref ON (events_table.user_id = users_table_ref.value_2)) AS join_alias(user_id_dee\np)) AS bar,\n (events_table\n INNER JOIN\n users_table_ref ON (events_table.user_id = users_table_ref.value_2)) AS join_alias(user_id_deep) WHERE (bar.user_id_deep = join_alias.user_id_deep); CREATE VIEW postgres=# \\d+ asdf\n View \"public.asdf\"\n Column | Type | Collation | Nullable | Default | Storage | Description \n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n--------+------------------+-----------+----------+---------+---------+-\n r | double precision | | | | plain | View definition:\n SELECT bar.r\n FROM ( SELECT 
join_alias_1.user_id_deep,\n random() AS r\n FROM (events_table events_table_1\n JOIN users_table_ref users_table_ref_1 ON events_table_1.user_id = users_table_ref_1.value_2) join_alias(user_id_deep, user_id, id, value_2)) bar,\n (events_table\n JOIN users_table_ref ON events_table.user_id = users_table_ref.value_2) join_alias(user_id_deep, user_id, id, value_2)\n WHERE bar.user_id_deep = join_alias.user_id_deep;\n\nWhere the 2nd join_alias should be renamed to join_alias_1",
"msg_date": "Mon, 3 Jun 2019 21:42:50 +0000",
"msg_from": "=?iso-8859-1?Q?Philip_Dub=E9?= <Philip.Dub@microsoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] ruleutils: Fix subqueries with shadowed aliases"
},
{
"msg_contents": "=?iso-8859-1?Q?Philip_Dub=E9?= <Philip.Dub@microsoft.com> writes:\n> Discovered while looking into issue here: https://github.com/citusdata/citus/pull/2733\n> For completeness I'll quote the example code to demonstrate the issue:\n> ...\n> Where the 2nd join_alias should be renamed to join_alias_1\n\nGood catch! The proposed test case is less good though, because\nit doesn't actually exercise the bug, ie the test case passes\nwith or without the code change. (You also stuck it into the\nmiddle of a bunch of not-very-related test cases.) I adapted\nyour example into a better test case and pushed it. Thanks\nfor the report and fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 19:46:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ruleutils: Fix subqueries with shadowed aliases"
}
] |
[
{
"msg_contents": "Hello, we have some confusion over the planner's use of an index.\n\nSuppose we have a table \"parades\" with columns:\n\n \"city_id\" of type integer\n \"description\" of type text\n \"start_time\" of type timestamp without time zone\n\nSuppose also we have indexes:\n\n \"parades_city_id_description_tsv_index\" gin (city_id,\nto_tsvector('simple'::regconfig, description)) WHERE description IS NOT NULL\n \"parades_city_id_start_time_index\" btree (city_id, start_time)\n\nWhen we EXPLAIN the query\n\n SELECT * FROM \"parades\" WHERE ((description IS NOT NULL) AND\n(to_tsvector('simple', description) @@ to_tsquery('simple', 'fun')) AND\n(\"city_id\" IN (<roughly 50 ids>)));\n\nWe get\n\n Bitmap Heap Scan on parades (cost=12691.97..18831.21 rows=2559\nwidth=886)\n Recheck Cond: ((to_tsvector('simple'::regconfig, description) @@\n'''fun'''::tsquery) AND (description IS NOT NULL) AND (city_id = ANY\n('{<roughly 50 ids>}'::integer[])))\n -> BitmapAnd (cost=12691.97..12691.97 rows=2559 width=0)\n -> Bitmap Index Scan on parades_city_id_description_tsv_index\n (cost=0.00..2902.97 rows=229463 width=0)\n Index Cond: (to_tsvector('simple'::regconfig, title) @@\n'''fun'''::tsquery)\n -> Bitmap Index Scan on parades_city_id_start_time_index\n (cost=0.00..9787.47 rows=565483 width=0)\n Index Cond: (city_id = ANY ('{<roughly 50\nids>}'::integer[]))\n\nWhen we EXPLAIN the same query but with one city_id\n\n SELECT * FROM \"parades\" WHERE ((description IS NOT NULL) AND\n(to_tsvector('simple', description) @@ to_tsquery('simple', 'fun')) AND\n(\"city_id\" IN (1)));\n\nWe get\n\n Bitmap Heap Scan on parades (cost=36.20..81.45 rows=20 width=886)\n Recheck Cond: ((city_id = 1) AND (to_tsvector('simple'::regconfig,\ndescription) @@ '''fun'''::tsquery) AND (description IS NOT NULL))\n -> Bitmap Index Scan on parades_city_id_description_tsv_index\n(cost=0.00..36.20\nrows=20 width=0)\n Index Cond: ((city_id = 1) AND\n(to_tsvector('simple'::regconfig, description) @@ 
'''fun'''::tsquery))\n\nThis leaves us with two questions:\n\n1. How is postgres able to use parades_city_id_description_tsv_index in the\nfirst explain result without any filter on \"city_id\"?\n2. Why does the planner in the first query decide not to simply use\nparades_city_id_description_tsv_index (as in the second explain result)\nwhen the cardinality of the set of \"city_id\"s is high?\n\nThanks,\nJared",
"msg_date": "Mon, 3 Jun 2019 14:54:41 -0700",
"msg_from": "Jared Rulison <jared@affinity.co>",
"msg_from_op": true,
"msg_subject": "Use of multi-column gin index"
},
{
"msg_contents": "Jared Rulison <jared@affinity.co> writes:\n> Hello, we have some confusion over the planner's use of an index.\n> ...\n> 1. How is postgres able to use parades_city_id_description_tsv_index in the\n> first explain result without any filter on \"city_id\"?\n\nGIN indexes don't have any particular bias towards earlier or later\ncolumns (unlike btrees). So this isn't any harder than if you'd\nput the index columns in the other order.\n\n> 2. Why does the planner in the first query decide not to simply use\n> parades_city_id_description_tsv_index (as in the second explain result)\n> when the cardinality of the set of \"city_id\"s is high?\n\n[ shrug... ] It thinks it's cheaper. Whether it's correct is impossible\nto say from the given data, but there is a moderately complex cost model\nin there. The comments for gincost_scalararrayopexpr note\n\n * A ScalarArrayOpExpr will give rise to N separate indexscans at runtime,\n * each of which involves one value from the RHS array, plus all the\n * non-array quals (if any).\n\nI haven't checked the actual execution code, but this seems to be saying\nthat the GIN indexscan executor always does ANDs before ORs. That means\nthat doing everything in the same GIN indexscan would require executing\nthe to_tsvector part 50 times, so I can definitely believe that shoving\nthe IN part to a different index and AND'ing afterwards is a better idea.\n(Whether the GIN executor should be made smarter to avoid that is a\nseparate question ;-))\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 18:36:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of multi-column gin index"
}
] |
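[Editorial note: the gincost comment Tom quotes implies a simple cost shape — a ScalarArrayOpExpr becomes one scan per array value, and each of those scans also pays for the non-array quals. The tiny model below is a hypothetical illustration of that multiplication, not the planner's real gincost_scalararrayopexpr() arithmetic.]

```c
/* Hypothetical model of the quoted comment: with nvalues entries in the
 * IN list handled inside one GIN scan, the non-array qual's cost (here
 * the to_tsvector match) is paid once per value, because the executor
 * runs nvalues separate indexscans and ANDs the other quals into each. */
static double
scalararray_gin_cost(int nvalues, double per_value_cost,
                     double other_quals_cost)
{
    return nvalues * (per_value_cost + other_quals_cost);
}
```

With ~50 city ids the full-text qual's cost shows up ~50 times in this model, which is why ANDing a separate btree scan against a single full-text GIN scan can come out cheaper for the planner.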
[
{
"msg_contents": "Hi,\n\nJIT slot_compile_deform assumes there's at least 'natts' in TupleDesc, eg\n /*\n * Iterate over each attribute that needs to be deformed, build code to\n * deform it.\n */\n for (attnum = 0; attnum < natts; attnum++)\n {\n Form_pg_attribute att = TupleDescAttr(desc, attnum);\n\nbut a new TupleDesc has no attribute and the caller only tests\nTupleDesc is not null.",
"msg_date": "Tue, 4 Jun 2019 07:47:24 +0200",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": true,
"msg_subject": "PG 11 JIT deform failure"
},
{
"msg_contents": "didier <did447@gmail.com> writes:\n> JIT slot_compile_deform assumes there's at least 'natts' in TupleDesc, eg\n> /*\n> * Iterate over each attribute that needs to be deformed, build code to\n> * deform it.\n> */\n> for (attnum = 0; attnum < natts; attnum++)\n> {\n> Form_pg_attribute att = TupleDescAttr(desc, attnum);\n\n> but a new TupleDesc has no attribute and the caller only tests\n> TupleDesc is not null.\n\nI looked at this, but I find it quite unconvincing. Under what\ncircumstances would we not have a correctly filled-in tupdesc here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:46:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG 11 JIT deform failure"
},
{
"msg_contents": "Extensions can do it, timescaledb in this case with:\nINSERT INTO ... RETURNING *;\n\nOr replacing the test in llvm_compile_expr with an Assert in\nslot_compile_deform ?\n\n\n",
"msg_date": "Thu, 13 Jun 2019 20:08:15 +0200",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 11 JIT deform failure"
},
{
"msg_contents": "Hi,\n\nOn June 13, 2019 11:08:15 AM PDT, didier <did447@gmail.com> wrote:\n>Extensions can do it, timescaledb in this case with:\n>INSERT INTO ... RETURNING *;\n>\n>Or replacing the test in llvm_compile_expr with an Assert in\n>slot_compile_deform ?\n\nIn that case we ought to never generate a deform expression step - core code doesn't afair. That's only done I'd there's actually something to deform. I'm fine with adding an assert tough\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 13 Jun 2019 11:35:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 11 JIT deform failure"
},
{
"msg_contents": "Hi,\n\nI searched the mailing list but found nothing. Any reason why\nTupleDescAttr is a macro and not a static inline?\n\nRather than adding an Assert attached POC replace TupleDescAttr macro\nby a static inline function with AssertArg.\nIt:\n- Factorize Assert.\n\n- Trigger an Assert in JIT_deform if natts is wrong.\n\n- Currently In HEAD\nsrc/backend/access/common/tupdesc.c:TupleDescCopyEntry() compiler can\noptimize out AssertArg(PointerIsValid(...)), no idea\n if compiling with both cassert and -O2 make sense though).\n\n- Remove two UB in memcpy when natts is zero.\n\nNote:\nComment line 1480 in ../contrib/tablefunc/tablefunc.c is wrong it's\nthe fourth column.\n\nRegards\nDidier\n\n\nOn Thu, Jun 13, 2019 at 8:35 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On June 13, 2019 11:08:15 AM PDT, didier <did447@gmail.com> wrote:\n> >Extensions can do it, timescaledb in this case with:\n> >INSERT INTO ... RETURNING *;\n> >\n> >Or replacing the test in llvm_compile_expr with an Assert in\n> >slot_compile_deform ?\n>\n> In that case we ought to never generate a deform expression step - core code doesn't afair. That's only done I'd there's actually something to deform. I'm fine with adding an assert tough\n>\n> Andres\n> --\n> Sent from my Android device with K-9 Mail. Please excuse my brevity.",
"msg_date": "Thu, 27 Jun 2019 15:54:28 +0200",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG 11 JIT deform failure"
},
{
"msg_contents": "Hi,\n\nI still haven't heard an explanation why you see a problem here.\n\n\nOn 2019-06-27 15:54:28 +0200, didier wrote:\n> I searched the mailing list but found nothing. Any reason why\n> TupleDescAttr is a macro and not a static inline?\n\nIt's present in branches that can't rely on static inlines being\npresent. Obviously we can still change it in HEAD, because there we rely\non static inlien functions working (althoug we might need to surround it\nwith #ifndef FRONTEND, if tupdesc.h is included from other headers\nlegitimately needed from frontend code).\n\n\n> Rather than adding an Assert attached POC replace TupleDescAttr macro\n> by a static inline function with AssertArg.\n\n> It:\n> - Factorize Assert.\n> \n> - Trigger an Assert in JIT_deform if natts is wrong.\n\n\n> - Currently In HEAD\n> src/backend/access/common/tupdesc.c:TupleDescCopyEntry() compiler can\n> optimize out AssertArg(PointerIsValid(...)), no idea\n> if compiling with both cassert and -O2 make sense though).\n\nIt's not important.\n\n\n> - Remove two UB in memcpy when natts is zero.\n\nI don't think it matters, but I'm not actually sure this is actually\nUB. It's IIRC legal to form a pointer to one after the end of an array\n(but not dereference, obviously), and memcpy with a 0 length byte also\nis legal.\n\n\n\n> Note:\n> Comment line 1480 in ../contrib/tablefunc/tablefunc.c is wrong it's\n> the fourth column.\n\nHuh, this is of very long-standing vintage. Think it's been introduced\nin\n\ncommit a265b7f70aa01a34ae30554186ee8c2089e035d8\nAuthor: Bruce Momjian <bruce@momjian.us>\nDate: 2003-07-27 03:51:59 +0000\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 15:32:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG 11 JIT deform failure"
}
] |
[
{
"msg_contents": "Hoi hackers,\n\nWe've been having issues with NOTIFYs blocking over multiple databases\n(see [1] for more details). That was 9.4 but we've updated the\ndatabase to 11.3 and still have the same issue. Now however we could\nuse perf to do profiling and got the following profile (useless\ndetails elided):\n\n--32.83%--ProcessClientReadInterrupt\n --32.68%--ProcessNotifyInterrupt\n --32.16%--asyncQueueReadAllNotifications\n --23.37%--asyncQueueAdvanceTail\n --20.49%--LWLockAcquire\n --18.93%--LWLockQueueSelf\n --12.99%--LWLockWaitListLock\n\n(from: perf record -F 99 -ag -- sleep 600)\n\nThat shows that more than 20% of the time is spent in that single\nfunction, waiting for an exclusive lock on the AsyncQueueLock. This\nwill block any concurrent session doing a NOTIFY in any database on\nthe system. This would certainly explain the symptoms we're seeing\n(process xxx still waiting for AccessExclusiveLock on object 0 of\nclass 1262 of database 0).\n\nAnalysis of the code leads me to the following hypothesis (and hence\nto the attached patches):\n\nWe have ~150 databases, each of which has 2 active backends with an\nactive LISTEN. When a NOTIFY happens anywhere on any database it\n(under an exclusive lock) makes a list of 300 backends to send a\nsignal to. It then wakes up all of those backends. Each backend then\nexamines the message and all but one discards it as being for the\nwrong database. Each backend then calls asyncQueueAdvanceTail (because\nthe current position of the each backend was the tail) which then\ntakes an exclusive lock and checks all the other backends to see if\nthe tail can be advanced. All of these will conclude 'no', except the\nvery last one which concludes the tail can be advanced by about 50\nbytes or so.\n\nSo the inner loop of asyncQueueAdvanceTail will, while holding a\nglobal exclusive lock, execute 2*150*4000 (max backends) = 1.2 million\ntimes for basically no benefit. 
During this time, no other transaction\nanywhere in the system that does a NOTIFY will be able to commit.\n\nThe attached patches attempt reduce the overhead in two ways:\n\nPatch 1: Changes asyncQueueAdvanceTail to do nothing unless the\nQUEUE_HEAD is on a different page than the QUEUE_TAIL. The idea is\nthat there's no point trying to advance the tail unless we can\nactually usefully truncate the SLRU. This does however mean that\nasyncQueueReadAllNotifications always has to call\nasyncQueueAdvanceTail since it can no longer be guaranteed that any\nbackend is still at the tail, which is one of the assumptions of the\ncurrent code. Not sure if this is a problem or if it can be improved\nwithout tracking much more state.\n\nPatch 2: Changes SignalBackends to only notify other backends when (a)\nthey're the same database as me or (b) the notify queue has advanced\nto a new SLRU page. This avoids backends being woken up for messages\nwhich they are not interested in.\n\nAs a consequence of these changes, we can reduce the number of\nexclusive locks and backend wake ups in our case by a factor of 300.\nYou still however get a thundering herd at the end of each SLRU page.\n\nNote: these patches have not yet been extensively tested, and so\nshould only be used as basis for discussion.\n\nComments? Suggestions?\n\n[1] https://www.postgresql.org/message-id/CADWG95t0j9zF0uwdcMH81KMnDsiTAVHxmBvgYqrRJcD-iLwQhw@mail.gmail.com\n\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/",
"msg_date": "Tue, 4 Jun 2019 09:08:15 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "Hoi hackers,\n\nPlease find attached updated versions of the patches, I've now tested\nthem. Also attached is a reproduction script to verify that they\nactually work.\n\nTo test you need to create 150 databases as described in the script,\nthen simply execute it. Before patching you get the following results\n(last figure is the CPU usage of Postgres):\n\n1559749330 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg:\n0.01 [0.01/0.01/0.01/0.01/0.01], 269.07%\n1559749335 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg:\n0.01 [0.01/0.01/0.01/0.01/0.01], 268.07%\n1559749340 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg:\n0.01 [0.01/0.01/0.01/0.01/0.01], 270.94%\n\nAfter patching you get the following:\n\n1559749840 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.02, Avg:\n0.01 [0.01/0.01/0.01/0.01/0.01], 5.09%\n1559749845 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg:\n0.01 [0.01/0.01/0.01/0.01/0.01], 5.06%\n1559749850 Sent: 500, Recv: 1000, Delays: Min: 0.01, Max: 0.01, Avg:\n0.01 [0.01/0.01/0.01/0.01/0.01], 4.75%\n\nThe async queue functions in postgres also no longer appear in the\nperf output (below measuring threshold).\n\nAs for general method, it seems like the actual optimisation here is\nthat the async queue tail pointer is only updated once per SLRU page\ninstead of every message. This would require a significantly larger\npatch, but shouldn't be too difficult. Thoughts?\n\nHave a nice day,\nMartijn\n\nOn Tue, 4 Jun 2019 at 09:08, Martijn van Oosterhout <kleptog@gmail.com> wrote:\n>\n> Hoi hackers,\n>\n> We've been having issues with NOTIFYs blocking over multiple databases\n> (see [1] for more details). That was 9.4 but we've updated the\n> database to 11.3 and still have the same issue. 
Now however we could\n> use perf to do profiling and got the following profile (useless\n> details elided):\n>\n> --32.83%--ProcessClientReadInterrupt\n> --32.68%--ProcessNotifyInterrupt\n> --32.16%--asyncQueueReadAllNotifications\n> --23.37%--asyncQueueAdvanceTail\n> --20.49%--LWLockAcquire\n> --18.93%--LWLockQueueSelf\n> --12.99%--LWLockWaitListLock\n>\n> (from: perf record -F 99 -ag -- sleep 600)\n>\n> That shows that more than 20% of the time is spent in that single\n> function, waiting for an exclusive lock on the AsyncQueueLock. This\n> will block any concurrent session doing a NOTIFY in any database on\n> the system. This would certainly explain the symptoms we're seeing\n> (process xxx still waiting for AccessExclusiveLock on object 0 of\n> class 1262 of database 0).\n>\n> Analysis of the code leads me to the following hypothesis (and hence\n> to the attached patches):\n>\n> We have ~150 databases, each of which has 2 active backends with an\n> active LISTEN. When a NOTIFY happens anywhere on any database it\n> (under an exclusive lock) makes a list of 300 backends to send a\n> signal to. It then wakes up all of those backends. Each backend then\n> examines the message and all but one discards it as being for the\n> wrong database. Each backend then calls asyncQueueAdvanceTail (because\n> the current position of the each backend was the tail) which then\n> takes an exclusive lock and checks all the other backends to see if\n> the tail can be advanced. All of these will conclude 'no', except the\n> very last one which concludes the tail can be advanced by about 50\n> bytes or so.\n>\n> So the inner loop of asyncQueueAdvanceTail will, while holding a\n> global exclusive lock, execute 2*150*4000 (max backends) = 1.2 million\n> times for basically no benefit. 
During this time, no other transaction\n> anywhere in the system that does a NOTIFY will be able to commit.\n>\n> The attached patches attempt reduce the overhead in two ways:\n>\n> Patch 1: Changes asyncQueueAdvanceTail to do nothing unless the\n> QUEUE_HEAD is on a different page than the QUEUE_TAIL. The idea is\n> that there's no point trying to advance the tail unless we can\n> actually usefully truncate the SLRU. This does however mean that\n> asyncQueueReadAllNotifications always has to call\n> asyncQueueAdvanceTail since it can no longer be guaranteed that any\n> backend is still at the tail, which is one of the assumptions of the\n> current code. Not sure if this is a problem or if it can be improved\n> without tracking much more state.\n>\n> Patch 2: Changes SignalBackends to only notify other backends when (a)\n> they're the same database as me or (b) the notify queue has advanced\n> to a new SLRU page. This avoids backends being woken up for messages\n> which they are not interested in.\n>\n> As a consequence of these changes, we can reduce the number of\n> exclusive locks and backend wake ups in our case by a factor of 300.\n> You still however get a thundering herd at the end of each SLRU page.\n>\n> Note: these patches have not yet been extensively tested, and so\n> should only be used as basis for discussion.\n>\n> Comments? Suggestions?\n>\n> [1] https://www.postgresql.org/message-id/CADWG95t0j9zF0uwdcMH81KMnDsiTAVHxmBvgYqrRJcD-iLwQhw@mail.gmail.com\n>\n> --\n> Martijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/",
"msg_date": "Wed, 5 Jun 2019 18:10:04 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
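[Editorial note: the core of Martijn's patch 1 above is a cheap gating test — advancing the tail is only worth the exclusive lock when it could let the SLRU truncate a page, i.e. when head and tail sit on different pages. A minimal sketch of that check, with a made-up QueuePosition layout (the real async.c packs page and offset differently):]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified notify-queue position: which SLRU page, and offset within it. */
typedef struct QueuePosition
{
    int page;
    int offset;
} QueuePosition;

/* Patch 1's idea: skip the O(max_backends) tail scan, taken under a
 * global exclusive lock, unless the tail could actually move to a later
 * SLRU page and thereby free storage. */
static bool
tail_advance_useful(QueuePosition head, QueuePosition tail)
{
    return head.page != tail.page;
}
```

With ~300 listening backends each advancing the tail by ~50 bytes per notify, this test turns hundreds of pointless exclusive-lock scans per page into one.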
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> Please find attached updated versions of the patches, I've now tested\n> them. Also attached is a reproduction script to verify that they\n> actually work.\n\nI looked through these (a bit cursorily).\n\nI'm generally on board with the idea of 0001, but not with the patch\ndetails. As coded, asyncQueueAdvanceTail is supposing that it can\nexamine the shared QUEUE_HEAD and QUEUE_TAIL pointers without any\nlock whatsoever. That's probably unsafe, and if it is safe for some\nreason, you haven't made the argument why. Moreover, it seems\nunnecessary to make any such assumption. Why not put back the\nadvanceTail tests you removed, but adjust them so that advanceTail\nisn't set true unless QUEUE_HEAD and QUEUE_TAIL point to different\npages? (Note that in the existing coding, those tests are made\nwhile holding an appropriate lock, so it's safe to look at those\npointers there.)\n\nIt might be a good idea to make a macro encapsulating this new,\nmore complicated rule for setting advanceTail, instead of relying\non keeping the various call sites in sync.\n\nMore attention to comments is also needed. For instance, you've\nmade a lie out of the documentation of the tail pointer:\n\n QueuePosition tail; /* the global tail is equivalent to the pos of\n * the \"slowest\" backend */\n\nIt needs to say something like \"is <= the pos of the slowest backend\",\ninstead. I think the explanation of why this algorithm is good could\nuse more effort, too.\n\nComments for 0002 are about the same: for no explained reason, and\ncertainly no savings, you've put the notify_all test in an unsafe\nplace rather than a safe one (viz, two lines down, *after* taking\nthe relevant lock). And 0002 needs more commentary about why\nits optimization is safe and useful, too. 
In particular it's\nnot obvious why QUEUE_HEAD being on a different page from QUEUE_TAIL\nhas anything to do with whether we should wake up other backends.\n\nI'm not very persuaded by 0003, mainly because it seems likely to\nme that 0001 and 0002 will greatly reduce the possibility that\nthe early-exit can happen. So it seems like it's adding cycles\n(in a spot where we hold exclusive lock) without a good chance of\nsaving any cycles.\n\nTaking a step back in hopes of seeing the bigger picture ...\nas you already noted, these changes don't really fix the \"thundering\nherd of wakeups\" problem, they just arrange for it to happen\nonce per SLRU page rather than once per message. I wonder if we\ncould improve matters by stealing an idea from the sinval code:\nwhen we're trying to cause advance of the global QUEUE_TAIL, waken\nonly the slowest backend, and have it waken the next-slowest after\nit's done. In sinval there are some additional provisions to prevent\na nonresponsive backend from delaying matters for other backends,\nbut I think maybe we don't need that here. async.c doesn't have\nanything equivalent to sinval reset, so there's no chance of\noverruling a slow backend's failure to advance its pos pointer,\nso there's not much reason not to just wait till it does do so.\n\nA related idea is to awaken only one backend at a time when we\nsend a new message (i.e., advance QUEUE_HEAD) but I think that\nwould likely be bad. The hazard with the chained-wakeups method\nis that a slow backend blocks wakeup of everything else. We don't\ncare about that hugely for QUEUE_TAIL advance, because we're just\nhoping to free some SLRU space. But for QUEUE_HEAD advance it'd\nmean failing to deliver notifies in a timely way, which we do care\nabout. (Also, if I remember correctly, the processing on that side\nonly requires shared lock so it's less of a problem if many backends\ndo it at once.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Jul 2019 15:12:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "Hoi Tom,\n\nThank you for the detailed response. Sorry the delay, I was on holiday.\n\nYou are absolutely correct when you point out that the queue pointers\nwere accessed without the lock and this is probably unsafe. For the\nfirst two patches this is can be remedied, though I find it makes the\nlogic a bit harder to follow. The comments will need to be updated to\nreflect the new logic. I hope to post something soon.\n\nAs for your point about the third patch, you are right that it's\nprobably not saving many cycles. However I do think it's worthwhile\nactually optimising this loop, because the number of backends that are\nlistening is likely to be much smaller than the total number of\nbackends, so there's a lot of cycles being wasted here already. Fixing\nthe thundering herd issue (like in sinval as you point out) doesn't\nactually reduce the amount of work being done, just spreads it out.\nSince readers and writers block each other, blocking a writer means\nblocking commits across the whole cluster.\n\nThere are a number of possible improvements here:\n\n1. Do what sinval does and separate the reader and writer locks so\nthey can't block each other. This is the ultimate solution, but it's a\nsignificant refactor and it's not clear that's actually worthwhile\nhere. This would almost be adopting the sinvaladt structure wholesale.\n\n2. Add a field to AsyncQueueEntry which points to the next listening\nbackend. This would allow the loops over all listening backends to\ncomplete much faster, especially in the normal case where there are\nnot many listeners relative to the number of backends. The downside is\nthis requires an exclusive lock to remove listeners, but that doesn't\nseem a big problem.\n\n3. The other idea from sinval where you only wake up one worker at a\ntime is a good one as you point out. This seems quite doable, however,\nit seems wasteful to try and wake everyone up the moment we switch to\na new page. 
The longer you delay the lower the chance you need to wake\nanyone at all because they'll have caught up by\nthemselves. A single SLRU page can hold hundreds, or even thousands of\nmessages.\n\nDo 2 & 3 seem like a good direction to go? I can probably work something up.\n\nThanks in advance,\nMartijn\n\n\n> Martijn van Oosterhout <kleptog@gmail.com> writes:\n> > Please find attached updated versions of the patches, I've now tested\n> > them. Also attached is a reproduction script to verify that they\n> > actually work.\n>\n> I looked through these (a bit cursorily).\n>\n> I'm generally on board with the idea of 0001, but not with the patch\n> details. As coded, asyncQueueAdvanceTail is supposing that it can\n> examine the shared QUEUE_HEAD and QUEUE_TAIL pointers without any\n> lock whatsoever. That's probably unsafe, and if it is safe for some\n> reason, you haven't made the argument why. Moreover, it seems\n> unnecessary to make any such assumption. Why not put back the\n> advanceTail tests you removed, but adjust them so that advanceTail\n> isn't set true unless QUEUE_HEAD and QUEUE_TAIL point to different\n> pages? (Note that in the existing coding, those tests are made\n> while holding an appropriate lock, so it's safe to look at those\n> pointers there.)\n>\n> It might be a good idea to make a macro encapsulating this new,\n> more complicated rule for setting advanceTail, instead of relying\n> on keeping the various call sites in sync.\n>\n> More attention to comments is also needed. For instance, you've\n> made a lie out of the documentation of the tail pointer:\n>\n> QueuePosition tail; /* the global tail is equivalent to the pos of\n> * the \"slowest\" backend */\n>\n> It needs to say something like \"is <= the pos of the slowest backend\",\n> instead. 
I think the explanation of why this algorithm is good could\n> use more effort, too.\n>\n> Comments for 0002 are about the same: for no explained reason, and\n> certainly no savings, you've put the notify_all test in an unsafe\n> place rather than a safe one (viz, two lines down, *after* taking\n> the relevant lock). And 0002 needs more commentary about why\n> its optimization is safe and useful, too. In particular it's\n> not obvious why QUEUE_HEAD being on a different page from QUEUE_TAIL\n> has anything to do with whether we should wake up other backends.\n>\n> I'm not very persuaded by 0003, mainly because it seems likely to\n> me that 0001 and 0002 will greatly reduce the possibility that\n> the early-exit can happen. So it seems like it's adding cycles\n> (in a spot where we hold exclusive lock) without a good chance of\n> saving any cycles.\n>\n> Taking a step back in hopes of seeing the bigger picture ...\n> as you already noted, these changes don't really fix the \"thundering\n> herd of wakeups\" problem, they just arrange for it to happen\n> once per SLRU page rather than once per message. I wonder if we\n> could improve matters by stealing an idea from the sinval code:\n> when we're trying to cause advance of the global QUEUE_TAIL, waken\n> only the slowest backend, and have it waken the next-slowest after\n> it's done. In sinval there are some additional provisions to prevent\n> a nonresponsive backend from delaying matters for other backends,\n> but I think maybe we don't need that here. async.c doesn't have\n> anything equivalent to sinval reset, so there's no chance of\n> overruling a slow backend's failure to advance its pos pointer,\n> so there's not much reason not to just wait till it does do so.\n>\n> A related idea is to awaken only one backend at a time when we\n> send a new message (i.e., advance QUEUE_HEAD) but I think that\n> would likely be bad. 
The hazard with the chained-wakeups method\n> is that a slow backend blocks wakeup of everything else. We don't\n> care about that hugely for QUEUE_TAIL advance, because we're just\n> hoping to free some SLRU space. But for QUEUE_HEAD advance it'd\n> mean failing to deliver notifies in a timely way, which we do care\n> about. (Also, if I remember correctly, the processing on that side\n> only requires shared lock so it's less of a problem if many backends\n> do it at once.)\n>\n> regards, tom lane\n\n\n\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Tue, 23 Jul 2019 16:46:37 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> There are a number of possible improvements here:\n\n> 1. Do what sinval does and separate the reader and writer locks so\n> they can't block each other. This is the ultimate solution, but it's a\n> significant refactor and it's not clear that's actually worthwhile\n> here. This would almost be adopting the sinvaladt structure wholesale.\n\nI agree that that's probably more ambitious than is warranted.\n\n> 2. Add a field to AsyncQueueEntry which points to the next listening\n> backend. This would allow the loops over all listening backends to\n> complete much faster, especially in the normal case where there are\n> not many listeners relative to the number of backends. The downside is\n> this requires an exclusive lock to remove listeners, but that doesn't\n> seem a big problem.\n\nI don't understand how that would work? The sending backend doesn't\nknow what the \"next listening backend\" is. Having to scan the whole\nqueue when a listener unlistens seems pretty awful too, especially\nif you need exclusive lock while doing so.\n\n> 3. The other idea from sinval where you only wake up one worker at a\n> time is a good one as you point out. This seems quite doable, however,\n> it seems wasteful to try and wake everyone up the moment we switch to\n> a new page. The longer you delay the lower the chance you need to wake\n> anyone at all because they've because they'll have caught up by\n> themselves. A single SLRU page can hold hundreds, or even thousands of\n> messages.\n\nNot entirely following your comment here either. The point of the change\nis exactly that we'd wake up only one backend at a time (and only the\nfurthest-behind one, so that anyone who catches up of their own accord\nstops being a factor). 
Also, \"hundreds or thousands\" seems\nover-optimistic given that the minimum size of AsyncQueueEntry is 20\nbytes --- in practice it'll be more because people don't use empty\nstrings as notify channel names. I think a few hundred messages per\npage is the upper limit, and it could be a lot less.\n\n> Do 2 & 3 seem like a good direction to go? I can probably work something up.\n\nI'm on board with 3, obviously. Not following what you have in mind\nfor 2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 13:21:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "On Tue, 23 Jul 2019 at 19:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Martijn van Oosterhout <kleptog@gmail.com> writes:\n> > 2. Add a field to AsyncQueueEntry which points to the next listening\n> > backend. This would allow the loops over all listening backends to\n> > complete much faster, especially in the normal case where there are\n> > not many listeners relative to the number of backends. The downside is\n> > this requires an exclusive lock to remove listeners, but that doesn't\n> > seem a big problem.\n>\n> I don't understand how that would work? The sending backend doesn't\n> know what the \"next listening backend\" is. Having to scan the whole\n> queue when a listener unlistens seems pretty awful too, especially\n> if you need exclusive lock while doing so.\n\nI mean tracking the listening backends specifically, so you can\nreplace the loops:\n\nfor (i=0; i < MaxBackends; i++)\n\nwith\n\nfor (i=QUEUE_FIRST_LISTENING_BACKEND; i; i = QUEUE_NEXT_LISTENING_BACKEND(i))\n\nSuch loops occur often when trying to advance the tail, when adding a\nnew listener,\nwhen sending a notify, etc, all while holding a (exclusive) lock.\nSeems like such an easy win\nto only loop over the listening backends rather than all of them.\n\n> > Do 2 & 3 seem like a good direction to go? I can probably work something up.\n>\n> I'm on board with 3, obviously. Not following what you have in mind\n> for 2.\n\nHope this clears it up a bit. Only waking up one at a time is a good\nidea, but needs to some\ncareful thinking to prove it actually works.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Tue, 23 Jul 2019 21:48:14 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@gmail.com> writes:\n> On Tue, 23 Jul 2019 at 19:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Martijn van Oosterhout <kleptog@gmail.com> writes:\n>>> 2. Add a field to AsyncQueueEntry which points to the next listening\n>>> backend. This would allow the loops over all listening backends to\n>>> complete much faster, especially in the normal case where there are\n>>> not many listeners relative to the number of backends. The downside is\n>>> this requires an exclusive lock to remove listeners, but that doesn't\n>>> seem a big problem.\n\n>> I don't understand how that would work? The sending backend doesn't\n>> know what the \"next listening backend\" is. Having to scan the whole\n>> queue when a listener unlistens seems pretty awful too, especially\n>> if you need exclusive lock while doing so.\n\n> I mean tracking the listening backends specifically, so you can\n> replace the loops:\n> for (i=0; i < MaxBackends; i++)\n> with\n> for (i=QUEUE_FIRST_LISTENING_BACKEND; i; i = QUEUE_NEXT_LISTENING_BACKEND(i))\n\nAh ... but surely you would not put such info in AsyncQueueEntry,\nwhere there'd be a separate copy for each message. I think you\nmeant to add the info to AsyncQueueControl.\n\nIt might be better to redefine the backends[] array as being mostly\ncontiguous (ie, a new backend takes the first free slot not the one\nindexed by its own BackendId), at the price of needing to store\nBackendId in each slot explicitly instead of assuming it's equal to\nthe array index. I suspect the existing data structure is putting too\nmuch of a premium on making sizeof(QueueBackendStatus) a power of 2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Jul 2019 17:32:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "On Tue, 23 Jul 2019 at 23:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Martijn van Oosterhout <kleptog@gmail.com> writes:\n> > I mean tracking the listening backends specifically, so you can\n> > replace the loops:\n> > for (i=0; i < MaxBackends; i++)\n> > with\n> > for (i=QUEUE_FIRST_LISTENING_BACKEND; i; i = QUEUE_NEXT_LISTENING_BACKEND(i))\n>\n> Ah ... but surely you would not put such info in AsyncQueueEntry,\n> where there'd be a separate copy for each message. I think you\n> meant to add the info to AsyncQueueControl.\n\nUmm, yeah. Got that mixed up.\n\n> It might be better to redefine the backends[] array as being mostly\n> contiguous (ie, a new backend takes the first free slot not the one\n> indexed by its own BackendId), at the price of needing to store\n> BackendId in each slot explicitly instead of assuming it's equal to\n> the array index. I suspect the existing data structure is putting too\n> much of a premium on making sizeof(QueueBackendStatus) a power of 2.\n\nThis would require adding a \"MyListenerId\" to each backend which I'm not sure\nhelps the readability. And there's a chance of mixing the id up. The\npower-of-2-ness\nis I think indeed overrated.\n\nI'll give it a shot a see how it looks.\n\nHave a nice day,\n\n-- \nMartijn van Oosterhout <kleptog@gmail.com> http://svana.org/kleptog/\n\n\n",
"msg_date": "Wed, 24 Jul 2019 10:30:12 +0200",
"msg_from": "Martijn van Oosterhout <kleptog@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 8:30 PM Martijn van Oosterhout\n<kleptog@gmail.com> wrote:\n> I'll give it a shot a see how it looks.\n\nMoved to September CF, \"Waiting on Author\".\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 22:53:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve performance of NOTIFY over many databases (issue\n blocking on AccessExclusiveLock on object 0 of class 1262 of database 0)"
}
] |
[
{
"msg_contents": "In src/backend/utils/mb/wchar.c, function ucs_wcwidth(), there is a list\nof Unicode combining characters, so that those can be ignored for\ncomputing the display length of a Unicode string. It seems to me that\nthat list is either outdated or plain incorrect.\n\nFor example, the list starts with\n\n {0x0300, 0x034E}, {0x0360, 0x0362}, {0x0483, 0x0486},\n\nLet's look at the characters around the first \"gap\":\n\n(https://www.unicode.org/Public/UCD/latest/ucd/UnicodeData.txt)\n\n034C;COMBINING ALMOST EQUAL TO ABOVE;Mn;230;NSM;;;;;N;;;;;\n034D;COMBINING LEFT RIGHT ARROW BELOW;Mn;220;NSM;;;;;N;;;;;\n034E;COMBINING UPWARDS ARROW BELOW;Mn;220;NSM;;;;;N;;;;;\n034F;COMBINING GRAPHEME JOINER;Mn;0;NSM;;;;;N;;;;;\n0350;COMBINING RIGHT ARROWHEAD ABOVE;Mn;230;NSM;;;;;N;;;;;\n0351;COMBINING LEFT HALF RING ABOVE;Mn;230;NSM;;;;;N;;;;;\n\nSo these are all in the \"Mn\" category, so they should be treated all the\nsame here. Indeed, psql doesn't compute the width of some of them\ncorrectly:\n\npostgres=> select u&'|oo\\034Coo|';\n+----------+\n| ?column? |\n+----------+\n| |oXoo| |\n+----------+\n\npostgres=> select u&'|oo\\0350oo|';\n+----------+\n| ?column? |\n+----------+\n| |oXoo| |\n+----------+\n\n(I have replaced the combined character with X above so that the mail\nclient rendering doesn't add another layer of uncertainty to this issue.\n The point is that the box is off in the second example.)\n\nAFAICT, these Unicode definitions haven't changed since that list was\nput in originally around 2006, so I wonder what's going on there.\n\nI have written a script that recomputes that list from the current\nUnicode data. Patch and script are attached. This makes those above\ncases all render correctly. (This should eventually get better built\nsystem integration.)\n\nThoughts?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 4 Jun 2019 22:58:46 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Update list of combining characters"
},
{
"msg_contents": "On 2019-06-04 22:58, Peter Eisentraut wrote:\n> AFAICT, these Unicode definitions haven't changed since that list was\n> put in originally around 2006, so I wonder what's going on there.\n> \n> I have written a script that recomputes that list from the current\n> Unicode data. Patch and script are attached. This makes those above\n> cases all render correctly. (This should eventually get better built\n> system integration.)\n\nAny thoughts about applying this as\n\na) a bug fix with backpatching\nb) just to master\nc) wait for PG13\nd) it's all wrong?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:16:29 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update list of combining characters"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> Any thoughts about applying this as\n\n> a) a bug fix with backpatching\n> b) just to master\n> c) wait for PG13\n> d) it's all wrong?\n\nWell, it's a behavioral change, and we've not gotten field complaints,\nso I'm about -0.1 on back-patching. No objection to apply to master\nthough.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:33:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update list of combining characters"
},
{
"msg_contents": "\nI think there's an off-by-one bug in your script. I picked one value at\nrandom to verify -- 0x0BC0. Old:\n\n> -\t\t{0x0BC0, 0x0BC0}, {0x0BCD, 0x0BCD}, {0x0C3E, 0x0C40},\n\nNew:\n\n> +\t\t{0x0BC0, 0x0BC1}, {0x0BCD, 0x0BD0}, {0x0C00, 0x0C01},\n\nthe UCD file has:\n\n0BC0;TAMIL VOWEL SIGN II;Mn;0;NSM;;;;;N;;;;;\n0BC1;TAMIL VOWEL SIGN U;Mc;0;L;;;;;N;;;;;\n\n0BCD;TAMIL SIGN VIRAMA;Mn;9;NSM;;;;;N;;;;;\n0BD0;TAMIL OM;Lo;0;L;;;;;N;;;;;\n\nSo it appears that the inclusion of both 0x0BC1 and 0x0BD0 are mistakes.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:52:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Update list of combining characters"
},
{
"msg_contents": "On 2019-06-13 15:52, Alvaro Herrera wrote:\n> I think there's an off-by-one bug in your script.\n\nIndeed. Here is an updated script and patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 14 Jun 2019 11:36:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update list of combining characters"
},
{
"msg_contents": "On 2019-06-14 11:36, Peter Eisentraut wrote:\n> On 2019-06-13 15:52, Alvaro Herrera wrote:\n>> I think there's an off-by-one bug in your script.\n> \n> Indeed. Here is an updated script and patch.\n\ncommitted (to master)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Jun 2019 21:39:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update list of combining characters"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> Indeed. Here is an updated script and patch.\n\n> committed (to master)\n\nCool, but should we also put your recalculation script into git, to help\nthe next time we decide that we need to update this list? It's\ndemonstrated to be nontrivial to get it right ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jun 2019 15:55:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Update list of combining characters"
},
{
"msg_contents": "On 2019-06-19 21:55, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>>> Indeed. Here is an updated script and patch.\n> \n>> committed (to master)\n> \n> Cool, but should we also put your recalculation script into git, to help\n> the next time we decide that we need to update this list? It's\n> demonstrated to be nontrivial to get it right ;-)\n\nFor PG12, having the script in the archives is sufficient, I think. Per\nthread \"more Unicode data updates\", we should come up with a method that\nupdates all (currently three) places where Unicode data is applied,\nwhich would involve some larger restructuring, probably.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 24 Jun 2019 22:58:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Update list of combining characters"
}
] |
[
{
"msg_contents": "Hi, all\r\nLately I was researching Parallelism of Postgres 10.7(and it is same in all version), and I was confused when reading the comment of function ExecParallelEstimate :\r\n(in src/backend/executor/execParallel.c)\r\n----------------------------------------------\r\n\r\n* While we're at it, count the number of PlanState nodes in the tree, so\r\n* we know how many SharedPlanStateInstrumentation structures we need.\r\nstatic bool\r\nExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e)\r\n----------------------------------------------\r\n\r\nThe structure SharedPlanStateInstrumentation is not exists at all. And I noticed that the so called “SharedPlanStateInstrumentation”\r\nmaybe is the structure instrumentation now, which is used for storing information of planstate in parallelism. The function count the number\r\nof planState nodes and stored it in ExecParallelEstimateContext-> nnodes ,then use it to Estimate space for instrumentation structure in\r\nfunction ExecInitParallelPlan.\r\n\r\n\r\nSo, I think the comment is out of date now, isn’t it?\r\n\r\nMaybe we can modified this piece of comment from “SharedPlanStateInstrumentation” to “instrumentation” for clear\r\n\r\n--\r\nBest Regards\r\n-----------------------------------------------------\r\nWu Fei\r\nDevelopment Department II\r\nSoftware Division III\r\nNanjing Fujitsu Nanda Software Tech. 
Co., Ltd.(FNST)\r\nADDR.: No.6 Wenzhu Road, Software Avenue,\r\n Nanjing, 210012, China\r\nTEL : +86+25-86630566-9356\r\nCOINS: 7998-9356\r\nFAX: +86+25-83317685\r\nMAIL:wufei.fnst@cn.fujitsu.com\r\nhttp://www.fujitsu.com/cn/fnst/\r\n---------------------------------------------------",
"msg_date": "Wed, 5 Jun 2019 03:54:18 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "On Wed, Jun 5, 2019 at 9:24 AM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\n>\n> Hi, all\n>\n> Lately I was researching Parallelism of Postgres 10.7(and it is same in all version), and I was confused when reading the comment of function ExecParallelEstimate :\n>\n> (in src/backend/executor/execParallel.c)\n>\n> ----------------------------------------------\n>\n>\n>\n> * While we're at it, count the number of PlanState nodes in the tree, so\n>\n> * we know how many SharedPlanStateInstrumentation structures we need.\n>\n> static bool\n>\n> ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext *e)\n>\n> ----------------------------------------------\n>\n>\n>\n> The structure SharedPlanStateInstrumentation is not exists at all. And I noticed that the so called “SharedPlanStateInstrumentation”\n>\n> maybe is the structure instrumentation now, which is used for storing information of planstate in parallelism. The function count the number\n>\n> of planState nodes and stored it in ExecParallelEstimateContext-> nnodes ,then use it to Estimate space for instrumentation structure in\n>\n> function ExecInitParallelPlan.\n>\n\nI think here it refers to SharedExecutorInstrumentation. This\nstructure is used for accumulating per-PlanState instrumentation. So,\nit is not totally wrong, but I guess we can change it to\nSharedExecutorInstrumentation to avoid confusion? What do you think?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jun 2019 09:50:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "Thanks for your reply.\r\nFrom the code below:\r\n(https://github.com/postgres/postgres/blob/REL_10_7/src/backend/executor/execParallel.c) \r\n#######################################################################################\r\n443\t/*\t\t\t\t\t\r\n444\t\t * Give parallel-aware nodes a chance to add to the estimates, and get a\t\t\t\t\r\n445\t\t * count of how many PlanState nodes there are.\t\t\t\t\r\n446\t\t */\t\t\t\t\r\n447\t\te.pcxt = pcxt;\t\t\t\t\r\n448\t\te.nnodes = 0;\t\t\t\t\r\n449\t\tExecParallelEstimate(planstate, &e);\t\t\t\t\r\n450\t\t\t\t\t\t\r\n451\t\t/* Estimate space for instrumentation, if required. */\t\t\t\t\r\n452\t\tif (estate->es_instrument)\t\t\t\t\r\n453\t\t{\t\t\t\t\r\n454\t\t\tinstrumentation_len =\t\t\t\r\n455\t\t\t\toffsetof(SharedExecutorInstrumentation, plan_node_id) +\t\t\r\n456\t\t\t\tsizeof(int) * e.nnodes;\t\t\r\n457\t\t\tinstrumentation_len = MAXALIGN(instrumentation_len);\t\t\t\r\n458\t\t\tinstrument_offset = instrumentation_len;\t\t\t\r\n459\t\t\tinstrumentation_len +=\t\t\t\r\n460\t\t\t\tmul_size(sizeof(Instrumentation),\t\t\r\n461\t\t\t\t\t\t mul_size(e.nnodes, nworkers));\r\n462\t\t\tshm_toc_estimate_chunk(&pcxt->estimator, instrumentation_len);\t\t\t\r\n463\t\t\tshm_toc_estimate_keys(&pcxt->estimator, 1);\t\r\n\r\n#######################################################################################\r\nIt seems that e.nnodes which returns from ExecParallelEstimate(planstate, &e) , determines how much instrumentation structures in DSM(line459~line461). \r\nAnd e.nnodes also determines the length of SharedExecutorInstrumentation-> plan_node_id(line454~line456).\r\n\r\nSo, I think here it refers to instrumentation. 
\r\n\r\nSharedExecutorInstrumentation is just like a master that holds the metadata: \r\nstruct SharedExecutorInstrumentation\r\n{\r\n\tint\t\t\tinstrument_options;\r\n\tint\t\t\tinstrument_offset;\r\n\tint\t\t\tnum_workers;\r\n\tint\t\t\tnum_plan_nodes; // this equals to e.nnodes from the source code \r\n\tint\t\t\tplan_node_id[FLEXIBLE_ARRAY_MEMBER];\r\n\t/* array of num_plan_nodes * num_workers Instrumentation objects follows */\r\n};\r\n\r\nWhat do you think?\r\n\r\nWith Regards,\r\nWu Fei\r\n\r\n\r\n-----Original Message-----\r\nFrom: Amit Kapila [mailto:amit.kapila16@gmail.com] \r\nSent: Wednesday, June 05, 2019 12:20 PM\r\nTo: Wu, Fei/吴 非 <wufei.fnst@cn.fujitsu.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: Confusing comment for function ExecParallelEstimate\r\n\r\nOn Wed, Jun 5, 2019 at 9:24 AM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\r\n>\r\n> Hi, all\r\n>\r\n> Lately I was researching Parallelism of Postgres 10.7(and it is same in all version), and I was confused when reading the comment of function ExecParallelEstimate :\r\n>\r\n> (in src/backend/executor/execParallel.c)\r\n>\r\n> ----------------------------------------------\r\n>\r\n>\r\n>\r\n> * While we're at it, count the number of PlanState nodes in the tree, \r\n> so\r\n>\r\n> * we know how many SharedPlanStateInstrumentation structures we need.\r\n>\r\n> static bool\r\n>\r\n> ExecParallelEstimate(PlanState *planstate, ExecParallelEstimateContext \r\n> *e)\r\n>\r\n> ----------------------------------------------\r\n>\r\n>\r\n>\r\n> The structure SharedPlanStateInstrumentation is not exists at all. And I noticed that the so called “SharedPlanStateInstrumentation”\r\n>\r\n> maybe is the structure instrumentation now, which is used for storing \r\n> information of planstate in parallelism. 
The function count the \r\n> number\r\n>\r\n> of planState nodes and stored it in ExecParallelEstimateContext-> \r\n> nnodes ,then use it to Estimate space for instrumentation structure in\r\n>\r\n> function ExecInitParallelPlan.\r\n>\r\n\r\nI think here it refers to SharedExecutorInstrumentation. This structure is used for accumulating per-PlanState instrumentation. So, it is not totally wrong, but I guess we can change it to SharedExecutorInstrumentation to avoid confusion? What do you think?\r\n\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.\r\nEnterpriseDB: http://www.enterprisedb.com\r\n\r\n\r\n\n\n",
"msg_date": "Wed, 5 Jun 2019 05:57:31 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "On Wed, Jun 5, 2019 at 11:27 AM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\n>\n> Thanks for your reply.\n> From the code below:\n> (https://github.com/postgres/postgres/blob/REL_10_7/src/backend/executor/execParallel.c)\n> #######################################################################################\n> 443 /*\n> 444 * Give parallel-aware nodes a chance to add to the estimates, and get a\n> 445 * count of how many PlanState nodes there are.\n> 446 */\n> 447 e.pcxt = pcxt;\n> 448 e.nnodes = 0;\n> 449 ExecParallelEstimate(planstate, &e);\n> 450\n> 451 /* Estimate space for instrumentation, if required. */\n> 452 if (estate->es_instrument)\n> 453 {\n> 454 instrumentation_len =\n> 455 offsetof(SharedExecutorInstrumentation, plan_node_id) +\n> 456 sizeof(int) * e.nnodes;\n> 457 instrumentation_len = MAXALIGN(instrumentation_len);\n> 458 instrument_offset = instrumentation_len;\n> 459 instrumentation_len +=\n> 460 mul_size(sizeof(Instrumentation),\n> 461 mul_size(e.nnodes, nworkers));\n> 462 shm_toc_estimate_chunk(&pcxt->estimator, instrumentation_len);\n> 463 shm_toc_estimate_keys(&pcxt->estimator, 1);\n>\n> #######################################################################################\n> It seems that e.nnodes which returns from ExecParallelEstimate(planstate, &e) , determines how much instrumentation structures in DSM(line459~line461).\n> And e.nnodes also determines the length of SharedExecutorInstrumentation-> plan_node_id(line454~line456).\n>\n> So, I think here it refers to instrumentation.\n>\n\nRight. I think the way it is mentioned\n(SharedPlanStateInstrumentation structures ..) in the comment can\nconfuse readers. We can replace SharedPlanStateInstrumentation with\nInstrumentation in the comment.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jun 2019 16:48:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "Sorry, Last mail forget to CC the mailing list.\r\n\r\nNow the comment is confusing, Maybe someone should correct it.\r\n\r\nHere is a simple patch, What do you think ?\r\n\r\nWith Regards,\r\nWu Fei\r\n\r\n-----Original Message-----\r\nFrom: Amit Kapila [mailto:amit.kapila16@gmail.com] \r\nSent: Wednesday, June 05, 2019 7:18 PM\r\nTo: Wu, Fei/吴 非 <wufei.fnst@cn.fujitsu.com>\r\nCc: pgsql-hackers@postgresql.org\r\nSubject: Re: Confusing comment for function ExecParallelEstimate\r\n\r\nOn Wed, Jun 5, 2019 at 11:27 AM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\r\n>\r\n> Thanks for your reply.\r\n> From the code below:\r\n> (https://github.com/postgres/postgres/blob/REL_10_7/src/backend/execut\r\n> or/execParallel.c) \r\n> #######################################################################################\r\n> 443 /*\r\n> 444 * Give parallel-aware nodes a chance to add to the estimates, and get a\r\n> 445 * count of how many PlanState nodes there are.\r\n> 446 */\r\n> 447 e.pcxt = pcxt;\r\n> 448 e.nnodes = 0;\r\n> 449 ExecParallelEstimate(planstate, &e);\r\n> 450\r\n> 451 /* Estimate space for instrumentation, if required. 
*/\r\n> 452 if (estate->es_instrument)\r\n> 453 {\r\n> 454 instrumentation_len =\r\n> 455 offsetof(SharedExecutorInstrumentation, plan_node_id) +\r\n> 456 sizeof(int) * e.nnodes;\r\n> 457 instrumentation_len = MAXALIGN(instrumentation_len);\r\n> 458 instrument_offset = instrumentation_len;\r\n> 459 instrumentation_len +=\r\n> 460 mul_size(sizeof(Instrumentation),\r\n> 461 mul_size(e.nnodes, nworkers));\r\n> 462 shm_toc_estimate_chunk(&pcxt->estimator, instrumentation_len);\r\n> 463 shm_toc_estimate_keys(&pcxt->estimator, 1);\r\n>\r\n> ######################################################################\r\n> ################# It seems that e.nnodes which returns from \r\n> ExecParallelEstimate(planstate, &e) , determines how much instrumentation structures in DSM(line459~line461).\r\n> And e.nnodes also determines the length of SharedExecutorInstrumentation-> plan_node_id(line454~line456).\r\n>\r\n> So, I think here it refers to instrumentation.\r\n>\r\n\r\nRight. I think the way it is mentioned\r\n(SharedPlanStateInstrumentation structures ..) in the comment can confuse readers. We can replace SharedPlanStateInstrumentation with Instrumentation in the comment.\r\n\r\n--\r\nWith Regards,\r\nAmit Kapila.\r\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 6 Jun 2019 02:06:50 +0000",
"msg_from": "\"Wu, Fei\" <wufei.fnst@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 7:37 AM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\n>\n> Sorry, Last mail forget to CC the mailing list.\n>\n> Now the comment is confusing, Maybe someone should correct it.\n>\n\nSure, for the sake of clarity, when this code was initially introduced\nin commit d1b7c1ff, the structure used was\nSharedPlanStateInstrumentation, but later when it got changed to\nInstrumentation structure in commit b287df70, we forgot to update the\ncomment. So, we should backpatch this till 9.6 where it got\nintroduced. I will commit this change by tomorrow or so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jun 2019 08:12:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 8:12 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 6, 2019 at 7:37 AM Wu, Fei <wufei.fnst@cn.fujitsu.com> wrote:\n> >\n> > Sorry, Last mail forget to CC the mailing list.\n> >\n> > Now the comment is confusing, Maybe someone should correct it.\n> >\n>\n> Sure, for the sake of clarity, when this code was initially introduced\n> in commit d1b7c1ff, the structure used was\n> SharedPlanStateInstrumentation, but later when it got changed to\n> Instrumentation structure in commit b287df70, we forgot to update the\n> comment. So, we should backpatch this till 9.6 where it got\n> introduced. I will commit this change by tomorrow or so.\n>\n\nPushed. Note, I was not able to apply your patch using patch -p1 command.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Jun 2019 05:55:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing comment for function ExecParallelEstimate"
},
{
"msg_contents": "On 2019-Jun-07, Amit Kapila wrote:\n\n> Pushed. Note, I was not able to apply your patch using patch -p1 command.\n\nYeah, it's a \"normal\" diff (old school), not a unified or context diff.\npatch doesn't like normal diff, for good reasons, but you can force it\nto apply with \"patch --normal\" (not really recommended).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 00:41:39 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing comment for function ExecParallelEstimate"
}
] |
[
{
"msg_contents": "Tom, thanks for operational response reaction.\nBased on this topic and some nearby ones\nthe problem turned out to be deeper than\nexpected... as always.\n\np.s. Sorry for cyrillic in the mailing list.\nAt the beginning I wrote from corporate email\nand could not change the sender name.\nIf you can, please, replace.\n\nRegards,\nGeorge",
"msg_date": "Wed, 5 Jun 2019 09:55:27 +0300",
"msg_from": "George Tarasov <george.v.tarasov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: compiling PL/pgSQL plugin with C++"
}
] |
[
{
"msg_contents": "Hi,\n\n*I noticed pg_basebackup failure when default_table_access_method option is\nset.*\n\n*Test steps:*\n\nStep 1: Init database\n./initdb -D data\n\nStep 2: Start Server\n./postgres -D data &\n\nStep 3: Set GUC option\nexport PGOPTIONS='-c default_table_access_method=zheap'\n\nStep 4: Perform backup\n./pg_basebackup -D backup -p 5432 --no-sync\n2019-06-05 20:35:04.088 IST [11601] FATAL: cannot read pg_class without\nhaving selected a database\npg_basebackup: error: could not connect to server: FATAL: cannot read\npg_class without having selected a database\n\n*Reason why it is failing:*\npg_basebackup does not use any database to connect to the server as it backs up\nthe whole data instance.\nAs the option default_table_access_method is set,\nthe server tries to validate this option, and hits this check in the\nScanPgRelation function:\nif (!OidIsValid(MyDatabaseId))\nelog(FATAL, \"cannot read pg_class without having selected a database\");\n\nHere, as pg_basebackup uses no database, the command fails.\n\nFix:\nThe attached patch has the fix for the above issue.\n\nLet me know your opinion on this.\n\n-- \nRegards,\nvignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 5 Jun 2019 21:16:23 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 1:46 AM vignesh C <vignesh21@gmail.com> wrote:\n\n>\n> Hi,\n>\n> *I noticed pg_basebackup failure when default_table_access_method option\n> is set.*\n>\n> *Test steps:*\n>\n> Step 1: Init database\n> ./initdb -D data\n>\n> Step 2: Start Server\n> ./postgres -D data &\n>\n> Step 3: Set Guc option\n> export PGOPTIONS='-c default_table_access_method=zheap'\n>\n> Step 4: Peform backup\n> /pg_basebackup -D backup -p 5432 --no-sync\n> 2019-06-05 20:35:04.088 IST [11601] FATAL: cannot read pg_class without\n> having selected a database\n> pg_basebackup: error: could not connect to server: FATAL: cannot read\n> pg_class without having selected a database\n>\n> *Reason why it is failing:*\n> pg_basebackup does not use any database to connect to server as it backs\n> up the whole data instance.\n> As the option default_table_access_method is set.\n> It tries to validate this option, but while validating the option in\n> ScanPgRelation function:\n> if (!OidIsValid(MyDatabaseId))\n> elog(FATAL, \"cannot read pg_class without having selected a database\");\n>\n> Here as pg_basebackup uses no database the command fails.\n>\n\nThanks for the detailed steps to reproduce the bug, I am also able to\nreproduce the problem.\n\n\n\n> Fix:\n> The patch has the fix for the above issue:\n>\n> Let me know your opinion on this.\n>\n\nThanks for the patch and it fixes the problem.\n\nRegards,\nHaribabu Kommi\nFujitsu Australia",
"msg_date": "Thu, 6 Jun 2019 11:19:48 +1000",
"msg_from": "Haribabu Kommi <kommi.haribabu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 11:19:48AM +1000, Haribabu Kommi wrote:\n> Thanks for the details steps to reproduce the bug, I am also able to\n> reproduce the problem.\n\nThis way is even more simple, no need for zheap to be around:\n=# create access method heap2 TYPE table HANDLER heap_tableam_handler;\nCREATE ACCESS METHOD\nAnd then:\nPGOPTIONS=\"-c default_table_access_method=heap2\" psql \"replication=1\"\npsql: error: could not connect to server: FATAL: cannot read pg_class\nwithout having selected a database\n\n> Thanks for the patch and it fixes the problem.\n\nI was wondering if we actually need at all a catalog lookup at this\nstage, simplifying get_table_am_oid() on the way so as we always\nthrow an error (its missing_ok is here to allow a proper error in the\nGUC context). The table AM lookup happens only when creating a table,\nso we could just get a failure when attempting to create a table with\nthis incorrect value.\n\nActually, when updating a value and reloading and/or restarting the\nserver, it is possible to easily get in a state where we have an\ninvalid table AM parameter stored in the GUC, which is what the\ncallback is here to avoid. If you attempt to update the parameter\nwith ALTER SYSTEM, then the command complains. So it seems to me that\nthe user experience is inconsistent.\n--\nMichael",
"msg_date": "Thu, 6 Jun 2019 16:06:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "Thanks Hari for helping in verifying.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\nOn Thu, Jun 6, 2019 at 6:50 AM Haribabu Kommi <kommi.haribabu@gmail.com>\nwrote:\n\n>\n> On Thu, Jun 6, 2019 at 1:46 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n>>\n>> Hi,\n>>\n>> *I noticed pg_basebackup failure when default_table_access_method option\n>> is set.*\n>>\n>> *Test steps:*\n>>\n>> Step 1: Init database\n>> ./initdb -D data\n>>\n>> Step 2: Start Server\n>> ./postgres -D data &\n>>\n>> Step 3: Set Guc option\n>> export PGOPTIONS='-c default_table_access_method=zheap'\n>>\n>> Step 4: Peform backup\n>> /pg_basebackup -D backup -p 5432 --no-sync\n>> 2019-06-05 20:35:04.088 IST [11601] FATAL: cannot read pg_class without\n>> having selected a database\n>> pg_basebackup: error: could not connect to server: FATAL: cannot read\n>> pg_class without having selected a database\n>>\n>> *Reason why it is failing:*\n>> pg_basebackup does not use any database to connect to server as it backs\n>> up the whole data instance.\n>> As the option default_table_access_method is set.\n>> It tries to validate this option, but while validating the option in\n>> ScanPgRelation function:\n>> if (!OidIsValid(MyDatabaseId))\n>> elog(FATAL, \"cannot read pg_class without having selected a database\");\n>>\n>> Here as pg_basebackup uses no database the command fails.\n>>\n>\n> Thanks for the details steps to reproduce the bug, I am also able to\n> reproduce the problem.\n>\n>\n>\n>> Fix:\n>> The patch has the fix for the above issue:\n>>\n>> Let me know your opinion on this.\n>>\n>\n> Thanks for the patch and it fixes the problem.\n>\n> Regards,\n> Haribabu Kommi\n> Fujitsu Australia\n>\n\n\n-- \nRegards,\nvignesh\n Have a nice day",
"msg_date": "Thu, 6 Jun 2019 13:41:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "> On Thu, Jun 6, 2019 at 9:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I was wondering if we actually need at all a catalog lookup at this\n> stage, simplifying get_table_am_oid() on the way so as we always\n> throw an error (its missing_ok is here to allow a proper error in the\n> GUC context).\n\nJust for me to understand, do you suggest to not check\ndefault_table_access_method existence in check_default_table_access_method? If\nyes, then\n\n> The table AM lookup happens only when creating a table, so we could just get\n> a failure when attempting to create a table with this incorrect value.\n\nis correct, but doesn't it leave the room for some problems in the future with\na wrong assumptions about correctness of default_table_access_method?\n\n> Actually, when updating a value and reloading and/or restarting the\n> server, it is possible to easily get in a state where we have an\n> invalid table AM parameter stored in the GUC, which is what the\n> callback is here to avoid. If you attempt to update the parameter\n> with ALTER SYSTEM, then the command complains. So it seems to me that\n> the user experience is inconsistent.\n\nRight, as far as I see the there is the same for e.g. default_tablespace due to\nIsTransactionState condition. In fact looks like one can see the very same\nissue with this option too, so probably it also needs to have MyDatabaseId\ncheck.\n\n\n",
"msg_date": "Sat, 8 Jun 2019 16:03:09 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-06 16:06:36 +0900, Michael Paquier wrote:\n> On Thu, Jun 06, 2019 at 11:19:48AM +1000, Haribabu Kommi wrote:\n> > Thanks for the details steps to reproduce the bug, I am also able to\n> > reproduce the problem.\n> \n> This way is even more simple, no need for zheap to be around:\n> =# create access method heap2 TYPE table HANDLER heap_tableam_handler;\n> CREATE ACCESS METHOD\n> And then:\n> PGOPTIONS=\"-c default_table_access_method=heap2\" psql \"replication=1\"\n> psql: error: could not connect to server: FATAL: cannot read pg_class\n> without having selected a database\n\nYea, need to fix that.\n\n\n> > Thanks for the patch and it fixes the problem.\n> \n> I was wondering if we actually need at all a catalog lookup at this\n> stage, simplifying get_table_am_oid() on the way so as we always\n> throw an error (its missing_ok is here to allow a proper error in the\n> GUC context). The table AM lookup happens only when creating a table,\n> so we could just get a failure when attempting to create a table with\n> this incorrect value.\n\nI think that'd be a bad plan. We check other such GUCs,\ne.g. default_tablespace where this behaviour has been copied from, even\nif not bulletproof.\n\n\n> Actually, when updating a value and reloading and/or restarting the\n> server, it is possible to easily get in a state where we have an\n> invalid table AM parameter stored in the GUC, which is what the\n> callback is here to avoid.\n\nWe have plenty other callbacks that aren't bulletproof, so I don't think\nthis is really something we should / can change in isolation here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Jun 2019 08:26:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-08 16:03:09 +0200, Dmitry Dolgov wrote:\n> > On Thu, Jun 6, 2019 at 9:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > The table AM lookup happens only when creating a table, so we could just get\n> > a failure when attempting to create a table with this incorrect value.\n> \n> is correct, but doesn't it leave the room for some problems in the future with\n> a wrong assumptions about correctness of default_table_access_method?\n\nWhat do you mean by that? Every single use of\ndefault_table_access_method (and similarly default_tablespace) has to\ncheck the value, because it could be outdated / not checked due to wrong\ncontext.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 8 Jun 2019 08:30:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "> On Sat, Jun 8, 2019 at 5:30 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2019-06-08 16:03:09 +0200, Dmitry Dolgov wrote:\n> > > On Thu, Jun 6, 2019 at 9:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > The table AM lookup happens only when creating a table, so we could just get\n> > > a failure when attempting to create a table with this incorrect value.\n> >\n> > is correct, but doesn't it leave the room for some problems in the future with\n> > a wrong assumptions about correctness of default_table_access_method?\n>\n> What do you mean by that?\n\nI didn't have any particular problem in mind, just an abstract and probably\nwrong observation. One more observation is that this\n\n> Every single use of default_table_access_method (and similarly\n> default_tablespace) has to check the value, because it could be outdated /\n> not checked due to wrong context.\n\nis clearly expressed for default_tablespace in the GetDefaultTablespace function (if\nyou see something like that, obviously you better use it), but there is nothing\nsimilar for default_table_access_method, so one has to keep it in mind\n(although of course it's not a problem so far, since it's being used in only\none place).\n\n\n",
"msg_date": "Sat, 8 Jun 2019 18:45:55 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "On Sat, Jun 08, 2019 at 08:26:07AM -0700, Andres Freund wrote:\n> We have plenty other callbacks that aren't bulletproof, so I don't think\n> this is really something we should / can change in isolation here.\n\nGood point. I was looking at the check callbacks in guc.c for similar\nchanges, and missed these. After testing, only these parameters fail\nwith the same error:\n- default_table_access_method\n- default_text_search_config\n\nFor the second one it's much older.\n--\nMichael",
"msg_date": "Mon, 10 Jun 2019 16:37:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-10 16:37:33 +0900, Michael Paquier wrote:\n> On Sat, Jun 08, 2019 at 08:26:07AM -0700, Andres Freund wrote:\n> > We have plenty other callbacks that aren't bulletproof, so I don't think\n> > this is really something we should / can change in isolation here.\n> \n> Good point. I was looking at the check callbacks in guc.c for similar\n> changes, and missed these. After testing, only these parameters fail\n> with the same error:\n> - default_table_access_method\n> - default_text_search_config\n> \n> For the second one it's much older.\n\nYea, that's where the default_table_access_method code originates from,\nobviously. I'll backpatch the default_text_search_config fix separately\n(and first).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jun 2019 22:33:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 10:33:37PM -0700, Andres Freund wrote:\n> Yea, that's where the default_table_access_method code originates from,\n> obviously. I'll backpatch the default_text_search_config fix separately\n> (and first).\n\nSo you are just planning to add a check on MyDatabaseId for both? No\nobjections to that.\n--\nMichael",
"msg_date": "Tue, 11 Jun 2019 14:56:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 14:56:36 +0900, Michael Paquier wrote:\n> On Mon, Jun 10, 2019 at 10:33:37PM -0700, Andres Freund wrote:\n> > Yea, that's where the default_table_access_method code originates from,\n> > obviously. I'll backpatch the default_text_search_config fix separately\n> > (and first).\n> \n> So you are just planning to add a check on MyDatabaseId for both? No\n> objections to that.\n\nWell, all four. Given it's just copied code I don't see much point in\nsplitting the commit anymore.\n\nI noticed some other ugliness: check_timezone calls interval_in(),\nwithout any checks. Not a huge fan of doing all that in postmaster, even\nleaving the wrong error reporting aside :(. But that seems like a\nplenty different enough issue to fix it separately, if we decide we want\nto do so.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jun 2019 23:49:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 11:49:03PM -0700, Andres Freund wrote:\n> Well, all four. Given it's just copied code I don't see much code in\n> splitting the commit anymore.\n\nThanks for pushing the fix, the result looks fine.\n\n> I noticed some other uglyness: check_timezone calls interval_in(),\n> without any checks. Not a huge fan of doing all that in postmaster, even\n> leaving the wrong error reporting aside :(. But that seems like a\n> plenty different enough issue to fix it separately, if we decide we want\n> to do so.\n\nIndeed, I have not noticed this one :(\n--\nMichael",
"msg_date": "Wed, 12 Jun 2019 17:00:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup failure after setting default_table_access_method\n option"
}
] |
[
{
"msg_contents": "I propose this patch to add a LOCALE option to CREATE DATABASE. This\nsets both LC_COLLATE and LC_CTYPE with one option. Similar behavior is\nalready supported in initdb, CREATE COLLATION, and createdb.\n\nWith collation providers other than libc, having separate lc_collate and\nlc_ctype settings is not necessarily applicable, so this is also\npreparation for such future functionality.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 5 Jun 2019 22:17:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On Wed, Jun 5, 2019 at 5:17 PM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n>\n> I propose this patch to add a LOCALE option to CREATE DATABASE. This\n> sets both LC_COLLATE and LC_CTYPE with one option. Similar behavior is\n> already supported in initdb, CREATE COLLATION, and createdb.\n>\n> With collation providers other than libc, having separate lc_collate and\n> lc_ctype settings is not necessarily applicable, so this is also\n> preparation for such future functionality.\n>\n\nCool... would be nice also add some test cases.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Wed, 5 Jun 2019 17:31:37 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On 2019-06-05 22:31, Fabrízio de Royes Mello wrote:\n> On Wed, Jun 5, 2019 at 5:17 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com\n> <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n>>\n>> I propose this patch to add a LOCALE option to CREATE DATABASE. This\n>> sets both LC_COLLATE and LC_CTYPE with one option. Similar behavior is\n>> already supported in initdb, CREATE COLLATION, and createdb.\n>>\n>> With collation providers other than libc, having separate lc_collate and\n>> lc_ctype settings is not necessarily applicable, so this is also\n>> preparation for such future functionality.\n> \n> Cool... would be nice also add some test cases.\n\nRight. Any suggestions where to put them?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Jun 2019 11:38:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 6:38 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2019-06-05 22:31, Fabrízio de Royes Mello wrote:\n> > On Wed, Jun 5, 2019 at 5:17 PM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com\n> > <mailto:peter.eisentraut@2ndquadrant.com>> wrote:\n> >>\n> >> I propose this patch to add a LOCALE option to CREATE DATABASE. This\n> >> sets both LC_COLLATE and LC_CTYPE with one option. Similar behavior is\n> >> already supported in initdb, CREATE COLLATION, and createdb.\n> >>\n> >> With collation providers other than libc, having separate lc_collate\nand\n> >> lc_ctype settings is not necessarily applicable, so this is also\n> >> preparation for such future functionality.\n> >\n> > Cool... would be nice also add some test cases.\n>\n> Right. Any suggestions where to put them?\n>\n\nHmm... good question... I thought we already have some regression tests for\n{CREATE|DROP} DATABASE but actually we don't... should we add a new one?\n\nAtt,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 6 Jun 2019 16:03:48 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On 2019-Jun-06, Fabrízio de Royes Mello wrote:\n\n> > > Cool... would be nice also add some test cases.\n> >\n> > Right. Any suggestions where to put them?\n> \n> Hmm... good question... I thought we already have some regression tests for\n> {CREATE|DROP} DATABASE but actually we don't... should we add a new one?\n\nI think pg_dump/t/002_pg_dump.pl might be a good place. Not the easiest\nprogram in the world to work with, admittedly.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Jun 2019 15:52:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On 05/06/2019 23:17, Peter Eisentraut wrote:\n> I propose this patch to add a LOCALE option to CREATE DATABASE. This\n> sets both LC_COLLATE and LC_CTYPE with one option. Similar behavior is\n> already supported in initdb, CREATE COLLATION, and createdb.\n> \n> With collation providers other than libc, having separate lc_collate and\n> lc_ctype settings is not necessarily applicable, so this is also\n> preparation for such future functionality.\n\nOne objection is that the proposed LOCALE option would only affect \nLC_COLLATE and LC_CTYPE. What about lc_messages, lc_monetary, lc_numeric \nand lc_time? initdb's --locale option sets those, too. Should CREATE \nDATABASE LOCALE set those as well?\n\nOn the whole, +1 on adding the option. In practice, you always want to \nset LC_COLLATE and LC_CTYPE to the same value, so we should make that \neasy. But let's consider those other variables too, at least we've got \nto document it carefully.\n\n\nPS. There was some discussion on doing this when the LC_COLLATE and \nLC_CTYPE options were added: \nhttps://www.postgresql.org/message-id/491862F7.1060501%40enterprisedb.com. \nMy reading of that is that there was no strong consensus, so we just let \nit be.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 14 Jun 2019 12:57:58 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On 2019-06-06 21:52, Alvaro Herrera wrote:\n> On 2019-Jun-06, Fabrízio de Royes Mello wrote:\n> \n>>>> Cool... would be nice also add some test cases.\n>>>\n>>> Right. Any suggestions where to put them?\n>>\n>> Hmm... good question... I thought we already have some regression tests for\n>> {CREATE|DROP} DATABASE but actually we don't... should we add a new one?\n> \n> I think pg_dump/t/002_pg_dump.pl might be a good place. Not the easiest\n> program in the world to work with, admittedly.\n\nUpdated patch with test and expanded documentation.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 23 Jun 2019 20:13:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "\nHello Peter,\n\n>> I think pg_dump/t/002_pg_dump.pl might be a good place. Not the easiest\n>> program in the world to work with, admittedly.\n>\n> Updated patch with test and expanded documentation.\n\nPatch v2 applies cleanly, compiles, make check-world ok with tap enabled. \nDoc gen ok.\n\nThe addition looks reasonable.\n\nThe second error message about conflicting option could more explicit than \na terse \"conflicting or redundant options\"? The user may expect later \noptions to superseedes earlier options, for instance.\n\nAbout the pg_dump code, I'm wondering whether it is worth generating \nLOCALE as it breaks backward compatibility (eg dumping a new db to load it \ninto a older version).\n\nIf it is to be generated, I'd do merge the two conditions instead of \nnesting.\n\n if (strlen(collate) > 0 && strcmp(collate, ctype) == 0)\n // generate LOCALE\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 13 Jul 2019 19:20:12 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On 2019-07-13 19:20, Fabien COELHO wrote:\n> The second error message about conflicting option could more explicit than \n> a terse \"conflicting or redundant options\"? The user may expect later \n> options to superseedes earlier options, for instance.\n\ndone\n\n> About the pg_dump code, I'm wondering whether it is worth generating \n> LOCALE as it breaks backward compatibility (eg dumping a new db to load it \n> into a older version).\n\nWe don't really care about backward compatibility here. Moving forward,\nwith ICU and such, we don't want to have to drag around old syntax forever.\n\n> If it is to be generated, I'd do merge the two conditions instead of \n> nesting.\n> \n> if (strlen(collate) > 0 && strcmp(collate, ctype) == 0)\n> // generate LOCALE\n\ndone\n\nHow about this patch?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 22 Jul 2019 20:36:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "\nHello Peter,\n\n>> About the pg_dump code, I'm wondering whether it is worth generating\n>> LOCALE as it breaks backward compatibility (eg dumping a new db to load it\n>> into a older version).\n>\n> We don't really care about backward compatibility here. Moving forward,\n> with ICU and such, we don't want to have to drag around old syntax forever.\n\nWe will drag it anyway because LOCALE is just a shortcut for the other two \nLC_* when they have the same value.\n\n> How about this patch?\n\nIt applies cleanly, compiles, global & pg_dump make check ok, doc gen ok.\n\nI'm still unconvinced of the interest of breaking backward compatibility, \nbut this is no big deal.\n\nI do not like much calling strlen() to check whether a string is empty, \nbut this is pre-existing.\n\nI switched the patch to READY.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 22 Jul 2019 22:18:19 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
},
{
"msg_contents": "On 2019-07-23 00:18, Fabien COELHO wrote:\n> It applies cleanly, compiles, global & pg_dump make check ok, doc gen ok.\n> \n> I'm still unconvinced of the interest of breaking backward compatibility, \n> but this is no big deal.\n> \n> I do not like much calling strlen() to check whether a string is empty, \n> but this is pre-existing.\n> \n> I switched the patch to READY.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 15:00:25 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add CREATE DATABASE LOCALE option"
}
] |
[
{
"msg_contents": "Meskes-san\n\nThis thread is branched from the following.\nhttps://www.postgresql.org/message-id/03040DFF97E6E54E88D3BFEE5F5480F74ABEADE7@G01JPEXMBYT04\n\n> > Type1. Bugs or intentional unsupported features.\n> > - EXPLAIN EXECUTE\n> > - **CREATE TABLE AS with using clause**\n\nI noticed that CREATE AS EXECUTE with using clause needs a new\nimplementation that all parameters in using clause must be embedded into\nexpr-list of EXECUTE in text-format as the following because there is\nno interface of protocol for our purpose. \nIt spends more time for implementing. Do you have any advice?\n\n int id = 100;\n EXEC SQL CREATE TABLE test AS EXECUTE stmt using :id;\n -->\n PQexec(\"CREATE TABLE test AS EXECUTE stmt(100)\");\n\n\ne.g. PQexecParamas(\"CREATE TABLE test AS EXECUTE stmt\", {23,0},{\"100\",0},{3,0},NULL)\n It sends the following.\n\n To backend> Msg P\n To backend> \"\"\n To backend> \"create table test as execute stmt\"\n :\n To backend> Msg B\n To backend> \"\"\n To backend> \"\" ---> It means execute request \"create table test as execute stmt\" with the value.\n To backend (2#)> 1 But the create statement has no $x. Since the value may be discard.\n To backend (2#)> 0 In result, the following error is occurred.\n To backend (2#)> 1\n To backend (4#)> 3\n To backend> 100\n To backend (2#)> 1\n To backend (2#)> 0\n :\n 2019-06-06 07:26:35.252 UTC [1630] ERROR: wrong number of parameters for prepared statement \"stmt\"\n 2019-06-06 07:26:35.252 UTC [1630] DETAIL: Expected 1 parameters but got 0.\n 2019-06-06 07:26:35.252 UTC [1630] STATEMENT: create table test2 as execute stmt\n\nRegards\nRyo Matsumura\n\n\n\n",
"msg_date": "Thu, 6 Jun 2019 07:38:34 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Bug: ECPG: Cannot use CREATE AS EXECUTE statement"
},
{
"msg_contents": "Matsumura-san,\n\n> I noticed that CREATE AS EXECUTE with using clause needs a new\n> implementation that all parameters in using clause must be embedded\n> into\n> expr-list of EXECUTE in text-format as the following because there is\n> no interface of protocol for our purpose. \n> It spends more time for implementing. Do you have any advice?\n> ...\n\nUnfortunately no, I have no advice. Originally all statements needed\nthis treatment. :)\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Tue, 02 Jul 2019 04:04:37 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Bug: ECPG: Cannot use CREATE AS EXECUTE statement"
},
{
"msg_contents": "Meskes-san\r\n\r\nThank you for your comment.\r\n\r\nI attach a patch.\r\nIt doesn't include tests, but it passed some test(*1).\r\n\r\nExplanation about the patch:\r\n\r\n- Add a new ECPGst_exec_embedded_in_other_stmt whether EXECUTE\r\n statement has exprlist or not.\r\n\r\n This type name may not be good.\r\n It is a type for [CREATE TABLE ... AS EXECUTE ...].\r\n But I doesn't consider about [EXPLAIN EXECUTE ...].\r\n\r\n- If statement type is a new one, ecpglib embeds variables into \r\n query in text format at ecpg_build_params().\r\n Even if the statement does not have exprlist, ecpglib makes\r\n exprlist and embeds into it.\r\n The list is expanded incrementally in loop of ecpg_build_params().\r\n\r\n- ecpg_build_params() is difficult to read and insert the above\r\n logic. Therefore, I refactor it. The deitail is described in comments.\r\n\r\n(*1) The followings run expectively.\r\n exec sql create table if not exists foo (c1 int);\r\n exec sql insert into foo select generate_series(1, 20);\r\n exec sql prepare st as select * from foo where c1 % $1 = 0 and c1 % $2 = 0;\r\n\r\n exec sql execute st using :v1,:v2;\r\n exec sql execute st(:v1,:v2);\r\n exec sql create table if not exists bar (c1) as execute st(2, 3);\r\n exec sql create table if not exists bar (c1) as execute st using 2,3;\r\n exec sql create table if not exists bar (c1) as execute st using :v1,:v2;\r\n exec sql create table bar (c1) as execute st using :v1,:v2;\r\n\r\nRegards\r\nRyo Matsumura",
"msg_date": "Wed, 17 Jul 2019 02:40:20 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Bug: ECPG: Cannot use CREATE AS EXECUTE statement"
}
] |
[
{
"msg_contents": "Hi folks,\n\nI’ve been paying my query-rewrite for MVs EXTENSION a bit of attention recently, and I was looking at how to enable people to turn it on and off without requiring a user of it to get too much into it’s guts. \n\nHowever, the add_X_reloption() APIs seems to need to be paired with a change to core code, and so that rather puts them off limits for EXTENSIONs. \n\nI wonder if I’m understanding or using it wrong.\n\nMy specific use case is how to flag a given MV as being a potential candidate that it is worth my EXTENSION’s logic (which runs in the planner, so is relatively time-sensitive) reviewing it for a match against the currently-being-planned query. The significant end user in my use case is a DBA, or the DB-skilled dev in a dev team. \n\nGUCs look a bit of a hack for this use case, so I’ve dismissed them. \n\nAround the EXTENSION landscape, people seem to use pgplsql packages to admin. This also seems a bit hacky, especially as the way people typically illustrate them is to SELECT from some administrative function. It works, and it’s low tech. TBH, it has the advantage of being the “accepted way” on PostgreSQL, and I’ve seen similar in Oracle, so it’s not without precedent.\n\nI can see why generalised extensions to the SQL parser are basically not starters.\n\nBut reloptions, or “storage_parameters”, seem syntactically just the ticket. I’m envisaging something like “ALTER MV xyz SET (rewrite_enabled = true)”.\n\nI guess my question is, and I correctly understanding that reloptions are basically off-limits to EXTENSIONS?\n\nI did see a long 2014 thread discussing, and that got quite heated. So perhaps it is still a tricky question to answer...\n\nTo develop my question a bit more... I wonder if I’ve stumbled upon use case that should work, but doesn’t. Have I found a bug? (Which leads obviously to, should it be fixed?)\n\nThanks,\nd.\n\n\n",
"msg_date": "Thu, 6 Jun 2019 09:07:06 +0100",
"msg_from": "Dent John <denty@qqdd.eu>",
"msg_from_op": true,
"msg_subject": "Use of reloptions by EXTENSIONs"
},
{
"msg_contents": "Dent John <denty@qqdd.eu> writes:\n> I guess my question is, and I correctly understanding that reloptions are basically off-limits to EXTENSIONS?\n\nIIRC that's basically true. There's a lot of dissatisfaction with the\ncurrent implementation of reloptions, although I think that it's been\nmainly focused on the fact that adding new ones is hard/error-prone\neven within the core code. If you want to help move this along, you\ncould review the existing patch in the area:\n\nhttps://www.postgresql.org/message-id/flat/2083183.Rn7qOxG4Ov@x200m\n\nand/or propose additional changes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jun 2019 11:21:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of reloptions by EXTENSIONs"
},
{
"msg_contents": "Thank you, Tom. \n\n(And sorry for the delay following up.)\n\nI’ve marked myself down for review for this patch in the next CF.\n\nI’ll see if I can get the patch applied and feed back on how much it move towards making my use case a viable proposition. \n\nd.\n\n> On 9 Jun 2019, at 17:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Dent John <denty@qqdd.eu> writes:\n>> I guess my question is, and I correctly understanding that reloptions are basically off-limits to EXTENSIONS?\n> \n> IIRC that's basically true. There's a lot of dissatisfaction with the\n> current implementation of reloptions, although I think that it's been\n> mainly focused on the fact that adding new ones is hard/error-prone\n> even within the core code. If you want to help move this along, you\n> could review the existing patch in the area:\n> \n> https://www.postgresql.org/message-id/flat/2083183.Rn7qOxG4Ov@x200m\n> \n> and/or propose additional changes.\n> \n> regards, tom lane\n\n\n\n",
"msg_date": "Mon, 24 Jun 2019 11:47:09 +0200",
"msg_from": "Dent John <denty@qqdd.eu>",
"msg_from_op": true,
"msg_subject": "Re: Use of reloptions by EXTENSIONs"
},
{
"msg_contents": "On Mon, Jun 24, 2019 at 11:47:09AM +0200, Dent John wrote:\n> I’ll see if I can get the patch applied and feed back on how much it\n> move towards making my use case a viable proposition. \n\nThere is another patch which provides more coverage for reloptions:\nhttps://commitfest.postgresql.org/23/2064/\n\nBased on my last lookup, I was rather unhappy with its shape because\nof the assumptions behind the tests and the extra useless work it was\ndoing with parameter strings (the set of WARNING is also something we\ndon't need). If we get that first in, we can then make sure that any\nextra refactoring has hopefully no impact.\n--\nMichael",
"msg_date": "Tue, 25 Jun 2019 10:16:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use of reloptions by EXTENSIONs"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reading the pg_checksums code I found the following comment\n\"Check if cluster is running\" is not placed at right place.\n\n /* Check if cluster is running */\n ControlFile = get_controlfile(DataDir, &crc_ok);\n if (!crc_ok)\n {\n pg_log_error(\"pg_control CRC value is incorrect\");\n exit(1);\n }\n\n if (ControlFile->pg_control_version != PG_CONTROL_VERSION)\n {\n pg_log_error(\"cluster is not compatible with this version of\npg_checksums\");\n exit(1);\n }\n\n if (ControlFile->blcksz != BLCKSZ)\n {\n pg_log_error(\"database cluster is not compatible\");\n fprintf(stderr, _(\"The database cluster was initialized with\nblock size %u, but pg_checksums was compiled with block size %u.\\n\"),\n ControlFile->blcksz, BLCKSZ);\n exit(1);\n }\n\n if (ControlFile->state != DB_SHUTDOWNED &&\n ControlFile->state != DB_SHUTDOWNED_IN_RECOVERY)\n {\n pg_log_error(\"cluster must be shut down\");\n exit(1);\n }\n\nSo I'd like to propose a small fix for that; move the comment to the\nright place and add another comment. Please find an attached small\npatch.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Thu, 6 Jun 2019 17:16:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small review comment on pg_checksums"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 05:16:30PM +0900, Masahiko Sawada wrote:\n> So I'd like to propose a small fix for that; move the comment to the\n> right place and add another comment. Please find an attached small\n> patch.\n\nNo objections to that. Perhaps we should also mention that this does\nnot protect from someone starting the cluster concurrently and that\nthe reason why we require a clean shutdown is that we may get checksum\nfailures because of torn pages?\n--\nMichael",
"msg_date": "Thu, 6 Jun 2019 22:21:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small review comment on pg_checksums"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 10:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jun 06, 2019 at 05:16:30PM +0900, Masahiko Sawada wrote:\n> > So I'd like to propose a small fix for that; move the comment to the\n> > right place and add another comment. Please find an attached small\n> > patch.\n>\n> No objections to that. Perhaps we should also mention that this does\n> not protect from someone starting the cluster concurrently and that\n> the reason why we require a clean shutdown is that we may get checksum\n> failures because of torn pages?\n\nAgreed. Please find an attached patch.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Fri, 7 Jun 2019 15:30:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small review comment on pg_checksums"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 03:30:35PM +0900, Masahiko Sawada wrote:\n> Agreed. Please find an attached patch.\n\nThanks, committed.\n--\nMichael",
"msg_date": "Fri, 7 Jun 2019 20:52:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small review comment on pg_checksums"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 8:52 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jun 07, 2019 at 03:30:35PM +0900, Masahiko Sawada wrote:\n> > Agreed. Please find an attached patch.\n>\n> Thanks, committed.\n\nThank you!\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 11 Jun 2019 09:21:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small review comment on pg_checksums"
}
] |
[
{
"msg_contents": "Hi\n\nI like the idea of sampling slow statements via log_statement_sample_rate.\nBut I miss some parameter that can ensure so every query executed over this\nlimit is logged.\n\nCan we introduce new option\n\nlog_statement_sampling_limit\n\nThe query with execution time over this limit is logged every time.\n\nWhat do you think about this?\n\nRegards\n\nPavel",
"msg_date": "Thu, 6 Jun 2019 10:38:04 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Le 06/06/2019 à 10:38, Pavel Stehule a écrit :\n> Hi\n>\n> I like the idea of sampling slow statements via \n> log_statement_sample_rate. But I miss some parameter that can ensure\n> so every query executed over this limit is logged.\n>\n> Can we introduce new option\n>\n> log_statement_sampling_limit\n>\n> The query with execution time over this limit is logged every time.\n>\n> What do you think about this?\n>\n> Regards\n>\n> Pavel\n\n\n+1, log_min_duration_statement is modulated by log_statement_sample_rate\nthat mean that there is no more way to log all statements over a certain\nduration limit. log_statement_sampling_limit might probably always be\nupper than log_min_duration_statement.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n",
"msg_date": "Thu, 6 Jun 2019 10:48:28 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Hi\n\nčt 6. 6. 2019 v 10:48 odesílatel Gilles Darold <gilles@darold.net> napsal:\n\n> Le 06/06/2019 à 10:38, Pavel Stehule a écrit :\n> > Hi\n> >\n> > I like the idea of sampling slow statements via\n> > log_statement_sample_rate. But I miss some parameter that can ensure\n> > so every query executed over this limit is logged.\n> >\n> > Can we introduce new option\n> >\n> > log_statement_sampling_limit\n> >\n> > The query with execution time over this limit is logged every time.\n> >\n> > What do you think about this?\n> >\n> > Regards\n> >\n> > Pavel\n>\n>\n> +1, log_min_duration_statement is modulated by log_statement_sample_rate\n> that mean that there is no more way to log all statements over a certain\n> duration limit. log_statement_sampling_limit might probably always be\n> upper than log_min_duration_statement.\n>\n\nHere is a patch\n\nRegards\n\nPavel\n\n\n>\n> --\n> Gilles Darold\n> http://www.darold.net/\n>\n>",
"msg_date": "Mon, 17 Jun 2019 22:40:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Hi\n\npo 17. 6. 2019 v 22:40 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> čt 6. 6. 2019 v 10:48 odesílatel Gilles Darold <gilles@darold.net> napsal:\n>\n>> Le 06/06/2019 à 10:38, Pavel Stehule a écrit :\n>> > Hi\n>> >\n>> > I like the idea of sampling slow statements via\n>> > log_statement_sample_rate. But I miss some parameter that can ensure\n>> > so every query executed over this limit is logged.\n>> >\n>> > Can we introduce new option\n>> >\n>> > log_statement_sampling_limit\n>> >\n>> > The query with execution time over this limit is logged every time.\n>> >\n>> > What do you think about this?\n>> >\n>> > Regards\n>> >\n>> > Pavel\n>>\n>>\n>> +1, log_min_duration_statement is modulated by log_statement_sample_rate\n>> that mean that there is no more way to log all statements over a certain\n>> duration limit. log_statement_sampling_limit might probably always be\n>> upper than log_min_duration_statement.\n>>\n>\n> Here is a patch\n>\n\nI did error in logic - fixed\n\n\n\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> --\n>> Gilles Darold\n>> http://www.darold.net/\n>>\n>>",
"msg_date": "Tue, 18 Jun 2019 05:30:38 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Hi,\n\nI tried the patch, here my comment:\n\n> gettext_noop(\"Zero effective disables sampling. \"\n> \"-1 use sampling every time (without limit).\"),\n\nI do not agree with the zero case. In fact, sampling is disabled as soon as\nsetting is less than log_min_duration_statements. Furthermore, I think we should\nprovide a more straightforward description for users.\n\nI changed few comments and documentation:\n\n * As we added much more logic in this function with statement and transaction\nsampling. And now with statement_sample_rate, it is not easy to understand the\nlogic on first look. I reword comment in check_log_duration, I hope it is more\nstraightforward.\n\n * I am not sure if \"every_time\" is a good naming for the variable. In fact, if\nduration exceeds limit we disable sampling. Maybe sampling_disabled is more clear?\n\n * I propose to add some words in log_min_duration_statement and\nlog_statement_sample_rate documentation.\n\n * Rephrased log_statement_sample_limit documentation, I hope it help\nunderstanding.\n\nPatch attached.\n\nRegards,\n\n-- \nAdrien",
"msg_date": "Tue, 18 Jun 2019 14:03:27 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "út 18. 6. 2019 v 14:03 odesílatel Adrien Nayrat <adrien.nayrat@anayrat.info>\nnapsal:\n\n> Hi,\n>\n> I tried the patch, here my comment:\n>\n> > gettext_noop(\"Zero effective disables sampling. \"\n> > \"-1 use sampling every time (without limit).\"),\n>\n> I do not agree with the zero case. In fact, sampling is disabled as soon as\n> setting is less than log_min_duration_statements. Furthermore, I think we\n> should\n> provide a more straightforward description for users.\n>\n\nYou have true, but I have not a idea,how to describe it in one line. In\nthis case the zero is corner case, and sampling is disabled without\ndependency on log_min_duration_statement.\n\nI think so this design has only few useful values and ranges\n\na) higher than log_min_duration_statement .. sampling is active with limit\nb) 0 .. for this case - other way how to effective disable sampling - no\ndependency on other\nc) -1 or negative value - sampling is allowed every time.\n\nSure, there is range (0..log_min_duration_statement), but for this range\nthis value has not sense. I think so this case cannot be mentioned in short\ndescription. But it should be mentioned in documentation.\n\n\n> I changed few comments and documentation:\n>\n> * As we added much more logic in this function with statement and\n> transaction\n> sampling. And now with statement_sample_rate, it is not easy to understand\n> the\n> logic on first look. I reword comment in check_log_duration, I hope it is\n> more\n> straightforward.\n>\n> * I am not sure if \"every_time\" is a good naming for the variable. In\n> fact, if\n> duration exceeds limit we disable sampling. Maybe sampling_disabled is\n> more clear?\n>\n\nFor me important is following line\n\n(exceeded && (in_sample || every_time))\n\nI think so \"every_time\" or \"always\" or \"every\" is in this context more\nillustrative than \"sampling_disabled\". But my opinion is not strong in this\ncase, and I have not a problem accept common opinion.\n\n\n>\n>\n> * I propose to add some words in log_min_duration_statement and\n> log_statement_sample_rate documentation.\n>\n> * Rephrased log_statement_sample_limit documentation, I hope it help\nunderstanding.\n>\n> Patch attached.\n>\n> Regards,\n>\n> --\n> Adrien\n>\n",
"msg_date": "Tue, 18 Jun 2019 20:29:12 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On 6/18/19 8:29 PM, Pavel Stehule wrote:\n> \n> \n> út 18. 6. 2019 v 14:03 odesílatel Adrien Nayrat <adrien.nayrat@anayrat.info\n> <mailto:adrien.nayrat@anayrat.info>> napsal:\n> \n> Hi,\n> \n> I tried the patch, here my comment:\n> \n> > gettext_noop(\"Zero effective disables sampling. \"\n> > \"-1 use sampling every time (without limit).\"),\n> \n> I do not agree with the zero case. In fact, sampling is disabled as soon as\n> setting is less than log_min_duration_statements. Furthermore, I think we should\n> provide a more straightforward description for users.\n> \n> \n> You have true, but I have not a idea,how to describe it in one line. In this\n> case the zero is corner case, and sampling is disabled without dependency on\n> log_min_duration_statement.\n> \n> I think so this design has only few useful values and ranges\n> \n> a) higher than log_min_duration_statement .. sampling is active with limit\n> b) 0 .. for this case - other way how to effective disable sampling - no\n> dependency on other\n> c) -1 or negative value - sampling is allowed every time.\n> \n> Sure, there is range (0..log_min_duration_statement), but for this range this\n> value has not sense. I think so this case cannot be mentioned in short\n> description. But it should be mentioned in documentation.\n\nYes, it took me a while to understand :) I am ok to keep simple in GUC\ndescription and give more information in documentation.\n\n> \n> \n> I changed few comments and documentation:\n> \n> * As we added much more logic in this function with statement and transaction\n> sampling. And now with statement_sample_rate, it is not easy to understand the\n> logic on first look. I reword comment in check_log_duration, I hope it is more\n> straightforward.\n> \n> * I am not sure if \"every_time\" is a good naming for the variable. In fact, if\n> duration exceeds limit we disable sampling. 
Maybe sampling_disabled is more\n> clear?\n> \n> \n> For me important is following line\n> \n> (exceeded && (in_sample || every_time))\n> \n> I think so \"every_time\" or \"always\" or \"every\" is in this context more\n> illustrative than \"sampling_disabled\". But my opinion is not strong in this\n> case, and I have not a problem accept common opinion.\n\nOh, yes, that's correct. I do not have a strong opinion too. Maybe someone else\nwill have better idea.\n\n-- \nAdrien",
"msg_date": "Wed, 19 Jun 2019 10:49:23 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "st 19. 6. 2019 v 10:49 odesílatel Adrien Nayrat <adrien.nayrat@anayrat.info>\nnapsal:\n\n> On 6/18/19 8:29 PM, Pavel Stehule wrote:\n> >\n> >\n> > út 18. 6. 2019 v 14:03 odesílatel Adrien Nayrat <\n> adrien.nayrat@anayrat.info\n> > <mailto:adrien.nayrat@anayrat.info>> napsal:\n> >\n> > Hi,\n> >\n> > I tried the patch, here my comment:\n> >\n> > > gettext_noop(\"Zero effective disables sampling. \"\n> > > \"-1 use sampling every time (without\n> limit).\"),\n> >\n> > I do not agree with the zero case. In fact, sampling is disabled as\n> soon as\n> > setting is less than log_min_duration_statements. Furthermore, I\n> think we should\n> > provide a more straightforward description for users.\n> >\n> >\n> > You have true, but I have not a idea,how to describe it in one line. In\n> this\n> > case the zero is corner case, and sampling is disabled without\n> dependency on\n> > log_min_duration_statement.\n> >\n> > I think so this design has only few useful values and ranges\n> >\n> > a) higher than log_min_duration_statement .. sampling is active with\n> limit\n> > b) 0 .. for this case - other way how to effective disable sampling - no\n> > dependency on other\n> > c) -1 or negative value - sampling is allowed every time.\n> >\n> > Sure, there is range (0..log_min_duration_statement), but for this range\n> this\n> > value has not sense. I think so this case cannot be mentioned in short\n> > description. But it should be mentioned in documentation.\n>\n> Yes, it took me a while to understand :) I am ok to keep simple in GUC\n> description and give more information in documentation.\n>\n\nMaybe some like. \"The zero block sampling. Negative value forces sampling\nwithout limit\"\n\n\n> >\n> >\n> > I changed few comments and documentation:\n> >\n> > * As we added much more logic in this function with statement and\n> transaction\n> > sampling. And now with statement_sample_rate, it is not easy to\n> understand the\n> > logic on first look. 
I reword comment in check_log_duration, I hope\n> it is more\n> > straightforward.\n> >\n> > * I am not sure if \"every_time\" is a good naming for the variable.\n> In fact, if\n> > duration exceeds limit we disable sampling. Maybe sampling_disabled\n> is more\n> > clear?\n> >\n> >\n> > For me important is following line\n> >\n> > (exceeded && (in_sample || every_time))\n> >\n> > I think so \"every_time\" or \"always\" or \"every\" is in this context more\n> > illustrative than \"sampling_disabled\". But my opinion is not strong in\n> this\n> > case, and I have not a problem accept common opinion.\n>\n> Oh, yes, that's correct. I do not have a strong opinion too. Maybe someone\n> else\n> will have better idea.\n>\n\nthe naming in this case is not hard issue, and comitter can decide.\n\nRegards\n\nPavel\n\n>\n> --\n> Adrien\n>\n>",
"msg_date": "Wed, 19 Jun 2019 19:46:46 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nI test the latest patch attached to this thread (log_statement_sample_limit-3.patch). Everything looks good to me.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 12 Jul 2019 10:58:43 +0000",
"msg_from": "Adrien Nayrat <adrien.nayrat@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nSorry, I forgot to tick \"passed\" boxes.",
"msg_date": "Fri, 12 Jul 2019 11:06:13 +0000",
"msg_from": "Adrien Nayrat <adrien.nayrat@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Hi\n\npá 12. 7. 2019 v 13:07 odesílatel Adrien Nayrat <adrien.nayrat@gmail.com>\nnapsal:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> Sorry, I forgot to tick \"passed\" boxes.\n\n\nThank you\n\nPavel",
"msg_date": "Fri, 12 Jul 2019 18:37:53 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Hi,\n\nI've started reviewing this patch, thinking that maybe I could get it\ncommitted as it's marked as RFC. In general I agree with having this\nfuature, but I think we need to rethink the GUC because the current\napproach is just confusing.\n\nThe way the current patch works is that we have three GUCs:\n\n log_min_duration_statement\n log_statement_sample_limit\n log_statement_sample_rate\n\nand it essentially works like this:\n\n- If the duration exceeds log_min_duration_statement, we start sampling\n the commands with log_statement_sample rate.\n\n- If the duration exceeds log_statement_sample_limit, we just log the\n command every time (i.e. we disable sampling, using sample rate 1.0).\n\nIMO that's bound to be confusing for users, because one threshold\nbehaves as minimum while the other behaves as maximum.\n\n\nWhat I think we should do instead is to use two minimum thresholds.\n\n1) log_min_duration_sample - enables sampling of commands, using the\nexisting GUC log_statement_sample_rate\n\n2) log_min_duration_statement - logs all commands exceeding this\n\n\nI think this is going to be much easier for users to understand.\n\n\nThe one difference between those approaches is in how they work with\nexisting current settings. That is, let's say you have\n\n log_min_duration_statement = 1000\n log_statement_sample_rate = 0.01\n\nthen no queries below 1000ms will be logged, and 1% of longer queries\nwill be sampled. And with the original config (as proposed in v3 of the\npatch), this would still work the same way.\n\nWith the new approach (two min thresholds) it'd behave differently,\nbecause we'd log *all* queries longer than 1000ms (not just 1%). And\nwhether we'd sample any queries (using log_statement_sample_rate) would\ndepend on how we'd pick the default value for the other threshold.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 28 Jul 2019 00:19:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I've started reviewing this patch, thinking that maybe I could get it\n> committed as it's marked as RFC. In general I agree with having this\n> fuature, but I think we need to rethink the GUC because the current\n> approach is just confusing.\n> ...\n> What I think we should do instead is to use two minimum thresholds.\n> 1) log_min_duration_sample - enables sampling of commands, using the\n> existing GUC log_statement_sample_rate\n> 2) log_min_duration_statement - logs all commands exceeding this\n> I think this is going to be much easier for users to understand.\n\nI agree with Tomas' idea.\n\n> The one difference between those approaches is in how they work with\n> existing current settings. That is, let's say you have\n> log_min_duration_statement = 1000\n> log_statement_sample_rate = 0.01\n> then no queries below 1000ms will be logged, and 1% of longer queries\n> will be sampled. And with the original config (as proposed in v3 of the\n> patch), this would still work the same way.\n> With the new approach (two min thresholds) it'd behave differently,\n> because we'd log *all* queries longer than 1000ms (not just 1%). And\n> whether we'd sample any queries (using log_statement_sample_rate) would\n> depend on how we'd pick the default value for the other threshold.\n\nWell, we do not need to have a backwards-compatibility problem\nhere, because we have yet to release a version containing\nlog_statement_sample_rate. I do not think it's too late to decide\nthat v12's semantics for that are broken, and either revert that\npatch in v12, or back-patch a fix to make it match this idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Jul 2019 15:43:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 03:43:58PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I've started reviewing this patch, thinking that maybe I could get it\n>> committed as it's marked as RFC. In general I agree with having this\n>> fuature, but I think we need to rethink the GUC because the current\n>> approach is just confusing.\n>> ...\n>> What I think we should do instead is to use two minimum thresholds.\n>> 1) log_min_duration_sample - enables sampling of commands, using the\n>> existing GUC log_statement_sample_rate\n>> 2) log_min_duration_statement - logs all commands exceeding this\n>> I think this is going to be much easier for users to understand.\n>\n>I agree with Tomas' idea.\n>\n>> The one difference between those approaches is in how they work with\n>> existing current settings. That is, let's say you have\n>> log_min_duration_statement = 1000\n>> log_statement_sample_rate = 0.01\n>> then no queries below 1000ms will be logged, and 1% of longer queries\n>> will be sampled. And with the original config (as proposed in v3 of the\n>> patch), this would still work the same way.\n>> With the new approach (two min thresholds) it'd behave differently,\n>> because we'd log *all* queries longer than 1000ms (not just 1%). And\n>> whether we'd sample any queries (using log_statement_sample_rate) would\n>> depend on how we'd pick the default value for the other threshold.\n>\n>Well, we do not need to have a backwards-compatibility problem\n>here, because we have yet to release a version containing\n>log_statement_sample_rate. I do not think it's too late to decide\n>that v12's semantics for that are broken, and either revert that\n>patch in v12, or back-patch a fix to make it match this idea.\n>\n\nI'm willing to try fixing this to salvage the feature for v12. The\nquestion is how would that fix look like - IMO we'd need to introduce\nthe new threshold GUC, essentially implementing what this thread is\nabout. 
It's not a complex patch, but it kinda flies in the face of\nfeature freeze. OTOH if we call it a fix ...\n\nThe patch itself is not that complicated - attached is a WIP version,\n(particularly) the docs need more work.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 30 Jul 2019 23:17:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 03:43:58PM -0400, Tom Lane wrote:\n> Well, we do not need to have a backwards-compatibility problem\n> here, because we have yet to release a version containing\n> log_statement_sample_rate. I do not think it's too late to decide\n> that v12's semantics for that are broken, and either revert that\n> patch in v12, or back-patch a fix to make it match this idea.\n\nWith my RTM hat on, if we think that the current semantics of\nlog_statement_sample_rate are broken and need a redesign, then I would\ntake the safest path and just revert the original patch in v12, and\nfinally make sure that it brews correctly for v13. We are in beta2\nand close to a beta3, so redesigning things at this stage on a stable\nbranch sounds wrong.\n--\nMichael",
"msg_date": "Wed, 31 Jul 2019 10:40:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On 7/28/19 12:19 AM, Tomas Vondra wrote:\n> Hi,\n> \n> I've started reviewing this patch, thinking that maybe I could get it\n> committed as it's marked as RFC. In general I agree with having this\n> fuature, but I think we need to rethink the GUC because the current\n> approach is just confusing.\n> \n> The way the current patch works is that we have three GUCs:\n> \n> log_min_duration_statement\n> log_statement_sample_limit\n> log_statement_sample_rate\n> \n> and it essentially works like this:\n> \n> - If the duration exceeds log_min_duration_statement, we start sampling\n> the commands with log_statement_sample rate.\n> \n> - If the duration exceeds log_statement_sample_limit, we just log the\n> command every time (i.e. we disable sampling, using sample rate 1.0).\n> \n> IMO that's bound to be confusing for users, because one threshold\n> behaves as minimum while the other behaves as maximum.\n\nI agree, it took me a while to understand how it behave with the three GUC. That\nwhy I tried to enrich documentation, but it may mean that the functionality is\nnot properly implemented.\n\n> \n> \n> What I think we should do instead is to use two minimum thresholds.\n> \n> 1) log_min_duration_sample - enables sampling of commands, using the\n> existing GUC log_statement_sample_rate\n> \n> 2) log_min_duration_statement - logs all commands exceeding this\n> \n> \n> I think this is going to be much easier for users to understand.\n\n+1, I like this idea.\n\nI don't really have an opinion if we have to revert the whole feature or try to\nfix it for v12. But it is true it is a late to fix it.\n\nFurthermore, users who really want this feature in v12 can use an extension for\nthat purpose [1].\n\n1: I made this extension with same kind of features :\nhttps://github.com/anayrat/pg_sampletolog",
"msg_date": "Wed, 31 Jul 2019 11:17:00 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Hi,\n\nAs we are at the end of this CF and there is still discussions about whether we\nshould revert log_statement_sample_limit and log_statement_sample_rate, or try\nto fix it in v12.\nI moved this patch to next commit fest and change status from \"ready for\ncommiter\" to \"need review\". I hope I didn't make a mistake.\n\nBest regards,",
"msg_date": "Thu, 1 Aug 2019 11:47:46 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Thu, Aug 01, 2019 at 11:47:46AM +0200, Adrien Nayrat wrote:\n>Hi,\n>\n>As we are at the end of this CF and there is still discussions about whether we\n>should revert log_statement_sample_limit and log_statement_sample_rate, or try\n>to fix it in v12.\n>I moved this patch to next commit fest and change status from \"ready for\n>commiter\" to \"need review\". I hope I didn't make a mistake.\n>\n\nThanks. The RFC status was clearly stale, so thanks for updating. I should\nhave done that after my review. I think the patch would be moved to the\nnext CF at the end, but I might be wrong. In any case, I don't think\nyou've done any mistake.\n\nAs for the sampling patch - I think we'll end up reverting the feature for\nv12 - it's far too late to rework it at this point. Sorry about that, I\nknow it's not a warm feeling when you get something done, and then it gets\nreverted on the last minute. :-(\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 1 Aug 2019 12:04:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On 8/1/19 12:04 PM, Tomas Vondra wrote:\n> On Thu, Aug 01, 2019 at 11:47:46AM +0200, Adrien Nayrat wrote:\n>> Hi,\n>>\n>> As we are at the end of this CF and there is still discussions about whether we\n>> should revert log_statement_sample_limit and log_statement_sample_rate, or try\n>> to fix it in v12.\n>> I moved this patch to next commit fest and change status from \"ready for\n>> commiter\" to \"need review\". I hope I didn't make a mistake.\n>>\n> \n> Thanks. The RFC status was clearly stale, so thanks for updating. I should\n> have done that after my review. I think the patch would be moved to the\n> next CF at the end, but I might be wrong. In any case, I don't think\n> you've done any mistake.\n> \n> As for the sampling patch - I think we'll end up reverting the feature for\n> v12 - it's far too late to rework it at this point. Sorry about that, I\n> know it's not a warm feeling when you get something done, and then it gets\n> reverted on the last minute. :-(\n> \n\nDon't worry, I understand. It is better to add straigforward GUC in v13 than\nconfusionning in v12 we will regret.\n\n\n\n\n",
"msg_date": "Fri, 2 Aug 2019 09:53:40 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 09:53:40AM +0200, Adrien Nayrat wrote:\n>On 8/1/19 12:04 PM, Tomas Vondra wrote:\n>> On Thu, Aug 01, 2019 at 11:47:46AM +0200, Adrien Nayrat wrote:\n>>> Hi,\n>>>\n>>> As we are at the end of this CF and there is still discussions about whether we\n>>> should revert log_statement_sample_limit and log_statement_sample_rate, or try\n>>> to fix it in v12.\n>>> I moved this patch to next commit fest and change status from \"ready for\n>>> commiter\" to \"need review\". I hope I didn't make a mistake.\n>>>\n>>\n>> Thanks. The RFC status was clearly stale, so thanks for updating. I should\n>> have done that after my review. I think the patch would be moved to the\n>> next CF at the end, but I might be wrong. In any case, I don't think\n>> you've done any mistake.\n>>\n>> As for the sampling patch - I think we'll end up reverting the feature for\n>> v12 - it's far too late to rework it at this point. Sorry about that, I\n>> know it's not a warm feeling when you get something done, and then it gets\n>> reverted on the last minute. :-(\n>>\n>\n>Don't worry, I understand. It is better to add straigforward GUC in v13 than\n>confusionning in v12 we will regret.\n>\n\nOK, I have the revert ready. The one thing I'm wondering about is\nwhether we need to revert log_transaction_sample_rate too? I think it's\npretty much independent feature, so I think we can keep that. Opinions?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 4 Aug 2019 21:10:37 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> OK, I have the revert ready. The one thing I'm wondering about is\n> whether we need to revert log_transaction_sample_rate too? I think it's\n> pretty much independent feature, so I think we can keep that. Opinions?\n\nIsn't the issue here the interaction between log_transaction_sample_rate\nand log_min_duration_statement? Seems like we have that question\nregardless of whether log_statement_sample_limit exists.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2019 15:16:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Sun, Aug 04, 2019 at 03:16:12PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> OK, I have the revert ready. The one thing I'm wondering about is\n>> whether we need to revert log_transaction_sample_rate too? I think it's\n>> pretty much independent feature, so I think we can keep that. Opinions?\n>\n>Isn't the issue here the interaction between log_transaction_sample_rate\n>and log_min_duration_statement? Seems like we have that question\n>regardless of whether log_statement_sample_limit exists.\n>\n\nNo, that interaction only affects statement-level sampling.\n\nFor transaction-level sampling we do the sampling independently of the\nstatement duration, i.e. we when starting a transaction we determine\nwhether the whole transaction will be sampled. It has nothing to do with\nthe proposed log_statement_sample_limit.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 4 Aug 2019 21:58:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Sun, Aug 04, 2019 at 03:16:12PM -0400, Tom Lane wrote:\n>> Isn't the issue here the interaction between log_transaction_sample_rate\n>> and log_min_duration_statement?\n\n> No, that interaction only affects statement-level sampling.\n\nOK, I was confusing the features.\n\n> For transaction-level sampling we do the sampling independently of the\n> statement duration, i.e. we when starting a transaction we determine\n> whether the whole transaction will be sampled. It has nothing to do with\n> the proposed log_statement_sample_limit.\n\nSo, to clarify: our plan is that a given statement will be logged\nif any of these various partial-logging features says to do so?\n\n(And the knock on HEAD's behavior is exactly that it breaks that\nindependence for log_min_duration_statement.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Aug 2019 16:25:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Sun, Aug 04, 2019 at 04:25:12PM -0400, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Sun, Aug 04, 2019 at 03:16:12PM -0400, Tom Lane wrote:\n>>> Isn't the issue here the interaction between log_transaction_sample_rate\n>>> and log_min_duration_statement?\n>\n>> No, that interaction only affects statement-level sampling.\n>\n>OK, I was confusing the features.\n>\n>> For transaction-level sampling we do the sampling independently of the\n>> statement duration, i.e. we when starting a transaction we determine\n>> whether the whole transaction will be sampled. It has nothing to do with\n>> the proposed log_statement_sample_limit.\n>\n>So, to clarify: our plan is that a given statement will be logged\n>if any of these various partial-logging features says to do so?\n>\n\nYes, I think that's the expected behavior.\n\n- did it exceed log_min_duration_statement? -> log it\n- is it part of sampled xact? -> log it\n- maybe sample the statement (to be reverted / reimplemented)\n\n>(And the knock on HEAD's behavior is exactly that it breaks that\n>independence for log_min_duration_statement.)\n>\n\nYeah. There's no way to use sampling, while ensure logging of all\nqueries longer than some limit.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 4 Aug 2019 22:48:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Sun, Aug 04, 2019 at 10:48:48PM +0200, Tomas Vondra wrote:\n>On Sun, Aug 04, 2019 at 04:25:12PM -0400, Tom Lane wrote:\n>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>On Sun, Aug 04, 2019 at 03:16:12PM -0400, Tom Lane wrote:\n>>>>Isn't the issue here the interaction between log_transaction_sample_rate\n>>>>and log_min_duration_statement?\n>>\n>>>No, that interaction only affects statement-level sampling.\n>>\n>>OK, I was confusing the features.\n>>\n>>>For transaction-level sampling we do the sampling independently of the\n>>>statement duration, i.e. we when starting a transaction we determine\n>>>whether the whole transaction will be sampled. It has nothing to do with\n>>>the proposed log_statement_sample_limit.\n>>\n>>So, to clarify: our plan is that a given statement will be logged\n>>if any of these various partial-logging features says to do so?\n>>\n>\n>Yes, I think that's the expected behavior.\n>\n>- did it exceed log_min_duration_statement? -> log it\n>- is it part of sampled xact? -> log it\n>- maybe sample the statement (to be reverted / reimplemented)\n>\n>>(And the knock on HEAD's behavior is exactly that it breaks that\n>>independence for log_min_duration_statement.)\n>>\n>\n>Yeah. There's no way to use sampling, while ensure logging of all\n>queries longer than some limit.\n>\n\nFWIW I've reverted the log_statement_sample_rate (both from master and\nREL_12_STABLE). May the buildfarm be merciful to me.\n\nI've left the log_transaction_sample_rate in, as that seems unaffected\nby this discussion.\n\n\nregards\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 4 Aug 2019 23:41:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Sun, Aug 04, 2019 at 11:41:54PM +0200, Tomas Vondra wrote:\n>On Sun, Aug 04, 2019 at 10:48:48PM +0200, Tomas Vondra wrote:\n>>On Sun, Aug 04, 2019 at 04:25:12PM -0400, Tom Lane wrote:\n>>>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>>On Sun, Aug 04, 2019 at 03:16:12PM -0400, Tom Lane wrote:\n>>>>>Isn't the issue here the interaction between log_transaction_sample_rate\n>>>>>and log_min_duration_statement?\n>>>\n>>>>No, that interaction only affects statement-level sampling.\n>>>\n>>>OK, I was confusing the features.\n>>>\n>>>>For transaction-level sampling we do the sampling independently of the\n>>>>statement duration, i.e. we when starting a transaction we determine\n>>>>whether the whole transaction will be sampled. It has nothing to do with\n>>>>the proposed log_statement_sample_limit.\n>>>\n>>>So, to clarify: our plan is that a given statement will be logged\n>>>if any of these various partial-logging features says to do so?\n>>>\n>>\n>>Yes, I think that's the expected behavior.\n>>\n>>- did it exceed log_min_duration_statement? -> log it\n>>- is it part of sampled xact? -> log it\n>>- maybe sample the statement (to be reverted / reimplemented)\n>>\n>>>(And the knock on HEAD's behavior is exactly that it breaks that\n>>>independence for log_min_duration_statement.)\n>>>\n>>\n>>Yeah. There's no way to use sampling, while ensure logging of all\n>>queries longer than some limit.\n>>\n>\n>FWIW I've reverted the log_statement_sample_rate (both from master and\n>REL_12_STABLE). May the buildfarm be merciful to me.\n>\n>I've left the log_transaction_sample_rate in, as that seems unaffected\n>by this discussion.\n>\n\nI've pushed the reworked version of log_statement_sample_rate patch [1].\nIf I understand correctly, that makes this patch unnecessary, and we\nshould mark it as rejected. 
Or do we still need it?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 6 Nov 2019 19:21:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On 11/6/19 7:21 PM, Tomas Vondra wrote:\n> I've pushed the reworked version of log_statement_sample_rate patch [1].\n> If I understand correctly, that makes this patch unnecessary, and we\n> should mark it as rejected. Or do we still need it?\n\nYes, the goal of this patch was to disable sampling and log all queries whose\nduration exceed log_statement_sample_limit.\n\nFor now it is possible just with log_min_duration_statement which log all\nqueries whose duration exceed it.\n\n-- \nAdrien\n\n\n\n",
"msg_date": "Wed, 6 Nov 2019 20:00:57 +0100",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
},
{
"msg_contents": "On Wed, Nov 06, 2019 at 08:00:57PM +0100, Adrien Nayrat wrote:\n>On 11/6/19 7:21 PM, Tomas Vondra wrote:\n>> I've pushed the reworked version of log_statement_sample_rate patch [1].\n>> If I understand correctly, that makes this patch unnecessary, and we\n>> should mark it as rejected. Or do we still need it?\n>\n>Yes, the goal of this patch was to disable sampling and log all queries whose\n>duration exceed log_statement_sample_limit.\n>\n>For now it is possible just with log_min_duration_statement which log all\n>queries whose duration exceed it.\n>\n\nOK, I've marked it as rejected. If someone thinks we should still have\nsomething like it, please submit a patch implementing it.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 6 Nov 2019 20:22:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: idea: log_statement_sample_rate - bottom limit for sampling"
}
] |
[
{
"msg_contents": "Hello.\n\n# My email address has changed.\n\nI found a string that ought to be translatable but actually not,\nin pg_checksums.c.\n\n> fprintf(stderr, \"%*s/%s MB (%d%%) computed\",\n\nIt seems to be the only instance in the file.\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 6 Jun 2019 18:55:14 +0900",
"msg_from": "Horiguchi Kyotaro <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_checksums has an untranslatable string."
}
] |
[
{
"msg_contents": "Hello.\n\n# My email address has changed. Apologize in advance for possible\n# duplicate of this mail because this is the seconf try after\n# mail server seems to have failed the first try...\n\nI found a string that ought to be translatable but actually not,\nin pg_checksums.c.\n\n> fprintf(stderr, \"%*s/%s MB (%d%%) computed\",\n\nIt seems to be the only instance in the file.\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 06 Jun 2019 20:06:12 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_checksums has an untranslatable string."
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 08:06:12PM +0900, Kyotaro Horiguchi wrote:\n> It seems to be the only instance in the file.\n\nConfirmed and committed. Thanks for the report.\n--\nMichael",
"msg_date": "Thu, 6 Jun 2019 22:11:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksums has an untranslatable string."
},
{
"msg_contents": "Hello\n\n> Confirmed and committed. Thanks for the report.\n\nThanks for committing.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 7 Jun 2019 09:46:09 +0900",
"msg_from": "Horiguchi Kyotaro <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_checksums has an untranslatable string."
}
] |
[
{
"msg_contents": "Hi\n\nCommit be8a7a68662 added custom GUC \"pg_trgm.strict_word_similarity_threshold\",\nbut omitted to document this in the section \"GUC Parameters\"; proposed patch\nattached.\n\nI suggest backpatching to Pg11, where it was introduced.\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 6 Jun 2019 22:19:05 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 10:19 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n>\n> Hi\n>\n> Commit be8a7a68662 added custom GUC \"pg_trgm.strict_word_similarity_threshold\",\n> but omitted to document this in the section \"GUC Parameters\";\n\nIndeed.\n\nBTW while looking GUC variables defined in trgm_op.c the operators in\neach short description seems not correct; there is an extra percent\nsign. Should we also fix them?\n\npostgres(1:43133)=# select name, short_desc from pg_settings where\nname like 'pg_trgm%';\n name | short_desc\n------------------------------------------+-----------------------------------------------\n pg_trgm.similarity_threshold | Sets the threshold used by\nthe %% operator.\n pg_trgm.strict_word_similarity_threshold | Sets the threshold used by\nthe <<%% operator.\n pg_trgm.word_similarity_threshold | Sets the threshold used by\nthe <%% operator.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 7 Jun 2019 15:44:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 03:44:14PM +0900, Masahiko Sawada wrote:\n> BTW while looking GUC variables defined in trgm_op.c the operators in\n> each short description seems not correct; there is an extra percent\n> sign. Should we also fix them?\n\nBoth of you are right here, and the addition documentation looks fine\nto me (except the indentation). The fix for the parameter\ndescriptions can be back-patched safely as they would reload correctly\nonce the version is updated. Or is that not worth bothering except on\nHEAD? Thoughts?\n--\nMichael",
"msg_date": "Fri, 7 Jun 2019 21:00:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On 6/7/19 9:00 PM, Michael Paquier wrote:\n> On Fri, Jun 07, 2019 at 03:44:14PM +0900, Masahiko Sawada wrote:\n>> BTW while looking GUC variables defined in trgm_op.c the operators in\n>> each short description seems not correct; there is an extra percent\n>> sign. Should we also fix them?\n> \n> Both of you are right here\n\nI did notice the double percent signs but my brain skipped over them\nassuming they were translatable strings, thanks for catching that.\n\n> and the addition documentation looks fine to me (except the indentation).\n\nThe indentation in the additional documentation seems fine to me, it's\nthe section for the preceding GUC which is offset one column to the right.\nPatch attached for that.\n\n > The fix for the parameter descriptions can be back-patched safely as they\n > would reload correctly once the version is updated.\n\nYup, they would appear the first time one of the pg_trgm functions is called\nin a session after the new object file is installed.\n\n > Or is that not worth bothering except on HEAD? Thoughts?\n\nPersonally I don't think it's that critical, but not bothered either way.\nPresumably no-one has complained so far anyway (I only chanced upon the missing\nGUC description because I was poking about looking for examples of custom\nGUC handling...)\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sat, 8 Jun 2019 00:02:04 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 6:02 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n> On 6/7/19 9:00 PM, Michael Paquier wrote:\n> > On Fri, Jun 07, 2019 at 03:44:14PM +0900, Masahiko Sawada wrote:\n> > Or is that not worth bothering except on HEAD? Thoughts?\n>\n> Personally I don't think it's that critical, but not bothered either way.\n> Presumably no-one has complained so far anyway (I only chanced upon the missing\n> GUC description because I was poking about looking for examples of custom\n> GUC handling...)\n\nI think it worth maintaining consistent documentation and GUC\ndescriptions in back branches. So, I'm +1 for backpatching.\n\nI'm going to commit all 3 patches (documentation, GUC description,\ndocumentation indentation) on no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 8 Jun 2019 20:17:40 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On Sat, Jun 8, 2019 at 8:17 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Fri, Jun 7, 2019 at 6:02 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n> > On 6/7/19 9:00 PM, Michael Paquier wrote:\n> > > On Fri, Jun 07, 2019 at 03:44:14PM +0900, Masahiko Sawada wrote:\n> > > Or is that not worth bothering except on HEAD? Thoughts?\n> >\n> > Personally I don't think it's that critical, but not bothered either way.\n> > Presumably no-one has complained so far anyway (I only chanced upon the missing\n> > GUC description because I was poking about looking for examples of custom\n> > GUC handling...)\n>\n> I think it worth maintaining consistent documentation and GUC\n> descriptions in back branches. So, I'm +1 for backpatching.\n>\n> I'm going to commit all 3 patches (documentation, GUC description,\n> documentation indentation) on no objections.\n\nPushed!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 10 Jun 2019 20:33:38 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On 6/11/19 2:33 AM, Alexander Korotkov wrote:\n> On Sat, Jun 8, 2019 at 8:17 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n>> On Fri, Jun 7, 2019 at 6:02 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n>>> On 6/7/19 9:00 PM, Michael Paquier wrote:\n>>>> On Fri, Jun 07, 2019 at 03:44:14PM +0900, Masahiko Sawada wrote:\n>>> > Or is that not worth bothering except on HEAD? Thoughts?\n>>>\n>>> Personally I don't think it's that critical, but not bothered either way.\n>>> Presumably no-one has complained so far anyway (I only chanced upon the missing\n>>> GUC description because I was poking about looking for examples of custom\n>>> GUC handling...)\n>>\n>> I think it worth maintaining consistent documentation and GUC\n>> descriptions in back branches. So, I'm +1 for backpatching.\n>>\n>> I'm going to commit all 3 patches (documentation, GUC description,\n>> documentation indentation) on no objections.\n> \n> Pushed!\n\nThanks!\n\n\nRegards\n\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 10:05:11 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "Em seg, 10 de jun de 2019 às 14:34, Alexander Korotkov\n<a.korotkov@postgrespro.ru> escreveu:\n>\n> Pushed!\n>\nAlexander, this commit is ok for 11 and so. However, GUC\nstrict_word_similarity_threshold does not exist in 9.6 and 10. The\nattached patch revert this part. It should apply cleanly in 9.6 and\n10.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 12 Sep 2019 09:39:00 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
},
{
"msg_contents": "On Thu, Sep 12, 2019 at 3:39 PM Euler Taveira <euler@timbira.com.br> wrote:\n> Em seg, 10 de jun de 2019 às 14:34, Alexander Korotkov\n> <a.korotkov@postgrespro.ru> escreveu:\n> >\n> > Pushed!\n> >\n> Alexander, this commit is ok for 11 and so. However, GUC\n> strict_word_similarity_threshold does not exist in 9.6 and 10. The\n> attached patch revert this part. It should apply cleanly in 9.6 and\n> 10.\n\nThank you for pointing this out.\nPushed.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 12 Sep 2019 16:18:07 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: doc: pg_trgm missing description for GUC\n \"pg_trgm.strict_word_similarity_threshold\""
}
] |
[
{
"msg_contents": "Seawasp (using experimental clang 9.0) has been complaining of late:\n\n/home/fabien/clgtk/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -D_DEBUG -D_GNU_SOURCE -I/home/fabien/clgtk/include -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -flto=thin -emit-llvm -c -o llvmjit_types.bc llvmjit_types.c\nIn file included from /home/fabien/clgtk/include/llvm/ADT/DenseMapInfo.h:20:0,\n from /home/fabien/clgtk/include/llvm/ADT/DenseMap.h:16,\n from /home/fabien/clgtk/include/llvm/ADT/DenseSet.h:16,\n from /home/fabien/clgtk/include/llvm/ADT/SetVector.h:23,\n from llvmjit_inline.cpp:45:\n/home/fabien/clgtk/include/llvm/Support/ScalableSize.h:27:12: error: macro \"Min\" requires 2 arguments, but only 1 given\n : Min(Min), Scalable(Scalable) {}\n ^\nIn file included from /home/fabien/clgtk/include/llvm/ADT/DenseMapInfo.h:20:0,\n from /home/fabien/clgtk/include/llvm/ADT/DenseMap.h:16,\n from /home/fabien/clgtk/include/llvm/ADT/DenseSet.h:16,\n from /home/fabien/clgtk/include/llvm/ADT/SetVector.h:23,\n from llvmjit_inline.cpp:45:\n/home/fabien/clgtk/include/llvm/Support/ScalableSize.h: In constructor \\xe2\\x80\\x98llvm::ElementCount::ElementCount(unsigned int, bool)\\xe2\\x80\\x99:\n/home/fabien/clgtk/include/llvm/Support/ScalableSize.h:27:13: error: expected \\xe2\\x80\\x98(\\xe2\\x80\\x99 before \\xe2\\x80\\x98,\\xe2\\x80\\x99 token\n : Min(Min), Scalable(Scalable) {}\n ^\n<builtin>: recipe for target 'llvmjit_inline.o' failed\n\nThis was working earlier, and as far as I can tell the cpluspluscheck\nfixes are not the cause (because those happened earlier than the first\nfailure). Apparently clang got upgraded from \"trunk 361691\" to \"trunk\n362290\" ... is the new clang broken?\n\n-- \n�lvaro Herrera 39�50'S 73�21'W\n\n\n",
"msg_date": "Thu, 6 Jun 2019 13:32:16 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "LLVM compile failing in seawasp"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-06 13:32:16 -0400, Alvaro Herrera wrote:\n> Seawasp (using experimental clang 9.0) has been complaining of late:\n> \n> /home/fabien/clgtk/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -D_DEBUG -D_GNU_SOURCE -I/home/fabien/clgtk/include -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -flto=thin -emit-llvm -c -o llvmjit_types.bc llvmjit_types.c\n> In file included from /home/fabien/clgtk/include/llvm/ADT/DenseMapInfo.h:20:0,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseMap.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseSet.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/SetVector.h:23,\n> from llvmjit_inline.cpp:45:\n> /home/fabien/clgtk/include/llvm/Support/ScalableSize.h:27:12: error: macro \"Min\" requires 2 arguments, but only 1 given\n> : Min(Min), Scalable(Scalable) {}\n> ^\n> In file included from /home/fabien/clgtk/include/llvm/ADT/DenseMapInfo.h:20:0,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseMap.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseSet.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/SetVector.h:23,\n> from llvmjit_inline.cpp:45:\n> /home/fabien/clgtk/include/llvm/Support/ScalableSize.h: In constructor \\xe2\\x80\\x98llvm::ElementCount::ElementCount(unsigned int, bool)\\xe2\\x80\\x99:\n> /home/fabien/clgtk/include/llvm/Support/ScalableSize.h:27:13: error: expected \\xe2\\x80\\x98(\\xe2\\x80\\x99 before \\xe2\\x80\\x98,\\xe2\\x80\\x99 token\n> : Min(Min), Scalable(Scalable) {}\n> ^\n> <builtin>: recipe for target 'llvmjit_inline.o' failed\n> \n> This was working earlier, and as far as I can tell the cpluspluscheck\n> fixes are not the cause (because those happened earlier than the first\n> failure). Apparently clang got upgraded from \"trunk 361691\" to \"trunk\n> 362290\" ... is the new clang broken?\n\nI think that machine might also update llvm to a trunk checkout. 
Is that\nright Fabien? If so that's possible \"just\" a minor API break.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 Jun 2019 10:38:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "c.h defines a C Min macro conflicting with llvm new class\nllvm:ElementCount Min member\n\nOn Thu, Jun 6, 2019 at 7:32 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> Seawasp (using experimental clang 9.0) has been complaining of late:\n>\n> /home/fabien/clgtk/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2 -D__STDC_LIMIT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_CONSTANT_MACROS -D_DEBUG -D_GNU_SOURCE -I/home/fabien/clgtk/include -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -flto=thin -emit-llvm -c -o llvmjit_types.bc llvmjit_types.c\n> In file included from /home/fabien/clgtk/include/llvm/ADT/DenseMapInfo.h:20:0,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseMap.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseSet.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/SetVector.h:23,\n> from llvmjit_inline.cpp:45:\n> /home/fabien/clgtk/include/llvm/Support/ScalableSize.h:27:12: error: macro \"Min\" requires 2 arguments, but only 1 given\n> : Min(Min), Scalable(Scalable) {}\n> ^\n> In file included from /home/fabien/clgtk/include/llvm/ADT/DenseMapInfo.h:20:0,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseMap.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/DenseSet.h:16,\n> from /home/fabien/clgtk/include/llvm/ADT/SetVector.h:23,\n> from llvmjit_inline.cpp:45:\n> /home/fabien/clgtk/include/llvm/Support/ScalableSize.h: In constructor \\xe2\\x80\\x98llvm::ElementCount::ElementCount(unsigned int, bool)\\xe2\\x80\\x99:\n> /home/fabien/clgtk/include/llvm/Support/ScalableSize.h:27:13: error: expected \\xe2\\x80\\x98(\\xe2\\x80\\x99 before \\xe2\\x80\\x98,\\xe2\\x80\\x99 token\n> : Min(Min), Scalable(Scalable) {}\n> ^\n> <builtin>: recipe for target 'llvmjit_inline.o' failed\n>\n> This was working earlier, and as far as I can tell the cpluspluscheck\n> fixes are not the cause (because those happened earlier than the first\n> failure). 
Apparently clang got upgraded from \"trunk 361691\" to \"trunk\n> 362290\" ... is the new clang broken?\n>\n> --\n> Álvaro Herrera 39°50'S 73°21'W\n>\n>\n\n\n",
"msg_date": "Thu, 6 Jun 2019 19:57:05 +0200",
"msg_from": "didier <did447@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "\n>> failure). Apparently clang got upgraded from \"trunk 361691\" to \"trunk\n>> 362290\" ... is the new clang broken?\n>\n> I think that machine might also update llvm to a trunk checkout. Is that\n> right Fabien?\n\nYes, the version is recompiled from sources on every Saturday.\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 6 Jun 2019 20:35:56 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "didier <did447@gmail.com> writes:\n> c.h defines a C Min macro conflicting with llvm new class\n> llvm:ElementCount Min member\n\nReally? Well, we will hardly be the only code they broke with that.\nI think we can just wait for them to reconsider.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Jun 2019 20:12:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 12:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> didier <did447@gmail.com> writes:\n> > c.h defines a C Min macro conflicting with llvm new class\n> > llvm:ElementCount Min member\n>\n> Really? Well, we will hardly be the only code they broke with that.\n> I think we can just wait for them to reconsider.\n\nFYI This is now on LLVM's release_90 branch, due out on August 28.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Jul 2019 17:12:19 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": ">>> c.h defines a C Min macro conflicting with llvm new class\n>>> llvm:ElementCount Min member\n>>\n>> Really? Well, we will hardly be the only code they broke with that.\n>> I think we can just wait for them to reconsider.\n>\n> FYI This is now on LLVM's release_90 branch, due out on August 28.\n\nMaybe we should consider doing an explicit bug report, but I would not bet \nthat they are going to fold… or fixing the issue pg side, eg \"pg_Min\", \nless than 400 hundred instances, and backpatch to all supported \nversions:-(\n\n-- \nFabien.",
"msg_date": "Sat, 27 Jul 2019 07:05:58 +0000 (GMT)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 7:06 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> >>> c.h defines a C Min macro conflicting with llvm new class\n> >>> llvm:ElementCount Min member\n> >>\n> >> Really? Well, we will hardly be the only code they broke with that.\n> >> I think we can just wait for them to reconsider.\n> >\n> > FYI This is now on LLVM's release_90 branch, due out on August 28.\n>\n> Maybe we should consider doing an explicit bug report, but I would not bet\n> that they are going to fold… or fixing the issue pg side, eg \"pg_Min\",\n> less than 400 hundred instances, and backpatch to all supported\n> versions:-(\n\nI would just #undef Min for our small number of .cpp files that\ninclude LLVM headers. It's not as though you need it in C++, which\nhas std::min() from <algorithm>.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Jul 2019 19:12:14 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "On Sat, Jul 27, 2019 at 7:12 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Jul 27, 2019 at 7:06 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > Maybe we should consider doing an explicit bug report, but I would not bet\n> > that they are going to fold… or fixing the issue pg side, eg \"pg_Min\",\n> > less than 400 hundred instances, and backpatch to all supported\n> > versions:-(\n>\n> I would just #undef Min for our small number of .cpp files that\n> include LLVM headers. It's not as though you need it in C++, which\n> has std::min() from <algorithm>.\n\nLike so. Fixes the problem for me (llvm-devel-9.0.d20190712).\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Sat, 27 Jul 2019 21:40:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "\nHello Thomas,\n\n>> I would just #undef Min for our small number of .cpp files that\n>> include LLVM headers. It's not as though you need it in C++, which\n>> has std::min() from <algorithm>.\n>\n> Like so. Fixes the problem for me (llvm-devel-9.0.d20190712).\n\nHmmm. Not so nice, but if it works, why not, at least the impact is \nmuch smaller than renaming.\n\nNote that the Min macro is used in several pg headers (ginblock.h, \nginxlog.h, hash.h, simplehash.h, spgist_private.h), so you might really \nneed it depending on what is being done later.\n\nOtherwise, why not simply move llvm C++ includes *before* postgres \nincludes? They should be fully independent anyway, so the order should \nnot matter?\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 28 Jul 2019 09:47:17 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Otherwise, why not simply move llvm C++ includes *before* postgres \n> includes?\n\nWe've been burnt in the past by putting other headers before postgres.h.\n(A typical issue is that the interpretation of <stdio.h> varies depending\non _LARGE_FILES or a similar macro, so you get problems if something\ncauses that to be included before pg_config.h has set that macro.)\nMaybe none of the platforms where that's an issue have C++, but that\ndoesn't seem like a great assumption.\n\n> They should be fully independent anyway, so the order should \n> not matter?\n\nOn what grounds do you claim that's true anywhere, let alone\neverywhere?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jul 2019 09:54:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "\nHello Tom,\n\n>> They should be fully independent anyway, so the order should\n>> not matter?\n>\n> On what grounds do you claim that's true anywhere, let alone\n> everywhere?\n\nI mean that the intersection of Postgres realm, a database written in C, \nand LLVM realm, a compiler written in C++, should not interfere much one \nwith the other, bar the jit compilation stuff which mixes both, so having \none set of realm-specific includes before/after the other *should* not \nmatter.\n\nObviously the Min macro is a counter example of that, but that is indeed \nthe problem to solve, and it is really accidental. It would be very \nunlucky if there was an issue the other way around. But maybe not.\n\nAnyway, I'm just trying to suggest a minimum fuss solution. One point of \n\"seawasp\" and \"jellyfish\" is to have early warning of compilation issues \nwith future compilers, and it is serving this purpose beautifully. Another \npoint is to detect compiler bugs early when compiling a significant \nproject, and I have reported issues about both clang & gcc in the past, so \nit works there too.\n\nIf reordering includes is not an option, too bad. Then \"#undef Min\" which \nI find disputable, allthough I've done much worse... it might or might not \nwork depending on what is done afterwards. Or rename the macro, as I \nsuggested first, but there are many instances. Or convince LLVM people \nthat they should change their stuff. Or document that pg jit will cannot \nuse the latest LLVM, as a feature. Or find another solution:-)\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 28 Jul 2019 22:02:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 8:03 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> If reordering includes is not an option, too bad. Then \"#undef Min\" which\n> I find disputable, allthough I've done much worse... it might or might not\n> work depending on what is done afterwards. Or rename the macro, as I\n> suggested first, but there are many instances. Or convince LLVM people\n> that they should change their stuff. Or document that pg jit will cannot\n> use the latest LLVM, as a feature. Or find another solution:-)\n\nLet's just commit the #undef so that seawasp is green and back to\nbeing ready to tell us if something else breaks. Personally, I don't\nsee any reason why <random other project> should entertain a request\nto change their variable names to avoid our short common word macros\nthat aren't even all-caps, but if someone asks them and they agree to\ndo that before the final 9.0 release we can just revert.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jul 2019 09:50:32 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Let's just commit the #undef so that seawasp is green and back to\n> being ready to tell us if something else breaks.\n\n+1. I was afraid that working around this would be impossibly\npainful ... but if it just takes one judiciously placed #undef,\nlet's do that and not argue about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Jul 2019 17:55:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "On Mon, Jul 29, 2019 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Let's just commit the #undef so that seawasp is green and back to\n> > being ready to tell us if something else breaks.\n>\n> +1. I was afraid that working around this would be impossibly\n> painful ... but if it just takes one judiciously placed #undef,\n> let's do that and not argue about it.\n\nDone.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Jul 2019 10:27:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-29 10:27:54 +1200, Thomas Munro wrote:\n> On Mon, Jul 29, 2019 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Let's just commit the #undef so that seawasp is green and back to\n> > > being ready to tell us if something else breaks.\n> >\n> > +1. I was afraid that working around this would be impossibly\n> > painful ... but if it just takes one judiciously placed #undef,\n> > let's do that and not argue about it.\n> \n> Done.\n\ncool, thanks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 28 Jul 2019 22:06:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
},
{
"msg_contents": "\n>>> Let's just commit the #undef so that seawasp is green and back to\n>>> being ready to tell us if something else breaks.\n>>\n>> +1. I was afraid that working around this would be impossibly\n>> painful ... but if it just takes one judiciously placed #undef,\n>> let's do that and not argue about it.\n>\n> Done.\n\nSeawasp is back to green.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 29 Jul 2019 07:08:21 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: LLVM compile failing in seawasp"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks to me as though any table AM that uses the relation forks\nsupported by PostgreSQL in a more or less normal manner is likely to\nrequire an implementation of the relation_size callback that is\nidentical to the one for heap, and an implementation of the\nestimate_rel_size method that is extremely similar to the one for\nheap. The latter is especially troubling as the amount of code\nduplication is non-trivial, and it's full of special hacks.\n\nHere is a patch that tries to improve the situation. I don't know\nwhether there is some better approach; this seemed like the obvious\nthing to do.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 6 Jun 2019 16:40:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 04:40:53PM -0400, Robert Haas wrote:\n> It looks to me as though any table AM that uses the relation forks\n> supported by PostgreSQL in a more or less normal manner is likely to\n> require an implementation of the relation_size callback that is\n> identical to the one for heap, and an implementation of the\n> estimate_rel_size method that is extremely similar to the one for\n> heap. The latter is especially troubling as the amount of code\n> duplication is non-trivial, and it's full of special hacks.\n> \n> Here is a patch that tries to improve the situation. I don't know\n> whether there is some better approach; this seemed like the obvious\n> thing to do.\n\nLooks like a neat split.\n\n\"allvisfrac\", \"pages\" and \"tuples\" had better be documented about\nwhich result they represent.\n\n+ * usable_bytes_per_page should contain the approximate number of bytes per\n+ * page usable for tuple data, excluding the page header and any anticipated\n+ * special space.\n[...]\n+table_block_estimate_rel_size(Relation rel, int32 *attr_widths,\n+ BlockNumber *pages, double *tuples,\n+ double *allvisfrac,\n+ Size overhead_bytes_per_tuple,\n+ Size usable_bytes_per_page)\n\nCould you explain what's the use cases you have in mind for\nusable_bytes_per_page? All AMs using smgr need to have a page header,\nwhich implies that the usable number of bytes is (BLCKSZ - page\nheader) like heap for tuple data. In the AMs you got to work with, do\nyou store some extra data in the page which is not used for tuple\nstorage? I think that makes sense, just wondering about the use\ncase.\n--\nMichael",
"msg_date": "Fri, 7 Jun 2019 11:08:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "> On 6 Jun 2019, at 22:40, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> It looks to me as though any table AM that uses the relation forks\n> supported by PostgreSQL in a more or less normal manner is likely to\n> require an implementation of the relation_size callback that is\n> identical to the one for heap, and an implementation of the\n> estimate_rel_size method that is extremely similar to the one for\n> heap. The latter is especially troubling as the amount of code\n> duplication is non-trivial, and it's full of special hacks.\n\nMakes sense. Regarding one of the hacks:\n\n+\t * HACK: if the relation has never yet been vacuumed, use a minimum size\n+\t * estimate of 10 pages. The idea here is to avoid assuming a\n+\t * newly-created table is really small, even if it currently is, because\n+\t * that may not be true once some data gets loaded into it.\n\nWhen this is a generic function for AM’s, would it make sense to make the\nminimum estimate a passed in value rather than hardcoded at 10? I don’t have a\ncase in mind, but I also don’t think that assumptions made for heap necessarily\nmakes sense for all AM’s. Just thinking out loud.\n\n> Here is a patch that tries to improve the situation. I don't know\n> whether there is some better approach; this seemed like the obvious\n> thing to do.\n\nA small nitpick on the patch:\n\n+ * estimate_rel_size callback, because it has a few additional paramters.\n\ns/paramters/parameters/\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 7 Jun 2019 10:11:45 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 10:08 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Looks like a neat split.\n\nThanks.\n\n> \"allvisfrac\", \"pages\" and \"tuples\" had better be documented about\n> which result they represent.\n\nA lot of the table AM stuff (and the related slot stuff) lacks\nfunction header comments; I don't like that and think it should be\nimproved. However, that's not the job of this patch. I think it's\ncompletely correct for this patch to document, as it does, that the\narguments have the same meaning as for the estimate_rel_size method,\nand leave it at that. There is certainly negative value in duplicating\nthe definitions in multiple places, where they might get out of sync.\nThe header comment for table_relation_estimate_size() refers the\nreader to the comments for estimate_rel_size(), which says:\n\n * estimate_rel_size - estimate # pages and # tuples in a table or index\n *\n * We also estimate the fraction of the pages that are marked all-visible in\n * the visibility map, for use in estimation of index-only scans.\n *\n * If attr_widths isn't NULL, it points to the zero-index entry of the\n * relation's attr_widths[] cache; we fill this in if we have need to compute\n * the attribute widths for estimation purposes.\n\n...which AFAICT constitutes as much documentation of these parameters\nas we have got. I think this is all a bit byzantine and could\nprobably be made clearer, but (1) fortunately it's not all that hard\nto guess what these are supposed to mean and (2) I don't feel obliged\nto do semi-related comment cleanup every time I patch tableam.\n\n> Could you explain what's the use cases you have in mind for\n> usable_bytes_per_page? All AMs using smgr need to have a page header,\n> which implies that the usable number of bytes is (BLCKSZ - page\n> header) like heap for tuple data. In the AMs you got to work with, do\n> you store some extra data in the page which is not used for tuple\n> storage? 
I think that makes sense, just wondering about the use\n> case.\n\nExactly. BLCKSZ - page header is probably the maximum unless you roll\nyour own page format, but you could easily have less if you use the\nspecial space. zheap is storing transaction slots there; you could\nstore an epoch to avoid freezing, and there's probably quite a few\nother reasonable possibilities.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 08:32:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 4:11 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Makes sense. Regarding one of the hacks:\n>\n> + * HACK: if the relation has never yet been vacuumed, use a minimum size\n> + * estimate of 10 pages. The idea here is to avoid assuming a\n> + * newly-created table is really small, even if it currently is, because\n> + * that may not be true once some data gets loaded into it.\n>\n> When this is a generic function for AM’s, would it make sense to make the\n> minimum estimate a passed in value rather than hardcoded at 10? I don’t have a\n> case in mind, but I also don’t think that assumptions made for heap necessarily\n> makes sense for all AM’s. Just thinking out loud.\n\nI think that's probably going in the wrong direction. It's arguable,\nof course. However, it seems to me that there's nothing heap-specific\nabout the number 10. It's not computed based on the format of a heap\npage or a heap tuple. It's just somebody's guess (likely Tom's) about\nhow to plan with empty relations. If somebody finds that another\nnumber works better for their AM, it's probably also better for heap,\nat least on that person's workload. It seems more likely to me that\nthis needs to be a GUC or reloption than that it needs to be an\nAM-specific property. It also seems to me that if the parameters of\nthe hack randomly vary from one AM to another, it's likely to create\nconfusing plan differences that have nothing to do with the actual\nmerits of what the AMs are doing. But all that being said, I'm not\nblocking anybody from fooling around with this; nobody's obliged to\nuse the helper function. It's just there for people who want the same\nAM-independent logic that the heap uses.\n\n> > Here is a patch that tries to improve the situation. 
I don't know\n> > whether there is some better approach; this seemed like the obvious\n> > thing to do.\n>\n> A small nitpick on the patch:\n>\n> + * estimate_rel_size callback, because it has a few additional paramters.\n>\n> s/paramters/parameters/\n\nGood catch, and now I notice that the callback is not called\nestimate_rel_size but relation_estimate_size. Updated patch attached;\nthanks for the review.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 7 Jun 2019 08:43:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 8:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Good catch, and now I notice that the callback is not called\n> estimate_rel_size but relation_estimate_size. Updated patch attached;\n> thanks for the review.\n\nLet's try that one more time, and this time perhaps I'll make it compile.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 7 Jun 2019 11:14:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-07 08:32:37 -0400, Robert Haas wrote:\n> On Thu, Jun 6, 2019 at 10:08 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > \"allvisfrac\", \"pages\" and \"tuples\" had better be documented about\n> > which result they represent.\n> \n> A lot of the table AM stuff (and the related slot stuff) lacks\n> function header comments; I don't like that and think it should be\n> improved. However, that's not the job of this patch. I think it's\n> completely correct for this patch to document, as it does, that the\n> arguments have the same meaning as for the estimate_rel_size method,\n> and leave it at that. There is certainly negative value in duplicating\n> the definitions in multiple places, where they might get out of sync.\n> The header comment for table_relation_estimate_size() refers the\n> reader to the comments for estimate_rel_size(), which says:\n\nNote that these function ended up that way by precisely this logic... ;)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 08:29:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On 2019-Jun-07, Robert Haas wrote:\n\n> On Fri, Jun 7, 2019 at 8:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Good catch, and now I notice that the callback is not called\n> > estimate_rel_size but relation_estimate_size. Updated patch attached;\n> > thanks for the review.\n> \n> Let's try that one more time, and this time perhaps I'll make it compile.\n\nIt looks good to me, passes tests. This version seems to introduce a warning\nin my build:\n\n/pgsql/source/master/src/backend/access/table/tableam.c: In function 'table_block_relation_estimate_size':\n/pgsql/source/master/src/backend/access/table/tableam.c:633:12: warning: implicit declaration of function 'rint' [-Wimplicit-function-declaration]\n *tuples = rint(density * (double) curpages);\n ^~~~\n/pgsql/source/master/src/backend/access/table/tableam.c:633:12: warning: incompatible implicit declaration of built-in function 'rint'\n/pgsql/source/master/src/backend/access/table/tableam.c:633:12: note: include '<math.h>' or provide a declaration of 'rint'\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 12:05:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "> On 7 Jun 2019, at 14:43, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I think that's probably going in the wrong direction. It's arguable,\n> of course. However, it seems to me that there's nothing heap-specific\n> about the number 10. It's not computed based on the format of a heap\n> page or a heap tuple. It's just somebody's guess (likely Tom's) about\n> how to plan with empty relations. If somebody finds that another\n> number works better for their AM, it's probably also better for heap,\n> at least on that person's workload. \n\nFair enough, that makes sense.\n\n> Good catch, and now I notice that the callback is not called\n> estimate_rel_size but relation_estimate_size. Updated patch attached;\n> thanks for the review.\n\nThe commit message still refers to it as estimate_rel_size though. The comment on\ntable_block_relation_estimate_size does too but that one might be intentional.\n\nThe v3 patch applies cleanly and passes tests (I did not see the warning that\nAlvaro mentioned, so yay for testing on multiple compilers).\n\nDuring re-review, the below part stood out as a bit odd however:\n\n+\tif (curpages < 10 &&\n+\t\trelpages == 0 &&\n+\t\t!rel->rd_rel->relhassubclass)\n+\t\tcurpages = 10;\n+\n+\t/* report estimated # pages */\n+\t*pages = curpages;\n+\t/* quick exit if rel is clearly empty */\n+\tif (curpages == 0)\n+\t{\n+\t\t*tuples = 0;\n+\t\t*allvisfrac = 0;\n+\t\treturn;\n+\t}\n\nWhile I know this codepath isn’t introduced by this patch (it was introduced in\n696d78469f3), I hadn’t seen it before so sorry for thread-jacking slightly.\n\nMaybe I’m a bit thick but if the rel is totally empty and without children,\nthen curpages as well as relpages would be both zero. But if so, how can we\nenter the second \"quick exit” block since curpages by then will be increased to\n10 in the block just before for the empty case? 
It seems to me that the blocks\nshould be the other way around to really have a fast path, but I might be\nmissing something.\n\ncheers ./daniel\n\n",
"msg_date": "Sat, 8 Jun 2019 00:41:55 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 6:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > Good catch, and now I notice that the callback is not called\n> > estimate_rel_size but relation_estimate_size. Updated patch attached;\n> > thanks for the review.\n>\n> The commit message still refers to it as estimate_rel_size though. The comment on\n> table_block_relation_estimate_size does too but that one might be intentional.\n\nOops. New version attached, hopefully fixing those and the compiler\nwarning Alvaro noted.\n\n> During re-review, the below part stood out as a bit odd however:\n>\n> + if (curpages < 10 &&\n> + relpages == 0 &&\n> + !rel->rd_rel->relhassubclass)\n> + curpages = 10;\n> +\n> + /* report estimated # pages */\n> + *pages = curpages;\n> + /* quick exit if rel is clearly empty */\n> + if (curpages == 0)\n> + {\n> + *tuples = 0;\n> + *allvisfrac = 0;\n> + return;\n> + }\n>\n> While I know this codepath isn’t introduced by this patch (it was introduced in\n> 696d78469f3), I hadn’t seen it before so sorry for thread-jacking slightly.\n>\n> Maybe I’m a bit thick but if the rel is totally empty and without children,\n> then curpages as well as relpages would be both zero. But if so, how can we\n> enter the second \"quick exit” block since curpages by then will be increased to\n> 10 in the block just before for the empty case? It seems to me that the blocks\n> should be the other way around to really have a fast path, but I might be\n> missing something.\n\nWell, as you say, I'm just moving the code. However, note that\ncurpages is the size of the relation RIGHT NOW whereas relpages is the\nsize the last time the relation was analyzed. So I guess the case\nyou're wondering about would happen if the relation were analyzed and\nthen truncated. 
It seems there are lots of things that could be done\nhere in the hopes of improving things, like keeping track in pg_class\nof whether analyze has ever happened rather than using relpages == 0\nas a bad approximation, but I'd rather not drift further OT, so if\nyou're in the mood to talk about that stuff, I would appreciate it if\nyou could start a new thread.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 10 Jun 2019 15:35:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On 2019-Jun-10, Robert Haas wrote:\n\n> On Fri, Jun 7, 2019 at 6:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > The commit message still refers to it as estimate_rel_size though. The comment on\n> > table_block_relation_estimate_size does too but that one might be intentional.\n> \n> Oops. New version attached, hopefully fixing those and the compiler\n> warning Alvaro noted.\n\nIt does fix the warning, thanks.\n\n> > Maybe I’m a bit thick but if the rel is totally empty and without children,\n> > then curpages as well as relpages would be both zero. But if so, how can we\n> > enter the second \"quick exit” block since curpages by then will be increased to\n> > 10 in the block just before for the empty case? It seems to me that the blocks\n> > should be the other way around to really have a fast path, but I might be\n> > missing something.\n> \n> Well, as you say, I'm just moving the code.\n\nI agree that you're just moving the code, but this seems to have been\nrecently broken in 696d78469f37 -- it was correct before that (the\nheuristic for never vacuumed rels was in optimizer/plancat.c). So in\nreality the problem that Daniel pointed out is an open item for pg12.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 15:46:48 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 3:46 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I agree that you're just moving the code, but this seems to have been\n> recently broken in 696d78469f37 -- it was correct before that (the\n> heuristic for never vacuumed rels was in optimizer/plancat.c). So in\n> reality the problem that Daniel pointed out is an open item for pg12.\n\nI took a look at this but I don't see that Andres did anything in that\ncommit other than move code. In the new code,\nheapam_estimate_rel_size() does this:\n\n+ if (curpages < 10 &&\n+ relpages == 0 &&\n+ !rel->rd_rel->relhassubclass)\n+ curpages = 10;\n+\n+ /* report estimated # pages */\n+ *pages = curpages;\n+ /* quick exit if rel is clearly empty */\n+ if (curpages == 0)\n+ {\n+ *tuples = 0;\n+ *allvisfrac = 0;\n+ return;\n+ }\n\nAnd here's what the code in estimate_rel_size looked like before the\ncommit you mention:\n\n if (curpages < 10 &&\n rel->rd_rel->relpages == 0 &&\n !rel->rd_rel->relhassubclass &&\n rel->rd_rel->relkind != RELKIND_INDEX)\n curpages = 10;\n\n /* report estimated # pages */\n *pages = curpages;\n /* quick exit if rel is clearly empty */\n if (curpages == 0)\n {\n *tuples = 0;\n *allvisfrac = 0;\n break;\n }\n\nIt's all the same, except that now that the test is in heap-specific\ncode it no longer needs to test for RELKIND_INDEX.\n\nI may be missing something here, but I don't know what it is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 Jun 2019 09:17:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "> On 11 Jun 2019, at 15:17, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I may be missing something here, but I don't know what it is.\n\nAfter looking at it closer yesterday I think my original question came from a\nmisunderstanding of the codepath, so I too don’t think there is an issue here\n(unless I’m also missing something).\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 11 Jun 2019 15:23:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On 2019-Jun-11, Robert Haas wrote:\n\n> I may be missing something here, but I don't know what it is.\n\nHuh, you're right, I misread the diff. Thanks for double-checking.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 09:45:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "> On 10 Jun 2019, at 21:35, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Jun 7, 2019 at 6:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Good catch, and now I notice that the callback is not called\n>>> estimate_rel_size but relation_estimate_size. Updated patch attached;\n>>> thanks for the review.\n>> \n>> The commit message still refers to it as estimate_rel_size though. The comment on\n>> table_block_relation_estimate_size does too but that one might be intentional.\n> \n> Oops. New version attached, hopefully fixing those and the compiler\n> warning Alvaro noted.\n\n+1 on this version of the patch, no warning, passes tests and looks good.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 12 Jun 2019 00:22:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-10 15:35:18 -0400, Robert Haas wrote:\n> On Fri, Jun 7, 2019 at 6:42 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > Good catch, and now I notice that the callback is not called\n> > > estimate_rel_size but relation_estimate_size. Updated patch attached;\n> > > thanks for the review.\n> >\n> > The commit message still refers to it as estimate_rel_size though. The comment on\n> > table_block_relation_estimate_size does too but that one might be intentional.\n> \n> Oops. New version attached, hopefully fixing those and the compiler\n> warning Alvaro noted.\n\nJust to understand: What version are you targeting? It seems pretty\nclearly v13 material to me?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:17:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 7:17 PM Andres Freund <andres@anarazel.de> wrote:\n> Just to understand: What version are you targeting? It seems pretty\n> clearly v13 material to me?\n\nMy current plan is to commit this to v13 as soon as the tree opens.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 12 Jun 2019 09:14:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 9:14 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Jun 11, 2019 at 7:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Just to understand: What version are you targeting? It seems pretty\n> > clearly v13 material to me?\n>\n> My current plan is to commit this to v13 as soon as the tree opens.\n\nCommitted.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:15:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: abstracting relation sizing code"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs mentioned on another thread about test coverage, I have noticed\nthat be-gssapi-common.h is not placed at the correct location, even though\nits identification path at the top points to where the file should\nbe:\nhttps://www.postgresql.org/message-id/20190604014630.GH1529@paquier.xyz\n\nThe file has been introduced at its current location as of b0b39f72.\nAny objections to something like the attached?\n\nThanks,\n--\nMichael",
"msg_date": "Fri, 7 Jun 2019 13:34:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "be-gssapi-common.h should be located in src/include/libpq/"
},
{
"msg_contents": "> On 7 Jun 2019, at 06:34, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Any objections to something like the attached?\n\nNo objections to moving the file per the patch.\n\nWhile looking at libpq.h I noticed what seems to be a few nitpicks: the GSS\nfunction prototype isn’t using the common format of having a comment specifying\nthe file it belongs to; ssl_loaded_verify_locations is defined as extern even\nthough it’s only available under USE_SSL (which works fine since it’s only\naccessed under USE_SSL but seems kinda wrong); and FeBeWaitSet is not listed\nunder the pqcomm.c prototypes like how the extern vars from be-secure.c are.\nAll of these are in the attached.\n\ncheers ./daniel",
"msg_date": "Fri, 7 Jun 2019 09:52:26 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: be-gssapi-common.h should be located in src/include/libpq/"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> As mentioned on another thread about test coverage, I have noticed\n> that be-gssapi-common.h is not placed at the correct location, even\n> its its identication path at the top points to where the file should\n> be:\n> https://www.postgresql.org/message-id/20190604014630.GH1529@paquier.xyz\n> \n> The file has been introduced at its current location as of b0b39f72.\n> Any objections to something like the attached?\n\nI'm pretty sure it ended up there just because that's how things are in\nsrc/interfaces/libpq. I don't have any objection to moving it, I had\nreally just been waiting to see where that thread ended up going.\n\nOn a quick look, the patch looks fine to me.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 7 Jun 2019 08:11:07 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: be-gssapi-common.h should be located in src/include/libpq/"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 08:11:07AM -0400, Stephen Frost wrote:\n> I'm pretty sure it ended up there just because that's how things are in\n> src/interfaces/libpq. I don't have any objection to moving it, I had\n> really just been waiting to see where that thread ended up going.\n> \n> On a quick look, the patch looks fine to me.\n\nOK thanks. I have committed this portion of the patch for now. If\nthere are any remaining issues let's take care of them afterwards.\n--\nMichael",
"msg_date": "Sat, 8 Jun 2019 10:21:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-gssapi-common.h should be located in src/include/libpq/"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 09:52:26AM +0200, Daniel Gustafsson wrote:\n> While looking at libpq.h I noticed what seems to be a few nitpicks: the GSS\n> function prototype isn’t using the common format of having a comment specifying\n> the file it belongs to; ssl_loaded_verify_locations is defined as extern even\n> though it’s only available under USE_SSL (which works fine since it’s only\n> accessed under USE_SSL but seems kinda wrong); and FeBeWaitSet is not listed\n> under the pqcomm.c prototypes like how the extern vars from be-secure.c are.\n> All of these are in the attached.\n\nIndeed, this makes the header more consistent. Thanks for noticing.\n--\nMichael",
"msg_date": "Sat, 8 Jun 2019 10:24:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-gssapi-common.h should be located in src/include/libpq/"
},
{
"msg_contents": "On Sat, Jun 08, 2019 at 10:24:39AM +0900, Michael Paquier wrote:\n> Indeed, this makes the header more consistent. Thanks for noticing.\n\nDouble-checked the surroundings, and done.\n--\nMichael",
"msg_date": "Sun, 9 Jun 2019 11:41:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-gssapi-common.h should be located in src/include/libpq/"
}
] |
[
{
"msg_contents": "Hello.\n\nIn guc.c many of the variables are described as \"Set_s_ something\"\nas if the variable name is the omitted subject. A few seem to be\nwrongly written as \"Set something\" with the same intention.\n\nIs it useful to unify them to the majority?\n\nwal_level\n> gettext_noop(\"Set the level of information written to the WAL.\"),\n\nlog_transaction_sample_rate\n> gettext_noop(\"Set the fraction of transactions to log for new transactions.\"),\n\n\nThough, recovery_target seems written as intended.\n\n> gettext_noop(\"Set to 'immediate' to end recovery as soon as a consistent state is reached.\n\n# rather it seems to be the detailed description..\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 7 Jun 2019 17:06:09 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wording variations in descriptions of gucs."
}
] |
[
{
"msg_contents": "Hi,\n\nWe support ALTER TABLE ADD COLUMN .. GENERATED ALWAYS AS .. but the\ndoc is missing it. Attached small patch fixes this.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Fri, 7 Jun 2019 18:07:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Missing generated column in ALTER TABLE ADD COLUMN doc"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 06:07:34PM +0900, Masahiko Sawada wrote:\n> We support ALTER TABLE ADD COLUMN .. GENERATED ALWAYS AS .. but the\n> doc is missing it. Attached small patch fixes this.\n\nYour patch updates the section related to constraint_name. Don't you\nneed an extra line for the \"action\" part?\n--\nMichael",
"msg_date": "Mon, 10 Jun 2019 17:05:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing generated column in ALTER TABLE ADD COLUMN doc"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 5:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jun 07, 2019 at 06:07:34PM +0900, Masahiko Sawada wrote:\n> > We support ALTER TABLE ADD COLUMN .. GENERATED ALWAYS AS .. but the\n> > doc is missing it. Attached small patch fixes this.\n>\n> Your patch updates the section related to constraint_name. Don't you\n> need an extra line for the \"action\" part?\n\nWe already have the following line in action part but you mean we need\nan extra line for that?\n\n ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE\ncollation ] [ column_constraint [ ... ] ]\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 10 Jun 2019 18:09:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing generated column in ALTER TABLE ADD COLUMN doc"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 06:09:53PM +0900, Masahiko Sawada wrote:\n> We already have the following line in action part but you mean we need\n> an extra line for that?\n> \n> ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE\n> collation ] [ column_constraint [ ... ] ]\n\nI was looking at the grammar extensions for ADD GENERATED and noticed\nwhat looked like inconsistencies, but your patch as well as the parsed\nquery are right. Committed, thanks!\n--\nMichael",
"msg_date": "Tue, 11 Jun 2019 13:02:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Missing generated column in ALTER TABLE ADD COLUMN doc"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 1:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jun 10, 2019 at 06:09:53PM +0900, Masahiko Sawada wrote:\n> > We already have the following line in action part but you mean we need\n> > an extra line for that?\n> >\n> > ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE\n> > collation ] [ column_constraint [ ... ] ]\n>\n> I was looking at the grammar extensions for ADD GENERATED and noticed\n> what looked like inconsistencies, but your patch as well as the parsed\n> query are right. Committed, thanks!\n\nThank you!\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 11 Jun 2019 13:49:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Missing generated column in ALTER TABLE ADD COLUMN doc"
}
] |
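For reference, a minimal SQL sketch of the syntax whose documentation the thread above fixes — `ALTER TABLE ... ADD COLUMN ... GENERATED ALWAYS AS`. The table and column names here are made up for illustration; they do not come from the thread.

```sql
-- Hypothetical table; names are illustrative only.
CREATE TABLE orders (qty int, unit_price numeric);

-- The form the doc patch documents: ADD COLUMN with a generated column.
ALTER TABLE orders
    ADD COLUMN total numeric
    GENERATED ALWAYS AS (qty * unit_price) STORED;

INSERT INTO orders (qty, unit_price) VALUES (3, 2.50);
SELECT total FROM orders;  -- 7.50
```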
[
{
"msg_contents": "TableAmRoutine's index_build_range_scan thinks that parameter #8 is\ncalled end_blockno, but table_index_build_range_scan and\nheapam_index_build_range_scan and BRIN's summarize_range all agree\nthat it's the number of blocks to be scanned. I assume that this got\nchanged at some point while Andres was hacking on it and this one\nplace just never got updated.\n\nProposed patch attached. Since this seems like a bug, albeit a\nharmless one, I propose to commit this to v12.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 7 Jun 2019 12:37:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "tableam: inconsistent parameter name"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-07 12:37:33 -0400, Robert Haas wrote:\n> TableAmRoutine's index_build_range_scan thinks that parameter #8 is\n> called end_blockno, but table_index_build_range_scan and\n> heapam_index_build_range_scan and BRIN's summarize_range all agree\n> that it's the number of blocks to be scanned. I assume that this got\n> changed at some point while Andres was hacking on it and this one\n> place just never got updated.\n\nNot sure where it came from :/\n\n> Proposed patch attached. Since this seems like a bug, albeit a\n> harmless one, I propose to commit this to v12.\n\nYea, please do!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 09:52:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam: inconsistent parameter name"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 12:52 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-06-07 12:37:33 -0400, Robert Haas wrote:\n> > TableAmRoutine's index_build_range_scan thinks that parameter #8 is\n> > called end_blockno, but table_index_build_range_scan and\n> > heapam_index_build_range_scan and BRIN's summarize_range all agree\n> > that it's the number of blocks to be scanned. I assume that this got\n> > changed at some point while Andres was hacking on it and this one\n> > place just never got updated.\n>\n> Not sure where it came from :/\n>\n> > Proposed patch attached. Since this seems like a bug, albeit a\n> > harmless one, I propose to commit this to v12.\n>\n> Yea, please do!\n\nI found what appears to be another typo very nearby. Attached please\nfind v2, fixing both issues.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 7 Jun 2019 13:11:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: inconsistent parameter name"
},
{
"msg_contents": "On 2019-06-07 13:11:21 -0400, Robert Haas wrote:\n> I found what appears to be another typo very nearby. Attached please\n> find v2, fixing both issues.\n\nHm, I thinks that's fixed already?\n\n\n",
"msg_date": "Fri, 7 Jun 2019 10:19:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam: inconsistent parameter name"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 1:19 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-06-07 13:11:21 -0400, Robert Haas wrote:\n> > I found what appears to be another typo very nearby. Attached please\n> > find v2, fixing both issues.\n>\n> Hm, I thinks that's fixed already?\n\nOops, you're right.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 13:33:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam: inconsistent parameter name"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15840\nLogged by: Thierry Husson\nEmail address: thusson@informiciel.com\nPostgreSQL version: 12beta1\nOperating system: Ubuntu 18.04.2 LTS\nDescription: \n\nI was doing tables COPY between my old server with PG10.8 and the new one\nwith 12Beta1. After each table is done, I make a vacuum of it.\r\nHowever PG12 has stopped working for wraparound protection. I was doing it\non around 10 cpu, 1 table by cpu.\r\nThe is the end of the log of the copy program in Python3 with Psycopg2:\r\n...\r\n2019-06-06 23:15:26 prog_sync Vacuum\nusr_ops.prg_hrdps_n1500n_voisin_ade_metar... 4s.\r\n2019-06-06 23:15:30 prog_sync Vacuum\nusr_ops.flt_hrdps_n1500n_voisin_ade_metar... 0s.\r\n2019-06-06 23:15:30 prog_sync CPU 0 - Sync done 8857sec.\r\n2019-06-06 23:15:30 prog_sync Tables Skipped:0, Already sync:0, Copied\nfrom pravda:1. Copied from zhen:0.\r\n2019-06-06 23:15:30 prog_sync Sync done for 1 tables of 106451311 records in\n8858s. 
(12018 rec./sec.)\r\n\r\nTraceback (most recent call last):\r\n File \"/home/semt700/emet/script/prog_sync.py\", line 316, in syncTable\r\n ioResult = e.flushCopyBuffer(ioResult, curPG[slave][procId],\nprogTable[slave], columns)\r\n File \"/fs/home/fs1/eccc/cmd/cmdn/semt700/emet/script/emetlib.py\", line\n607, in flushCopyBuffer\r\n cursorObj.copy_from(ioBuffer, tableName, sep='\\t', columns=columnName,\nnull='NULL')\r\npsycopg2.OperationalError: database is not accepting commands to avoid\nwraparound data loss in database \"emet_zhen\"\r\nHINT: Stop the postmaster and vacuum that database in single-user mode.\r\nYou might also need to commit or roll back old prepared transactions, or\ndrop stale replication slots.\r\nCONTEXT: SQL statement \"INSERT INTO\nusr_ops.prg_gdps_g1610n_voisin_ade_synop_swob_metar_201903 SELECT $1.*\"\r\nPL/pgSQL function prog_insert() line 17 at EXECUTE\r\nCOPY prg_gdps_g1610n_voisin_ade_synop_swob_metar, line 132822: \"284532738 \n 2019-03-20 00:00:00 2019-03-29 12:00:00 11011 37980000 \n-101750000 75597472 NULL -5.4617 1 ...\"\r\n\r\nI did a DB shutdown and started a vacuum with:\r\npostgres --single emet_zhen\r\nVACUUM FREEZE VERBOSE;\r\n\r\nIt worked a few hours and when I was thinking it was done as nothing was\nloggin anymore, I made a ctrl-\\ and restarted the DB.\r\nI was still getting wraparound protection messages so I shutdown the DB\nagain & redo the vacuum command but it didn't work anymore:\r\nzhen:semt700 $ postgres --single emet_zhen\r\n2019-06-07 17:23:36 UTC 7251 WARNING: database with OID 16394 must be\nvacuumed within 999995 transactions\r\n2019-06-07 17:23:36 UTC 7251 HINT: To avoid a database shutdown, execute a\ndatabase-wide VACUUM in that database.\r\n You might also need to commit or roll back old prepared\ntransactions, or drop stale replication slots.\r\nPostgreSQL stand-alone backend 12beta1\r\nbackend> VACUUM VERBOSE;\r\n2019-06-07 17:23:59 UTC 7251 WARNING: database \"emet_zhen\" must be\nvacuumed within 
999995 transactions\r\n2019-06-07 17:23:59 UTC 7251 HINT: To avoid a database shutdown, execute a\ndatabase-wide VACUUM in that database.\r\n You might also need to commit or roll back old prepared\ntransactions, or drop stale replication slots.\r\n2019-06-07 17:23:59 UTC 7251 LOG: duration: 2417.639 ms statement: VACUUM\nVERBOSE;\r\n\r\nI tried with various options but none worked. It also tried to restard the\nDB and use vacuumdb --all -v , or various options, but always get the same\nmessage for each table:\r\n\r\nINFO: aggressively vacuuming \"pg_catalog.pg_publication\"\r\nINFO: index \"pg_publication_oid_index\" now contains 0 row versions in 1\npages\r\nDETAIL: 0 index row versions were removed.\r\n0 index pages have been deleted, 0 are currently reusable.\r\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\r\nINFO: index \"pg_publication_pubname_index\" now contains 0 row versions in 1\npages\r\nDETAIL: 0 index row versions were removed.\r\n0 index pages have been deleted, 0 are currently reusable.\r\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\r\nINFO: \"pg_publication\": found 0 removable, 0 nonremovable row versions in 0\nout of 0 pages\r\nDETAIL: 0 dead row versions cannot be removed yet, oldest xmin:\n2146520116\r\nThere were 0 unused item identifiers.\r\nSkipped 0 pages due to buffer pins, 0 frozen pages.\r\n0 pages are entirely empty.\r\nCPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\r\nWARNING: database \"emet_zhen\" must be vacuumed within 999995 transactions\r\nHINT: To avoid a database shutdown, execute a database-wide VACUUM in that\ndatabase.\r\nYou might also need to commit or roll back old prepared transactions, or\ndrop stale replication slots.\r\n\r\nI out of clues of what to try next. 
I already got this situation with PG 9.x\n& PG10.x but system wide in exclusive mode usually worked.\r\n\r\nSeems like a PG12 bug that will certainly prevent us from upgrading even if\nthe new functionalities look really great.\r\n\r\nThanks a lot!\r\n\r\nThierry",
"msg_date": "Fri, 07 Jun 2019 18:22:20 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Hi,\n\nOn 2019-06-07 18:22:20 +0000, PG Bug reporting form wrote:\n> I was doing tables COPY between my old server with PG10.8 and the new one\n> with 12Beta1. After each table is done, I make a vacuum of it.\n> However PG12 has stopped working for wraparound protection. I was doing it\n> on around 10 cpu, 1 table by cpu.\n\nThat was a new postgres 12 cluster, not a pg_upgraded one? And you just\ndid a bunch of COPYs? How many?\n\nI'm not clear as to how the cluster got to wraparound if that's the\nscenario. We use one xid per transaction, and copy doesn't use multiple\ntransactions internally. Any chance you have triggers on these tables\nthat use savepoints internally?\n\n\n> postgres --single emet_zhen\n> VACUUM FREEZE VERBOSE;\n\nDon't FREEZE in wraparound cases, that just makes it take longer.\n\n\n> It worked a few hours and when I was thinking it was done as nothing was\n> loggin anymore, I made a ctrl-\\ and restarted the DB.\n> I was still getting wraparound protection messages so I shutdown the DB\n> again & redo the vacuum command but it didn't work anymore:\n\n> zhen:semt700 $ postgres --single emet_zhen\n> 2019-06-07 17:23:36 UTC 7251 WARNING: database with OID 16394 must be\n> vacuumed within 999995 transactions\n> 2019-06-07 17:23:36 UTC 7251 HINT: To avoid a database shutdown, execute a\n> database-wide VACUUM in that database.\n> You might also need to commit or roll back old prepared\n> transactions, or drop stale replication slots.\n> PostgreSQL stand-alone backend 12beta1\n> backend> VACUUM VERBOSE;\n> 2019-06-07 17:23:59 UTC 7251 WARNING: database \"emet_zhen\" must be\n> vacuumed within 999995 transactions\n> 2019-06-07 17:23:59 UTC 7251 HINT: To avoid a database shutdown, execute a\n> database-wide VACUUM in that database.\n> You might also need to commit or roll back old prepared\n> transactions, or drop stale replication slots.\n> 2019-06-07 17:23:59 UTC 7251 LOG: duration: 2417.639 ms statement: VACUUM\n> VERBOSE;\n\nWhat 
do you mean by \"didn't work anymore\"? As far as I can tell the\nVACUUM here succeeded?\n\n\n> HINT: To avoid a database shutdown, execute a database-wide VACUUM in that\n> database.\n> You might also need to commit or roll back old prepared transactions, or\n> drop stale replication slots.\n\nDid you check whether any of these are the case?\n\nSELECT * FROM pg_replication_slots;\nSELECT * FROM pg_prepared_xacts;\n\nCould you also show\n\nSELECT oid, datname, datfrozenxid, age(datfrozenxid), datminmxid, mxid_age(datminmxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;\nSELECT * FROM pg_control_checkpoint();\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 12:02:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Hi Andres,\n\nThank you for your anwser. Precisions bellow:\n\nAndres Freund <andres@anarazel.de> a écrit :\n\n> Hi,\n>\n> On 2019-06-07 18:22:20 +0000, PG Bug reporting form wrote:\n>> I was doing tables COPY between my old server with PG10.8 and the new one\n>> with 12Beta1. After each table is done, I make a vacuum of it.\n>> However PG12 has stopped working for wraparound protection. I was doing it\n>> on around 10 cpu, 1 table by cpu.\n>\n> That was a new postgres 12 cluster, not a pg_upgraded one? And you just\n> did a bunch of COPYs? How many?\n>\n> I'm not clear as to how the cluster got to wraparound if that's the\n> scenario. We use one xid per transaction, and copy doesn't use multiple\n> transactions internally. Any chance you have triggers on these tables\n> that use savepoints internally?\n\nYes it was a new cluster. Around 30 copy were done.\nYes there is a trigger to manage partitions. Around 1200 tables were \ncreated. 10 billions records transfered, I need to tranfert 180BR over \n1700 tables.\nI just realize I made vacuum on partitions for the first 8BR rows and \nforgot for the last 2BR That would explain the wraparound protection.\n\n>\n>\n>> postgres --single emet_zhen\n>> VACUUM FREEZE VERBOSE;\n>\n> Don't FREEZE in wraparound cases, that just makes it take longer.\n>\n>\n>> It worked a few hours and when I was thinking it was done as nothing was\n>> loggin anymore, I made a ctrl-\\ and restarted the DB.\n>> I was still getting wraparound protection messages so I shutdown the DB\n>> again & redo the vacuum command but it didn't work anymore:\n>\n>> zhen:semt700 $ postgres --single emet_zhen\n>> 2019-06-07 17:23:36 UTC 7251 WARNING: database with OID 16394 must be\n>> vacuumed within 999995 transactions\n>> 2019-06-07 17:23:36 UTC 7251 HINT: To avoid a database shutdown, execute a\n>> database-wide VACUUM in that database.\n>> You might also need to commit or roll back old prepared\n>> transactions, or drop stale replication 
slots.\n>> PostgreSQL stand-alone backend 12beta1\n>> backend> VACUUM VERBOSE;\n>> 2019-06-07 17:23:59 UTC 7251 WARNING: database \"emet_zhen\" must be\n>> vacuumed within 999995 transactions\n>> 2019-06-07 17:23:59 UTC 7251 HINT: To avoid a database shutdown, execute a\n>> database-wide VACUUM in that database.\n>> You might also need to commit or roll back old prepared\n>> transactions, or drop stale replication slots.\n>> 2019-06-07 17:23:59 UTC 7251 LOG: duration: 2417.639 ms statement: VACUUM\n>> VERBOSE;\n>\n> What do you mean by \"didn't work anymore\"? As far as I can tell the\n> VACUUM here succeeded?\n>\n>\n>> HINT: To avoid a database shutdown, execute a database-wide VACUUM in that\n>> database.\n>> You might also need to commit or roll back old prepared transactions, or\n>> drop stale replication slots.\n>\n> Did you check whether any of these are the case?\n>\n> SELECT * FROM pg_replication_slots;\n> SELECT * FROM pg_prepared_xacts;\nThese are empty.\n\nemet_zhen=# SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * \n100.0 AS \"Percentage of transaction ID's used\" FROM pg_database;\n Percentage of transaction ID's used\n-------------------------------------\n 99.953434057533740997000\n\n>\n> Could you also show\n>\n> SELECT oid, datname, datfrozenxid, age(datfrozenxid), datminmxid, \n> mxid_age(datminmxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;\n oid | datname | datfrozenxid | age | datminmxid | mxid_age\n-------+-----------+--------------+------------+------------+----------\n 16394 | emet_zhen | 36464 | 2146483652 | 1 | 0\n 12672 | template0 | 504982897 | 1641537219 | 1 | 0\n 12673 | postgres | 2096520116 | 50000000 | 1 | 0\n 1 | template1 | 2096520116 | 50000000 | 1 | 0\n\n> SELECT * FROM pg_control_checkpoint();\n checkpoint_lsn | redo_lsn | redo_wal_file | \ntimeline_id | prev_timeline_id | full_page_writes | next_xid | \nnext_oid | next_multixact_id | next_multi_offset | oldest_xid | \noldest_xid_dbid | oldest_active_xid | 
oldest_multi_xid | \noldest_multi_dbid | oldest_commit_ts_xid | newest_commit_ts_xid | \ncheckpoint_time\n 32D/54074EC0 | 32D/54074E88 | 000000010000032D00000054 | \n1 | 1 | t | 0:2146520116 | 475782 | \n 1 | 0 | 36464 | 16394 | \n 2146520116 | 1 | 16394 | \n 0 | 0 | 2019-06-07 18:11:39+00\n(1 row)\n\nCould it be that PG12 considers \"vacuum\" as a transaction and trigger \nwraparound protection against it?\n\n\n>\n> Greetings,\n>\n> Andres Freund\n\n\n\n\n\n",
"msg_date": "Fri, 07 Jun 2019 14:59:11 -0500",
"msg_from": "Thierry Husson <thusson@informiciel.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Hi,\n\nOn 2019-06-07 14:59:11 -0500, Thierry Husson wrote:\n> Thank you for your anwser. Precisions bellow:\n> Andres Freund <andres@anarazel.de> a écrit :\n> > On 2019-06-07 18:22:20 +0000, PG Bug reporting form wrote:\n> > > I was doing tables COPY between my old server with PG10.8 and the new one\n> > > with 12Beta1. After each table is done, I make a vacuum of it.\n> > > However PG12 has stopped working for wraparound protection. I was doing it\n> > > on around 10 cpu, 1 table by cpu.\n> > \n> > That was a new postgres 12 cluster, not a pg_upgraded one? And you just\n> > did a bunch of COPYs? How many?\n> > \n> > I'm not clear as to how the cluster got to wraparound if that's the\n> > scenario. We use one xid per transaction, and copy doesn't use multiple\n> > transactions internally. Any chance you have triggers on these tables\n> > that use savepoints internally?\n> \n> Yes it was a new cluster. Around 30 copy were done.\n> Yes there is a trigger to manage partitions. Around 1200 tables were\n> created. 10 billions records transfered, I need to tranfert 180BR over 1700\n> tables.\n> I just realize I made vacuum on partitions for the first 8BR rows and forgot\n> for the last 2BR That would explain the wraparound protection.\n\nDo those triggers use savepoints / EXCEPTION handling?\n\nMight be worthwhile to check - independent of this issue - if you still\nneed the partition handling via trigger, now that pg's builtin\npartitioning can handle COPY (and likely *much* faster).\n\n\n> > Could you also show\n> > \n> > SELECT oid, datname, datfrozenxid, age(datfrozenxid), datminmxid,\n> > mxid_age(datminmxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;\n> oid | datname | datfrozenxid | age | datminmxid | mxid_age\n> -------+-----------+--------------+------------+------------+----------\n> 16394 | emet_zhen | 36464 | 2146483652 | 1 | 0\n\nOk, so it's xids, and clearly not multixids. 
Could you connect to\nemet_zhen and show the output of:\n\nSELECT oid, oid::regclass, relkind, relfrozenxid, age(relfrozenxid) FROM pg_class WHERE relfrozenxid <> 0 AND age(relfrozenxid) > 1800000000 ORDER BY age(relfrozenxid) DESC;\n\nthat will tell us which relations need to be vacuumed, and then we can\nsee why that doesn't work.\n\n\n> Could it be that PG12 considers \"vacuum\" as a transaction and trigger\n> wraparound protection against it?\n\nI'm still somewhat confused - the output you showed didn't include\nvacuum failing, as far as I can tell?\n\n- Andres\n\n\n",
"msg_date": "Fri, 7 Jun 2019 13:10:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Thanks again Andres,\n\nAndres Freund <andres@anarazel.de> a écrit :\n\n> Hi,\n>\n> On 2019-06-07 14:59:11 -0500, Thierry Husson wrote:\n>> Thank you for your anwser. Precisions bellow:\n>> Andres Freund <andres@anarazel.de> a écrit :\n>> > On 2019-06-07 18:22:20 +0000, PG Bug reporting form wrote:\n>> > > I was doing tables COPY between my old server with PG10.8 and \n>> the new one\n>> > > with 12Beta1. After each table is done, I make a vacuum of it.\n>> > > However PG12 has stopped working for wraparound protection. I \n>> was doing it\n>> > > on around 10 cpu, 1 table by cpu.\n>> >\n>> > That was a new postgres 12 cluster, not a pg_upgraded one? And you just\n>> > did a bunch of COPYs? How many?\n>> >\n>> > I'm not clear as to how the cluster got to wraparound if that's the\n>> > scenario. We use one xid per transaction, and copy doesn't use multiple\n>> > transactions internally. Any chance you have triggers on these tables\n>> > that use savepoints internally?\n>>\n>> Yes it was a new cluster. Around 30 copy were done.\n>> Yes there is a trigger to manage partitions. Around 1200 tables were\n>> created. 
10 billions records transfered, I need to tranfert 180BR over 1700\n>> tables.\n>> I just realize I made vacuum on partitions for the first 8BR rows and forgot\n>> for the last 2BR That would explain the wraparound protection.\n>\n> Do those triggers use savepoints / EXCEPTION handling?\n>\n> Might be worthwhile to check - independent of this issue - if you still\n> need the partition handling via trigger, now that pg's builtin\n> partitioning can handle COPY (and likely *much* faster).\n\nYes, those triggers use exception handling (if partition doesn't \nexist, create it) but no savepoint.\nThanks for the suggestion, I take that in note!\n\n>> > Could you also show\n>> >\n>> > SELECT oid, datname, datfrozenxid, age(datfrozenxid), datminmxid,\n>> > mxid_age(datminmxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;\n>> oid | datname | datfrozenxid | age | datminmxid | mxid_age\n>> -------+-----------+--------------+------------+------------+----------\n>> 16394 | emet_zhen | 36464 | 2146483652 | 1 | 0\n>\n> Ok, so it's xids, and clearly not multixids. 
Could you connect to\n> emet_zhen and show the output of:\n>\n> SELECT oid, oid::regclass, relkind, relfrozenxid, age(relfrozenxid) \n> FROM pg_class WHERE relfrozenxid <> 0 AND age(relfrozenxid) > \n> 1800000000 ORDER BY age(relfrozenxid) DESC;\n> that will tell us which relations need to be vacuumed, and then we can\n> see why that doesn't work.\n>> Could it be that PG12 considers \"vacuum\" as a transaction and trigger\n>> wraparound protection against it?\n>\n> I'm still somewhat confused - the output you showed didn't include\n> vacuum failing, as far as I can tell?\n>\n> - Andres\n\n oid | oid | relkind | \nrelfrozenxid | age\n--------+--------------------------------------+---------+--------------+------------\n 460564 | pg_temp_3.cur_semt700_progsync_4996 | r | \n36464 | 2146483652\n 460764 | pg_temp_8.cur_semt700_progsync_5568 | r | \n19836544 | 2126683572\n 460718 | pg_temp_4.cur_semt700_progsync_5564 | r | \n19836544 | 2126683572\n 460721 | pg_temp_5.cur_semt700_progsync_5565 | r | \n19836544 | 2126683572\n 461068 | pg_temp_22.cur_semt700_progsync_5581 | r | \n19836544 | 2126683572\n\nThese are temporary tables to manage concurrency & server load. It \nseems the sudden disconnection due to wraparound protection didn't get \nthem removed. I removed them manually under single mode and there is \nno more warning now, vacuum command included. Your command is very \ninteresting to know.\n\nIt's annoying that PG creates an XID for empty temporary tables. You can't \nclear it with a vacuum as there is no record. I have to terminate \nconnections of my daemon processes daily to avoid wraparound \nprotection. Is there a way to tell PG to forget these tables on its \nage estimation?\n\nThank you so much Andres! You saved me!\n\nThierry\n\n\n\n\n",
"msg_date": "Fri, 07 Jun 2019 16:40:27 -0500",
"msg_from": "Thierry Husson <thusson@informiciel.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Hi,\n\nOn 2019-06-07 16:40:27 -0500, Thierry Husson wrote:\n> oid | oid | relkind | relfrozenxid |\n> age\n> --------+--------------------------------------+---------+--------------+------------\n> 460564 | pg_temp_3.cur_semt700_progsync_4996 | r | 36464 |\n> 2146483652\n> 460764 | pg_temp_8.cur_semt700_progsync_5568 | r | 19836544 |\n> 2126683572\n> 460718 | pg_temp_4.cur_semt700_progsync_5564 | r | 19836544 |\n> 2126683572\n> 460721 | pg_temp_5.cur_semt700_progsync_5565 | r | 19836544 |\n> 2126683572\n> 461068 | pg_temp_22.cur_semt700_progsync_5581 | r | 19836544 |\n> 2126683572\n> \n> These are temporary tables to manage concurrency & server load. It seems the\n> sudden disconnection due to wraparound protection didn't get them removed. I\n> removed them manually under single mode and there is no more warning now,\n> vacuum command included. Your command is very interesting to know.\n\nHm. But you do have autovacuum enabled, is that right? If enabled, have\nyou tuned it at all? It seems quite possible that given your load (10\nparallel loads), the default settings werent aggressive enough.\n\n\n> It annoying PG create a xId for empty temporary tables. You can't clear it\n> with a vacuum as there is no record. I have to terminate connexions of my\n> deamon processes daily to avoid wraparound protection. Is there a way to\n> tell PG to forget these tables on its age estimation?\n\nNormally postgres would drop such \"orphaned\" temp tables on its own, in\nautovacuum (triggering it when close to a wraparound, even if\ndisabled). But if it can't keep up for some reason, then that's not\nnecessarily good enough with very rapid xid usage as you seem to have.\n\nI'll start a thread about this subtopic on -hackers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 14:47:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Hi Andres,\n\nAndres Freund <andres@anarazel.de> a écrit :\n\n> Hi,\n>\n> On 2019-06-07 16:40:27 -0500, Thierry Husson wrote:\n>> oid | oid | relkind | relfrozenxid |\n>> age\n>> --------+--------------------------------------+---------+--------------+------------\n>> 460564 | pg_temp_3.cur_semt700_progsync_4996 | r | 36464 |\n>> 2146483652\n>> 460764 | pg_temp_8.cur_semt700_progsync_5568 | r | 19836544 |\n>> 2126683572\n>> 460718 | pg_temp_4.cur_semt700_progsync_5564 | r | 19836544 |\n>> 2126683572\n>> 460721 | pg_temp_5.cur_semt700_progsync_5565 | r | 19836544 |\n>> 2126683572\n>> 461068 | pg_temp_22.cur_semt700_progsync_5581 | r | 19836544 |\n>> 2126683572\n>>\n>> These are temporary tables to manage concurrency & server load. It seems the\n>> sudden disconnection due to wraparound protection didn't get them removed. I\n>> removed them manually under single mode and there is no more warning now,\n>> vacuum command included. Your command is very interesting to know.\n>\n> Hm. But you do have autovacuum enabled, is that right? If enabled, have\n> you tuned it at all? It seems quite possible that given your load (10\n> parallel loads), the default settings werent aggressive enough.\n\nYes autovacuum is enabled. Aggressiveness was effectively a recent \nproblem I had and putting its max_worker to 8 wasn't a solution, there \nwere all busy 24/7 and I had to do a daily script to help it. The \nsolution was to push vacuum_cost_limit to 2000, since then it works \nlike a charm. Another issue was autovaccuums were taking the lock over \nmy running vacuums, making them waiting for 5 days instead of taking \naround 1 hour. I could do another post on that but it's not PG12 \nspecific, I have it with 10.x\n\n>> It annoying PG create a xId for empty temporary tables. You can't clear it\n>> with a vacuum as there is no record. I have to terminate connexions of my\n>> deamon processes daily to avoid wraparound protection. 
Is there a way to\n>> tell PG to forget these tables on its age estimation?\n>\n> Normally postgres would drop such \"orphaned\" temp tables on its own, in\n> autovacuum (triggering it when close to a wraparound, even if\n> disabled). But if it can't keep up for some reason, then that's not\n> necessarily good enough with very rapid xid usage as you seem to have.\n>\n> I'll start a thread about this subtopic on -hackers.\n> Greetings,\n>\n> Andres Freund\n\nWhat is the link to this forum? I'm very very interested to follow \nthat subtopic & I could make some tests if necessary.\n\nHave a great weekend & thanks for your time :)\n\nThierry\n\n\n\n\n\n",
"msg_date": "Fri, 07 Jun 2019 17:49:52 -0500",
"msg_from": "Thierry Husson <thusson@informiciel.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
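A hedged sketch of the tuning Thierry describes in the message above. The values are his, but the right numbers are workload-dependent; note that `vacuum_cost_limit` can take effect on a reload, while `autovacuum_max_workers` only takes effect after a server restart.

```sql
-- Sketch only: raise the shared cost limit so additional autovacuum
-- workers actually go faster, instead of splitting the same I/O budget.
ALTER SYSTEM SET vacuum_cost_limit = 2000;      -- picked up on reload
ALTER SYSTEM SET autovacuum_max_workers = 8;    -- requires a restart
SELECT pg_reload_conf();
```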
{
"msg_contents": "Hi,\n\n(Moving a part of this discussion to hackers)\n\nIn #15840 Thierry had the situation that autovacuum apparently could not\nkeep up, and he ended up with a wraparound situation. Following the\nhints and shutting down the cluster and vacuuming the relevant DB in\nsingle user mode did not resolve the issue however. That's because there\nwas a session with temp tables:\n\nOn 2019-06-07 16:40:27 -0500, Thierry Husson wrote:\n> oid | oid | relkind | relfrozenxid |\n> age\n> --------+--------------------------------------+---------+--------------+------------\n> 460564 | pg_temp_3.cur_semt700_progsync_4996 | r | 36464 |\n> 2146483652\n> 460764 | pg_temp_8.cur_semt700_progsync_5568 | r | 19836544 |\n> 2126683572\n> 460718 | pg_temp_4.cur_semt700_progsync_5564 | r | 19836544 |\n> 2126683572\n> 460721 | pg_temp_5.cur_semt700_progsync_5565 | r | 19836544 |\n> 2126683572\n> 461068 | pg_temp_22.cur_semt700_progsync_5581 | r | 19836544 |\n> 2126683572\n> \n> These are temporary tables to manage concurrency & server load. It seems the\n> sudden disconnection due to wraparound protection didn't get them removed. I\n> removed them manually under single mode and there is no more warning now,\n> vacuum command included. Your command is very interesting to know.\n\nAnd our logic for dropping temp tables only kicks in autovacuum, but not\nin a database manual VACUUM.\n\nWhich means that currently the advice we give, namely to shut down and\nvacuum the database in singleuser mode plainly doesn't work. Without any\nwarnings hinting in the right direction.\n\nDo we need to move the orphan temp cleanup code into database vacuums or\nsuch?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 15:58:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Temp table handling after anti-wraparound shutdown (Was: BUG #15840)"
},
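A hedged sketch (not from the thread) of how the manual inspection discussed above can be done: listing temporary-schema relations by XID age, in the spirit of the pg_class query Andres used earlier. This only lists candidates; confirming that a pg_temp_N schema is truly orphaned still requires checking that no live backend owns slot N.

```sql
-- List temporary relations with their transaction-ID age, oldest first.
-- Candidates only: a pg_temp_N schema is orphaned only if no active
-- backend currently owns temp-namespace slot N.
SELECT c.oid::regclass      AS rel,
       n.nspname            AS temp_schema,
       age(c.relfrozenxid)  AS xid_age
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relpersistence = 't'   -- temporary relations
  AND c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC;
```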
{
"msg_contents": "Hi,\n\nOn 2019-06-07 17:49:52 -0500, Thierry Husson wrote:\n> Andres Freund <andres@anarazel.de> a écrit :\n> > I'll start a thread about this subtopic on -hackers.\n\n> What is the link to this forum? I'm very very interested to follow that\n> subtopic & I could make some tests if necessary.\n\nIt's now (was interrupted by something else) at:\nhttps://postgr.es/m/20190607225843.z73jqqyy6hhc6qnp%40alap3.anarazel.de\n\nand you're CCed in the discussion.\n\nHave a nice weekend as well!\n\nAndres\n\n\n",
"msg_date": "Fri, 7 Jun 2019 16:01:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 03:58:43PM -0700, Andres Freund wrote:\n> Do we need to move the orphan temp cleanup code into database vacuums or\n> such?\n\nWhen entering into the vacuum() code path for an autovacuum, only one\nrelation at a time is processed, and we have, prior to that, extra\nprocessing related to toast relations when selecting the relations to\nwork on, or potentially delete orphaned temp tables. For a manual\nvacuum, we finish by deciding which relation to process in\nget_all_vacuum_rels(), so the localized processing is a bit different\nthan what's done in do_autovacuum() when scanning pg_class for\nrelations. \n\nTechnically, I think that it would work to give up on the separate\ngathering of the orphaned OIDs and let them be gathered in the list\nof items to vacuum, and then put the deletion logic down to\nvacuum_rel(). However, there is a catch: for autovacuum we gather the\norphaned entries and the other relations to process, then drop all the\norphaned OIDs, and finally vacuum/analyze the entries collected. So\nif you put the deletion logic down into vacuum_rel() then we won't be\nable to drop orphaned tables before working on a database, which would\nbe bad if we know about an orphaned set, but autovacuum works for a\nlong time on other legit entries first.\n--\nMichael",
"msg_date": "Sat, 8 Jun 2019 08:59:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
},
{
"msg_contents": "On 2019-Jun-07, Thierry Husson wrote:\n\n> Yes autovacuum is enabled. Aggressiveness was effectively a recent problem I\n> had and putting its max_worker to 8 wasn't a solution, there were all busy\n> 24/7 and I had to do a daily script to help it. The solution was to push\n> vacuum_cost_limit to 2000, since then it works like a charm.\n\nNote the I/O cost balancing thing, which seems to bite many people: if\nyou raise max_workers without changing cost_delay or cost_limit, it\ndoesn't have much of an effect, because each worker goes slower to\naccommodate. Raising the cost limit (or lowering the cost delay) does\nhave a useful impact. In pg12 we changed the default cost_delay to 2ms\n(from 20ms).\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 20:25:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15840: Vacuum does not work after database stopped for\n wraparound protection. Database seems unrepearable."
},
{
"msg_contents": "Hi,\n\nOn 2019-06-08 08:59:37 +0900, Michael Paquier wrote:\n> On Fri, Jun 07, 2019 at 03:58:43PM -0700, Andres Freund wrote:\n> > Do we need to move the orphan temp cleanup code into database vacuums or\n> > such?\n> \n> When entering into the vacuum() code path for an autovacuum, only one\n> relation at a time is processed, and we have prior that extra\n> processing related to toast relations when selecting the relations to\n> work on, or potentially delete orphaned temp tables. For a manual\n> vacuum, we finish by deciding which relation to process in\n> get_all_vacuum_rels(), so the localized processing is a bit different\n> than what's done in do_autovacuum() when scanning pg_class for\n> relations.\n\nYea, I know. I didn't mean that we should only handle orphan cleanup\nonly within database wide vacuums, just *also* there. ISTM that'd mean\nthat at least some of the code ought to be in vacuum.c, and then also\ncalled by autovacuum.c.\n\n\n> Technically, I think that it would work to give up on the gathering of\n> the orphaned OIDs in a gathering and let them be gathered in the list\n> of items to vacuum, and then put the deletion logic down to\n> vacuum_rel().\n\nI don't think it makes much sense to go there. The API would probably\nlook pretty bad.\n\nI was more thinking that we'd move the check for orphaned-ness into a\nseparate function (maybe IsOrphanedRelation()), and move the code to\ndrop orphan relations into a separate function (maybe\nDropOrphanRelations()). That'd limit the amount of code duplication for\ndoing this both in autovacuum and all-database vacuums quite\nconsiderably.\n\nA more aggressive approach would be to teach vac_update_datfrozenxid()\nto ignore orphaned temp tables - perhaps even by heap_inplace'ing an\norphaned table's relfrozenxid/relminmxid with InvalidTransactionId. 
We'd\nnot want to do that in do_autovacuum() - otherwise the schema won't get\ncleaned up, but for database-wide vacuums that seems like it could be a\ngood approach.\n\n\n\nRandom observation while re-reading this code: Having do_autovacuum()\nand ExecVacuum() both go through vacuum() seems like it adds too much\ncomplexity to be worth it. Like half of the file is only concerned with\nchecks related to that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Jun 2019 17:26:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 05:26:32PM -0700, Andres Freund wrote:\n> I was more thinking that we'd move the check for orphaned-ness into a\n> separate function (maybe IsOrphanedRelation()), and move the code to\n> drop orphan relations into a separate function (maybe\n> DropOrphanRelations()). That'd limit the amount of code duplication for\n> doing this both in autovacuum and all-database vacuums quite\n> considerably.\n\nA separation makes sense. At some point we should actually try to\nseparate vacuum and orphan relation cleanup, so separate functions\nmake sense. The only reason why we are doing it with autovacuum is\nthat it is the only thing in-core spawning a worker connected to a\ndatabase which does a full scan of pg_class.\n--\nMichael",
"msg_date": "Sat, 8 Jun 2019 10:45:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
},
{
"msg_contents": "I like the approach proposed by Andres: A more aggressive approach \nwould be to teach vac_update_datfrozenxid() to ignore orphaned temp \ntables... In fact, I suppose all temporary tables and their content \ncould be completely ignored by MVCC principles as they are not subject \nto concurrency, being unmodifiable/unreadable by other connections.\n\nThat would solve a major problem I have because I automatically create \nan empty temporary table for each connection in each DB process to \nmanage users' activities/system resources. Even when everything goes \nwell, these tables take age as long as they exist, even if I \nexplicitly vacuum them, frozen or not. So any connection kept open for \na long time will finish by causing an anti-wraparound shutdown. For now \nthe only solution I have is to kill my daemons' connections every day.\n\nI suppose this could be tested by a simple PSQL left open after a \nCREATE TEMP TABLE toto (a INT). Any vacuum can't reduce its age.\n\nThe separate situation, as noted by Michael, could be done at \nconnection time, when PG gives a temporary schema to it. When it creates \na pg_temp_XXX schema, it could make sure it's completely empty and \notherwise remove everything in it. I already had a DB corruption \nbecause system tables weren't in sync about these tables/schemas after \na badly closed connection, so it was impossible to make a drop table \non them. So it could be even safer to clear everything directly from \nsystem tables instead of calling drop table for each leftover temp \ntable.\n\nThierry\n\nMichael Paquier <michael@paquier.xyz> a écrit :\n\n> On Fri, Jun 07, 2019 at 05:26:32PM -0700, Andres Freund wrote:\n>> I was more thinking that we'd move the check for orphaned-ness into a\n>> separate function (maybe IsOrphanedRelation()), and move the code to\n>> drop orphan relations into a separate function (maybe\n>> DropOrphanRelations()). 
That'd limit the amount of code duplication for\n>> doing this both in autovacuum and all-database vacuums quite\n>> considerably.\n>\n> A separation makes sense. At some point we should actually try to\n> separate vacuum and orphan relation cleanup, so separate functions\n> make sense. The only reason why we are doing it with autovacuum is\n> that it is the only thing in-core spawning a worker connected to a\n> database which does a full scan of pg_class.\n> --\n> Michael\n\n\n\n\n\n",
"msg_date": "Sat, 08 Jun 2019 04:06:39 -0500",
"msg_from": "Thierry Husson <thusson@informiciel.com>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
},
{
"msg_contents": "Hi,\n\n(on postgres lists, please do not top-quote).\n\nOn 2019-06-08 04:06:39 -0500, Thierry Husson wrote:\n> In fact, I suppose all temporary tables and their content could be\n> completly ignored by MVCC principles as they are not subject to\n> concurrency being unmodifiable/unreadable by other connections.\n\nThat'd cause corruption, because vacuum would then remove resources that\nthe temp table might rely on (commit log, multixacts, ...).\n\n\n> The separate situation, as noted by Michael, could be done at connection\n> time, when PG gives a temporay schema to it. When it create a pg_temp_XXX\n> schema, it could make sure it's completely empty and otherwise remove\n> everything in it.\n\nThat already happens, but unfortunately only too late. IIRC We only do\nso once the first temp table in a session is created.\n\n\n> I already had a DB corruption because system tables weren't in sync\n> about these tables/schemas after a badly closed connection, so it was\n> impossible to make a drop table on them. So it could be even safer to\n> clear everything directly from system tables instead of calling drop\n> table for each leftover temp table.\n\nHm, I'd like to know more about that corruption. Did you report it when\nit occured?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Jun 2019 14:31:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
},
{
"msg_contents": "> Hm, I'd like to know more about that corruption. Did you report it when\n> it occured?\n>\n> Greetings,\n>\n> Andres Freund\n\nThanks Andres for explanations, sorry for my previous mess. I didn't \nreport the corruption when it occurred as it was my fault, not a PG \nbug, as the main cause was that I was using a network drive, knowing \nit's unreliable for DB but management didn't believe me.\n\nI had these kinds of errors:\n\npg_dump emet_istina -F c -n usr_...\npg_dump: schema with OID 308991 does not exist\n\n\\dt+ pg_temp*.*\nERROR: catalog is missing 1 attribute(s) for relid 5733555\n\ndrop schema pg_temp_9;\nERROR: cache lookup failed for relation 5733715\n\ndrop schema pg_temp_6;\nERROR: cannot drop schema pg_temp_6 because other objects depend on it\nDETAIL: table pg_temp_6.cur_dde000_105577 depends on schema pg_temp_6\nHINT: Use DROP ... CASCADE to drop the dependent objects too.\n\nI had to manually remove/edit records from pg_class, pg_type, pg_namespace,\npg_depend, pg_shdepend.\n\nI finally managed to make it work and could dump everything and \nrebuild the DB for more security. Server was down for 1 week, and that \nevent gave me proven arguments to have local storage. That was with \n9.6 and I took the opportunity to upgrade to 10.3 at the same time.\n\nNow it's more clear it's a PG9/10/12 problem (didn't try 11) with \nvacuum/autovacuum not changing xid on temp tables. So, as long as a temp \ntable exists, it takes age and finishes by triggering wraparound protection.\n\nThierry\n\n\n\n\n",
"msg_date": "Mon, 10 Jun 2019 18:45:38 -0500",
"msg_from": "Thierry Husson <thusson@informiciel.com>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
},
{
"msg_contents": "On Sat, Jun 8, 2019 at 9:26 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> A more aggressive approach would be to teach vac_update_datfrozenxid()\n> to ignore orphaned temp tables - perhaps even by heap_inplace'ing an\n> orphaned table's relfrozenxid/relminmxid with InvalidTransactionId. We'd\n> not want to do that in do_autovacuum() - otherwise the schema won't get\n> cleaned up, but for database widevacuums that seems like it could be\n> good approach.\n>\n\nFWIW I like this approach. We don't need to calculate a new datfrozenxid\nwhile including orphaned temp tables. It both improves behavior and\nfixes this issue. Also, with that approach we will not need to stop the\ndatabase cluster and do vacuuming in single-user mode. Making the\nvacuum command clean up orphaned temp tables would be helpful in the\ncase where we reached wraparound while having active temp tables,\nthough that doesn't happen in the normal use case.\n\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 11 Jun 2019 21:08:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Temp table handling after anti-wraparound shutdown (Was: BUG\n #15840)"
}
] |
[
{
"msg_contents": "I spent some time today studying heapam_index_build_range_scan and\nquickly reached the conclusion that it's kind of a mess. At heart\nit's pretty simple: loop over all the table, check each tuple against\nany qual, and pass the visible ones to the callback. However, in an\nattempt to make it cater to various needs slightly outside of its\noriginal design purpose, various warts have been added, and there are\nenough of them now that I at least find it fairly difficult to\nunderstand. One of those warts is anyvisible, which I gather was\nadded in support of BRIN.\n\nI first spent some time looking at how the 'anyvisible' flag affects\nthe behavior of the function. AFAICS, setting the flag to true results\nin three behavior changes:\n\n1. The elog(WARNING, ...) calls about a concurrent insert/delete\nin progress can't be reached.\n2. In some cases, reltuples += 1 might not occur where it would've\nhappened otherwise.\n3. If we encounter a HOT-updated tuple which was deleted by our own\ntransaction, we index it instead of skipping it.\n\nChange #2 doesn't matter because the only caller that passes\nanyvisible = true seems to be BRIN, and BRIN ignores the return value.\nI initially thought that change #1 must not matter either, because the\nfunction has comments in several places saying that the caller must\nhold ShareLock or better. And I thought change #3 must also not\nmatter, because as the comments explain, this function is used to\nbuild indexes, and if our CREATE INDEX command commits, then any\ndeletions that it has already performed will commit too, so the fact\nthat we haven't indexed the now-deleted tuples will be fine. Then I\nrealized that brin_summarize_new_values() is calling this function\n*without* ShareLock and *not* for the purpose of creating a new\nindex but rather for the purpose of updating an existing index, which\nmeans #1 and #3 do matter after all. 
But I think it's kind of\nconfusing because anyvisible doesn't change anything about which\ntuples are visible. SnapshotAny is already making \"any\" tuple\n\"visible.\" This flag really means \"caller is holding a\nlower-than-normal lock level and is not inserting into a brand new\nrelfilenode\".\n\nThere may be more than just a cosmetic problem here, because the comments say:\n\n * It might look unsafe to use this information across buffer\n * lock/unlock. However, we hold ShareLock on the table so no\n * ordinary insert/update/delete should occur; and we hold pin on the\n * buffer continuously while visiting the page, so no pruning\n * operation can occur either.\n\nIn the BRIN case that doesn't apply; I don't know whether this is safe\nin that case for some other reason.\n\nI also note that amcheck's bt_check_every_level can also call this\nwithout ShareLock. It doesn't need to set anyvisible because passing a\nsnapshot bypasses the WARNINGs anyway, but it might have whatever\nproblem the above comment is thinking about.\n\nAlso, it's just cosmetic, but this comment definitely needs updating:\n\n /*\n * We could possibly get away with not locking the buffer here,\n * since caller should hold ShareLock on the relation, but let's\n * be conservative about it. (This remark is still correct even\n * with HOT-pruning: our pin on the buffer prevents pruning.)\n */\n LockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);\n\nOne more thing. Assuming that there are no live bugs here, or that we\nfix them, another possible simplification would be to remove the\nanyvisible = true flag and have BRIN pass SnapshotNonVacuumable.\nSnapshotNonVacuumable returns true when HeapTupleSatisfiesVacuum\ndoesn't return HEAPTUPLE_DEAD, so I think we'd get exactly the same\nbehavior (again, modulo reltuples, which doesn't matter).\nheap_getnext() would perform functionally the same check as the\nbespoke code internally, and just wouldn't return the dead tuples in\nthe first place. 
There's an assertion that would trip, but we could\nprobably just change it. BRIN's callback might also get a different\nvalue for tupleIsAlive in some cases, but it ignores that value\nanyway.\n\nSo to summarize:\n\n1. Is this function really safe with < ShareLock? Both BRIN and\namcheck think so, but the function itself isn't sure. If yes, we need\nto adapt the comments. If no, we need to think about some other fix.\n\n2. anyvisible is a funny name given what the flag really does. Maybe\nwe can simplify by replacing it with SnapshotNonVacuumable().\nOtherwise maybe we should rename the flag.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 16:18:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On 2019-Jun-07, Robert Haas wrote:\n\n> I spent some time today studying heapam_index_build_range_scan and\n> quickly reached the conclusion that it's kind of a mess. At heart\n> it's pretty simple: loop over all the table, check each tuple against\n> any qual, and pass the visible ones to the callback. However, in an\n> attempt to make it cater to various needs slightly outside of its\n> original design purpose, various warts have been added, and there are\n> enough of them now that I at least find it fairly difficult to\n> understand. One of those warts is anyvisible, which I gather was\n> added in support of BRIN.\n\nYes, commit 2834855cb9fd added that flag. SnapshotNonVacuumable did not\nexist back then. It seems like maybe it would work to remove the flag\nand replace with passing SnapshotNonVacuumable. The case that caused\nthat flag to be added is tested by a dedicated isolation test, so if\nBRIN becomes broken by the change at least it'd be obvious ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 16:30:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 4:30 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Jun-07, Robert Haas wrote:\n> > I spent some time today studying heapam_index_build_range_scan and\n> > quickly reached the conclusion that it's kind of a mess. At heart\n> > it's pretty simple: loop over all the table, check each tuple against\n> > any qual, and pass the visible ones to the callback. However, in an\n> > attempt to make it cater to various needs slightly outside of its\n> > original design purpose, various warts have been added, and there are\n> > enough of them now that I at least find it fairly difficult to\n> > understand. One of those warts is anyvisible, which I gather was\n> > added in support of BRIN.\n>\n> Yes, commit 2834855cb9fd added that flag. SnapshotNonVacuumable did not\n> exist back then. It seems like maybe it would work to remove the flag\n> and replace with passing SnapshotNonVacuumable. The case that caused\n> that flag to be added is tested by a dedicated isolation test, so if\n> BRIN becomes broken by the change at least it'd be obvious ...\n\nYeah, I wondered whether SnapshotNonVacuumable might've been added\nlater, but I was too lazy to check the commit log. I'll try coding up\nthat approach and see how it looks.\n\nBut do you have any comment on the question of whether this function\nis actually safe with < ShareLock, per the comments about caching\nHOT-related state across buffer lock releases?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 17:11:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On 2019-Jun-07, Robert Haas wrote:\n\n> Yeah, I wondered whether SnapshotNonVacuumable might've been added\n> later, but I was too lazy to check the commit log. I'll try coding up\n> that approach and see how it looks.\n\nThanks.\n\n> But do you have any comment on the question of whether this function\n> is actually safe with < ShareLock, per the comments about caching\n> HOT-related state across buffer lock releases?\n\nWell, as far as I understand we do hold a buffer pin on the page the\nwhole time until we abandon it, which prevents HOT pruning, so the root\noffset cache should be safe (since heap_page_prune requires cleanup\nlock). The thing we don't keep held is a buffer lock, so I/U/D could\noccur, but those are not supposed to be hazards for the BRIN use, since\nthat's covered by the anyvisible / SnapshotNonVacuumable\nhack^Wtechnique.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 17:26:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 1:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I spent some time today studying heapam_index_build_range_scan and\n> quickly reached the conclusion that it's kind of a mess.\n>\n\nWhile at it might be helpful and better to also decouple HeapTuple\ndependency for\nIndexBuildCallback. Currently, all AM needs to build HeapTuple in\nindex_build_range_scan function. I looked into all the callback functions\nand only htup->t_self is used from heaptuple in all the functions (unless I\nmissed something). So, if seems fine will be happy to write patch to make\nthat argument ItemPointer instead of HeapTuple?\n",
"msg_date": "Mon, 10 Jun 2019 13:48:54 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-10 13:48:54 -0700, Ashwin Agrawal wrote:\n> While at it might be helpful and better to also decouple HeapTuple\n> dependency for IndexBuildCallback.\n\nIndeed.\n\n\n> Currently, all AM needs to build HeapTuple in\n> index_build_range_scan function. I looked into all the callback functions\n> and only htup->t_self is used from heaptuple in all the functions (unless I\n> missed something). So, if seems fine will be happy to write patch to make\n> that argument ItemPointer instead of HeapTuple?\n\nI wonder if it should better be the slot? It's not inconceivable that\nsome AMs could benefit from that. Although it'd add some complication\nto the heap HeapTupleIsHeapOnly case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jun 2019 14:56:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 2:56 PM Andres Freund <andres@anarazel.de> wrote:\n\n> > Currently, all AM needs to build HeapTuple in\n> > index_build_range_scan function. I looked into all the callback functions\n> > and only htup->t_self is used from heaptuple in all the functions\n> (unless I\n> > missed something). So, if seems fine will be happy to write patch to make\n> > that argument ItemPointer instead of HeapTuple?\n>\n> I wonder if it should better be the slot? It's not inconceivable that\n> some AMs could benefit from that. Although it'd add some complication\n> to the heap HeapTupleIsHeapOnly case.\n>\n\nI like the slot suggestion, only if can push FormIndexDatum() out of AM\ncode as a result and pass slot to the callback. Not sure what else can it\nhelp. HeapTupleIsHeapOnly possibly can be made to work with slot similar to\ncurrent hack of updating the t_self and slot's tid field, maybe.\n\nIndex callback using the slot can form the index datum. Though that would\nmean every Index AM callback function needs to do it, not sure which way is\nbetter. Plus, need to create ExecutorState for the same. With current setup\nevery AM needs to do. Feels good if belongs to indexing code though instead\nof AM.\n\nCurrently, index build needing to create ExecutorState and all at AM layer\nseems not nice either. Maybe there is quite generic logic here and possible\ncan be extracted into common code which either most of AM leverage. Or\npossibly the API itself can be simplified to get minimum input from AM and\nhave rest of flow/machinery at upper layer. As Robert pointed at start of\nthread at heart its pretty simple flow and possibly will remain same for\nany AM.\n",
"msg_date": "Mon, 10 Jun 2019 17:38:59 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 5:38 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n\n>\n> On Mon, Jun 10, 2019 at 2:56 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>> > Currently, all AM needs to build HeapTuple in\n>> > index_build_range_scan function. I looked into all the callback\n>> functions\n>> > and only htup->t_self is used from heaptuple in all the functions\n>> (unless I\n>> > missed something). So, if seems fine will be happy to write patch to\n>> make\n>> > that argument ItemPointer instead of HeapTuple?\n>>\n>> I wonder if it should better be the slot? It's not inconceivable that\n>> some AMs could benefit from that. Although it'd add some complication\n>> to the heap HeapTupleIsHeapOnly case.\n>>\n>\n> I like the slot suggestion, only if can push FormIndexDatum() out of AM\n> code as a result and pass slot to the callback. Not sure what else can it\n> help. HeapTupleIsHeapOnly possibly can be made to work with slot similar to\n> current hack of updating the t_self and slot's tid field, maybe.\n>\n> Index callback using the slot can form the index datum. Though that would\n> mean every Index AM callback function needs to do it, not sure which way is\n> better. Plus, need to create ExecutorState for the same. With current setup\n> every AM needs to do. Feels good if belongs to indexing code though instead\n> of AM.\n>\n> Currently, index build needing to create ExecutorState and all at AM layer\n> seems not nice either. Maybe there is quite generic logic here and possible\n> can be extracted into common code which either most of AM leverage. Or\n> possibly the API itself can be simplified to get minimum input from AM and\n> have rest of flow/machinery at upper layer. As Robert pointed at start of\n> thread at heart its pretty simple flow and possibly will remain same for\n> any AM.\n>\n>\nPlease find attached the patch to remove IndexBuildCallback's dependency on\nHeapTuple, as discussed. 
Changed to have the argument as ItemPointer\ninstead of HeapTuple. Other larger refactoring if feasible for\nindex_build_range_scan API itself can be performed as follow-up changes.",
"msg_date": "Thu, 11 Jul 2019 17:27:46 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-11 17:27:46 -0700, Ashwin Agrawal wrote:\n> Please find attached the patch to remove IndexBuildCallback's dependency on\n> HeapTuple, as discussed. Changed to have the argument as ItemPointer\n> instead of HeapTuple. Other larger refactoring if feasible for\n> index_build_range_scan API itself can be performed as follow-up changes.\n\n> From f73b0061795f0c320f96ecfed0c0602ae318d73e Mon Sep 17 00:00:00 2001\n> From: Ashwin Agrawal <aagrawal@pivotal.io>\n> Date: Thu, 11 Jul 2019 16:58:50 -0700\n> Subject: [PATCH v1] Remove IndexBuildCallback's dependency on HeapTuple.\n>\n> With IndexBuildCallback taking input as HeapTuple, all table AMs\n> irrespective of storing the tuples in HeapTuple form or not, are\n> forced to construct HeapTuple, to insert the tuple in Index. Since,\n> only thing required by the index callbacks is TID and not really the\n> full tuple, modify callback to only take ItemPointer.\n\nLooks good to me. Planning to apply this unless somebody wants to argue\nagainst it soon?\n\n- Andres\n\n\n",
"msg_date": "Tue, 16 Jul 2019 10:21:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 10:22 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-07-11 17:27:46 -0700, Ashwin Agrawal wrote:\n> > Please find attached the patch to remove IndexBuildCallback's dependency\n> on\n> > HeapTuple, as discussed. Changed to have the argument as ItemPointer\n> > instead of HeapTuple. Other larger refactoring if feasible for\n> > index_build_range_scan API itself can be performed as follow-up changes.\n>\n> > From f73b0061795f0c320f96ecfed0c0602ae318d73e Mon Sep 17 00:00:00 2001\n> > From: Ashwin Agrawal <aagrawal@pivotal.io>\n> > Date: Thu, 11 Jul 2019 16:58:50 -0700\n> > Subject: [PATCH v1] Remove IndexBuildCallback's dependency on HeapTuple.\n> >\n> > With IndexBuildCallback taking input as HeapTuple, all table AMs\n> > irrespective of storing the tuples in HeapTuple form or not, are\n> > forced to construct HeapTuple, to insert the tuple in Index. Since,\n> > only thing required by the index callbacks is TID and not really the\n> > full tuple, modify callback to only take ItemPointer.\n>\n> Looks good to me. Planning to apply this unless somebody wants to argue\n> against it soon?\n>\n\nAndres, I didn't yet register this for next commitfest. If its going in\nsoon anyways will not do it otherwise let me know and I will add it to the\nlist.\n\nOn Tue, Jul 16, 2019 at 10:22 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-07-11 17:27:46 -0700, Ashwin Agrawal wrote:\n> Please find attached the patch to remove IndexBuildCallback's dependency on\n> HeapTuple, as discussed. Changed to have the argument as ItemPointer\n> instead of HeapTuple. 
Other larger refactoring if feasible for\n> index_build_range_scan API itself can be performed as follow-up changes.\n\n> From f73b0061795f0c320f96ecfed0c0602ae318d73e Mon Sep 17 00:00:00 2001\n> From: Ashwin Agrawal <aagrawal@pivotal.io>\n> Date: Thu, 11 Jul 2019 16:58:50 -0700\n> Subject: [PATCH v1] Remove IndexBuildCallback's dependency on HeapTuple.\n>\n> With IndexBuildCallback taking input as HeapTuple, all table AMs\n> irrespective of storing the tuples in HeapTuple form or not, are\n> forced to construct HeapTuple, to insert the tuple in Index. Since,\n> only thing required by the index callbacks is TID and not really the\n> full tuple, modify callback to only take ItemPointer.\n\nLooks good to me. Planning to apply this unless somebody wants to argue\nagainst it soon?Andres, I didn't yet register this for next commitfest. If its going in soon anyways will not do it otherwise let me know and I will add it to the list.",
"msg_date": "Tue, 30 Jul 2019 13:54:59 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On 2019-Jul-30, Ashwin Agrawal wrote:\n\n> On Tue, Jul 16, 2019 at 10:22 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > Looks good to me. Planning to apply this unless somebody wants to argue\n> > against it soon?\n> \n> Andres, I didn't yet register this for next commitfest. If its going in\n> soon anyways will not do it otherwise let me know and I will add it to the\n> list.\n\nSounds OK ... except that Travis points out that Ashwin forgot to patch contrib:\n\nmake[4]: Entering directory '/home/travis/build/postgresql-cfbot/postgresql/contrib/amcheck'\ngcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -Wall -Werror -fPIC -I. -I. -I../../src/include -I/usr/include/x86_64-linux-gnu -D_GNU_SOURCE -c -o verify_nbtree.o verify_nbtree.c\nverify_nbtree.c: In function ‘bt_check_every_level’:\nverify_nbtree.c:614:11: error: passing argument 6 of ‘table_index_build_scan’ from incompatible pointer type [-Werror=incompatible-pointer-types]\n bt_tuple_present_callback, (void *) state, scan);\n ^\nIn file included from verify_nbtree.c:29:0:\n../../src/include/access/tableam.h:1499:1: note: expected ‘IndexBuildCallback {aka void (*)(struct RelationData *, struct ItemPointerData *, long unsigned int *, _Bool *, _Bool, void *)}’ but argument is of type ‘void (*)(struct RelationData *, HeapTupleData *, Datum *, _Bool *, _Bool, void *) {aka void (*)(struct RelationData *, struct HeapTupleData *, long unsigned int *, _Bool *, _Bool, void *)}’\n table_index_build_scan(Relation table_rel,\n ^\ncc1: all warnings being treated as errors\n<builtin>: recipe for target 'verify_nbtree.o' failed\nmake[4]: *** [verify_nbtree.o] Error 1\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:52:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 1:52 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> Sounds OK ... except that Travis points out that Ashwin forgot to patch\n> contrib:\n>\n> make[4]: Entering directory\n> '/home/travis/build/postgresql-cfbot/postgresql/contrib/amcheck'\n> gcc -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -g -O2 -Wall -Werror -fPIC -I. -I.\n> -I../../src/include -I/usr/include/x86_64-linux-gnu -D_GNU_SOURCE -c -o\n> verify_nbtree.o verify_nbtree.c\n> verify_nbtree.c: In function ‘bt_check_every_level’:\n> verify_nbtree.c:614:11: error: passing argument 6 of\n> ‘table_index_build_scan’ from incompatible pointer type\n> [-Werror=incompatible-pointer-types]\n> bt_tuple_present_callback, (void *) state, scan);\n> ^\n> In file included from verify_nbtree.c:29:0:\n> ../../src/include/access/tableam.h:1499:1: note: expected\n> ‘IndexBuildCallback {aka void (*)(struct RelationData *, struct\n> ItemPointerData *, long unsigned int *, _Bool *, _Bool, void *)}’ but\n> argument is of type ‘void (*)(struct RelationData *, HeapTupleData *, Datum\n> *, _Bool *, _Bool, void *) {aka void (*)(struct RelationData *, struct\n> HeapTupleData *, long unsigned int *, _Bool *, _Bool, void *)}’\n> table_index_build_scan(Relation table_rel,\n> ^\n> cc1: all warnings being treated as errors\n> <builtin>: recipe for target 'verify_nbtree.o' failed\n> make[4]: *** [verify_nbtree.o] Error 1\n>\n\nThanks for reporting, I did indeed missed out contrib. Please find attached\nthe v2 of the patch which includes the change required in contrib as well.",
"msg_date": "Wed, 25 Sep 2019 22:24:05 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Wed, Sep 25, 2019 at 10:24:05PM -0700, Ashwin Agrawal wrote:\n> Thanks for reporting, I did indeed missed out contrib. Please find attached\n> the v2 of the patch which includes the change required in contrib as well.\n\nOkay, that makes sense. The patch looks good to me so I have switched\nit to ready for committer. Andres, Robert, would you prefer\ncommitting this one yourself? If not, I'll take care of it tomorrow\nafter a second look.\n--\nMichael",
"msg_date": "Thu, 7 Nov 2019 17:02:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-07 17:02:36 +0900, Michael Paquier wrote:\n> On Wed, Sep 25, 2019 at 10:24:05PM -0700, Ashwin Agrawal wrote:\n> > Thanks for reporting, I did indeed missed out contrib. Please find attached\n> > the v2 of the patch which includes the change required in contrib as well.\n> \n> Okay, that makes sense. The patch looks good to me so I have switched\n> it to ready for committer. Andres, Robert, would you prefer\n> committing this one yourself? If not, I'll take care of it tomorrow\n> after a second look.\n\nLet me take a look this afternoon. Swapped out of my brain right now\nunfortunately.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:25:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Thu, Nov 07, 2019 at 09:25:40AM -0800, Andres Freund wrote:\n> Let me take a look this afternoon. Swapped out of my brain right now\n> unfortunately.\n\nThanks for the update.\n--\nMichael",
"msg_date": "Fri, 8 Nov 2019 09:03:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-07 09:25:40 -0800, Andres Freund wrote:\n> On 2019-11-07 17:02:36 +0900, Michael Paquier wrote:\n> > On Wed, Sep 25, 2019 at 10:24:05PM -0700, Ashwin Agrawal wrote:\n> > > Thanks for reporting, I did indeed missed out contrib. Please find attached\n> > > the v2 of the patch which includes the change required in contrib as well.\n> > \n> > Okay, that makes sense. The patch looks good to me so I have switched\n> > it to ready for committer. Andres, Robert, would you prefer\n> > committing this one yourself? If not, I'll take care of it tomorrow\n> > after a second look.\n> \n> Let me take a look this afternoon. Swapped out of my brain right now\n> unfortunately.\n\nLooks good to me (minus a formatting change in one or two places,\nundoing linebreaks). I was about to push, but after trying to write a\nsentence in the commit message like three times, I'instead push first\nthing tomorrow..\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 8 Nov 2019 01:22:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On 2019-11-08 01:22:45 -0800, Andres Freund wrote:\n> On 2019-11-07 09:25:40 -0800, Andres Freund wrote:\n> > On 2019-11-07 17:02:36 +0900, Michael Paquier wrote:\n> > > On Wed, Sep 25, 2019 at 10:24:05PM -0700, Ashwin Agrawal wrote:\n> > > > Thanks for reporting, I did indeed missed out contrib. Please find attached\n> > > > the v2 of the patch which includes the change required in contrib as well.\n> > > \n> > > Okay, that makes sense. The patch looks good to me so I have switched\n> > > it to ready for committer. Andres, Robert, would you prefer\n> > > committing this one yourself? If not, I'll take care of it tomorrow\n> > > after a second look.\n> > \n> > Let me take a look this afternoon. Swapped out of my brain right now\n> > unfortunately.\n> \n> Looks good to me (minus a formatting change in one or two places,\n> undoing linebreaks). I was about to push, but after trying to write a\n> sentence in the commit message like three times, I'instead push first\n> thing tomorrow..\n\nAnd pushed. Sorry that this took so long.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 8 Nov 2019 12:10:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 12:10:35PM -0800, Andres Freund wrote:\n> And pushed. Sorry that this took so long.\n\nThanks Andres. I have updated the status of the patch in the CF app\naccordingly: https://commitfest.postgresql.org/25/2235/.\n--\nMichael",
"msg_date": "Sat, 9 Nov 2019 09:45:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: heapam_index_build_range_scan's anyvisible"
}
]
[
{
"msg_contents": "Hi All,\n\nI was testing bloom indexes today. I understand bloom indexes uses bloom\nfilters.\n\nAs i understand, a bloom filter is a bit array of m bits and a constant \"k\"\nnumber of hash functions are used to generate hashes for the data. And then\nthe appropriate bits are set to 1.\n\nI was doing the following test where i was generating 10 million records\nand testing bloom indexes.\n\nCREATE TABLE foo.bar (id int, dept int, id2 int, id3 int, id4 int, id5\nint,id6 int,id7 int,details text, zipcode int);\n\nINSERT INTO foo.bar SELECT (random() * 1000000)::int, (random() *\n1000000)::int,(random() * 1000000)::int,(random() * 1000000)::int,(random()\n* 1000000)::int,(random() * 1000000)::int, (random() *\n1000000)::int,(random() * 1000000)::int,md5(g::text), floor(random()*\n(20000-9999 + 1) + 9999) from generate_series(1,100*1e6) g;\n\nAs per the documentation, bloom indexes accepts 2 parameters. *length* and\nthe *number of bits for each column*.\n\nHere is the problem or the question i have.\n\nI have created the following 2 Indexes.\n\n*Index 1*\nCREATE INDEX idx_bloom_bar ON foo.bar\nUSING bloom(id, dept, id2, id3, id4, id5, id6, zipcode)\nWITH (length=48, col1=4, col2=4, col3=4, col4=4, col5=4, col6=4, col7=4,\ncol8=4);\n\n*Index 2*\nCREATE INDEX idx_bloom_bar ON foo.bar\nUSING bloom(id, dept, id2, id3, id4, id5, id6, zipcode)\nWITH (length=48, col1=2, col2=2, col3=2, col4=2, col5=2, col6=2, col7=2,\ncol8=2);\n\nWith change in length, we of course see a difference in the Index size\nwhich is understandable. Here i have the same length for both indexes. But,\ni reduced the number of bits per each index column from 4 to 2. Both the\nabove indexes are of the same size. But, there is a very slight difference\nin the execution time between Index 1 and Index 2 but with the same cost.\n\nSo the question here is -\nI assume - number of bits = k. Where k is the total number of hash\nfunctions used on top of the data that needs to validated. 
Is that correct\n? If yes, why do we see the Index 1 performing better than Index 2 ?\nBecause, the data has to go through more hash functions (4 vs 2) in Index 1\nthan Index 2. So, with Index 1 it should take more time.\nAlso, both the indexes have ZERO false positives.\nPlease let me know if there is anything simple that i am missing here.\n\n*Query *\n\nEXPLAIN (ANALYZE, BUFFERS, VERBOSE) select * from foo.bar where id = 736833\nand dept = 89861 and id2 = 573221 and id3 = 675911 and id4 = 943394 and id5\n= 326756 and id6 = 597560 and zipcode = 10545;\n\n*With Index 1 *\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo.bar (cost=2647060.00..2647061.03 rows=1 width=69)\n(actual time=307.000..307.000 rows=0 loops=1)\n Output: id, dept, id2, id3, id4, id5, id6, id7, details, zipcode\n Recheck Cond: ((bar.id = 736833) AND (bar.dept = 89861) AND (bar.id2 =\n573221) AND (bar.id3 = 675911) AND (bar.id4 = 943394) AND (bar.id5 =\n326756) AND (bar.id6 = 597560) AND (bar.zipcode = 10545))\n Buffers: shared hit=147059\n -> Bitmap Index Scan on idx_bloom_bar (cost=0.00..2647060.00 rows=1\nwidth=0) (actual time=306.997..306.997 rows=0 loops=1)\n Index Cond: ((bar.id = 736833) AND (bar.dept = 89861) AND (bar.id2\n= 573221) AND (bar.id3 = 675911) AND (bar.id4 = 943394) AND (bar.id5 =\n326756) AND (bar.id6 = 597560) AND (bar.zipcode = 10545))\n Buffers: shared hit=147059\n Planning Time: 0.106 ms\n* Execution Time: 307.030 ms*\n(9 rows)\n\n*With Index 2 *\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on foo.bar (cost=2647060.00..2647061.03 rows=1 width=69)\n(actual time=420.881..420.881 rows=0 loops=1)\n Output: id, dept, id2, id3, id4, id5, id6, id7, details, zipcode\n Recheck Cond: ((bar.id = 736833) AND (bar.dept = 89861) AND (bar.id2 =\n573221) AND (bar.id3 = 675911) AND (bar.id4 = 943394) AND 
(bar.id5 =\n326756) AND (bar.id6 = 597560) AND (bar.zipcode = 10545))\n   Buffers: shared hit=147059\n   ->  Bitmap Index Scan on idx_bloom_bar  (cost=0.00..2647060.00 rows=1\nwidth=0) (actual time=420.878..420.878 rows=0 loops=1)\n         Index Cond: ((bar.id = 736833) AND (bar.dept = 89861) AND (bar.id2\n= 573221) AND (bar.id3 = 675911) AND (bar.id4 = 943394) AND (bar.id5 =\n326756) AND (bar.id6 = 597560) AND (bar.zipcode = 10545))\n         Buffers: shared hit=147059\n Planning Time: 0.104 ms\n* Execution Time: 420.913 ms*\n(9 rows)\n\nThanks,\nAvinash.",
"msg_date": "Fri, 7 Jun 2019 23:43:08 -0300",
"msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Bloom Indexes - bit array length and the total number of bits (or\n hash functions ?? ) !"
},
{
"msg_contents": "\nHello Avinash,\n\n> I was testing bloom indexes today. I understand bloom indexes uses bloom\n> filters. [...]\n>\n> So the question here is -\n> I assume - number of bits = k. Where k is the total number of hash\n> functions used on top of the data that needs to validated. Is that correct\n> ? If yes, why do we see the Index 1 performing better than Index 2 ?\n> Because, the data has to go through more hash functions (4 vs 2) in Index 1\n> than Index 2. So, with Index 1 it should take more time.\n> Also, both the indexes have ZERO false positives.\n> Please let me know if there is anything simple that i am missing here.\n\nYou may have a look at the blog entry about these parameters I redacted a \nfew year ago:\n\n http://blog.coelho.net/database/2016/12/11/postgresql-bloom-index.html\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 8 Jun 2019 08:11:03 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Bloom Indexes - bit array length and the total number of bits\n (or hash functions ?? ) !"
},
{
"msg_contents": "Thanks Fabien,\n\nBut the 2 direct questions i have are :\n\n1. What is the structure of the Bloom Index ? Can you please let me know\nwhat are the fields of a Bloom Index ? Is it just the Item Pointer\nand BloomSignatureWord ?\nWhen i describe my bloom index it looks like following.\n\npostgres=# \\d+ foo.idx_bloom_bar\n Index \"foo.idx_bloom_bar\"\n Column | Type | Key? | Definition | Storage | Stats target\n---------+---------+------+------------+---------+--------------\n id | integer | yes | id | plain |\n dept | integer | yes | dept | plain |\n id2 | integer | yes | id2 | plain |\n id3 | integer | yes | id3 | plain |\n id4 | integer | yes | id4 | plain |\n id5 | integer | yes | id5 | plain |\n id6 | integer | yes | id6 | plain |\n zipcode | integer | yes | zipcode | plain |\nbloom, for table \"foo.bar\"\nOptions: length=64, col1=4, col2=4, col3=4, col4=4, col5=4, col6=4, col7=4,\ncol8=4\n\n2. If we are storing just one signature word per row, how is this\naggregated for all column values of that row into one signature in high\nlevel ?\nFor example, if length = 64, does it mean that a bit array of 64 bits is\ngenerated per each row ?\nIf col1=4, does it mean the value of col1 is passed to 4 hash functions\nthat generate 4 positions that can be set to 1 in the bit array ?\n\nOn Sat, Jun 8, 2019 at 3:11 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n\n>\n> Hello Avinash,\n>\n> > I was testing bloom indexes today. I understand bloom indexes uses bloom\n> > filters. [...]\n> >\n> > So the question here is -\n> > I assume - number of bits = k. Where k is the total number of hash\n> > functions used on top of the data that needs to validated. Is that\n> correct\n> > ? If yes, why do we see the Index 1 performing better than Index 2 ?\n> > Because, the data has to go through more hash functions (4 vs 2) in\n> Index 1\n> > than Index 2. 
So, with Index 1 it should take more time.\n> > Also, both the indexes have ZERO false positives.\n> > Please let me know if there is anything simple that i am missing here.\n>\n> You may have a look at the blog entry about these parameters I redacted a\n> few year ago:\n>\n> http://blog.coelho.net/database/2016/12/11/postgresql-bloom-index.html\n>\n> --\n> Fabien.\n>\n>\n>\n\n-- \n9000799060",
"msg_date": "Sun, 9 Jun 2019 13:54:05 -0300",
"msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bloom Indexes - bit array length and the total number of bits (or\n hash functions ?? ) !"
},
{
"msg_contents": "\n> But the 2 direct questions i have are :\n>\n> 1. What is the structure of the Bloom Index ? Can you please let me know\n> what are the fields of a Bloom Index ? Is it just the Item Pointer\n> and BloomSignatureWord ?\n\nI'm not sure of Postgres actual implementation, I have just looked at the \nunderlying hash functions implementation.\n\n> When i describe my bloom index it looks like following.\n>\n> postgres=# \\d+ foo.idx_bloom_bar\n> Index \"foo.idx_bloom_bar\"\n> Column | Type | Key? | Definition | Storage | Stats target\n> ---------+---------+------+------------+---------+--------------\n> id | integer | yes | id | plain |\n> dept | integer | yes | dept | plain |\n> id2 | integer | yes | id2 | plain |\n> id3 | integer | yes | id3 | plain |\n> id4 | integer | yes | id4 | plain |\n> id5 | integer | yes | id5 | plain |\n> id6 | integer | yes | id6 | plain |\n> zipcode | integer | yes | zipcode | plain |\n> bloom, for table \"foo.bar\"\n\nThe bloom index associates a signature, i.e. a bitfield the size of which \nis specified by the parameter \"length\", to the tuple location. The \nbitfield is computed by hashing the value of columns which are listed \nabove in the index definition. The many values are somehow compressed into \nthe small signature.\n\n> Options: length=64, col1=4, col2=4, col3=4, col4=4, col5=4, col6=4, col7=4,\n> col8=4\n\nI doubt that these parameters are good. The is no point to include a \nunique column into a bloom index. If you look at my blog, the number of \nbits associated to each field should depend on the expected selectivity, \nwhich depends on the entropy of each field and the signature size. The \ncolumn entropy can be computed with a query.\n\n> 2. 
If we are storing just one signature word per row, how is this\n> aggregated for all column values of that row into one signature in high\n> level ?\n\nThe aggregation, if performed, is not very useful in practice because it \ncan only be effective on a few (first) bits, which are randomly computed \nanyway and the value of the query is not likely to hit them.\n\nFundamentally all bitfields are scanned to extract which tuples are \npossibly of interest, i.e. are not excluded by the index. The \"full scan\" \nof the bloom index is not a bad thing if it is much smaller than the table \nitself.\n\n> For example, if length = 64, does it mean that a bit array of 64 bits is\n> generated per each row ?\n\nYes.\n\n> If col1=4, does it mean the value of col1 is passed to 4 hash functions\n> that generate 4 positions that can be set to 1 in the bit array ?\n\nYes.\n\nTry to apply the formula in the blog to see what you get for your \nparameters, but it is likely to be < 4.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 10 Jun 2019 08:24:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Bloom Indexes - bit array length and the total number of bits\n (or hash functions ?? ) !"
}
]
[
{
"msg_contents": "Hi,\n\nWhile fixing the breakage caused by the default number of trailing\ndigits output for real and double precision, I noticed that first\nrandom() call after setseed(0) doesn't return the same value as 10 and\nearlier (I tested 9.4 and later). It changed an expected behavior and\nit should be listed in incompatibilities section of the release notes.\nSome applications can rely on such behavior.\n\n$ psql postgres\npsql (10.4)\nType \"help\" for help.\n\npostgres=# select setseed(0);\n setseed\n---------\n\n(1 row)\n\npostgres=# select random();\n random\n-------------------\n 0.840187716763467\n(1 row)\n\n$ psql postgres\npsql (12beta1)\nType \"help\" for help.\n\npostgres=# select setseed(0);\n setseed\n---------\n\n(1 row)\n\npostgres=# select random();\n random\n-----------------------\n 3.907985046680551e-14\n(1 row)\n\nIt seems related to the pg_erand48() adoption at the end of the year\n[1] (commit 6645ad6bdd81e7d5a764e0d94ef52fae053a9e13).\n\n\n[1] https://www.postgresql.org/message-id/3859.1545849900@sss.pgh.pa.us\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Sat, 8 Jun 2019 11:09:34 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": true,
"msg_subject": "initial random incompatibility"
},
{
"msg_contents": "On 2019-Jun-08, Euler Taveira wrote:\n\n> While fixing the breakage caused by the default number of trailing\n> digits output for real and double precision, I noticed that first\n> random() call after setseed(0) doesn't return the same value as 10 and\n> earlier (I tested 9.4 and later). It changed an expected behavior and\n> it should be listed in incompatibilities section of the release notes.\n> Some applications can rely on such behavior.\n\nHmm. Tom argued about the backwards-compatibility argument in\nthe discussion that led to that commit:\nhttps://www.postgresql.org/message-id/3859.1545849900@sss.pgh.pa.us\nI think this is worth listing in the release notes. Can you propose\nsome wording?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 10:51:54 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initial random incompatibility"
},
{
"msg_contents": "I cannot find traces, but I believe there was a Twitter poll on which\nrandom do people get after setseed() in postgres, and we found at least\nthree distinct sequences across different builds.\n\nOn Mon, Jun 10, 2019 at 5:52 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> On 2019-Jun-08, Euler Taveira wrote:\n>\n> > While fixing the breakage caused by the default number of trailing\n> > digits output for real and double precision, I noticed that first\n> > random() call after setseed(0) doesn't return the same value as 10 and\n> > earlier (I tested 9.4 and later). It changed an expected behavior and\n> > it should be listed in incompatibilities section of the release notes.\n> > Some applications can rely on such behavior.\n>\n> Hmm. Tom argued about the backwards-compatibility argument in\n> the discussion that led to that commit:\n> https://www.postgresql.org/message-id/3859.1545849900@sss.pgh.pa.us\n> I think this is worth listing in the release notes. Can you propose\n> some wording?\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nI cannot find traces, but I believe there was a Twitter poll on which random do people get after setseed() in postgres, and we found at least three distinct sequences across different builds. On Mon, Jun 10, 2019 at 5:52 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:On 2019-Jun-08, Euler Taveira wrote:\n\n> While fixing the breakage caused by the default number of trailing\n> digits output for real and double precision, I noticed that first\n> random() call after setseed(0) doesn't return the same value as 10 and\n> earlier (I tested 9.4 and later). It changed an expected behavior and\n> it should be listed in incompatibilities section of the release notes.\n> Some applications can rely on such behavior.\n\nHmm. 
Tom argued about the backwards-compatibility argument in\nthe discussion that led to that commit:\nhttps://www.postgresql.org/message-id/3859.1545849900@sss.pgh.pa.us\nI think this is worth listing in the release notes. Can you propose\nsome wording?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa",
"msg_date": "Mon, 17 Jun 2019 19:55:44 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: initial random incompatibility"
},
{
"msg_contents": "On 2019-Jun-17, Darafei \"Komяpa\" Praliaskouski wrote:\n\n> I cannot find traces, but I believe there was a Twitter poll on which\n> random do people get after setseed() in postgres, and we found at least\n> three distinct sequences across different builds.\n\nIn different machines or different build options? I suppose that's\nacceptable ... the problem is changing the sequence in one release to\nthe next in the same machine with the same build options.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Jun 2019 13:09:24 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: initial random incompatibility"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-17, Darafei \"Komяpa\" Praliaskouski wrote:\n>> I cannot find traces, but I believe there was a Twitter poll on which\n>> random do people get after setseed() in postgres, and we found at least\n>> three distinct sequences across different builds.\n\n> In different machines or different build options? I suppose that's\n> acceptable ... the problem is changing the sequence in one release to\n> the next in the same machine with the same build options.\n\nFWIW, I agree that this change should be called out as a possible\ncompatibility hazard, even though anybody who was expecting repeatable\nbehavior from the old code was playing with fire.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jun 2019 13:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initial random incompatibility"
}
] |
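Editor's note: the 3.907985046680551e-14 value in the thread can be reproduced from first principles. pg_erand48() implements the POSIX erand48() recurrence, a 48-bit linear congruential generator; assuming setseed(0) leaves the 48-bit state at zero, the first output is simply c / 2^48 = 11 / 2^48. The second generator below is a simplified stand-in for an older random() implementation (not PostgreSQL 10's actual code), included only to show that identically seeded but different algorithms diverge immediately:

```python
def erand48_step(x: int) -> int:
    """One step of the POSIX erand48() recurrence:
    x' = (a*x + c) mod 2^48, with a = 0x5DEECE66D, c = 0xB."""
    return (0x5DEECE66D * x + 0xB) % (1 << 48)

def erand48_first(seed48: int) -> float:
    """First double in [0, 1) produced from a given 48-bit state."""
    return erand48_step(seed48) / (1 << 48)

def old_lcg_first(seed: int) -> float:
    """A simplified 31-bit LCG in the style of classic rand()/random()
    implementations -- a hypothetical stand-in for comparison only."""
    return ((1103515245 * seed + 12345) % (1 << 31)) / (1 << 31)

# With a zero 48-bit state, the first erand48() output is 11 / 2^48,
# matching the 3.907985046680551e-14 shown in the 12beta1 session above.
print(erand48_first(0))
print(old_lcg_first(0))  # a different value: same seed, different algorithm
```

This is the heart of the incompatibility: a seeded sequence is a property of the specific generator, so swapping the backend algorithm (commit 6645ad6) necessarily changes every value after setseed().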
[
{
"msg_contents": "I've been using cube extension recompiled with #define MAX_DIM 256.\nBut with a version 11.3 I'm getting the following error:failed to add item to index page in <index_name>\nThere's a regression unit test in contrib/cube/expected/cube.out:\nCREATE TABLE test_cube (c cube);\n\\copy test_cube from 'data/test_cube.data'\nCREATE INDEX test_cube_ix ON test_cube USING gist (c);\nSELECT * FROM test_cube WHERE c && '(3000,1000),(0,0)' ORDER BY c;\nI've created gist index in the same way, i.e. create index <index_name> on <table_name> using gist(<column_name>);\nIf MAX_DIM equals to 512, btree index complaints as:index row size 4112 exceeds maximum 2712 for index <index_name>\nHINT: Values larger than 1/3 of a buffer page cannot be indexed. \nConsider a function index of an MD5 hash of the value, or use full text indexing. \n\nThat's why 256 has been set.\nBut gist doesn't provide explanation on its error.\nThese are the places where the message might have been generated:src/backend/access/gist/gist.c:418: elog(ERROR, \"failed to add item to index page in \\\"%s\\\"\", RelationGetRelationName(rel));\nsrc/backend/access/gist/gist.c:540: elog(ERROR, \"failed to add item to index page in \\\"%s\\\"\",\n\nQuestion is what restrains from setting MAX_DIM bigger than 100 in a custom recompiled cube extension version?In practice the error messages are too cryptic.\ncontrib/cube/cube.c has the following methods regarding GIST:/*\n** GiST support methods\n*/\n\nPG_FUNCTION_INFO_V1(g_cube_consistent);\nPG_FUNCTION_INFO_V1(g_cube_compress);\nPG_FUNCTION_INFO_V1(g_cube_decompress);\nPG_FUNCTION_INFO_V1(g_cube_penalty);\nPG_FUNCTION_INFO_V1(g_cube_picksplit);\nPG_FUNCTION_INFO_V1(g_cube_union);\nPG_FUNCTION_INFO_V1(g_cube_same);\nPG_FUNCTION_INFO_V1(g_cube_distance);\n\ng_cube_compress has the following body: PG_RETURN_DATUM(PG_GETARG_DATUM(0));\n\nDoes it just returns void pointer to the underlying x array?\ncube data structure:\ntypedef struct NDBOX\n{\n /* varlena 
header (do not touch directly!) */\n int32 vl_len_;\n\n /*----------\n * Header contains info about NDBOX. For binary compatibility with old\n * versions, it is defined as \"unsigned int\".\n *\n * Following information is stored:\n *\n * bits 0-7 : number of cube dimensions;\n * bits 8-30 : unused, initialize to zero;\n * bit 31 : point flag. If set, the upper right coordinates are not\n * stored, and are implicitly the same as the lower left\n * coordinates.\n *----------\n */\n unsigned int header;\n\n /*\n * The lower left coordinates for each dimension come first, followed by\n * upper right coordinates unless the point flag is set.\n */\n double x[FLEXIBLE_ARRAY_MEMBER];\n} NDBOX;\n\nCan it be a problem of not fitting into some limits when building or updating gist index for cube with MAX_DIM > 100?",
"msg_date": "Sun, 9 Jun 2019 18:05:20 +0000 (UTC)",
"msg_from": "Siarhei Siniak <siarheisiniak@yahoo.com>",
"msg_from_op": true,
"msg_subject": "GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "> On 9 Jun 2019, at 20:05, Siarhei Siniak <siarheisiniak@yahoo.com> wrote:\n> \n> I've been using cube extension recompiled with\n> #define MAX_DIM 256.\n> \n> But with a version 11.3 I'm getting the following error:\n> failed to add item to index page in <index_name>\n\nThis sounds like a variant of the issue reported on -bugs in\nAM6PR06MB57318C9882C021879DD4101EA3100@AM6PR06MB5731.eurprd06.prod.outlook.com\nand is also reproducible on HEAD.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 10 Jun 2019 13:57:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "Can you point out a failling unit test in the codebase?\nP.S sorry for a late reply, has got this message in the spam folder )\n Le lundi 10 juin 2019 à 14:57:32 UTC+3, Daniel Gustafsson <daniel@yesql.se> a écrit : \n \n > On 9 Jun 2019, at 20:05, Siarhei Siniak <siarheisiniak@yahoo.com> wrote:\n> \n> I've been using cube extension recompiled with\n> #define MAX_DIM 256.\n> \n> But with a version 11.3 I'm getting the following error:\n> failed to add item to index page in <index_name>\n\nThis sounds like a variant of the issue reported on -bugs in\nAM6PR06MB57318C9882C021879DD4101EA3100@AM6PR06MB5731.eurprd06.prod.outlook.com\nand is also reproducible on HEAD.\n\ncheers ./daniel \n\nCan you point out a failling unit test in the codebase?P.S sorry for a late reply, has got this message in the spam folder )\n\n\n\n Le lundi 10 juin 2019 à 14:57:32 UTC+3, Daniel Gustafsson <daniel@yesql.se> a écrit :\n \n\n\n> On 9 Jun 2019, at 20:05, Siarhei Siniak <siarheisiniak@yahoo.com> wrote:> > I've been using cube extension recompiled with> #define MAX_DIM 256.> > But with a version 11.3 I'm getting the following error:> failed to add item to index page in <index_name>This sounds like a variant of the issue reported on -bugs inAM6PR06MB57318C9882C021879DD4101EA3100@AM6PR06MB5731.eurprd06.prod.outlook.comand is also reproducible on HEAD.cheers ./daniel",
"msg_date": "Wed, 12 Jun 2019 06:30:48 +0000 (UTC)",
"msg_from": "Siarhei Siniak <siarheisiniak@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "I've added debug prints to cube extension.g_custom_cube_a_f8\ng_custom_cube_picksplit\nare the only called methods\nafter that it prints\nimport psycopg2\nimport logging\nimport numpy\nimport unittest\n\n\nimport python.utils.logging\nimport python.custom_db.backends\nimport python.custom_db.backends.postgresql\n\n\nclass TestPostgresql(unittest.TestCase):\n def test_gist(self):\n b = python.custom_db.backends.postgresql.Postgresql(\n databases=dict(\n test=dict(\n minconn=1,\n maxconn=1\n )\n )\n )\n\n b.connect()\n\n try:\n c = b.get_connection(use='test')\n\n c2 = c[0]\n\n with c2.cursor() as cur:\n cur.execute(r'''\n drop table if exists test;\n create table test(image_id integer primary key, latent_code custom_cube);\n create index lc_idx on test using gist(latent_code);\n ''')\n c2.commit()\n\n with self.assertRaises(psycopg2.errors.InternalError_):\n for k in range(10):\n logging.info('test_postgresql.test_gist, k = %d' % k)\n cur.execute(\n r'''\n insert into test (image_id, latent_code)\n values (%s, custom_cube(%s))\n ''',\n [\n k,\n [float(x) for x in numpy.random.uniform(0, 1, 512)],\n ]\n )\n c2.commit()\n finally:\n b.put_connection(c2, 'test')\n\n``` \n\nI've added debug prints to cube extension.g_custom_cube_a_f8g_custom_cube_picksplitare the only called methodsafter that it printsimport psycopg2import loggingimport numpyimport unittestimport python.utils.loggingimport python.custom_db.backendsimport python.custom_db.backends.postgresqlclass TestPostgresql(unittest.TestCase): def test_gist(self): b = python.custom_db.backends.postgresql.Postgresql( databases=dict( test=dict( minconn=1, maxconn=1 ) ) ) b.connect() try: c = b.get_connection(use='test') c2 = c[0] with c2.cursor() as cur: cur.execute(r''' drop table if exists test; create table test(image_id integer primary key, latent_code custom_cube); create index lc_idx on test using gist(latent_code); ''') c2.commit() with self.assertRaises(psycopg2.errors.InternalError_): for k in 
range(10): logging.info('test_postgresql.test_gist, k = %d' % k) cur.execute( r''' insert into test (image_id, latent_code) values (%s, custom_cube(%s)) ''', [ k, [float(x) for x in numpy.random.uniform(0, 1, 512)], ] ) c2.commit() finally: b.put_connection(c2, 'test')```",
"msg_date": "Wed, 12 Jun 2019 07:50:22 +0000 (UTC)",
"msg_from": "Siarhei Siniak <siarheisiniak@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "I've added debug prints to cube extension. g_custom_cube_a_f8 and g_custom_cube_picksplit are the only called methods.\nAfter that it prints:\n ERROR: failed to add item to index page in \"lc_idx\" \nCube extension modifications:\n #define MAX_DIM (512)\nPython test source code has been attached to the letter.\n\nP.S.\nsorry for the previous letter, didn't configure plain text composition",
"msg_date": "Wed, 12 Jun 2019 07:59:41 +0000 (UTC)",
"msg_from": "Siarhei Siniak <siarheisiniak@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "Hi!\n\n> 9 июня 2019 г., в 23:05, Siarhei Siniak <siarheisiniak@yahoo.com> написал(а):\n> \n> I've been using cube extension recompiled with\n> #define MAX_DIM 256.\n> \n> But with a version 11.3 I'm getting the following error:\n> failed to add item to index page in <index_name>\n\nSo, you have changed source code (removing dim constraint) and get cryptic error after that. In some way this is expected...\n\nThough, the reason why cube has this limit is not physical. R-tree's (cube+gist) just do not work effectively with more than 10 non-correlated dimensions.\nIf you have some correlated dimensions - you, probably, should invent something more clever that just cube - plain array of D*2*doubles for each tuple.\n\n100 is upper bound for sane data that can be indexed in case of cube...\n\nNevertheless, we can improve AddTuple messages. But there is not such strict guidelines as with B-tree. Probably, tuples should not be bigger than half of usable page space.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 12 Jun 2019 13:45:08 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "A uniform set of points with a dimension 128 and type cube. That has a size of 50 ** 3. Can be queried for a nearest neighbor at a speed of 10 queries per second with limit varying from 1 to 25.\nIt works better than when no index used at all. So gist gives here a speed up.\nThe documentation of postgresql doesn't mention complexity of such an index. I've got confused as to its speed.\nDoes postgresql allow for an approximate nearest neighbor search?\n https://github.com/erikbern/ann-benchmarks has a lot of efficient implementations.\nA uniform set of points with a dimension 128 and type cube. That has a size of 50 ** 3. Can be queried for a nearest neighbor at a speed of 10 queries per second with limit varying from 1 to 25.It works better than when no index used at all. So gist gives here a speed up.The documentation of postgresql doesn't mention complexity of such an index. I've got confused as to its speed.Does postgresql allow for an approximate nearest neighbor search? https://github.com/erikbern/ann-benchmarks has a lot of efficient implementations.",
"msg_date": "Wed, 12 Jun 2019 10:11:05 +0000 (UTC)",
"msg_from": "Siarhei Siniak <siarheisiniak@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "\n\n> 12 июня 2019 г., в 15:11, Siarhei Siniak <siarheisiniak@yahoo.com> написал(а):\n> \n> A uniform set of points with a dimension 128 and type cube. That has a size of 50 ** 3. Can be queried for a nearest neighbor at a speed of 10 queries per second with limit varying from 1 to 25.\n> It works better than when no index used at all. So gist gives here a speed up.\n\nThen, I think, data is correlated. I believe it is possible to implement something better for high dimensional KNN in GiST than cube.\n\n\n> \n> The documentation of postgresql doesn't mention complexity of such an index. I've got confused as to its speed.\n> \n> Does postgresql allow for an approximate nearest neighbor search?\n> https://github.com/erikbern/ann-benchmarks has a lot of efficient implementations.\n\nANN is beyond concepts of SQL standard: database with index must return same results as without index.\nI can add ANN to github.com/x4m/ags which is a fork of GiST.\n\nBut PostgreSQL adds a lot of OLTP overhead:\n1. it is crash-safe\n2. it supports concurrent operations\n2a. in a very unexpected way, for example in serializable mode it guaranties you that if you were looking for nearest neighbor there will not appear any new closer neighbor until the end of your transaction.\n3. it allows extensibility and has abstraction layers\n4. it has declarative querying language\netcetc\n\nAll this comes at a cost of database that can do anything and be anything. It its very hard to compete with specialized indexes that only do ANN.\n\nYet, as far as I know, no one really pursued the idea of fast high dimensional ANN in PG.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 12 Jun 2019 20:14:34 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": ">ANN is beyond concepts of SQL standard: database with index must return same results as without index.\n>I can add ANN to github.com/x4m/ags which is a fork of GiST.How to recompile that extension and not to get a name conflict with a standard one?\nI've renamed everything for cube extension. When I needed to fork it.But it's impractical.\n \n\n>ANN is beyond concepts of SQL standard: database with index must return same results as without index.>I can add ANN to github.com/x4m/ags which is a fork of GiST.How to recompile that extension and not to get a name conflict with a standard one?I've renamed everything for cube extension. When I needed to fork it.But it's impractical.",
"msg_date": "Wed, 12 Jun 2019 15:24:35 +0000 (UTC)",
"msg_from": "Siarhei Siniak <siarheisiniak@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
},
{
"msg_contents": "Andrey Borodin <x4mmm@yandex-team.ru> writes:\n>> 9 июня 2019 г., в 23:05, Siarhei Siniak <siarheisiniak@yahoo.com> написал(а):\n>> I've been using cube extension recompiled with\n>> #define MAX_DIM 256.\n>> But with a version 11.3 I'm getting the following error:\n>> failed to add item to index page in <index_name>\n\n> So, you have changed source code (removing dim constraint) and get cryptic error after that. In some way this is expected...\n\nYeah. There is necessarily a limit on the size of index entries,\nand it's getting exceeded.\n\n> Nevertheless, we can improve AddTuple messages. But there is not such strict guidelines as with B-tree. Probably, tuples should not be bigger than half of usable page space.\n\nI do not think \"improve AddTuple messages\" is going to be enough to fix\nthis. Daniel was correct that this is the same problem seen in\n\nhttps://www.postgresql.org/message-id/flat/AM6PR06MB57318C9882C021879DD4101EA3100%40AM6PR06MB5731.eurprd06.prod.outlook.com\n\nSee my reply there. I think we should continue this discussion on that\nthread, since it has seniority.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 14:49:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GiST limits on contrib/cube with dimension > 100?"
}
] |
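Editor's note: the size arithmetic behind both error messages in this thread can be checked with back-of-the-envelope numbers. Per the NDBOX struct quoted above, a cube datum stores a 4-byte varlena length, a 4-byte header, and 8 bytes per double coordinate (2*d coordinates, or d when the point flag is set); an index tuple adds roughly an 8-byte per-tuple header. The overheads below are approximations, not the exact on-page layout:

```python
def cube_datum_bytes(dims: int, is_point: bool = False) -> int:
    """Approximate size of a contrib/cube value: vl_len_ (4 bytes) +
    header (4 bytes) + 8 bytes per stored double coordinate."""
    coords = dims if is_point else 2 * dims
    return 4 + 4 + 8 * coords

INDEX_TUPLE_HEADER = 8   # approximate per-index-tuple overhead
BTREE_MAX = 2712         # the "maximum 2712" from the btree error above

# A 512-dimension *point* cube: 8 + 8*512 + 8 = 4112 bytes, which lines
# up with the "index row size 4112" reported in the thread (consistent
# with the indexed values having upper == lower coordinates).
size_512_point = cube_datum_bytes(512, is_point=True) + INDEX_TUPLE_HEADER

# With the stock MAX_DIM = 100, even a full (non-point) cube fits:
size_100 = cube_datum_bytes(100) + INDEX_TUPLE_HEADER   # 1616 bytes
print(size_512_point, size_100, size_100 < BTREE_MAX)
```

So raising MAX_DIM past roughly 100 produces tuples larger than the page-derived limits, which GiST currently reports only as the cryptic "failed to add item to index page" (the thread referenced by Tom Lane discusses the proper fix).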
[
{
"msg_contents": "Hi all\r\n\r\nMemory leaks occur when the ecpg_update_declare_statement() is called the second time.\r\n\r\nFILE:postgresql\\src\\interfaces\\ecpg\\ecpglib\\prepare.c\r\nvoid\r\necpg_update_declare_statement(const char *declared_name, const char *cursor_name, const int lineno)\r\n{\r\n\tstruct declared_statement *p = NULL;\r\n\r\n\tif (!declared_name || !cursor_name)\r\n\t\treturn;\r\n\r\n\t/* Find the declared node by declared name */\r\n\tp = ecpg_find_declared_statement(declared_name);\r\n\tif (p)\r\n\t\tp->cursor_name = ecpg_strdup(cursor_name, lineno); ★\r\n}\r\necpg_strdup() returns a pointer to a null-terminated byte string, which is a duplicate of the string pointed to by str.\r\nThe memory obtained is done dynamically using malloc and hence it can be freed using free().\r\n\r\nWhen the ecpg_update_declare_statement() is called for the second time, \r\nthe memory allocated for p->cursor_name is not freed. \r\n\r\nFor example:\r\n\r\n EXEC SQL BEGIN DECLARE SECTION;\r\n char *selectString = \"SELECT * FROM foo;\";\r\n int FooBar;\r\n char DooDad[17];\r\n EXEC SQL END DECLARE SECTION;\r\n\r\n EXEC SQL CONNECT TO postgres@localhost:5432 AS con1 USER postgres;\r\n\r\n EXEC SQL AT con1 DECLARE stmt_1 STATEMENT;\r\n EXEC SQL AT con1 PREPARE stmt_1 FROM :selectString;\r\n\r\n EXEC SQL AT con1 DECLARE cur_1 CURSOR FOR stmt_1; //★1 ECPGopen() --> ecpg_update_declare_statement()\r\n EXEC SQL AT con1 OPEN cur_1; \r\n\r\n EXEC SQL AT con1 DECLARE cur_2 CURSOR FOR stmt_1; //★2 ECPGopen() --> ecpg_update_declare_statement()\r\n EXEC SQL AT con1 OPEN cur_2; Memory leaks\r\n\r\n EXEC SQL FETCH cur_2 INTO:FooBar, :DooDad;\r\n EXEC SQL COMMIT;\r\n EXEC SQL DISCONNECT ALL;\r\n\r\n\r\nWe should free p->cursor_name before p->cursor_name = ecpg_strdup(cursor_name, lineno).\r\n#############################################################################\r\n\t\tif(p->cursor_name)\r\n\t\t\tecpg_free(p->cursor_name);\r\n\t\tp->cursor_name = 
ecpg_strdup(cursor_name,lineno);\r\n###########################################################################\r\nHere is a patch.\r\n\r\nBest Regards!",
"msg_date": "Mon, 10 Jun 2019 00:53:49 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Hi\r\n\r\nOn Mon. June. 10, 2019 at 09:54 AM Zhang, Jie\r\n< zhangjie2@cn.fujitsu.com > wrote:\r\n> \r\n> Memory leaks occur when the ecpg_update_declare_statement() is called the\r\n> second time.\r\n\r\nCertainly it is.\r\nBut I wonder if it is safe that the old cursor_name is forgotten.\r\n\r\nRegards\r\nRyo Matsumura\r\n\r\n\r\n> -----Original Message-----\r\n> From: Zhang, Jie [mailto:zhangjie2@cn.fujitsu.com]\r\n> Sent: Monday, June 10, 2019 9:54 AM\r\n> To: pgsql-hackers@lists.postgresql.org\r\n> Cc: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>\r\n> Subject: [PATCH] memory leak in ecpglib\r\n> \r\n> Hi all\r\n> \r\n> Memory leaks occur when the ecpg_update_declare_statement() is called the\r\n> second time.\r\n> \r\n> FILE:postgresql\\src\\interfaces\\ecpg\\ecpglib\\prepare.c\r\n> void\r\n> ecpg_update_declare_statement(const char *declared_name, const char\r\n> *cursor_name, const int lineno)\r\n> {\r\n> \tstruct declared_statement *p = NULL;\r\n> \r\n> \tif (!declared_name || !cursor_name)\r\n> \t\treturn;\r\n> \r\n> \t/* Find the declared node by declared name */\r\n> \tp = ecpg_find_declared_statement(declared_name);\r\n> \tif (p)\r\n> \t\tp->cursor_name = ecpg_strdup(cursor_name, lineno); ★\r\n> }\r\n> ecpg_strdup() returns a pointer to a null-terminated byte string, which is\r\n> a duplicate of the string pointed to by str.\r\n> The memory obtained is done dynamically using malloc and hence it can be freed\r\n> using free().\r\n> \r\n> When the ecpg_update_declare_statement() is called for the second time,\r\n> the memory allocated for p->cursor_name is not freed.\r\n> \r\n> For example:\r\n> \r\n> EXEC SQL BEGIN DECLARE SECTION;\r\n> char *selectString = \"SELECT * FROM foo;\";\r\n> int FooBar;\r\n> char DooDad[17];\r\n> EXEC SQL END DECLARE SECTION;\r\n> \r\n> EXEC SQL CONNECT TO postgres@localhost:5432 AS con1 USER postgres;\r\n> \r\n> EXEC SQL AT con1 DECLARE stmt_1 STATEMENT;\r\n> EXEC SQL AT con1 PREPARE stmt_1 FROM 
:selectString;\r\n> \r\n> EXEC SQL AT con1 DECLARE cur_1 CURSOR FOR stmt_1; //★1 ECPGopen()\r\n> --> ecpg_update_declare_statement()\r\n> EXEC SQL AT con1 OPEN cur_1;\r\n> \r\n> EXEC SQL AT con1 DECLARE cur_2 CURSOR FOR stmt_1; //★2 ECPGopen()\r\n> --> ecpg_update_declare_statement()\r\n> EXEC SQL AT con1 OPEN cur_2;\r\n> Memory leaks\r\n> \r\n> EXEC SQL FETCH cur_2 INTO:FooBar, :DooDad;\r\n> EXEC SQL COMMIT;\r\n> EXEC SQL DISCONNECT ALL;\r\n> \r\n> \r\n> We should free p->cursor_name before p->cursor_name = ecpg_strdup(cursor_name,\r\n> lineno).\r\n> #########################################################################\r\n> ####\r\n> \t\tif(p->cursor_name)\r\n> \t\t\tecpg_free(p->cursor_name);\r\n> \t\tp->cursor_name = ecpg_strdup(cursor_name,lineno);\r\n> #########################################################################\r\n> ##\r\n> Here is a patch.\r\n> \r\n> Best Regards!\r\n> \r\n> \r\n\r\n",
"msg_date": "Mon, 10 Jun 2019 09:52:10 +0000",
"msg_from": "\"Matsumura, Ryo\" <matsumura.ryo@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Hi\r\n\r\n> But I wonder if it is safe that the old cursor_name is forgotten.\r\nold cursor_name is not assigned to other pointers, so it is safe that the old cursor_name is forgotten.\r\n\r\nBest Regards!\r\n\r\n-----Original Message-----\r\nFrom: Matsumura, Ryo/松村 量 \r\nSent: Monday, June 10, 2019 5:52 PM\r\nTo: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>; pgsql-hackers@lists.postgresql.org\r\nCc: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>\r\nSubject: RE: [PATCH] memory leak in ecpglib\r\n\r\nHi\r\n\r\nOn Mon. June. 10, 2019 at 09:54 AM Zhang, Jie < zhangjie2@cn.fujitsu.com > wrote:\r\n> \r\n> Memory leaks occur when the ecpg_update_declare_statement() is called \r\n> the second time.\r\n\r\nCertainly it is.\r\nBut I wonder if it is safe that the old cursor_name is forgotten.\r\n\r\nRegards\r\nRyo Matsumura\r\n\r\n\r\n> -----Original Message-----\r\n> From: Zhang, Jie [mailto:zhangjie2@cn.fujitsu.com]\r\n> Sent: Monday, June 10, 2019 9:54 AM\r\n> To: pgsql-hackers@lists.postgresql.org\r\n> Cc: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>\r\n> Subject: [PATCH] memory leak in ecpglib\r\n> \r\n> Hi all\r\n> \r\n> Memory leaks occur when the ecpg_update_declare_statement() is called \r\n> the second time.\r\n> \r\n> FILE:postgresql\\src\\interfaces\\ecpg\\ecpglib\\prepare.c\r\n> void\r\n> ecpg_update_declare_statement(const char *declared_name, const char \r\n> *cursor_name, const int lineno) {\r\n> \tstruct declared_statement *p = NULL;\r\n> \r\n> \tif (!declared_name || !cursor_name)\r\n> \t\treturn;\r\n> \r\n> \t/* Find the declared node by declared name */\r\n> \tp = ecpg_find_declared_statement(declared_name);\r\n> \tif (p)\r\n> \t\tp->cursor_name = ecpg_strdup(cursor_name, lineno); ★ }\r\n> ecpg_strdup() returns a pointer to a null-terminated byte string, \r\n> which is a duplicate of the string pointed to by str.\r\n> The memory obtained is done dynamically using malloc and hence it can \r\n> be freed using free().\r\n> \r\n> When the 
ecpg_update_declare_statement() is called for the second \r\n> time, the memory allocated for p->cursor_name is not freed.\r\n> \r\n> For example:\r\n> \r\n> EXEC SQL BEGIN DECLARE SECTION;\r\n> char *selectString = \"SELECT * FROM foo;\";\r\n> int FooBar;\r\n> char DooDad[17];\r\n> EXEC SQL END DECLARE SECTION;\r\n> \r\n> EXEC SQL CONNECT TO postgres@localhost:5432 AS con1 USER postgres;\r\n> \r\n> EXEC SQL AT con1 DECLARE stmt_1 STATEMENT;\r\n> EXEC SQL AT con1 PREPARE stmt_1 FROM :selectString;\r\n> \r\n> EXEC SQL AT con1 DECLARE cur_1 CURSOR FOR stmt_1; //★1 ECPGopen()\r\n> --> ecpg_update_declare_statement()\r\n> EXEC SQL AT con1 OPEN cur_1;\r\n> \r\n> EXEC SQL AT con1 DECLARE cur_2 CURSOR FOR stmt_1; //★2 ECPGopen()\r\n> --> ecpg_update_declare_statement()\r\n> EXEC SQL AT con1 OPEN cur_2;\r\n> Memory leaks\r\n> \r\n> EXEC SQL FETCH cur_2 INTO:FooBar, :DooDad;\r\n> EXEC SQL COMMIT;\r\n> EXEC SQL DISCONNECT ALL;\r\n> \r\n> \r\n> We should free p->cursor_name before p->cursor_name = \r\n> ecpg_strdup(cursor_name, lineno).\r\n> ######################################################################\r\n> ###\r\n> ####\r\n> \t\tif(p->cursor_name)\r\n> \t\t\tecpg_free(p->cursor_name);\r\n> \t\tp->cursor_name = ecpg_strdup(cursor_name,lineno); \r\n> ######################################################################\r\n> ###\r\n> ##\r\n> Here is a patch.\r\n> \r\n> Best Regards!\r\n> \r\n> \r\n\r\n\n\n",
"msg_date": "Tue, 11 Jun 2019 04:10:02 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Dear Zhang,\r\n\r\n# I resend the email\r\n\r\nThank you for reporting a bug. I didn't care about this case.\r\n\r\n>> We should free p->cursor_name before p->cursor_name = \r\n>> ecpg_strdup(cursor_name, lineno).\r\n\r\nI'm wondering whether this approach is correct or not.\r\nIf your patch is committed, in your example, any operation for cur1 will not be accepted.\r\n\r\nMy idea is changing ecpg_update_declare_statement() for permitting one-to-many relation between a declared name and cursors.\r\nAn example is as below:\r\n\r\np = ecpg_find_declared_statement(declared_name);\r\nif (p && p->cursor_name == cursor_name)\r\n\tp->cursor_name = ecpg_strdup(cursor_name, lineno);\r\n\r\nDo you have any suggestions or comments for this?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n",
"msg_date": "Tue, 11 Jun 2019 06:35:36 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Hi Kuroda,\r\n\r\n>If your patch is committed, in your example, any operation for cur1 will not be accepted.\r\nAlthough the return value after calling ecpg_get_con_name_by_cursor_name(cur1) is NULL,\r\nin ecpg_get_connection(), actual_connection will be returned.\r\nso, operation for cur1 will be accepted,\r\n\r\n>p = ecpg_find_declared_statement(declared_name);\r\n>if (p && p->cursor_name == cursor_name)\r\n>p->cursor_name = ecpg_strdup(cursor_name, lineno);\r\nBecause the initial value of p->cursor_name is NULL, p->cursor_name will never be updated.\r\n\r\nBest Regards!\r\n\r\n-----Original Message-----\r\nFrom: Kuroda, Hayato/黒田 隼人 \r\nSent: Tuesday, June 11, 2019 2:36 PM\r\nTo: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>; Matsumura, Ryo/松村 量 <matsumura.ryo@jp.fujitsu.com>; pgsql-hackers@lists.postgresql.org\r\nSubject: RE: [PATCH] memory leak in ecpglib\r\n\r\nDear Zhang,\r\n\r\n# I resend the email\r\n\r\nThank you for reporting a bug. I didn't care about this case.\r\n\r\n>> We should free p->cursor_name before p->cursor_name = \r\n>> ecpg_strdup(cursor_name, lineno).\r\n\r\nI'm wondering whether this approach is correct or not.\r\nIf your patch is committed, in your example, any operation for cur1 will not be accepted.\r\n\r\nMy idea is changing ecpg_update_declare_statement() for permitting one-to-many relation between a declared name and cursors.\r\nAn example is as below:\r\n\r\np = ecpg_find_declared_statement(declared_name);\r\nif (p && p->cursor_name == cursor_name)\r\np->cursor_name = ecpg_strdup(cursor_name, lineno);\r\n\r\nDo you have any suggestions or comments for this?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n\r\n\n\n",
"msg_date": "Tue, 11 Jun 2019 08:14:06 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Dear Zhang,\r\n\r\nSorry for my late reply.\r\nI'm now planning to refactor this functionality:\r\nhttps://www.postgresql.org/message-id/OSAPR01MB20048298F882D25897C6AB23F5EF0@OSAPR01MB2004.jpnprd01.prod.outlook.com\r\n\r\nIf DECLARE STATEMENT and other related statements are enabled only preprocessing process, this problem will be easily solved.\r\n\r\nHow about it?\r\n\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n\r\n\r\n-----Original Message-----\r\nFrom: Zhang, Jie/张 杰 \r\nSent: Tuesday, June 11, 2019 5:14 PM\r\nTo: Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com>; Matsumura, Ryo/松村 量 <matsumura.ryo@jp.fujitsu.com>; pgsql-hackers@lists.postgresql.org\r\nSubject: RE: [PATCH] memory leak in ecpglib\r\n\r\nHi Kuroda,\r\n\r\n>If your patch is committed, in your example, any operation for cur1 will not be accepted.\r\nAlthough the return value after calling ecpg_get_con_name_by_cursor_name(cur1) is NULL,\r\nin ecpg_get_connection(), actual_connection will be returned.\r\nso, operation for cur1 will be accepted,\r\n\r\n>p = ecpg_find_declared_statement(declared_name);\r\n>if (p && p->cursor_name == cursor_name)\r\n>p->cursor_name = ecpg_strdup(cursor_name, lineno);\r\nBecause the initial value of p->cursor_name is NULL, p->cursor_name will never be updated.\r\n\r\nBest Regards!\r\n\r\n-----Original Message-----\r\nFrom: Kuroda, Hayato/黒田 隼人 \r\nSent: Tuesday, June 11, 2019 2:36 PM\r\nTo: Zhang, Jie/张 杰 <zhangjie2@cn.fujitsu.com>; Matsumura, Ryo/松村 量 <matsumura.ryo@jp.fujitsu.com>; pgsql-hackers@lists.postgresql.org\r\nSubject: RE: [PATCH] memory leak in ecpglib\r\n\r\nDear Zhang,\r\n\r\n# I resend the email\r\n\r\nThank you for reporting a bug. I didn't care about this case.\r\n\r\n>> We should free p->cursor_name before p->cursor_name = \r\n>> ecpg_strdup(cursor_name, lineno).\r\n\r\nI'm wondering whether this approach is correct or not.\r\nIf your patch is committed, in your example, any operation for cur1 will not be accepted.\r\nMy idea is changing ecpg_update_declare_statement() for permitting one-to-many relation between a declared name and cursors.\r\nAn example is as below:\r\n\r\np = ecpg_find_declared_statement(declared_name);\r\nif (p && p->cursor_name == cursor_name)\r\np->cursor_name = ecpg_strdup(cursor_name, lineno);\r\n\r\nDo you have any suggestions or comments for this?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n\r\n\r\n",
"msg_date": "Wed, 19 Jun 2019 02:43:56 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Hi,\n\n> Memory leaks occur when the ecpg_update_declare_statement() is called\n> the second time.\n> ...\n\nI'm going to commit this patch HEAD, this way we can see if it works as\nadvertised. It does not hurt if it gets removed by a rewrite.\n\nThanks for finding the issue,\n\nMichael\n-- \nMichael Meskes\nMichael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)\nMeskes at (Debian|Postgresql) dot Org\nJabber: michael at xmpp dot meskes dot org\nVfL Borussia! Força Barça! SF 49ers! Use Debian GNU/Linux, PostgreSQL\n\n\n\n",
"msg_date": "Tue, 02 Jul 2019 04:01:24 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] memory leak in ecpglib"
},
{
"msg_contents": "Dear Meskes, Zhang,\r\n\r\nI think this modification is not enough, and I have another idea. \r\n\r\n>> If your patch is committed, in your example, any operation for cur1 will not be accepted.\r\n> Although the return value after calling ecpg_get_con_name_by_cursor_name(cur1)\r\n> is NULL, in ecpg_get_connection(), actual_connection will be returned.\r\n> so, operation for cur1 will be accepted,\r\n\r\nDid you mention about this code?\r\n(Copied from ECPGfetch)\r\n\r\n```\r\nreal_connection_name = ecpg_get_con_name_by_cursor_name(cursor_name);\r\nif (real_connection_name == NULL)\r\n{\r\n\t/*\r\n\t* If can't get the connection name by cursor name then using\r\n\t* connection name coming from the parameter connection_name\r\n\t*/\r\n\treal_connection_name = connection_name;\r\n}\r\n```\r\n\r\nIf so, I think this approach is wrong. This connection_name corresponds to the following con1.\r\n\r\n```\r\nEXEC SQL AT con1 FETCH cur1 ...\r\n ^^^^\r\n```\r\n\r\nTherefore the following FETCH statement will fail\r\nbecause the application forgets the connection of cur_1.\r\n\r\n```\r\nEXEC SQL AT con1 DECLARE stmt_1 STATEMENT;\r\nEXEC SQL PREPARE stmt_1 FROM :selectString;\r\nEXEC SQL DECLARE cur_1 CURSOR FOR stmt_1;\r\nEXEC SQL OPEN cur_1; \r\nEXEC SQL DECLARE cur_2 CURSOR FOR stmt_1;\r\nEXEC SQL OPEN cur_2;\r\nEXEC SQL FETCH cur_1;\r\n```\r\n\r\n\r\nI think the g_declared_list is not needed for managing connection. I was wrong.\r\nWe should treat DECLARE STATEMENT as declarative, like #include or #define in C macro.\r\n\r\nPlease send me your reply.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFujitsu LIMITED\r\n\r\n\r\n",
"msg_date": "Mon, 8 Jul 2019 08:27:52 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] memory leak in ecpglib"
}
] |
[
{
"msg_contents": "Hi all\r\n\r\nIn src\\backend\\utils\\misc\\guc.c, I found a potential memory leak.\r\n\r\nmake_absolute_path() return a malloc'd copy, we should free memory before the function return false.\r\n----------------------------------------------------------------------------\r\nSelectConfigFiles(const char *userDoption, const char *progname)\r\n{\r\n......\r\n\t/* configdir is -D option, or $PGDATA if no -D */\r\n\tif (userDoption)\r\n\t\tconfigdir = make_absolute_path(userDoption); ★\r\n\telse\r\n\t\tconfigdir = make_absolute_path(getenv(\"PGDATA\")); ★\r\n\r\n\tif (configdir && stat(configdir, &stat_buf) != 0)\r\n\t{\r\n\t\twrite_stderr(\"%s: could not access directory \\\"%s\\\": %s\\n\",\r\n\t\t\t\t\t progname,\r\n\t\t\t\t\t configdir,\r\n\t\t\t\t\t strerror(errno));\r\n\t\tif (errno == ENOENT)\r\n\t\t\twrite_stderr(\"Run initdb or pg_basebackup to initialize a PostgreSQL data directory.\\n\");\r\n\t\t★// Need to free memory of configdir\r\n\t\treturn false;\r\n\t}\r\n......\r\n---------------------------------------------------------------------------\r\n\r\nRefer to the following files for the implementation of make_absolute_path().\r\n\r\nfile: src\\port\\path.c\r\n/*\r\n * make_absolute_path\r\n *\r\n * If the given pathname isn't already absolute, make it so, interpreting\r\n * it relative to the current working directory.\r\n *\r\n * Also canonicalizes the path. The result is always a malloc'd copy.",
"msg_date": "Mon, 10 Jun 2019 01:58:48 +0000",
"msg_from": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix potential memoryleak in guc.c"
},
{
"msg_contents": "\"Zhang, Jie\" <zhangjie2@cn.fujitsu.com> writes:\n> In src\\backend\\utils\\misc\\guc.c, I found a potential memory leak.\n> make_absolute_path() return a malloc'd copy, we should free memory before the function return false.\n\nIf SelectConfigFiles were executed more than once per postmaster\nlaunch, this might be worth adding code for ... but as-is, I'm\ndubious. There are a few tens of KB of other one-time leaks\nthat we don't worry about removing.\n\nEven more to the point, the particular code path you're complaining\nabout is a failure exit that will lead to immediate process\ntermination, so there really is no point in adding code there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jun 2019 22:12:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix potential memoryleak in guc.c"
}
] |
[
{
"msg_contents": "Hi\n\nCurrently the documentation for the default role \"pg_signal_backend\" states,\nsomewhat ambiguously, \"Send signals to other backends (eg: cancel query, terminate)\",\ngiving the impression other signals (e.g. SIGHUP) can be sent too, which is\ncurrently not the case.\n\nAttached patch clarifies this, adds a descriptive paragraph (similar to what\nthe other default roles have) and a link to the \"Server Signaling Functions\"\nsection.\n\nPatch applies cleanly to HEAD and REL_11_STABLE.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 10 Jun 2019 11:06:54 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "doc: clarify \"pg_signal_backend\" default role"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 11:06:54AM +0900, Ian Barwick wrote:\n> Currently the documentation for the default role \"pg_signal_backend\" states,\n> somewhat ambiguously, \"Send signals to other backends (eg: cancel query, terminate)\",\n> giving the impression other signals (e.g. SIGHUP) can be sent too, which is\n> currently not the case.\n\n(Perhaps you should avoid cross-posting?)\n\nOK, I can see your point.\n\n> Attached patch clarifies this, adds a descriptive paragraph (similar to what\n> the other default roles have) and a link to the \"Server Signaling Functions\"\n> section.\n\n+1 for being more descriptive here.\n--\nMichael",
"msg_date": "Mon, 10 Jun 2019 14:35:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify \"pg_signal_backend\" default role"
},
{
"msg_contents": "Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> Currently the documentation for the default role \"pg_signal_backend\" states,\n> somewhat ambiguously, \"Send signals to other backends (eg: cancel query, terminate)\",\n> giving the impression other signals (e.g. SIGHUP) can be sent too, which is\n> currently not the case.\n> Attached patch clarifies this, adds a descriptive paragraph (similar to what\n> the other default roles have) and a link to the \"Server Signaling Functions\"\n> section.\n\nPushed with minor tweaking.\n\n(Note: patches are less likely to fall through the cracks if you\nadd them to the commitfest page.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Aug 2019 18:04:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: clarify \"pg_signal_backend\" default role"
},
{
"msg_contents": "On 8/28/19 7:04 AM, Tom Lane wrote:\n> Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n>> Currently the documentation for the default role \"pg_signal_backend\" states,\n>> somewhat ambiguously, \"Send signals to other backends (eg: cancel query, terminate)\",\n>> giving the impression other signals (e.g. SIGHUP) can be sent too, which is\n>> currently not the case.\n>> Attached patch clarifies this, adds a descriptive paragraph (similar to what\n>> the other default roles have) and a link to the \"Server Signaling Functions\"\n>> section.\n> \n> Pushed with minor tweaking.\n\nThanks!\n\n> (Note: patches are less likely to fall through the cracks if you\n> add them to the commitfest page.)\n\nYup, though I was intending to add that one together with a couple of\nrelated minor doc patches to the next CF.\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 28 Aug 2019 10:13:38 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: clarify \"pg_signal_backend\" default role"
}
] |
[
{
"msg_contents": "Hello.\n\nIn pg_upgrade, prep status is shown in English even if LANG is\nset to other languages.\n\n$ LANG=ja_JP.UTF8 pg_upgrade ...\n<\"Performing Consistency Checks on Old Live Server\" in Japanese>\n--------------------------------------------------\nChecking cluster versions ok\nChecking database user is the install user ok\nChecking database connection settings ok\nChecking for prepared transactions ok\n...\n<\"*Clusters are compatible*\" in Japanese>\n\n\nprep_status is marked as GETTEXT_TRIGGERS but actually doesn't\ntranslate. I suppose the reason is we don't have a general and\nportable means to align the message strings containing non-ascii\ncharacters.\n\nI'd like to propose to append \" ... \" instead of aligning messages.\n\nChecking cluster versions ... ok\nChecking database user is the install user ... ok\nChecking database connection settings ... ok\nChecking for prepared transactions ... ok\n\nIf we don't do that, translation lines in po files are\nuseless. prep_status must be removed from GETTEXT_TRIGGERS, and a\ncomment added that explains the reason for not translating.\n\nAny opinions?\n\nregards.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 10 Jun 2019 13:57:14 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade: prep_status doesn't translate messages"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 01:57:14PM +0900, Kyotaro Horiguchi wrote:\n> If we don't do that, translation lines in po files are\n> useless. prep_status must be removed from GETTEXT_TRIGGERS, and a\n> comment that explains the reason for not translating.\n> \n> Any opinions?\n\nI agree with your point that it should be an all-or-nothing, and not\nsomething in the middle. Now, I would fall into the category of\npeople who would prefer making the full set of contents translated,\nand there has been some work in this area recently:\nhttps://www.postgresql.org/message-id/20170523002827.lzc2jkzh2gubclqb@alvherre.pgsql\n--\nMichael",
"msg_date": "Mon, 10 Jun 2019 16:48:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: prep_status doesn't translate messages"
},
{
"msg_contents": "Hello.\n\nAt Mon, 10 Jun 2019 16:48:42 +0900, Michael Paquier\n<michael@paquier.xyz> wrote in <20190610074842.GH2199@paquier.xyz>\n> On Mon, Jun 10, 2019 at 01:57:14PM +0900, Kyotaro Horiguchi wrote:\n> > If we don't do that, translation lines in po files are\n> > useless. prep_status must be removed from GETTEXT_TRIGGERS, and a\n> > comment that explains the reason for not translating.\n> >\n> > Any opinions?\n>\n> I agree with your point that it should be an all-or-nothing, and not\n> something in the middle. Now, I would fall into the category of\n> people who would prefer making the full set of contents translated,\n\nI'm on the same side. Do you think of this as an issue for 12 or\nfor later versions? I think there is no risk in changing it now so\nI wish the change to be contained in the 12 release.\n\n\n> and there has been some work in this area recently:\n> https://www.postgresql.org/message-id/20170523002827.lzc2jkzh2gubclqb@alvherre.pgsql\n\nThanks for the pointer. I'm seeing the result of the discussion\nnow. Apart from the discussion of translate-or-not decision,\nthere can be a discussion how we can reduce the burden of\ntranslation work. I was a bit tired to translate something like\nthe followings:\n\nold and new pg_controldata block sizes are invalid or do not match\nold and new pg_controldata maximum relation segment sizes are invalid\nor do not match\nold and new pg_controldata WAL block sizes are invalid or do not match\n...\n\nI'm not sure where is the compromisable point between burden of\ntranslators and programmers, though.\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 11 Jun 2019 12:05:01 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: prep_status doesn't translate messages"
},
{
"msg_contents": "On 2019-Jun-11, Kyotaro Horiguchi wrote:\n\n> Thanks for the pointer. I'm seeing the result of the discussion\n> now. Apart from the discussion of translate-or-not decision,\n> there can be a discussion how we can reduce the burden of\n> translation work. I was a bit tired to translate something like\n> the followings:\n> \n> old and new pg_controldata block sizes are invalid or do not match\n> old and new pg_controldata maximum relation segment sizes are invalid\n> or do not match\n> old and new pg_controldata WAL block sizes are invalid or do not match\n> ...\n> \n> I'm not sure where is the compromisable point between burden of\n> translators and programmers, though.\n\nI think the problem with those messages is that they are poorly\nworded/styled, but I haven't tried to figure out how to make them\nbetter. That may also fix the translation burden, not sure. If you\nhave proposals for improvement, let's hear them.\n\nHere's a quick idea. We already have this:\n\nmsgid \"The target cluster lacks some required control information:\\n\"\nmsgid \" checkpoint next XID\\n\"\nmsgid \" latest checkpoint next OID\\n\"\n\nso this gives me the idea that one way to fix the problem you mention is\nsomething like this:\n\nmsgid \"The following source and target pg_controldata items do not match:\"\nmsgid \" block size\"\nmsgid \" maximum relation segment size\"\n\netc. (One thing to note is that those strings already exist in the .po\nfiles, so already translated). Obviously needs a bit of code rework\n(and the first new one should use the plural stuff, because it's likely\nit'll only be one item that does not match). Also will need separate\nmessages (with plurals) for\n\nmsgid \"The following source pg_controldata items are invalid:\"\nmsgid \"The following target pg_controldata items are invalid:\"\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 10:11:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: prep_status doesn't translate messages"
},
{
"msg_contents": "Hello.\n\nOn Tue, Jun 11, 2019 at 11:11 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I think the problem with those messages is that they are poorly\n> worded/styled, but I haven't tried to figure out how to make them\n> better. That may also fix the translation burden, not sure. If you\n> have proposals for improvement, let's hear them.\n\nI didn't think so deeply. What I had in mind at the time was\nsplitting-out of the variable part from template part, as we have many\nexisting examples.\n\n> Here's a quick idea. We already have this:\n>\n> msgid \"The target cluster lacks some required control information:\\n\"\n> msgid \" checkpoint next XID\\n\"\n> msgid \" latest checkpoint next OID\\n\"\n\n== By the way,\n\nI found a similar but to-exit message:\n\ncontroldata.c:175\n| if (cluster == &old_cluster)\n| pg_fatal(\"The source cluster lacks cluster state information:\\n\");\n\nThe colon should be a period?\n\n== END OF \"By the way\"\n\n> so this gives me the idea that one way to fix the problem you mention is\n> something like this:\n>\n> msgid \"The following source and target pg_controldata items do not match:\"\n> msgid \" block size\"\n> msgid \" maximum relation segment size\"\n> etc. (One thing to note is that those strings already exist in the .po\n> files, so already translated). Obviously needs a bit of code rework\n\nEach of the messages is pg_fatal'ed. So the following instead will\nwork:\n\npg_fatal(\"The source and target pg_controldata item do not match:%s\",\n _(\" maximum alignment\\n\"));\n\nThat seems closer to the guideline. (But I don't think\n\" maximum alignment\\n\" is not proper as a translation unit..)\n\n> (and the first new one should use the plural stuff, because it's likely\n> it'll only be one item that does not match). Also will need separate\n> messages (with plurals) for\n>\n> msgid \"The following source pg_controldata items are invalid:\"\n> msgid \"The following target pg_controldata items are invalid:\"\n\nSomething like the attached works that way.\n\nBy the way I'm a bit annoyed also by the (seemingly) random occurrence\nof \"old/new\" and \"source/target\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 12 Jun 2019 11:20:00 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: prep_status doesn't translate messages"
}
] |
[
{
"msg_contents": "Several TAP test suites have a need to translate from an msys path to a\nWindows path. They currently use two ways to do that:\n\n1. TestLib::real_dir, new in v11, is sound but works for directories only.\n2. The $vfs_path approach is semi-private to PostgresNode.pm and 017_shm.pl,\n and it does not work if the file falls in a mount point other than \"/\".\n For example, it has been doing the wrong thing when builddir is\n /c/nm/postgresql (falling in mount point /c).\n\nI'd like to fix the mount point problem and consolidate these two methods. I\nplan to call it TestLib::perl2host, since it translates a path in Perl's\nnotion of the filesystem to a path in the @host@ notion of the filesystem.\nAttached.",
"msg_date": "Sun, 9 Jun 2019 21:58:38 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Fix testing on msys when builddir is under /c mount point"
},
{
"msg_contents": "\nOn 6/10/19 12:58 AM, Noah Misch wrote:\n> Several TAP test suites have a need to translate from an msys path to a\n> Windows path. They currently use two ways to do that:\n>\n> 1. TestLib::real_dir, new in v11, is sound but works for directories only.\n> 2. The $vfs_path approach is semi-private to PostgresNode.pm and 017_shm.pl,\n> and it does not work if the file falls in a mount point other than \"/\".\n> For example, it has been doing the wrong thing when builddir is\n> /c/nm/postgresql (falling in mount point /c).\n>\n> I'd like to fix the mount point problem and consolidate these two methods. I\n> plan to call it TestLib::perl2host, since it translates a path in Perl's\n> notion of the filesystem to a path in the @host@ notion of the filesystem.\n> Attached.\n\n\nLooks sane enough. I think I had to work round this recently by using a\nWindows symlink/junction.\n\n\nI haven't tested it.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 10 Jun 2019 10:40:34 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix testing on msys when builddir is under /c mount point"
}
] |
[
{
"msg_contents": "Hi,\n\n I am reading the code that generating plan for `rowmarks` of Postgres\n9.4 (\nhttps://github.com/postgres/postgres/blob/REL9_4_STABLE/src/backend/optimizer/plan/planner.c#L2070\n)\n\n After emitting the `LockRows` plannode, the results cannot be considered\nin order, and there are comments there:\n\n/*\n * The result can no longer be assumed sorted, since locking might\n * cause the sort key columns to be replaced with new values.\n */\n\n\nI do not understand the reason and after some guess, I come up with a case:\n\n```\ncreate table t(c int);\ninsert into t values (1), (2), (3), (4);\n\n-- Transaction 1\nbegin;\nupdate t set c = 999 where c = 1; -- change the smallest value to a very\nbig one\n-- transaction 1 not commit yet\n\n-- Transaction 2, another session\nbegin;\nselect * from t order by c limit 1 for update; -- Want to find the smallest\nvalue, and then update it\n-- this transaction will be blocked by transaction 1\n\n-- then, transaction 1 commit and transaction 2 will return the tuple with\nvalue 999\n```\n\nI think the reason is that EvalPlanQual does not check the order.\n\nI try this case under mysql, it will output 2 (which is the correct value\nfor the meaning of smallest).\n\nSo, in summary, my questions are:\n\n1. why after emitting `lockrows` plannode, the result can no longer be\nassumed sorted?\n2. Is the case above a bug or a feature?\n\nThanks!\n\nBest Regards,\nZhenghua Lyu\n",
"msg_date": "Mon, 10 Jun 2019 14:00:57 +0800",
"msg_from": "Zhenghua Lyu <zlv@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Questions of 'for update'"
},
{
"msg_contents": "Hello,\n\nOn Mon, Jun 10, 2019 at 11:31 AM Zhenghua Lyu <zlv@pivotal.io> wrote:\n\n>\n> 1. why after emitting `lockrows` plannode, the result can no longer be\n> assumed sorted?\n>\nThe plan corresponding to your select query is as following:\n QUERY PLAN\n-------------------------------\nLimit\n -> LockRows\n -> Sort\n Sort Key: c\n -> Seq Scan on t\n\nIn LockRows node, the executer tries to lock each tuple which are provided\nby the Sort node. In the meantime, it's possible that some transaction\nupdates a tuple (which is to be locked by the current transaction) and gets\ncommitted. These changes will be visible to the current transaction if it\nhas a transaction isolation level lesser than REPEATABLE_READ. So, the\ncurrent transaction needs to check whether the updated tuple still\nsatisfies the qual check (in your query, there is no quals, so it always\nsatisfies). If it satisfies, it returns the updated tuple.\nSince, the sort has been performed by an earlier node, the output will no\nlonger be sorted.\n\n\n\n> 2. Is the case above a bug or a feature?\n>\n> IMHO, it looks like an expected behaviour of a correct transaction\nmanagement implementation. The argument can be that the snapshot is\nconsistent throughout all the nodes. Whatever tuple you've fetched from the\nbottom level is locked correctly.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n",
"msg_date": "Mon, 10 Jun 2019 12:20:08 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions of 'for update'"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jun 10, 2019 at 3:50 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> On Mon, Jun 10, 2019 at 11:31 AM Zhenghua Lyu <zlv@pivotal.io> wrote:\n>> 2. Is the case above a bug or a feature?\n>>\n> IMHO, it looks like an expected behaviour of a correct transaction management implementation.\n\nThis is documented behavior; see the Caution for The Locking Clause on\nthe SELECT reference page:\nhttps://www.postgresql.org/docs/11/sql-select.html\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 10 Jun 2019 16:12:30 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions of 'for update'"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 12:42 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Mon, Jun 10, 2019 at 3:50 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\n> wrote:\n> > On Mon, Jun 10, 2019 at 11:31 AM Zhenghua Lyu <zlv@pivotal.io> wrote:\n> >> 2. Is the case above a bug or a feature?\n> >>\n> > IMHO, it looks like an expected behaviour of a correct transaction\n> management implementation.\n>\n> This is documented behavior; see the Caution for The Locking Clause on\n> the SELECT reference page:\n> https://www.postgresql.org/docs/11/sql-select.html\n>\n>\nGreat. It also suggests a workaround.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n",
"msg_date": "Mon, 10 Jun 2019 12:52:22 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Questions of 'for update'"
},
{
"msg_contents": "Thanks so much.\n\nI understand now.\n\nBest Regards,\nZhenghua Lyu\n\n\nOn Mon, Jun 10, 2019 at 3:22 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nwrote:\n\n> On Mon, Jun 10, 2019 at 12:42 PM Etsuro Fujita <etsuro.fujita@gmail.com>\n> wrote:\n>\n>> Hi,\n>>\n>> On Mon, Jun 10, 2019 at 3:50 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\n>> wrote:\n>> > On Mon, Jun 10, 2019 at 11:31 AM Zhenghua Lyu <zlv@pivotal.io> wrote:\n>> >> 2. Is the case above a bug or a feature?\n>> >>\n>> > IMHO, it looks like an expected behaviour of a correct transaction\n>> management implementation.\n>>\n>> This is documented behavior; see the Caution for The Locking Clause on\n>> the SELECT reference page:\n>> https://www.postgresql.org/docs/11/sql-select.html\n>>\n>>\n>> Great. It also suggests a workaround.\n>\n>\n> --\n> Thanks & Regards,\n> Kuntal Ghosh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Mon, 10 Jun 2019 16:44:30 +0800",
"msg_from": "Zhenghua Lyu <zlv@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Questions of 'for update'"
}
]
[
{
"msg_contents": "HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */\n\nIt is a tuple which has been deleted AND committed, but before the delete\nthere is a transaction started but not committed. Let's call this transaction\nTransaction A.\n\nIf we create an index at this time, let's call this index Index A, it still\nindexes this record. My question is why this is needed.\n\nThe only reason I can think of (maybe also not reasonable enough) is:\nIf we index like this and the isolation level of transaction A is\nserializable, it is possible that a query in transaction A can use Index\nA since it contains the snapshot data from when transaction A began.\nThis reason may not be reasonable enough because transaction A perhaps\nshould not see index A at all.\n",
"msg_date": "Mon, 10 Jun 2019 14:45:25 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 12:15 PM Alex <zhihui.fan1213@gmail.com> wrote:\n\n> HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */\n>\n> It is a tuple which has been deleted AND committed but before the delete\n> there is a transaction started but not committed. Let call this transaction\n> as Transaction A.\n>\n> if we create index on this time, Let's call this index as Index A, it\n> still index this record. my question is why need this.\n>\nIn this case, the changes of the tuple are not visible yet. Now suppose,\nyour transaction A is serializable and you've another serializable\ntransaction B which can see the index A. It generates a plan that requires\nto fetch the deleted tuple through an index scan. If the tuple is not\npresent in the index, how are you going to create a conflict edge between\ntransaction A and transaction B?\n\nBasically, you need to identify the following clause to detect serializable\nconflicts:\nTransaction A precedes transaction B. (Because, transaction A has deleted a\ntuple and it's not visible to transaction B)\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n",
"msg_date": "Mon, 10 Jun 2019 12:58:37 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 3:28 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nwrote:\n\n> On Mon, Jun 10, 2019 at 12:15 PM Alex <zhihui.fan1213@gmail.com> wrote:\n>\n>> HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */\n>>\n>> It is a tuple which has been deleted AND committed but before the delete\n>> there is a transaction started but not committed. Let call this transaction\n>> as Transaction A.\n>>\n>> if we create index on this time, Let's call this index as Index A, it\n>> still index this record. my question is why need this.\n>>\n> In this case, the changes of the tuple is not visible yet. Now suppose,\n> your transaction A is serializable and you've another serializable\n> transaction B which can see the index A. It generates a plan that requires\n> to fetch the deleted tuple through an index scan. If the tuple is not\n> present in the index, how are you going to create a conflict edge between\n> transaction A and transaction B?\n>\n> Basically, you need to identify the following clause to detect\n> serializable conflicts:\n> Transaction A precedes transaction B. (Because, transaction A has deleted\n> a tuple and it's not visible to transaction B)\n>\n>\nThanks Ghosh. Looks like your answer is similar to my previous point\n(transaction is serializable). Actually, if transaction B can't see\nthe \"deleted\" which has been committed, should it see the index A which is\ncreated after the \"delete\" transaction?\n\n\n-- \n> Thanks & Regards,\n> Kuntal Ghosh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Mon, 10 Jun 2019 15:59:58 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 1:30 PM Alex <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>\n> On Mon, Jun 10, 2019 at 3:28 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>>\n>> On Mon, Jun 10, 2019 at 12:15 PM Alex <zhihui.fan1213@gmail.com> wrote:\n>>>\n>>> HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */\n>>>\n>>> It is a tuple which has been deleted AND committed but before the delete there is a transaction started but not committed. Let call this transaction as Transaction A.\n>>>\n>>> if we create index on this time, Let's call this index as Index A, it still index this record. my question is why need this.\n>>>\n>> In this case, the changes of the tuple is not visible yet. Now suppose, your transaction A is serializable and you've another serializable transaction B which can see the index A. It generates a plan that requires to fetch the deleted tuple through an index scan. If the tuple is not present in the index, how are you going to create a conflict edge between transaction A and transaction B?\n>>\n>> Basically, you need to identify the following clause to detect serializable conflicts:\n>> Transaction A precedes transaction B. (Because, transaction A has deleted a tuple and it's not visible to transaction B)\n>>\n>\n> thanks Ghosh. Looks your answer is similar with my previous point (transaction is serializable). actually if the transaction B can't see the “deleted\" which has been committed, should it see the index A which is created after the \"delete\" transaction?\n>\nI think what I'm trying to say is different.\n\nFor my case, the sequence is as following:\n1. Transaction A has deleted a tuple, say t1 and got committed.\n2. Index A has been created successfully.\n3. Now, transaction B starts and use the index A to fetch the tuple\nt1. While doing visibility check, transaction B gets to know that t1\nhas been deleted by a committed transaction A, so it can't see the\ntuple. 
But, it creates a dependency edge that transaction A precedes\ntransaction B. This edge is required to detect a serializable conflict\nfailure.\n\nIf you don't create the index entry, it'll not be able to create that edge.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Jun 2019 13:40:21 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 4:10 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nwrote:\n\n> On Mon, Jun 10, 2019 at 1:30 PM Alex <zhihui.fan1213@gmail.com> wrote:\n> >\n> > On Mon, Jun 10, 2019 at 3:28 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\n> wrote:\n> >>\n> >> On Mon, Jun 10, 2019 at 12:15 PM Alex <zhihui.fan1213@gmail.com> wrote:\n> >>>\n> >>> HEAPTUPLE_RECENTLY_DEAD, /* tuple is dead, but not deletable yet */\n> >>>\n> >>> It is a tuple which has been deleted AND committed but before the\n> delete there is a transaction started but not committed. Let call this\n> transaction as Transaction A.\n> >>>\n> >>> if we create index on this time, Let's call this index as Index A, it\n> still index this record. my question is why need this.\n> >>>\n> >> In this case, the changes of the tuple is not visible yet. Now suppose,\n> your transaction A is serializable and you've another serializable\n> transaction B which can see the index A. It generates a plan that requires\n> to fetch the deleted tuple through an index scan. If the tuple is not\n> present in the index, how are you going to create a conflict edge between\n> transaction A and transaction B?\n> >>\n> >> Basically, you need to identify the following clause to detect\n> serializable conflicts:\n> >> Transaction A precedes transaction B. (Because, transaction A has\n> deleted a tuple and it's not visible to transaction B)\n> >>\n> >\n> > thanks Ghosh. Looks your answer is similar with my previous point\n> (transaction is serializable). actually if the transaction B can't see\n> the \"deleted\" which has been committed, should it see the index A which is\n> created after the \"delete\" transaction?\n> >\n> I think what I'm trying to say is different.\n>\n> For my case, the sequence is as following:\n> 1. Transaction A has deleted a tuple, say t1 and got committed.\n> 2. Index A has been created successfully.\n> 3. Now, transaction B starts and use the index A to fetch the tuple\n> t1. While doing visibility check, transaction B gets to know that t1\n> has been deleted by a committed transaction A, so it can't see the\n> tuple. But, it creates a dependency edge that transaction A precedes\n> transaction B. This edge is required to detect a serializable conflict\n> failure.\n>\n> If you don't create the index entry, it'll not be able to create that edge.\n>\n\nThanks, I got the difference now, but still not get the necessity of it.\n1. Assume we don't index it, in which situation we can get a wrong\nresult?\n2. If we only support \"Read Committed\" isolation level, is there a safe\nway to not index such data?\n\n-- \n> Thanks & Regards,\n> Kuntal Ghosh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Mon, 10 Jun 2019 16:42:03 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 2:12 PM Alex <zhihui.fan1213@gmail.com> wrote:\n> On Mon, Jun 10, 2019 at 4:10 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>> I think what I'm trying to say is different.\n>>\n>> For my case, the sequence is as following:\n>> 1. Transaction A has deleted a tuple, say t1 and got committed.\n>> 2. Index A has been created successfully.\n>> 3. Now, transaction B starts and use the index A to fetch the tuple\n>> t1. While doing visibility check, transaction B gets to know that t1\n>> has been deleted by a committed transaction A, so it can't see the\n>> tuple. But, it creates a dependency edge that transaction A precedes\n>> transaction B. This edge is required to detect a serializable conflict\n>> failure.\n>>\n>> If you don't create the index entry, it'll not be able to create that edge.\n>\n>\n> Thanks, I got the difference now, but still not get the necessity of it.\n> 1. Assume we don't index it, in which situation we can get a wrong result?\n\nConsider the following sequence of three different transactions X,A and B:\n\n1. Transaction X reads a tuple t2.\n2. Transaction A updates the tuple t2, deletes a tuple t1 and gets\ncommitted. So, there transaction X precedes transaction A, i.e., X <-\nA.\n3. Index A is created successfully.\n4. Transaction B starts and use the index A to fetch tuple t1. But,\nit's already deleted by the committed transaction A. So, transaction A\nprecedes transaction B, i.e., A<-B.\n5. At this point you've a dangerous structure X<-A<-B (definition of\ndangerous structure src/backend/storage/lmgr/README-SSI) in the graph\nwhich can produce an anomaly. For example now, if X tries to update\nanother tuple previously read by B, you'll have a dependency B<-X.\nBut, you already have X<-B which leads to serializable conflict.\nPostgres tries to resolve this anomaly by rolling back one of the\ntransaction.\n\nIn your case, it'll be difficult to detect.\n\n> 2. 
If we only support \"Read Committed\" isolation level, is there a safe way to not index such data?\n>\nI can't think of a case where the RECENTLY_DELETED tuple needs to be\nindexed in \"Read Committed\" case. So, your suggestion likely to work\nlogically in \"Read committed\" isolation level. But, I'm not sure\nwhether you'll encounter any assertion failures in vacuum path or\nconcurrent index paths.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Jun 2019 15:04:36 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "Thanks! Appreciate it for your time!\n\nOn Mon, Jun 10, 2019 at 5:34 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nwrote:\n\n> On Mon, Jun 10, 2019 at 2:12 PM Alex <zhihui.fan1213@gmail.com> wrote:\n> > On Mon, Jun 10, 2019 at 4:10 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com>\n> wrote:\n> >> I think what I'm trying to say is different.\n> >>\n> >> For my case, the sequence is as following:\n> >> 1. Transaction A has deleted a tuple, say t1 and got committed.\n> >> 2. Index A has been created successfully.\n> >> 3. Now, transaction B starts and use the index A to fetch the tuple\n> >> t1. While doing visibility check, transaction B gets to know that t1\n> >> has been deleted by a committed transaction A, so it can't see the\n> >> tuple. But, it creates a dependency edge that transaction A precedes\n> >> transaction B. This edge is required to detect a serializable conflict\n> >> failure.\n> >>\n> >> If you don't create the index entry, it'll not be able to create that\n> edge.\n> >\n> >\n> > Thanks, I got the difference now, but still not get the necessity of it.\n> > 1. Assume we don't index it, in which situation we can get a wrong\n> result?\n>\n> Consider the following sequence of three different transactions X,A and B:\n>\n> 1. Transaction X reads a tuple t2.\n> 2. Transaction A updates the tuple t2, deletes a tuple t1 and gets\n> committed. So, there transaction X precedes transaction A, i.e., X <-\n> A.\n> 3. Index A is created successfully.\n> 4. Transaction B starts and use the index A to fetch tuple t1. But,\n> it's already deleted by the committed transaction A. So, transaction A\n> precedes transaction B, i.e., A<-B.\n> 5. At this point you've a dangerous structure X<-A<-B (definition of\n> dangerous structure src/backend/storage/lmgr/README-SSI) in the graph\n> which can produce an anomaly. For example now, if X tries to update\n> another tuple previously read by B, you'll have a dependency B<-X.\n> But, you already have X<-B which leads to serializable conflict.\n> Postgres tries to resolve this anomaly by rolling back one of the\n> transaction.\n>\n> In your case, it'll be difficult to detect.\n>\n> > 2. If we only support \"Read Committed\" isolation level, is there a\n> safe way to not index such data?\n> >\n> I can't think of a case where the RECENTLY_DELETED tuple needs to be\n> indexed in \"Read Committed\" case. So, your suggestion likely to work\n> logically in \"Read committed\" isolation level. But, I'm not sure\n> whether you'll encounter any assertion failures in vacuum path or\n> concurrent index paths.\n>\n>\n> --\n> Thanks & Regards,\n> Kuntal Ghosh\n> EnterpriseDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Mon, 10 Jun 2019 19:00:20 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "Kuntal Ghosh <kuntalghosh.2007@gmail.com> writes:\n>> 2. If we only support \"Read Committed\" isolation level, is there a safe way to not index such data?\n\n> I can't think of a case where the RECENTLY_DELETED tuple needs to be\n> indexed in \"Read Committed\" case.\n\nI think you're making dangerously optimistic assumptions about how\nlong a query snapshot might survive in READ COMMITTED mode.\n\nIn particular, I suspect you're reasoning that the new index couldn't\nbe used except by a freshly-created query plan, which is possibly\ntrue, and that such a plan must be used with a freshly-created\nsnapshot, which is simply wrong. A counterexample could be built\nusing a SQL or PL function that's marked STABLE, because such a\nfunction is defined to be executed using the calling query's\nsnapshot. But it'll make query plans using current reality.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Jun 2019 07:56:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 5:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kuntal Ghosh <kuntalghosh.2007@gmail.com> writes:\n> >> 2. If we only support \"Read Committed\" isolation level, is there a safe way to not index such data?\n>\n> > I can't think of a case where the RECENTLY_DELETED tuple needs to be\n> > indexed in \"Read Committed\" case.\n>\n> I think you're making dangerously optimistic assumptions about how\n> long a query snapshot might survive in READ COMMITTED mode.\n>\n> In particular, I suspect you're reasoning that the new index couldn't\n> be used except by a freshly-created query plan, which is possibly\n> true, and that such a plan must be used with a freshly-created\n> snapshot, which is simply wrong. A counterexample could be built\n> using a SQL or PL function that's marked STABLE, because such a\n> function is defined to be executed using the calling query's\n> snapshot. But it'll make query plans using current reality.\n>\nWow. I've not thought of this scenario. Also, I'm not aware about this\ndifferent snapshot usage as well. I'll debug the same. Thank you Tom.\n\nSo, the READ COMMITTED mode will also cause this kind of issues.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Jun 2019 19:55:23 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why to index a \"Recently DEAD\" tuple when creating index"
}
]
[
{
"msg_contents": "Hi,\n\nInefficiency of Postgres on some complex queries is in most cases caused \nby errors in selectivity estimations.\nPostgres doesn't take into account correlation between columns unless you \nexplicitly create multicolumn statistics\n(but multicolumn statistics are used only for restriction clauses, not for \njoin selectivity, where estimation errors are most critical).\n\nCertainly it is possible to collect more statistics and improve \nestimation formulas, but c'est la vie: estimation of relation size\nafter several joins looks more like an exercise in guesswork. This is \nwhy an alternative approach based on adaptive query optimization\nseems more promising. When we analyze query execution with EXPLAIN \nANALYZE, we can see the actual number of rows for each plan node.\nWe can use this information to adjust clause selectivity and reduce \nestimation error.\n\nAt PGCon 2017 my former colleague Oleg Ivanov made a presentation about \nusing machine learning for AQO:\nhttps://www.pgcon.org/2017/schedule/events/1086.en.html\nRight now this project is available from the PostgresPro repository: \nhttps://github.com/postgrespro/aqo\n\nThere are several problems with this approach:\n1. It requires a \"learning phase\".\n2. It saves collected data in Postgres tables, which makes a read-only \ntransaction executing only queries become a read-write transaction, \nobtaining an XID...\n3. It doesn't take into account concrete values of literals used in \nclauses, so it is not able to address data skews.\n4. Machine learning can be quite expensive and seems to be overkill if \nwe just want to adjust selectivities according to the actual number of \naffected rows.\n\nI tried to create a much simpler version of AQO based on the auto_explain \nextension.\nThis extension provides all the necessary infrastructure to analyze \nstatements with long execution time.\nI have added two new modes to auto_explain:\n1. Auto generation of multicolumn statistics for variables used in \nclauses with large estimation error.\n2. Direct adjustment of the estimated number of rows based on information \ncollected by EXPLAIN ANALYZE.\n\nAs in Oleg's implementation, it requires a few changes in \nPostgres core: introducing some new hooks for relation size estimation.\nBut most of the functionality is implemented in the auto_explain extension.\nAttached please find a patch against vanilla.\nPlease read the Readme.md file for more details.\n\nI have tested it on the join order benchmark JOB \nhttps://github.com/gregrahn/join-order-benchmark\naqo.svg contains results of applying my and Oleg's versions of AQO to \nJOB queries. The first result corresponds to vanilla Postgres, the second - \nmy AQO keeping literal values, the third - my AQO ignoring literal values,\nand the last one - the result of Oleg's machine learning (after 10 iterations).\n\nThe principal problem with the AQO approach is that using the provided explain \nfeedback we are able to adjust selectivities only for one particular plan.\nBut there may be many other alternative plans, and once we adjust one \nplan, the optimizer most likely chooses some other plan, which can actually be \neven worse than the\noriginal plan. Certainly if we continue learning, then sooner or later \nwe will know the real selectivities for all possible clauses. But the number of \npossible plans can be very\nlarge for queries with many joins (factorial), so many iterations may be \nrequired. What is worse, some intermediate bad plans can take a huge \namount of time.\nIn particular, the sixth iteration of Oleg's AQO on the JOB query set takes \nabout two hours (instead of the original 10 minutes!).\nSuch a thing doesn't happen with my AQO, but that seems to be just a matter of \nluck.\n\nAny comments and feedback are welcome.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 10 Jun 2019 11:53:02 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Adaptive query optimization"
},
{
"msg_contents": "Hello,\nthis seems very interesting and makes me think about 2 other projects:\n- https://github.com/trustly/pg_badplan\n- https://github.com/ossc-db/pg_plan_advsr\n\nAs I understand all this, there are actually 3 steps:\n- compare actual / estimated rows\n- suggest some statistics gathering modification\n- store optimized plans (or optimized stats) for reuse\n\nI really like the \"advisor\" idea, permitting to identify where statistics\nare wrong \nand how to fix them with the ANALYZE command.\nIs there a chance that some future optimizer / statistics improvements\nmake this \"statistics advice\" enough to get good plans (without having to\nstore live stats \nor optimized plans)?\n\nThanks in advance\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Tue, 11 Jun 2019 14:02:47 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "Hi Alexander,\n\nThanks for starting this thread. I've had similar ideas in the past and\neven hacked together something (very dirty), so it's great someone else\nis interested in this topic too.\n\nOn Mon, Jun 10, 2019 at 11:53:02AM +0300, Konstantin Knizhnik wrote:\n>Hi,\n>\n>Inefficiency of Postgres on some complex queries in most cases is \n>caused by errors in selectivity estimations.\n>Postgres doesn't take in account correlation between columns unless \n>you explicitly create mutlicolumn statistics\n>(but multicolumn statistic is used only for restriction clauses, not \n>for join selectivity, where estimation errors are most critical).\n>\n\nYes, that's the current status. I'd say we want to allow using\nmulticolumn stats for join estimation too, but that's a hard problem.\n\n>Certainly it is possible to collect more statistics and improve \n>estimation formulas but c'est la vie is that estimation of relation \n>size after several joins looks more like an exercise in guesswork.\n\nI'd go even further - it's a simple fact we can't have perfect stats\nthat would give us \"sufficiently good\" estimates for all common data\ndistributions and clauses.\n\nFirstly, stats are merely a simplified representation of the overall\ndata distribution - which makes them small, but eliminates some details\n(which may be quite important for highly non-uniform distributions).\n\nSecondly, we only have a number of generic stats types (MCV, histogram,\n...) but that may not be sufficient to \"capture\" the important aspects of\nthe data distribution.\n\nAnd finally, we only know how to use those stats for specific types of\nclauses (equality, inequality, ...) with very simple expressions. But\nthat's often not what the users do.\n\nI think adaptive query optimization - in the sense of collecting data\nfrom query executions and leveraging that when planning future\nqueries - can (hopefully) help with all those challenges. 
At least in\nsome cases.\n\n>This is why alternative approach based on adaptive query optimization\n>seems to be more promising. When we analyze query execution with \n>EXPLAIN ANALYZE, we can see actual number of rows for each plan node.\n>We can use this information to adjust clause selectivity and reduce \n>estimation error.\n>\n\nYep, that's roughly the idea. I don't think we need EXPLAIN ANALYZE, it\nshould be enough to instrument queries to collect row counts on the fly.\nBut I guess that's mostly what the explain_analyze changes do.\n\n>At PGconf 2017 my former colleague Oleg Ivanov made presentation about \n>using machine learning for AQO:\n>https://www.pgcon.org/2017/schedule/events/1086.en.html\n>Right now this project is available from PostgresPro repository: \n>https://github.com/postgrespro/aqo\n>\n>There are several problems with this approach:\n>1. It requires \"learning phase\"\n\nI don't think \"learning phase\" is an issue, in fact I think that's\nsomething we need to do - it ensures we have enough data to make good\ndecisions.\n\n>2. It saves collected data in Postgres tables, which makes read-only \n>transaction executing only queries to become read-write transaction, \n>obtaining XID...\n\nYeah, that's an issue because it makes it useless on standbys etc. I\nthink it'd be enough to do something similar to pg_stat_statements, i.e.\nstore it in memory and flush it to disk once in a while.\n\n>3. It doesn't take in account concrete values of literals used in \n>clauses, so it is not able to address data skews.\n\nYep. I don't think it's necessarily an issue with all approaches to\nadaptive optimization, though. But I agree we should detect both\nsystemic estimation issues, and misestimates for particular parameter\nvalues. I think that's doable.\n\n>4. 
Machine learning can be quite expensive and seems to be overkill \n>if we want just to adjust selectivities according to actual number of \n>affected rows.\n>\n\nI think that depends - some machine learning approaches are not that\nbad. But I think there's a more serious issue - explainability. We need\na solution where we can explain/justify why it makes some decisions. I\nreally don't want a black box that produces numbers that you just need\nto take at face value.\n\nThe good thing is that the simpler the method, the less expensive and\nmore explainable it is.\n\n>I tried to create much simpler version of AQO based on auto_explain \n>extension.\n>This extension provide all necessary infrastructure to analyze \n>statements with long execution time.\n>I have added two new modes to auto_explain:\n>1. Auto generation of multicolumn statistics for variables using in \n>clauses with large estimation error.\n\nInteresting! I probably wouldn't consider this part of adaptive query\noptimization, but it probably makes sense to make it part of this. I\nwonder if we might improve this to also suggest \"missing\" indexes? \n\n>2. Direct adjustment of estimated number of rows based on information \n>collected by EXPLAIN ANALYZE.\n>\n\nYep!\n\n>As well as in Oleg's implementation, it requires few changes in \n>Postgres core: introducing some new hooks for relation size \n>estimation.\n>But most of functionality is implemented in auto_explain extension.\n>Attached please find patch to vanilla.\n>Please read Readme.ms file for more details.\n>\n>I have tested it on join order benchmark JOB \n>https://github.com/gregrahn/join-order-benchmark\n>aqo.svg contains results of applying my and Oleg's versions of AQO to \n>JOB queries. 
First result corresponds to the vanilla Postgres, second \n>- my AQO keeping literal values, third my AQO ignoring literal values\n>and last one result of Oleg's machine learning (after 10 iterations).\n>\n>The principle problem with AQO approach is that using provided explain \n>feedback we are able to adjust selectivities only for one particular \n>plan.\n>But there may be many other alternative plans, and once we adjust one \n>plan, optimizer most likely choose some other plan which actually can \n>be ever worser than\n>original plan. Certainly if we continue learning, then sooner or later \n>we will know real selectivities for all possible clauses. But number \n>of possible plans can be very\n>large for queries with many joins (factorial), so many iterations may \n>be required. What is worser some intermediate bad plans can take huge \n>amount of time.\n>Particularly sixth iteration of Oleg's AQO on JOB queries set takes \n>about two hours (instead of original 10 minutes!).\n>Such thing doesn't happen with my AQO, but it seems to be just matter \n>of luck.\n>\n\nRight. But I think I might have an idea how to address (some of) this.\n\nAs I already mentioned, I was experimenting with something similar,\nmaybe two or three years ago (I remember chatting about it with Teodor\nat pgcon last week). I was facing the same issues, and my approach was\nbased on hooks too.\n\nBut my idea was not to track stats for a plan as a whole, but instead\ndecompose it into individual nodes, categorized into three basic groups -\nscans, joins and aggregations. And then apply this extracted information\nto other plans, with \"matching\" nodes.\n\nFor example, let's consider a simple \"single-scan\" query\n\n SELECT * FROM t1 WHERE a = ? AND b = ? 
AND c < ?;\n\nNow, if you execute this enough times (say, 100x or 1000x), tracking\nthe estimates and actual row counts, you may then compute the average\nmisestimate (maybe a geometric mean would be more appropriate here?):\n\n AVG(actual/estimate)\n\nand if this is significantly different from 1.0, then we can say there's\na systemic misestimate, and we can use this as a correction coefficient\nwhen computing the scan estimate. (And we need to be careful about\ncollecting new data, because the estimates will include this correction.\nBut that can be done by tracking \"epoch\" of the plan.)\n\nNow, if someone uses this same scan in a join, like for example\n\n SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n AND (t2.x = ? AND t2.y = ?)\n\nthen we can still apply the same correction to the t1 scan (I think).\nBut then we can also collect data for the t1-t2 join, and compute a\ncorrection coefficient in a similar way. It requires a bit of care\nbecause we need to compensate for misestimates of inputs, but I think\nthat's doable.\n\nOf course, all this is rather speculative, and I never got to anything\nbeyond a very simple PoC. So I hope it makes at least some sense.\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
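The correction-coefficient scheme sketched in the message above (a geometric mean of actual/estimate ratios, applied only when it signals a systemic misestimate) could look roughly like this in Python; the function names, sample threshold, and significance cutoff are invented for illustration:

```python
import math

# Keep actual/estimate ratios observed for a "matching" node, summarize
# them with a geometric mean, and correct the planner's estimate only
# when there are enough samples and the bias is clearly systemic.
# (Hypothetical sketch; not any existing extension's API.)

def geometric_mean(ratios):
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

def corrected_estimate(planner_estimate, ratios,
                       min_samples=1000, significance=1.5):
    if len(ratios) < min_samples:
        return planner_estimate          # not enough feedback yet
    coeff = geometric_mean(ratios)
    if coeff > significance or coeff < 1.0 / significance:
        return planner_estimate * coeff  # systemic misestimate detected
    return planner_estimate              # close enough to 1.0: leave it

# A scan that consistently returns about 4x the planner's estimate:
ratios = [4.0] * 1000
adjusted = corrected_estimate(100, ratios)  # scaled up roughly 4x
```

The geometric mean matters here because misestimates are multiplicative: a 4x underestimate and a 4x overestimate should cancel out rather than average to something misleading.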
"msg_date": "Tue, 11 Jun 2019 23:43:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": ">>I tried to create much simpler version of AQO based on auto_explain \n>>extension.\n>>This extension provide all necessary infrastructure to analyze \n>>statements with long execution time.\n>>I have added two new modes to auto_explain:\n>>1. Auto generation of multicolumn statistics for variables using in \n>>clauses with large estimation error.\n\n>Interesting! I probably wouldn't consider this part of adaptive query\n>optimization, but it probably makes sense to make it part of this. I\n>wonder if we might improve this to also suggest \"missing\" indexes? \n\nShouldn't this be extended to adjust the default_statistics_target\nconfiguration \nvariable, or on a column-by-column basis by setting the per-column\nstatistics\ntarget with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
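A hypothetical sketch of the "statistics advisor" output suggested above: turning detected misestimates into DDL a DBA could review and run. The input format and the target value of 1000 are invented; only the emitted CREATE STATISTICS and ALTER TABLE ... SET STATISTICS syntax is real PostgreSQL (extended statistics require version 10 or later).

```python
# Map detected misestimates to PostgreSQL statistics-tuning DDL.
# (Illustrative sketch; the input structure is not from any patch.)

def advise(misestimates):
    advice = []
    for m in misestimates:
        cols = m["columns"]
        if len(cols) > 1:
            # Correlated columns: suggest extended (multicolumn) statistics.
            advice.append("CREATE STATISTICS {}_stx ON {} FROM {};".format(
                m["table"], ", ".join(cols), m["table"]))
        else:
            # A single skewed column: suggest raising its statistics target.
            advice.append(
                "ALTER TABLE {} ALTER COLUMN {} SET STATISTICS 1000;".format(
                    m["table"], cols[0]))
    return advice

sql = advise([
    {"table": "t1", "columns": ["a", "b"]},
    {"table": "t2", "columns": ["x"]},
])
# A subsequent ANALYZE on the affected tables makes the new stats usable.
```

Emitting advice instead of executing it keeps the human in the loop, which matches the "advisor" framing in the question above.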
"msg_date": "Wed, 12 Jun 2019 00:08:38 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "On 12.06.2019 0:43, Tomas Vondra wrote:\n>\n>\n> I don't think \"learning phase\" is an issue, in fact I think that's\n> something we need to do - it ensures we have enough data to make good\n> decisions.\n>\nWhat is wrong with the learning phase is that it requires some DBA \nassistance: somebody should determine when to start learning,\nprovide a relevant workload and determine when learning is finished.\nOne of the most recent trends in DBMSes is autonomous databases with \nzero administration effort.\nIt is especially important for clouds. And one of the main advantages of \nAQO is that it allows optimizing queries without human interaction.\n\nBut unfortunately I really do not know how to avoid the learning phase, \nespecially if we want to run queries on a replica.\n\n\n>> 2. It saves collected data in Postgres tables, which makes read-only \n>> transaction executing only queries to become read-write transaction, \n>> obtaining XID...\n>\n> Yeah, that's an issue because it makes it useless on standbys etc. I\n> think it'd be enough to do something similar to pg_stat_statements, i.e.\n> store it in memory and flush it to disk once in a while.\n>\nThis is why my AQO implementation stores data in a file.\n\n>> 3. It doesn't take in account concrete values of literals used in \n>> clauses, so it is not able to address data skews.\n>\n> Yep. I don't think it's necessarily an issue with all approaches to\n> adaptive optimization, though. But I agree we should detect both\n> systemic estimation issues, and misestimates for particular parameter\n> values. I think that's doable.\n>\n>> 4. 
We need\n> a solution where we can explain/justify why it makes some decisions. I\n> really don't want a black box that produces numbers that you just need\n> to take at face value.\n>\n> The good thing is that the simpler the method, the less expensive and\n> more explainable it is.\n>\n>> I tried to create much simpler version of AQO based on auto_explain \n>> extension.\n>> This extension provide all necessary infrastructure to analyze \n>> statements with long execution time.\n>> I have added two new modes to auto_explain:\n>> 1. Auto generation of multicolumn statistics for variables using in \n>> clauses with large estimation error.\n>\n> Interesting! I probably wouldn't consider this part of adaptive query\n> optimization, but it probably makes sense to make it part of this. I\n> wonder if we might improve this to also suggest \"missing\" indexes?\n\nI think that should be the next step of adaptive query optimization:\n- autogeneration of indexes\n- auto adjustment of optimizer cost parameters (cpu cost, \nrandom/sequential page access cost,...)\n\nThere is already the hypothetical index extension \nhttps://github.com/HypoPG/hypopg\nwhich can be used to estimate the effect of introducing new indexes.\n\n>\n> Right. But I think I might have an idea how to address (some of) this.\n>\n> As I already mentioned, I was experimenting with something similar,\n> maybe two or three years ago (I remember chatting about it with Teodor\n> at pgcon last week). I was facing the same issues, and my approach was\n> based on hooks too.\n>\n> But my idea was to not to track stats for a plan as a whole, but instead\n> decompose it into individual nodes, categoried into three basic groups -\n> scans, joins and aggregations. And then use this extracted information\n> to other plans, with \"matching\" nodes.\n>\n> For example, let's consider a simple \"single-scan\" query\n>\n> SELECT * FROM t1 WHERE a = ? AND b = ? 
AND c < ?;\n>\n> Now, if you execute this enought times (say, 100x or 1000x), tracking\n> the estimates and actual row counts, you may then compute the average\n> misestimate (maybe a geometric mean would be more appropriate here?):\n>\n> AVG(actual/estimate)\n\nCertainly stats should be collected for each plan node, not for the \nwhole plan.\nAnd that is done now in both Oleg's and my implementation.\nOleg is using a gradient descent method. I first tried to calculate an \naverage, but then found it better to build something like a \"histogram\",\nwhere the bin is determined as log10 of the estimated number of rows.\n\n>\n> and if this is significantly different from 1.0, then we can say there's\n> a systemic misestimate, and we can use this as a correction coefficient\n> when computing the scan estimate. (And we need to be careful about\n> collection new data, because the estimates will include this correction.\n> But that can be done by tracking \"epoch\" of the plan.)\n>\n> Now, if someone uses this same scan in a join, like for example\n>\n> SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n> WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n> AND (t2.x = ? AND t2.y = ?)\n>\n> then we can still apply the same correction to the t1 scan (I think).\n> But then we can also collect data for the t1-t2 join, and compute a\n> correction coefficient in a similar way. It requires a bit of care\n> because we need to compensate for misestimates of inputs, but I think\n> that's doable.\n>\n> Of course, all this is rather speculative, and I never got to anything\n> beyond a very simple PoC. So I hope it makes at least some sense.\n>\n\nAs far as I know Oleg's AQO is now used by Amazon.\nSo it is something more than just a PoC. 
But certainly there are still \nmany problems,\nand my experiments with the JOB benchmark have shown that there are still a lot \nof things to improve.",
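The log10-binned "histogram" of misestimates that Konstantin mentions above can be sketched as follows; observed actual/estimate ratios are grouped by the order of magnitude of the estimated row count, so nodes estimated at ~100 rows and at ~1,000,000 rows get independent corrections. The class and method names are illustrative, not taken from the patch.

```python
import math
from collections import defaultdict

# Bin feedback by log10 of the estimated row count, then use each bin's
# average actual/estimate ratio as a correction for future estimates.
# (Hypothetical sketch of the binning scheme described in the thread.)

class SelectivityHistogram:
    def __init__(self):
        self.ratios = defaultdict(list)

    @staticmethod
    def bin_of(estimated_rows):
        return int(math.log10(max(estimated_rows, 1)))

    def observe(self, estimated, actual):
        self.ratios[self.bin_of(estimated)].append(actual / max(estimated, 1))

    def correction(self, estimated):
        samples = self.ratios.get(self.bin_of(estimated))
        if not samples:
            return 1.0  # no feedback for this bin yet: trust the planner
        return sum(samples) / len(samples)

h = SelectivityHistogram()
h.observe(estimated=120, actual=600)   # bin 2 (100..999 rows), ratio 5.0
h.observe(estimated=900, actual=4500)  # same bin, ratio 5.0
adjusted = 500 * h.correction(500)     # a new ~500-row estimate, scaled
```

Binning by magnitude is a cheap way to let small-row and large-row estimates learn different systematic biases without storing per-clause state.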
"msg_date": "Wed, 12 Jun 2019 14:36:08 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "Hello,\n\nOn Wed, Jun 12, 2019 at 5:06 PM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> On 12.06.2019 0:43, Tomas Vondra wrote:\n> I don't think \"learning phase\" is an issue, in fact I think that's\n> something we need to do - it ensures we have enough data to make good\n> decisions.\n>\n> What is wrong with learning phase is that it requires some DBA assistance: somebody should determine when to start learning,\n> provide relevant workload and determine when learning is finished.\n> One of the most recent trends in DBMSes is autonomous databases with zero administration effort.\n> It is especially important for clouds. And of one the main advantages of AQO is that it allows to optimize queries without human interaction.\n>\n> But unfortunately I really do not know how to avoid learning phase, especially if we what to run queries at replica.\n>\nAvoiding a learning phase in an AQO implementation sounds like an oxymoron. :-)\nPerhaps you meant how we can minimize the effort in the learning phase. A\nlearning phase has its own complications - like\na. deciding the number of iterations needed to achieve a certain\nkind of confidence\nb. which parameters to tune (are the existing parameters enough?)\nc. deciding the cost model\nComing up with answers for these things is pretty hard.\n\n>\n> I think that depends - some machine learning approaches are not that\n> bad. But I think there's a more serious issue - explainability. We need\n> a solution where we can explain/justify why it makes some decisions. I\n> really don't want a black box that produces numbers that you just need\n> to take at face value.\n>\n> The good thing is that the simpler the method, the less expensive and\n> more explainable it is.\n+1\n\n>\n> I tried to create much simpler version of AQO based on auto_explain extension.\n> This extension provide all necessary infrastructure to analyze statements with long execution time.\n> I have added two new modes to auto_explain:\n> 1. 
Auto generation of multicolumn statistics for variables using in clauses with large estimation error.\n>\n>\n> Interesting! I probably wouldn't consider this part of adaptive query\n> optimization, but it probably makes sense to make it part of this. I\n> wonder if we might improve this to also suggest \"missing\" indexes?\n>\nI like this part of the implementation. I also agree that this can be\nused to come up with good hypothetical index suggestions. But, it\nneeds some additional algorithms. For example, after analysing a set\nof queries, we can come up with a minimal set of indexes that needs to\nbe created to minimize the total cost. I've not checked the internal\nimplementation of hypopg. Probably, I should do that.\n\n>\n> I think that it should be nest step of adaptive query optimization:\n> - autogeneration of indexes\n> - auto adjustment of optimizer cost parameters (cpu cost, random/sequential page access cost,...)\nAFAIK, the need for adjustment of cost parameters is highly dominated\nby solving the selectivity estimation errors. But of course, you can\nargue with that.\n\n>\n> Right. But I think I might have an idea how to address (some of) this.\n>\n> As I already mentioned, I was experimenting with something similar,\n> maybe two or three years ago (I remember chatting about it with Teodor\n> at pgcon last week). I was facing the same issues, and my approach was\n> based on hooks too.\n>\n> But my idea was to not to track stats for a plan as a whole, but instead\n> decompose it into individual nodes, categoried into three basic groups -\n> scans, joins and aggregations. And then use this extracted information\n> to other plans, with \"matching\" nodes.\n>\n> For example, let's consider a simple \"single-scan\" query\n>\n> SELECT * FROM t1 WHERE a = ? AND b = ? 
AND c < ?;\n>\n> Now, if you execute this enought times (say, 100x or 1000x), tracking\n> the estimates and actual row counts, you may then compute the average\n> misestimate (maybe a geometric mean would be more appropriate here?):\n>\n> AVG(actual/estimate)\n>\n>\n> Certainly stats should be collected for each plan node, not for the whole plan.\n> And it is done now in Oleg's and my implementation.\n> Oleg is using gradient descent method. I first tried to calculate average, but then find out that building something like \"histogram\",\n> where bin is determined as log10 of estimated number of rows.\n>\nI think maintaining a \"histogram\" sounds good. I've read a paper\ncalled \"Self-tuning Histograms: Building Histograms Without\nLooking at Data\" which tries to do something similar[1].\n\n>\n> and if this is significantly different from 1.0, then we can say there's\n> a systemic misestimate, and we can use this as a correction coefficient\n> when computing the scan estimate. (And we need to be careful about\n> collection new data, because the estimates will include this correction.\n> But that can be done by tracking \"epoch\" of the plan.)\n>\n> Now, if someone uses this same scan in a join, like for example\n>\n> SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n> WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n> AND (t2.x = ? AND t2.y = ?)\n>\n> then we can still apply the same correction to the t1 scan (I think).\n> But then we can also collect data for the t1-t2 join, and compute a\n> correction coefficient in a similar way. It requires a bit of care\n> because we need to compensate for misestimates of inputs, but I think\n> that's doable.\n>\nThat'll be interesting work. For the above query, we can definitely\ncalculate the correction coefficient of the t1-t2 join given (t1.a = ? AND\nt1.b = ? AND t1.c < ?) and\n(t2.x = ? AND t2.y = ?) are true. 
But I'm not sure how we can\nextrapolate that value for the t1-t2 join.\n>\n> As far as I know Oleg's AQO is now used by Amason.\n> So it is something more than just PoC. But certainly there are still many problems\n> and my experiments with JOB benchmark shown that there are still a lot of things to improve.\n>\nNice.\n\n[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.21.921&rep=rep1&type=pdf\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Jun 2019 18:14:41 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "re: is used at Amazon\n\nNot yet (for RDS, anyway), but I've played with AQO on the Join Order\nBenchmark and I was very impressed. The version I was using required a very\n'hands on' user (me, in this case) to participate in the training phase. \nUsability issues aside, AQO worked remarkably well. I think it has a lot of\npotential.\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Wed, 12 Jun 2019 09:52:09 -0700 (MST)",
"msg_from": "Jim Finnerty <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 06:14:41PM +0530, Kuntal Ghosh wrote:\n>Hello,\n>\n>On Wed, Jun 12, 2019 at 5:06 PM Konstantin Knizhnik\n><k.knizhnik@postgrespro.ru> wrote:\n>> On 12.06.2019 0:43, Tomas Vondra wrote:\n>> I don't think \"learning phase\" is an issue, in fact I think that's\n>> something we need to do - it ensures we have enough data to make good\n>> decisions.\n>>\n>> What is wrong with learning phase is that it requires some DBA assistance: somebody should determine when to start learning,\n>> provide relevant workload and determine when learning is finished.\n>> One of the most recent trends in DBMSes is autonomous databases with zero administration effort.\n>> It is especially important for clouds. And of one the main advantages of AQO is that it allows to optimize queries without human interaction.\n>>\n>> But unfortunately I really do not know how to avoid learning phase, especially if we what to run queries at replica.\n>>\n>Avoiding learning phase in AQO a implementation sounds like an oxymoron. :-)\n>Perhaps, you meant how we can minimize the effort in learning phase. A\n>learning phase has its own complications - like\n>a. deciding the the number of iterations needed to achieve certain\n>kind of confidence\n>b. which parameters to tune (are the existing parameters enough?)\n>c. deciding the cost model\n>Coming up answers for these things is pretty hard.\n>\n\nI kinda agree with both of you - the learning phase may be a significant\nburden. But I don't think we can get rid of it entirely - we simply need\nto collect the data to learn from somehow. But we should make it as\nunobtrusive and easy to perform as possible.\n\nMy plan was to allow continuous learning during regular operation, i.e.\nfrom workload generated by the application. 
So instead of requiring a\nseparate learning phase, we'd require a certain number of samples for a\ngiven node, before we start using it to correct estimates.\n\nFor example, we might require 1000 samples for a given node (say, scan\nwith some quals), before we start using it to tweak the estimates. Once\nwe reach that number of samples, we can continue collecting more data,\nand once in a while update the correction. This would require some care,\nof course, because we need to know what coefficient was used to compute\nthe estimate, but that's solvable by having some sort of epoch.\n\nOf course, the question is what number we should use, but overall this\nwould be a much lower-overhead way to do the learning.\n\nUnfortunately, the learning as implemented in the patch does not allow\nthis. It pretty much requires a dedicated learning phase with generated\nworkload, in a single process.\n\nBut I think that's solvable, assuming we:\n\n1) Store the data in shared memory, instead of a file. Collect data from\nall backends, instead of just a single one, etc.\n\n2) Make the decision for individual entries, depending on how many\nsamples we have for it.\n\n>>\n>> I think that depends - some machine learning approaches are not that\n>> bad. But I think there's a more serious issue - explainability. We need\n>> a solution where we can explain/justify why it makes some decisions. I\n>> really don't want a black box that produces numbers that you just need\n>> to take at face value.\n>>\n>> The good thing is that the simpler the method, the less expensive and\n>> more explainable it is.\n>+1\n>\n>>\n>> I tried to create much simpler version of AQO based on auto_explain extension.\n>> This extension provide all necessary infrastructure to analyze statements with long execution time.\n>> I have added two new modes to auto_explain:\n>> 1. Auto generation of multicolumn statistics for variables using in clauses with large estimation error.\n>>\n>>\n>> Interesting! 
I probably wouldn't consider this part of adaptive query\n>> optimization, but it probably makes sense to make it part of this. I\n>> wonder if we might improve this to also suggest \"missing\" indexes?\n>>\n>I like this part of the implementation. I also agree that this can be\n>used to come up with good hypothetical index suggestions. But, it\n>needs some additional algorithms. For example, after analysing a set\n>of queries, we can come up with a minimal set of indexes that needs to\n>be created to minimize the total cost. I've not checked the internal\n>implementation of hypogo. Probably, I should do that.\n>\n\nI suggest we try to solve one issue at a time. I agree advising which\nindexes to create is a very interesting (and valuable) thing, but I see\nit as an extension of the AQO feature. That is, basic AQO (tweaking row\nestimates) can work without it.\n\n>>\n>> I think that it should be nest step of adaptive query optimization:\n>> - autogeneration of indexes\n>> - auto adjustment of optimizer cost parameters (cpu cost, random/sequential page access cost,...)\n>AFAIK, the need for adjustment of cost parameters are highly dominated\n>by solving the selectivity estimation errors. But of course, you can\n>argue with that.\n\nThat's probably true. But more to the point, it makes little sense to\ntune cost parameters until the row estimates are fairly accurate. So I\nthink we should focus on getting that part working first, and then maybe\nlook into tuning cost parameters when this part works well enough.\n\nFurthermore, I wonder how would we even tune cost parameters? I mean, it\nseems much harder than correcting row estimates, because the feedback\nseems much less reliable. 
For row estimates we know the actual row\ncount, but for cost parameters we only have the total query runtime.\nThat is somewhat correlated, but it seems to be rather noisy (e.g., due to\nsharing resources with other stuff on the same system), and it's unclear\nhow to map the duration to individual nodes (which may be using very\ndifferent costing formulas).\n\n>\n>>\n>> Right. But I think I might have an idea how to address (some of) this.\n>>\n>> As I already mentioned, I was experimenting with something similar,\n>> maybe two or three years ago (I remember chatting about it with Teodor\n>> at pgcon last week). I was facing the same issues, and my approach was\n>> based on hooks too.\n>>\n>> But my idea was to not to track stats for a plan as a whole, but instead\n>> decompose it into individual nodes, categoried into three basic groups -\n>> scans, joins and aggregations. And then use this extracted information\n>> to other plans, with \"matching\" nodes.\n>>\n>> For example, let's consider a simple \"single-scan\" query\n>>\n>> SELECT * FROM t1 WHERE a = ? AND b = ? AND c < ?;\n>>\n>> Now, if you execute this enought times (say, 100x or 1000x), tracking\n>> the estimates and actual row counts, you may then compute the average\n>> misestimate (maybe a geometric mean would be more appropriate here?):\n>>\n>> AVG(actual/estimate)\n>>\n>>\n>> Certainly stats should be collected for each plan node, not for the whole plan.\n>> And it is done now in Oleg's and my implementation.\n>> Oleg is using gradient descent method. I first tried to calculate average, but then find out that building something like \"histogram\",\n>> where bin is determined as log10 of estimated number of rows.\n>>\n>I think maintaining a \"histogram\" sounds good. I've read a paper\n>called \"Self-tuning Histograms: Building Histograms Without\n>Looking at Data\" which tries to do something similar[1].\n>\n\nYeah. 
As long as we know how to compute the correction coefficient, it\ndoes not matter how exactly we store the data (array of values,\nhistogram, something else).\n\nBut I think we should keep this simple, so the self-tuning histograms\nmay be an overkill here.\n\n>>\n>> and if this is significantly different from 1.0, then we can say there's\n>> a systemic misestimate, and we can use this as a correction coefficient\n>> when computing the scan estimate. (And we need to be careful about\n>> collection new data, because the estimates will include this correction.\n>> But that can be done by tracking \"epoch\" of the plan.)\n>>\n>> Now, if someone uses this same scan in a join, like for example\n>>\n>> SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n>> WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n>> AND (t2.x = ? AND t2.y = ?)\n>>\n>> then we can still apply the same correction to the t1 scan (I think).\n>> But then we can also collect data for the t1-t2 join, and compute a\n>> correction coefficient in a similar way. It requires a bit of care\n>> because we need to compensate for misestimates of inputs, but I think\n>> that's doable.\n>>\n>That'll be an interesting work. For the above query, we can definitely\n>calculate the correction coefficient of t1-t2 join given (t1.a = ? AND\n>t1.b = ? AND t1.c < ?) and\n>(t2.x = ? AND t2.y = ?) are true. But, I'm not sure how we can\n>extrapolate that value for t1-t2 join.\n\nI'm not sure I see the problem? Essentially, we need to know the sizes\nof the join inputs, i.e.\n\n t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n\n t2 WHERE (t2.x = ? AND t2.y = ?)\n\n(which we know, and we know how to correct the estimate), and then the\nselectivity of the join condition. Which we also know.\n\nObviously, there's a chance those parts (clauses at the scan / join\nlevel) are correlated, which could make this less accurate. 
But again,\nthis is about systemic estimation errors - if all queries are affected\nby this, then the correction will reflect that.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 13 Jun 2019 02:19:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
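The scheme Tomas sketches above — a per-node correction based on the geometric mean of actual/estimate ratios, a minimum sample count before the correction is applied, and an epoch recording which coefficient was baked into each estimate — can be illustrated with a short sketch. This is Python, for illustration only; the class, the 1000-sample threshold, and the epoch bookkeeping are assumptions of this sketch, not code from any proposed patch.

```python
import math

MIN_SAMPLES = 1000  # samples required before a correction is trusted (assumed value)

class NodeCorrection:
    """Per-node correction for one plan-node signature (e.g. scan + quals).

    Accumulates log(actual / raw estimate), so the published coefficient
    is the geometric mean of the observed ratios.  Each sample carries
    the epoch of the estimate, so the correction that was already baked
    into it can be undone before averaging.
    """
    def __init__(self):
        self.log_sum = 0.0               # running sum of log(actual / raw estimate)
        self.n = 0                       # number of samples collected
        self.epoch = 0                   # current correction epoch
        self.coeff_by_epoch = {0: 1.0}   # coefficient that was in effect per epoch

    def record(self, estimated_rows, actual_rows, estimate_epoch):
        # Undo whatever correction was applied when this estimate was
        # produced, so samples from different epochs stay comparable.
        applied = self.coeff_by_epoch.get(estimate_epoch, 1.0)
        raw = max(estimated_rows / applied, 1.0)
        self.log_sum += math.log(max(actual_rows, 1.0) / raw)
        self.n += 1

    def refresh(self):
        # Once in a while, publish an updated coefficient under a new epoch.
        if self.n >= MIN_SAMPLES:
            self.epoch += 1
            self.coeff_by_epoch[self.epoch] = math.exp(self.log_sum / self.n)

    def corrected(self, estimated_rows):
        return estimated_rows * self.coeff_by_epoch[self.epoch]
```

A real implementation along the lines discussed here would keep such entries in shared memory, keyed by a plan-node signature, and collect samples from all backends.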
{
"msg_contents": "On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> For example, we might require 1000 samples for a given node (say, scan\n> with some quals), before we start using it to tweak the estimates. Once\n> we get the number of estimates, we can continue collecting more data,\n> and once in a while update the correction. This would require some care,\n> of course, because we need to know what coefficient was used to compute\n> the estimate, but that's solvable by having some sort of epoch.\n>\n> Of course, the question is what number should we use, but overall this\n> would be a much lower-overhead way to do the learning.\n>\n> Unfortunately, the learning as implemented in the patch does not allow\n> this. It pretty much requires dedicated learning phase with generated\n> workload, in a single process.\n>\n> But I think that's solvable, assuming we:\n>\n> 1) Store the data in shared memory, instead of a file. Collect data from\n> all backends, instead of just a single one, etc.\n>\n> 2) Make the decision for individual entries, depending on how many\n> samples we have for it.\n>\nSounds good. I was trying to think whether we can maintain a running\ncoefficient. In that way, we don't have to store the samples. But,\ncalculating a running coefficient for more than two variables (with\nsome single pass algorithm) seems to be a hard problem. Moreover, it\ncan introduce significant misestimation. Your suggested approach works\nbetter.\n\n> I suggest we try to solve one issue at a time. I agree advising which\n> indexes to create is a very interesting (and valuable) thing, but I see\n> it as an extension of the AQO feature. That is, basic AQO (tweaking row\n> estimates) can work without it.\n>\n+1\n\n> >> Now, if someone uses this same scan in a join, like for example\n> >>\n> >> SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n> >> WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n> >> AND (t2.x = ? 
AND t2.y = ?)\n> >>\n> >> then we can still apply the same correction to the t1 scan (I think).\n> >> But then we can also collect data for the t1-t2 join, and compute a\n> >> correction coefficient in a similar way. It requires a bit of care\n> >> because we need to compensate for misestimates of inputs, but I think\n> >> that's doable.\n> >>\n> >That'll be an interesting work. For the above query, we can definitely\n> >calculate the correction coefficient of t1-t2 join given (t1.a = ? AND\n> >t1.b = ? AND t1.c < ?) and\n> >(t2.x = ? AND t2.y = ?) are true. But, I'm not sure how we can\n> >extrapolate that value for t1-t2 join.\n>\n> I'm not sure I see the problem? Essentially, we need to know the sizes\n> of the join inputs, i.e.\n>\n> t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n>\n> t2 WHERE (t2.x = ? AND t2.y = ?)\n>\n> (which we know, and we know how to correct the estimate), and then the\n> selectivity of the join condition. Which we also know.\n>\n> Obviously, there's a chance those parts (clauses at the scan / join\n> level) are correlated, which could make this less accurate.\nThis is exactly what my concern is. The base predicate selectivities\nof t1 and t2 should have an impact on the calculation of the\ncorrection coefficient. If those selectivities are low, the\nmisestimation (which is actual/estimate) should not affect the t1-t2\njoin correction coefficient much.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:37:07 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "On Thu, 13 Jun 2019 at 06:07, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > For example, we might require 1000 samples for a given node (say, scan\n> > with some quals), before we start using it to tweak the estimates. Once\n> > we get the number of estimates, we can continue collecting more data,\n> > and once in a while update the correction. This would require some care,\n> > of course, because we need to know what coefficient was used to compute\n> > the estimate, but that's solvable by having some sort of epoch.\n> >\n> > Of course, the question is what number should we use, but overall this\n> > would be a much lower-overhead way to do the learning.\n> >\n> > Unfortunately, the learning as implemented in the patch does not allow\n> > this. It pretty much requires dedicated learning phase with generated\n> > workload, in a single process.\n> >\n> > But I think that's solvable, assuming we:\n> >\n> > 1) Store the data in shared memory, instead of a file. Collect data from\n> > all backends, instead of just a single one, etc.\n> >\n> > 2) Make the decision for individual entries, depending on how many\n> > samples we have for it.\n> >\n> Sounds good. I was trying to think whether we can maintain a running\n> coefficient. In that way, we don't have to store the samples. But,\n> calculating a running coefficient for more than two variables (with\n> some single pass algorithm) seems to be a hard problem. Moreover, it\n> can introduce significant misestimation. Your suggested approach works\n> better.\n>\n> > I suggest we try to solve one issue at a time. I agree advising which\n> > indexes to create is a very interesting (and valuable) thing, but I see\n> > it as an extension of the AQO feature. 
That is, basic AQO (tweaking row\n> > estimates) can work without it.\n> >\n> +1\n>\n> > >> Now, if someone uses this same scan in a join, like for example\n> > >>\n> > >> SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n> > >> WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n> > >> AND (t2.x = ? AND t2.y = ?)\n> > >>\n> > >> then we can still apply the same correction to the t1 scan (I think).\n> > >> But then we can also collect data for the t1-t2 join, and compute a\n> > >> correction coefficient in a similar way. It requires a bit of care\n> > >> because we need to compensate for misestimates of inputs, but I think\n> > >> that's doable.\n> > >>\n> > >That'll be an interesting work. For the above query, we can definitely\n> > >calculate the correction coefficient of t1-t2 join given (t1.a = ? AND\n> > >t1.b = ? AND t1.c < ?) and\n> > >(t2.x = ? AND t2.y = ?) are true. But, I'm not sure how we can\n> > >extrapolate that value for t1-t2 join.\n> >\n> > I'm not sure I see the problem? Essentially, we need to know the sizes\n> > of the join inputs, i.e.\n> >\n> > t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n> >\n> > t2 WHERE (t2.x = ? AND t2.y = ?)\n> >\n> > (which we know, and we know how to correct the estimate), and then the\n> > selectivity of the join condition. Which we also know.\n> >\n> > Obviously, there's a chance those parts (clauses at the scan / join\n> > level) are correlated, which could make this less accurate.\n> This is exactly what my concern is. The base predicate selectivities\n> of t1 and t2 should have an impact on the calculation of the\n> correction coefficient. If those selectivities are low, the\n> misestimation (which is actual/estimate) should not affect the t1-t2\n> join correction coefficient much.\n>\nInteresting discussion. Talking of query optimization techniques and\nchallenges, isn't the biggest challenge there is of selectivity\nestimation? 
Then instead of working on optimizing the process which\nhas been talked about for a long time, how about skipping the process\naltogether? This reminds me of the work I came across sometime back[1].\nBasically, the idea is to not spend any energy on estimating the\nselectivities but rather to get on with the execution. Precisely, a set of\nplans is kept a priori for different selectivities, and at execution\ntime it starts with the plans one by one, starting from the lower\nselectivity one, until the query execution completes. It might sound\nlike too much work, but it isn't; there are some theoretical guarantees\nto bound the worst-case execution time. The trick is in choosing the\nplan-set and switching at the time of execution. Another good point\nabout this is that it works smoothly for join predicates as well.\n\nSince we are talking about this problem here, I thought it might be a\ngood idea to shed some light on such an approach and see if there is\nsome interesting trick we might use.\n\n[1] https://dsl.cds.iisc.ac.in/publications/conference/bouquet.pdf\n\n-- \nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 13 Jun 2019 15:17:07 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 09:37:07AM +0530, Kuntal Ghosh wrote:\n>On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> For example, we might require 1000 samples for a given node (say, scan\n>> with some quals), before we start using it to tweak the estimates. Once\n>> we get the number of estimates, we can continue collecting more data,\n>> and once in a while update the correction. This would require some care,\n>> of course, because we need to know what coefficient was used to compute\n>> the estimate, but that's solvable by having some sort of epoch.\n>>\n>> Of course, the question is what number should we use, but overall this\n>> would be a much lower-overhead way to do the learning.\n>>\n>> Unfortunately, the learning as implemented in the patch does not allow\n>> this. It pretty much requires dedicated learning phase with generated\n>> workload, in a single process.\n>>\n>> But I think that's solvable, assuming we:\n>>\n>> 1) Store the data in shared memory, instead of a file. Collect data from\n>> all backends, instead of just a single one, etc.\n>>\n>> 2) Make the decision for individual entries, depending on how many\n>> samples we have for it.\n>>\n>Sounds good. I was trying to think whether we can maintain a running\n>coefficient. In that way, we don't have to store the samples. But,\n>calculating a running coefficient for more than two variables (with\n>some single pass algorithm) seems to be a hard problem. Moreover, it\n>can introduce significant misestimation. Your suggested approach works\n>better.\n>\n\nI don't know, TBH. I think it would be enough to store the coefficient and\nthe number of samples it's based on, so that you can consider that as a\nweight when merging it with additional values. But I don't think it's a\nsolved issue, so we may need to experiment a bit.\n\n>> I suggest we try to solve one issue at a time. 
I agree advising which\n>> indexes to create is a very interesting (and valuable) thing, but I see\n>> it as an extension of the AQO feature. That is, basic AQO (tweaking row\n>> estimates) can work without it.\n>>\n>+1\n>\n>> >> Now, if someone uses this same scan in a join, like for example\n>> >>\n>> >> SELECT * FROM t1 JOIN t2 ON (t1.id = t2.id)\n>> >> WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n>> >> AND (t2.x = ? AND t2.y = ?)\n>> >>\n>> >> then we can still apply the same correction to the t1 scan (I think).\n>> >> But then we can also collect data for the t1-t2 join, and compute a\n>> >> correction coefficient in a similar way. It requires a bit of care\n>> >> because we need to compensate for misestimates of inputs, but I think\n>> >> that's doable.\n>> >>\n>> >That'll be an interesting work. For the above query, we can definitely\n>> >calculate the correction coefficient of t1-t2 join given (t1.a = ? AND\n>> >t1.b = ? AND t1.c < ?) and\n>> >(t2.x = ? AND t2.y = ?) are true. But, I'm not sure how we can\n>> >extrapolate that value for t1-t2 join.\n>>\n>> I'm not sure I see the problem? Essentially, we need to know the sizes\n>> of the join inputs, i.e.\n>>\n>> t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n>>\n>> t2 WHERE (t2.x = ? AND t2.y = ?)\n>>\n>> (which we know, and we know how to correct the estimate), and then the\n>> selectivity of the join condition. Which we also know.\n>>\n>> Obviously, there's a chance those parts (clauses at the scan / join\n>> level) are correlated, which could make this less accurate.\n>This is exactly what my concern is. The base predicate selectivities\n>of t1 and t2 should have an impact on the calculation of the\n>correction coefficient. If those selectivities are low, the\n>misestimation (which is actual/estimate) should not affect the t1-t2\n>join correction coefficient much.\n>\n\nThe question is whether it really matters. 
The question is whether this\ncorrelation between restriction and join clauses is universal (applies to\nmost queries) or an exception.\n\nIf it's an exception (only for a small number of rarely queried values),\nthen we have little chance to fix it. If we ever get extended statistics\non joins, that might help, but I think AQO alone is unlikely to help.\n\nOTOH if it's a systemic misestimate (affecting most queries), then we'll\ncatch it just fine.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 13 Jun 2019 15:22:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
},
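Tomas's suggestion above — store the coefficient together with the number of samples it is based on, and use that count as a weight when merging — is straightforward for geometric-mean coefficients: weight the log-coefficients by the sample counts. A minimal sketch (Python; the function name and the (coefficient, count) tuple representation are assumptions of this sketch):

```python
import math

def merge_corrections(coeff_a, n_a, coeff_b, n_b):
    """Merge two geometric-mean correction coefficients, weighting each
    by the number of samples behind it.  Merging in log space makes the
    result identical to the geometric mean over the pooled samples."""
    total = n_a + n_b
    if total == 0:
        return 1.0, 0  # no data yet: neutral correction
    log_merged = (n_a * math.log(coeff_a) + n_b * math.log(coeff_b)) / total
    return math.exp(log_merged), total
```

Because the merge is exact for geometric means, a backend's locally collected coefficient could be folded into a shared-memory entry this way without keeping the raw samples around.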
{
"msg_contents": "On Thu, Jun 13, 2019 at 03:17:07PM +0200, Rafia Sabih wrote:\n>On Thu, 13 Jun 2019 at 06:07, Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>>\n>> On Thu, Jun 13, 2019 at 5:49 AM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>> >\n>> > >> ...\n>> > >>\n>> > >That'll be an interesting work. For the above query, we can definitely\n>> > >calculate the correction coefficient of t1-t2 join given (t1.a = ? AND\n>> > >t1.b = ? AND t1.c < ?) and\n>> > >(t2.x = ? AND t2.y = ?) are true. But, I'm not sure how we can\n>> > >extrapolate that value for t1-t2 join.\n>> >\n>> > I'm not sure I see the problem? Essentially, we need to know the sizes\n>> > of the join inputs, i.e.\n>> >\n>> > t1 WHERE (t1.a = ? AND t1.b = ? AND t1.c < ?)\n>> >\n>> > t2 WHERE (t2.x = ? AND t2.y = ?)\n>> >\n>> > (which we know, and we know how to correct the estimate), and then the\n>> > selectivity of the join condition. Which we also know.\n>> >\n>> > Obviously, there's a chance those parts (clauses at the scan / join\n>> > level) are correlated, which could make this less accurate.\n>> This is exactly what my concern is. The base predicate selectivities\n>> of t1 and t2 should have an impact on the calculation of the\n>> correction coefficient. If those selectivities are low, the\n>> misestimation (which is actual/estimate) should not affect the t1-t2\n>> join correction coefficient much.\n>>\n>Interesting discussion. Talking of query optimization techniques and\n>challenges, isn't the biggest challenge there is of selectivity\n>estimation?\n\nYes, selectivity estimation is the major challenge. It's not the only one,\nbut we rely on the estimates quite a bit - it's probably the main factor\naffecting cost estimates.\n\n> Then instead of working on optimizing the process which\n>has been talked of since long, how about skipping the process\n>altogether. 
This reminds of the work I came across sometime back[1].\n>Basically, the idea is to not spend any energy on estimation the\n>selectivities rather get on with the execution. Precisely, a set of\n>plans is kept apriori for different selectivities and at the execution\n>time it starts with the plans one by one, starting from the lower\n>selectivity one till the query execution completes. It might sound\n>like too much work but it isn't, there are some theoretical guarantees\n>to bound the worst case execution time. The trick is in choosing the\n>plan-set and switching at the time of execution. Another good point\n>about this is that it works smoothly for join predicates as well.\n>\n>Since, we are talking about this problem here, I though it might be a\n>good idea to shed some light on such an approach and see if there is\n>some interesting trick we might use.\n>\n>[1] https://dsl.cds.iisc.ac.in/publications/conference/bouquet.pdf\n>\n\nAFAIK adaptive execution (switching from one plan to another\nmid-execution) is actually quite difficult to implement in practice,\nespecially when some of the rows might have been already sent to the\nuser, etc. Which is why databases (outside of academia) use this only\nin very limited/specific situations.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 13 Jun 2019 15:35:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive query optimization"
}
] |
[
{
"msg_contents": "Here is a small patch to reorder header files in postgres_fdw.c and\nconnection.c in alphabetical order.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 10 Jun 2019 17:53:20 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: unordered C includes"
},
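For illustration, a check in the spirit of this patch can be scripted; the sketch below (Python) flags adjacent local `#include "..."` lines that sort the wrong way. It is only a loose approximation of the real convention — PostgreSQL keeps postgres.h (or postgres_fe.h) first and orders the remaining headers alphabetically — so the helper name and the skip-list are assumptions of this sketch.

```python
import re

def check_include_order(path):
    """Report local '#include "..."' lines that are out of alphabetical
    order.  postgres.h/postgres_fe.h are skipped, since they always
    come first regardless of ordering."""
    pattern = re.compile(r'^#include\s+"([^"]+)"')
    headers = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            m = pattern.match(line)
            if m and m.group(1) not in ("postgres.h", "postgres_fe.h"):
                headers.append((lineno, m.group(1)))
    # Report each adjacent pair that sorts the wrong way around.
    bad_pairs = [(a, b) for a, b in zip(headers, headers[1:]) if a[1] > b[1]]
    for (l1, h1), (l2, h2) in bad_pairs:
        print(f"{path}:{l2}: \"{h2}\" should come before \"{h1}\" (line {l1})")
    return not bad_pairs
```

Run over postgres_fdw.c and connection.c, such a check would point at exactly the reorderings the patch performs.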
{
"msg_contents": "On 2019-Jun-10, Etsuro Fujita wrote:\n\n> Here is a small patch to reorder header files in postgres_fdw.c and\n> connection.c in alphabetical order.\n\nLooks good.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 10:19:27 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: unordered C includes"
},
{
"msg_contents": "Alvaro,\n\nOn Mon, Jun 10, 2019 at 11:19 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> On 2019-Jun-10, Etsuro Fujita wrote:\n> > Here is a small patch to reorder header files in postgres_fdw.c and\n> > connection.c in alphabetical order.\n>\n> Looks good.\n\nPushed. Thanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 11 Jun 2019 13:45:20 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: unordered C includes"
}
] |
[
{
"msg_contents": "Hi!\n\nAfter 5f32b29c explain of Hash Join sometimes triggers an error.\n\nSimple reproduction case is below.\n\n# create table t (x int);\nCREATE TABLE\n# set enable_sort = off;\nSET\n# explain select * from t a, t b where a.x = (select 1 where b.x = 1);\nERROR: bogus varno: 65000\n\nBefore 5f32b29c the same case works OK.\n\n# explain select * from t a, t b where a.x = (select 1 where b.x = 1);\n QUERY PLAN\n-------------------------------------------------------------------\n Hash Join (cost=67.38..5311.24 rows=32512 width=8)\n Hash Cond: (a.x = (SubPlan 1))\n -> Seq Scan on t a (cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=35.50..35.50 rows=2550 width=4)\n -> Seq Scan on t b (cost=0.00..35.50 rows=2550 width=4)\n SubPlan 1\n -> Result (cost=0.00..0.01 rows=1 width=4)\n One-Time Filter: (b.x = 1)\n(8 rows)\n\nOriginally spotted by Nikita Glukhov. I didn't investigate this case\nfurther yet.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 10 Jun 2019 21:28:12 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Hash join explain is broken"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-10 21:28:12 +0300, Alexander Korotkov wrote:\n> After 5f32b29c explain of Hash Join sometimes triggers an error.\n>\n> Simple reproduction case is below.\n\nThanks for finding. I've created an open issue for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 00:45:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 00:45:57 -0700, Andres Freund wrote:\n> On 2019-06-10 21:28:12 +0300, Alexander Korotkov wrote:\n> > After 5f32b29c explain of Hash Join sometimes triggers an error.\n> >\n> > Simple reproduction case is below.\n> \n> Thanks for finding. I've created an open issue for now.\n\nI am too tired to look further into this. I suspect the only reason we\ndidn't previously run into trouble with the executor stashing hashkeys\nmanually at a different tree level with:\n((HashState *) innerPlanState(hjstate))->hashkeys\nis that hashkeys itself isn't printed...\n\nIf done properly, the expression would actually reside in the Hash node\nitself, rather than ExecInitHashJoin() splitting up the join condition\nitself, and moving it into the HashState.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 01:22:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I am too tired to look further into this. I suspect the only reason we\n> didn't previously run into trouble with the executor stashing hashkeys\n> manually at a different tree level with:\n> ((HashState *) innerPlanState(hjstate))->hashkeys\n> is that hashkeys itself isn't printed...\n\nTBH, I think 5f32b29c is just wrong and should be reverted for now.\nIf there's a need to handle those expressions differently, it will\nrequire some cooperation from the planner not merely a two-line hack\nin executor startup. That commit didn't include any test case or\nother demonstration that it was solving a live problem, so I think\nwe can leave it for v13 to address the issue.\n\n(But possibly we should add a test case similar to Nikita's,\nso that we don't overlook such problems in future.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 18:38:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Hi,\n\nOn June 13, 2019 3:38:47 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> I am too tired to look further into this. I suspect the only reason\n>we\n>> didn't previously run into trouble with the executor stashing\n>hashkeys\n>> manually at a different tree level with:\n>> ((HashState *) innerPlanState(hjstate))->hashkeys\n>> is that hashkeys itself isn't printed...\n>\n>TBH, I think 5f32b29c is just wrong and should be reverted for now.\n>If there's a need to handle those expressions differently, it will\n>require some cooperation from the planner not merely a two-line hack\n>in executor startup. That commit didn't include any test case or\n>other demonstration that it was solving a live problem, so I think\n>we can leave it for v13 to address the issue.\n\nI'm pretty sure you'd get an assertion failure if you reverted it (that's why it was added). So it's a bit more complicated than that. Unfortunately I'll not get back to work until Monday, but I'll spend time on this then.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 13 Jun 2019 16:23:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-13 16:23:34 -0700, Andres Freund wrote:\n> On June 13, 2019 3:38:47 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >Andres Freund <andres@anarazel.de> writes:\n> >> I am too tired to look further into this. I suspect the only reason\n> >we\n> >> didn't previously run into trouble with the executor stashing\n> >hashkeys\n> >> manually at a different tree level with:\n> >> ((HashState *) innerPlanState(hjstate))->hashkeys\n> >> is that hashkeys itself isn't printed...\n> >\n> >TBH, I think 5f32b29c is just wrong and should be reverted for now.\n> >If there's a need to handle those expressions differently, it will\n> >require some cooperation from the planner not merely a two-line hack\n> >in executor startup. That commit didn't include any test case or\n> >other demonstration that it was solving a live problem, so I think\n> >we can leave it for v13 to address the issue.\n> \n> I'm pretty sure you'd get an assertion failure if you reverted it\n> (that's why it was added). So it's a bit more complicated than that.\n> Unfortunately I'll not get back to work until Monday, but I'll spend\n> time on this then.\n\nIndeed, there are assertion failures when initializing the expression\nwith HashJoinState as parent - that's because when computing the\nhashvalue for nodeHash input, we expect the slot from the node below to\nbe of the type that HashState returns (as that's what INNER_VAR for an\nexpression at the HashJoin level refers to), rather than the type of the\ninput to HashState. We could work around that by marking the slots from\nunderlying nodes as being of an unknown type, but that'd slow down\nexecution.\n\nI briefly played with the dirty hack of set_deparse_planstate()\nsetting dpns->inner_planstate = ps for IsA(ps, HashState), but that\nseems just too ugly.\n\nI think the most straight-forward fix might just be to just properly\nsplit the expression at plan time. 
Adding workarounds for things as\ndirty as building an expression for a subsidiary node in the parent, and\nthen modifying the subsidiary node from the parent, doesn't seem like a\nbetter way forward.\n\nThe attached *prototype* does so.\n\nIf we go that way, we probably need to:\n- Add a test for the failure case at hand\n- check a few of the comments around inner/outer in nodeHash.c\n- consider moving the setrefs.c code into its own function?\n- probably clean up the naming scheme in createplan.c\n\nI think there's a few more things we could do, although it's not clear\nthat that needs to happen in v12:\n- Consider not extracting hj_OuterHashKeys, hj_HashOperators,\n hj_Collations out of HashJoin->hashclauses, and instead just directly\n handing them individually in the planner. create_mergejoin_plan()\n already partially does that.\n\nGreetings,\n\nAndres Freund\n\nPS: If I were to write hashjoin today, it sure wouldn't be as two nodes\n- it seems pretty clear that the boundaries are just too fuzzy. To the\npoint that I wonder if it'd not be worth merging them at some point.",
"msg_date": "Tue, 18 Jun 2019 00:00:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-18 00:00:28 -0700, Andres Freund wrote:\n> On 2019-06-13 16:23:34 -0700, Andres Freund wrote:\n> > On June 13, 2019 3:38:47 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >Andres Freund <andres@anarazel.de> writes:\n> > >> I am too tired to look further into this. I suspect the only reason\n> > >we\n> > >> didn't previously run into trouble with the executor stashing\n> > >hashkeys\n> > >> manually at a different tree level with:\n> > >> ((HashState *) innerPlanState(hjstate))->hashkeys\n> > >> is that hashkeys itself isn't printed...\n> > >\n> > >TBH, I think 5f32b29c is just wrong and should be reverted for now.\n> > >If there's a need to handle those expressions differently, it will\n> > >require some cooperation from the planner not merely a two-line hack\n> > >in executor startup. That commit didn't include any test case or\n> > >other demonstration that it was solving a live problem, so I think\n> > >we can leave it for v13 to address the issue.\n> > \n> > I'm pretty sure you'd get an assertion failure if you reverted it\n> > (that's why it was added). So it's a bit more complicated than that.\n> > Unfortunately I'll not get back to work until Monday, but I'll spend\n> > time on this then.\n> \n> Indeed, there are assertion failures when initializing the expression\n> with HashJoinState as parent - that's because when computing the\n> hashvalue for nodeHash input, we expect the slot from the node below to\n> be of the type that HashState returns (as that's what INNER_VAR for an\n> expression at the HashJoin level refers to), rather than the type of the\n> input to HashState. 
We could work around that by marking the slots from\n> underlying nodes as being of an unknown type, but that'd slow down\n> execution.\n> \n> I briefly played with the dirty hack of set_deparse_planstate()\n> setting dpns->inner_planstate = ps for IsA(ps, HashState), but that\n> seems just too ugly.\n> \n> I think the most straight-forward fix might just be to just properly\n> split the expression at plan time. Adding workarounds for things as\n> dirty as building an expression for a subsidiary node in the parent, and\n> then modifying the subsidiary node from the parent, doesn't seem like a\n> better way forward.\n> \n> The attached *prototype* does so.\n> \n> If we go that way, we probably need to:\n> - Add a test for the failure case at hand\n> - check a few of the comments around inner/outer in nodeHash.c\n> - consider moving the setrefs.c code into its own function?\n> - probably clean up the naming scheme in createplan.c\n> \n> I think there's a few more things we could do, although it's not clear\n> that that needs to happen in v12:\n> - Consider not extracting hj_OuterHashKeys, hj_HashOperators,\n> hj_Collations out of HashJoin->hashclauses, and instead just directly\n> handing them individually in the planner. create_mergejoin_plan()\n> already partially does that.\n\nTom, any comments? Otherwise I'll go ahead, and commit after a round or\ntwo of polishing.\n\n- Andres\n\n\n",
"msg_date": "Mon, 1 Jul 2019 17:01:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Tom, any comments? Otherwise I'll go ahead, and commit after a round or\n> two of polishing.\n\nSorry for not getting to this sooner --- I'll try to look tomorrow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Jul 2019 20:08:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Tom, any comments? Otherwise I'll go ahead, and commit after a round or\n>> two of polishing.\n\n> Sorry for not getting to this sooner --- I'll try to look tomorrow.\n\nI took a look, and I think this is going in the right direction.\nWe definitely need a test case corresponding to the live bug,\nand I think the comments could use more work, and there are some\ncosmetic things (I wouldn't add the new struct Hash field at the\nend, for instance).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jul 2019 10:50:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-02 10:50:02 -0400, Tom Lane wrote:\n> I wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Tom, any comments? Otherwise I'll go ahead, and commit after a round or\n> >> two of polishing.\n> \n> > Sorry for not getting to this sooner --- I'll try to look tomorrow.\n> \n> I took a look, and I think this is going in the right direction.\n> We definitely need a test case corresponding to the live bug,\n> and I think the comments could use more work, and there are some\n> cosmetic things (I wouldn't add the new struct Hash field at the\n> end, for instance).\n\nI finally pushed a substantially polished version of this. I ended up\nmoving, as I had wondered about, hashoperator and hashcollation\ncomputation to the planner too - without that we would end up with two\nvery similar loops during plan and execution time.\n\nI've added a test that puts subplans just about everywhere possible in a\nhash join - it's the only reliable way I found to trigger errors (only\nduring EXPLAIN, as deparsing there tries to find the associated node,\nfor column names etc, and failed because the subplan referenced an\nINNER_VAR, even though Hash doesn't have an inner plan). Makes the test\nquery a bit hard to read, but I didn't get any better ideas, and it\ndoesn't seem too bad.\n\nThanks Tom for the review, thanks Alexander and Nikita for the\nreport. Sorry that it took this long.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 00:05:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Hash join explain is broken"
}
] |
[
{
"msg_contents": "Hello,\r\n\r\nI saw the previous thread but it wasn't in my inbox at the time, so I’m creating a new one sorry about that.\r\nhttps://www.postgresql.org/message-id/20190516170434.masck6ehwg2kvbi2@alap3.anarazel.de\r\n\r\nI’ve managed to reproduce the issue pretty consistently on REL9_6_STABLE on commit 959792087a10baf7f1b58408d28411109bcedb7a \r\n\r\nOS version:\r\n[ec2-user@ ... ~]$ uname -a\r\n... 4.14.77-80.57.amzn2.x86_64 #1 SMP Tue Nov 6 21:18:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\nPostgres version:\r\npostgres=# SELECT version();\r\n version \r\n----------------------------------------------------------------------------------------------------------\r\n PostgreSQL 9.6.13 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5), 64-bit\r\n(1 row)\r\n\r\nI’m on an EC2 m5.4xlarge if that matters.\r\n\r\nRepro steps:\r\n\r\n1. Create the following script\r\n[ec2-user@ip-172-31-18-48 ~]$ cat ~/subbench.txt \r\n\\set aid random(1, 100000 * :scale)\r\n\\set bid random(1, 1 * :scale)\r\n\\set tid random(1, 10 * :scale)\r\n\\set delta random(-5000, 5000)\r\n\\set subcnt random(1, 800)\r\nselect * from pgbench(:aid, :bid, :tid, :delta, :subcnt);\r\n\r\n2. Create the following function:\r\n\r\nCREATE OR REPLACE FUNCTION pgbench(pAid int, pBid int, pTid int, delta int, subcnt int) returns int as $$\r\nDECLARE\r\n abal int;\r\nBEGIN\r\n FOR i in 1 .. subcnt LOOP\r\n BEGIN\r\n UPDATE pgbench_accounts SET abalance = abalance + delta WHERE aid = pAid;\r\n --subcnt := subcnt;\r\n EXCEPTION\r\n WHEN division_by_zero THEN\r\n subcnt := subcnt;\r\n END;\r\n END LOOP;\r\n abal := abalance FROM pgbench_accounts WHERE aid = pAid;\r\n return abal;\r\nEND; $$LANGUAGE 'plpgsql';\r\n\r\n3. 
Create a few logical slots in the database\r\n\r\nselect pg_create_logical_replication_slot('test_slot_1', 'test_decoding');\r\nselect pg_create_logical_replication_slot('test_slot_2', 'test_decoding');\r\nselect pg_create_logical_replication_slot('test_slot_3', 'test_decoding');\r\n...\r\n\r\n4. Initialize pgbench\r\npgbench -i -d postgres\r\n\r\n5. Load the data\r\npgbench -f subbench.txt -c 64 -j 64 -T 600 -P 1 -d postgres\r\n\r\n6. Run pg_recvlogical with a timeout, it usually takes a few iterations (~7-9) before the error occurs\r\n\r\nvar=0\r\nwhile true; do \r\ntimeout 30 pg_recvlogical -d postgres --start --slot test_slot_1 -f /dev/null;\r\nvar=$((var+1))\r\necho \"Sleeping 5s Time: $var\";\r\nsleep 5; \r\ndone\r\n\r\npg_recvlogical -d postgres --start --slot test_slot_1 -f -\r\npg_recvlogical: unexpected termination of replication stream: ERROR: subtransaction logged without previous top-level txn record\r\n\r\npg_recvlogical -d postgres --start --slot test_slot_2 -f -\r\npg_recvlogical: unexpected termination of replication stream: ERROR: subtransaction logged without previous top-level txn record\r\n\r\npg_recvlogical -d postgres --start --slot test_slot_3 -f -\r\npg_recvlogical: unexpected termination of replication stream: ERROR: subtransaction logged without previous top-level txn record\r\npg_recvlogical: disconnected; waiting 5 seconds to try again\r\n\r\nWhat's interesting is that the confirmed_flush_lsn are all different from test_slot_1 --> test_slot_3\r\n\r\npostgres=# select * from pg_replication_slots;\r\n slot_name | plugin | slot_type | datoid | database | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn \r\n--------------+---------------+-----------+--------+----------+--------+------------+------+--------------+-------------+---------------------\r\n test_slot_1 | test_decoding | logical | 13382 | postgres | f | | | 1848 | 0/1C5BC5A0 | 0/5488E468\r\n test_slot_2 | test_decoding | logical | 13382 | postgres | f 
| | | 1848 | 0/1C5BC5A0 | 0/40E45EA0\r\n test_slot_3 | test_decoding | logical | 13382 | postgres | f | | | 1848 | 0/3F4B6AF8 | 0/6BB3A990\r\n\r\n\r\nLet me know if you require more info to repro.\r\n\r\nThanks!\r\n\r\nJohn H\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Mon, 10 Jun 2019 21:08:46 +0000",
"msg_from": "\"Hsu, John\" <hsuchen@amazon.com>",
"msg_from_op": true,
"msg_subject": "ERROR: subtransaction logged without previous top-level txn record"
},
{
"msg_contents": "Hi,\n\nOur customer also encountered this issue and I've looked into it. The problem is\nreproduced well enough using the instructions in the previous message.\n\nThe check leading to this ERROR is too strict, it forbids legit behaviours. Say\nwe have in WAL\n\n[ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_1> <commit> confirmed_flush_lsn> ]\n\n- First xl_xact_assignment record is beyond reading, i.e. earlier\n restart_lsn, where ready snapshot will be taken from disk.\n- After restart_lsn there is some change of a subxact.\n- After that, there is second xl_xact_assignment (for another subxact)\n revealing relationship between top and first subxact, where this ERROR fires.\n\nSuch transaction won't be streamed because we hadn't seen it in full. It must be\nfinished before streaming will start, i.e. before confirmed_flush_lsn.\n\nOf course, the easiest fix is to just throw the check out. However, supposing\nthat someone would probably want to relax it instead, I considered ways to\naccomplish this. Something like 'if we are still in SNAPSHOT_FULL and xid is\nbefore SnapBuildNextPhaseAt, just ignore xl_xact_assignment record, we haven't\nseen such xact in full and definitely won't stream it.' That led to discovery of\nanother bug in the place which I had found suspicious long before.\n\nSnapbuilder enters into SNAPBUILD_CONSISTENT immediately after deserializing the\nsnapshot. Generally this is incorrect because SNAPBUILD_CONSISTENT means not\njust complete snapshot (snapshot by itself in FULL state is just good as in\nCONSISTENT), but also reorderbuffer filled with all currently running\nxacts. This is painless for decoding sessions with existing slots because they\nwon't stream anything before confirmed_flush_lsn is reached anyway, at which\npoint all transactions which hadn't got into reorderbuffer would definitely\nfinish. 
However, new slots might be created too early, thus losing (not\ndecoding) parts of transactions committed after freshly created\nconfirmed_flush_lsn. This can happen under the following extremely unlucky\ncircumstances:\n - New slot creation reserves point in WAL since which it would read it\n (GetXLogInsertRecPtr);\n - It logs xl_running_xacts to start assembling a snapshot;\n - Running decoding session with another slot quickly reads this\n xl_running_xacts and serializes its snapshot;\n - New slot reads xl_running_xacts and picks this snapshot up, saying that it\n is ready to stream henceforth, though its reorderbuffer is empty.\n\nExact reproducing steps:\n\n-- session 1\ncreate table t (i int);\nselect pg_create_logical_replication_slot('slot_1', 'test_decoding');\n\n-- session 2\nbegin;\ninsert into t values (1);\n\n-- session 3, start slot creation\nselect pg_create_logical_replication_slot('slot_2', 'test_decoding');\n-- stop (with gdb or something) it at DecodingContextFindStartpoint(ctx);\n\n-- session 1\n-- xl_running_xacts is dumped by ReplicationSlotReserveWal in previous command, no\n-- need to sleep; our snap will be immediately serialized there\nSELECT data FROM pg_logical_slot_get_changes('slot_1', NULL, NULL, 'include-xids', '1', 'skip-empty-xacts', '0');\n\n-- continue slot_2 creation\n\n-- session 2: insert some more and commit\ninsert into t values (1);\ncommit;\n\n-- now this would find second insert, but not the first one\nSELECT data FROM pg_logical_slot_get_changes('slot_2', NULL, NULL, 'include-xids', '1', 'skip-empty-xacts', '0');\n\n\nWhat we can do here? Initially I was like, ok, then let's get into FULL_SNAPSHOT\nupon deserializing the snap and wait for all xacts finish as usual. However, to\nmy surprise I've found this impossible. That is, snapbuilder has no way to\nenforce that we go into CONSISTENT only when we have seen all running xacts\ncompletely without risk of skipping legit transactions. 
Specifically, after\ndeserializing FULL snapshot snapbuilder must iterate over WAL further until all\nrunning xacts finish, as we must see with correct snapshots all changes of every\ntransaction we are going to stream. However, snapbuilder can't *immediately*\nnotice this point, because\n - Snapbuilder updates xmin (first running xact) by taking it from xl_running_xacts\n (c.f. SnapBuildProcessRunningXacts). Even if we guarantee that, for\n each possible WAL reading starting position, there is always an\n xl_running_xacts record logged right before the earliest possible\n streaming point -- IOW, after all xacts which we can't stream had\n finished (which is currently true btw, as slot's advancement is\n considered only at xl_running_xacts) -- that would not be enough due\n to races around xl_running_xacts, i.e. with WAL like\n [ <T1> <restart_lsn> <T1 commit> <confirmed_flush_lsn, xrx> <T2 commit> ]\n T2 might be skipped if T1 is shown as running in xl_running_xacts.\n - Tracking xmin manually by recording commits is not only inefficient,\n it is just not feasible because serialized snapshot is not full: it\n contains only committed catalog-modifying xacts. Thus, we can't\n distinguish non-catalog-modifying xact committed before serialized\n snapshot from not yet committed one.\n\nWhich means only code external to snapbuilder knows the earliest point suitable\nfor streaming; slot advancement machinery ensures that <restart_lsn,\nconfirmed_flush_lsn> pair is always good. So possible fix is the following: if\nsnapbuilder's user knows exact LSN since which streaming is safe (existing slot,\nessentially), trust him and switch into CONSISTENT state after deserializing\nsnapshot as before. OTOH, if he doesn't know it (new slot creation), go via\nusual FULL -> CONSISTENT procedure; we might transition into CONSISTENT a bit\nlater than it became possible, but there is nothing bad about that.\n\nFirst attached patch implements this. 
I don't particularly like it, but the only\nalternative which I see is to rework slots advancement logic to make\n<restart_lsn, confirmed_flush_lsn> pair such that there is always\nxl_running_xacts before confirmed_flush_lsn which confirms all xacts running as\nof restart_lsn have finished. This unnecessary complexity looks much worse.\n\n\nAs for the check in the topic, I nonetheless propose to remove it completely, as\nin second attached patch. Saying for sure whether xact of some record\nencountered after snapshot was deserialized can be streamed or not requires to\nknow nextXid (first not yet running xid) as of this snapshot's lsn -- all xids <\nnextXid possibly hadn't been seen in full and are not subject to\ndecoding. However, generally we don't know nextXid which is taken from\nxl_running_xacts; in particular snapshot can be serialized/deserialized at\nXLOG_END_OF_RECOVERY. Changing that for the sake of the check in question is not\nworthwhile, so just throw it out instead of trying to relax.\n\nIn fact, I don't see what is so important about seeing the top xact first at\nall. During COMMIT decoding we'll know all subxids anyway, so why care?\n\n\nP.S. While digging this, I have noted that return values of\nSnapBuildFindSnapshot seem pretty random to me. Basically returning 'true'\nperforms immediately 4 things:\n - update xmin\n - purge old xip entries\n - advance xmin of the slot\n - if CONSISTENT, advance lsn (earliest serialized snap)\n\nThe latter two make sense only after slot created or confirmed_flush_lsn\nreached. The first two make sense even immediately after deserializing the\nsnapshot (because it is serialized *before* updating xmin and xip); generally,\nalways when committed xids are tracked. Then why cleanup is done when xmin\nhorizon is too low? 
Why it is not performed after restoring serialized snapshot?\nOn the whole, I find this not very important as all these operations are pretty\ncheap and harmless if executed too early -- it would be simpler just do them\nalways.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 24 Oct 2019 12:59:30 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Hi,\n\nOn 2019-10-24 12:59:30 +0300, Arseny Sher wrote:\n> Our customer also encountered this issue and I've looked into it. The problem is\n> reproduced well enough using the instructions in the previous message.\n\nIs this with\n\ncommit bac2fae05c7737530a6fe8276cd27d210d25c6ac\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2019-09-13 16:36:28 -0300\n\n logical decoding: process ASSIGNMENT during snapshot build\n \n Most WAL records are ignored in early SnapBuild snapshot build phases.\n But it's critical to process some of them, so that later messages have\n the correct transaction state after the snapshot is completely built; in\n particular, XLOG_XACT_ASSIGNMENT messages are critical in order for\n sub-transactions to be correctly assigned to their parent transactions,\n or at least one assert misbehaves, as reported by Ildar Musin.\n \n Diagnosed-by: Masahiko Sawada\n Author: Masahiko Sawada\n Discussion: https://postgr.es/m/CAONYFtOv+Er1p3WAuwUsy1zsCFrSYvpHLhapC_fMD-zNaRWxYg@mail.gmail.com\n\napplied?\n\n> The check leading to this ERROR is too strict, it forbids legit behaviours. Say\n> we have in WAL\n> \n> [ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_1> <commit> confirmed_flush_lsn> ]\n> \n> - First xl_xact_assignment record is beyond reading, i.e. earlier\n> restart_lsn, where ready snapshot will be taken from disk.\n> - After restart_lsn there is some change of a subxact.\n> - After that, there is second xl_xact_assignment (for another subxact)\n> revealing relationship between top and first subxact, where this ERROR fires.\n> \n> Such transaction won't be streamed because we hadn't seen it in full. It must be\n> finished before streaming will start, i.e. 
before confirmed_flush_lsn.\n> \n> Of course, the easiest fix is to just throw the check out.\n\nI don't think that'd actually be a fix, and just hiding a bug.\n\n\n> Snapbuilder enters into SNAPBUILD_CONSISTENT immediately after deserializing the\n> snapshot. Generally this is incorrect because SNAPBUILD_CONSISTENT means not\n> just complete snapshot (snapshot by itself in FULL state is just good as in\n> CONSISTENT), but also reorderbuffer filled with all currently running\n> xacts. This is painless for decoding sessions with existing slots because they\n> won't stream anything before confirmed_flush_lsn is reached anyway, at which\n> point all transactions which hadn't got into reorderbuffer would definitely\n> finish. However, new slots might be created too early, thus losing (not\n> decoding) parts of transactions committed after freshly created\n> confirmed_flush_lsn. This can happen under the following extremely unlucky\n> circumstances:\n> - New slot creation reserves point in WAL since which it would read it\n> (GetXLogInsertRecPtr);\n> - It logs xl_running_xacts to start assembling a snapshot;\n> - Running decoding session with another slot quickly reads this\n> xl_running_xacts and serializes its snapshot;\n> - New slot reads xl_running_xacts and picks this snapshot up, saying that it\n> is ready to stream henceforth, though its reorderbuffer is empty.\n\nYea, that's a problem :(\n\n\n> Exact reproducing steps:\n> \n> -- session 1\n> create table t (i int);\n> select pg_create_logical_replication_slot('slot_1', 'test_decoding');\n> \n> -- session 2\n> begin;\n> insert into t values (1);\n> \n> -- session 3, start slot creation\n> select pg_create_logical_replication_slot('slot_2', 'test_decoding');\n> -- stop (with gdb or something) it at DecodingContextFindStartpoint(ctx);\n> \n> -- session 1\n> -- xl_running_xacts is dumped by ReplicationSlotReserveWal in previous command, no\n> -- need to sleep; our snap will be immediately serialized there\n> SELECT 
data FROM pg_logical_slot_get_changes('slot_1', NULL, NULL, 'include-xids', '1', 'skip-empty-xacts', '0');\n> \n> -- continue slot_2 creation\n> \n> -- session 2: insert some more and commit\n> insert into t values (1);\n> commit;\n> \n> -- now this would find second insert, but not the first one\n> SELECT data FROM pg_logical_slot_get_changes('slot_2', NULL, NULL, 'include-xids', '1', 'skip-empty-xacts', '0');\n\nIt'd be good to make this into an isolation test.\n\n\n> What we can do here?\n\nI think the easiest fix might actually be to just ignore serialized\nsnapshots when building the initial snapshot.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Oct 2019 14:31:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nAndres Freund <andres@anarazel.de> writes:\n\n> Hi,\n>\n> On 2019-10-24 12:59:30 +0300, Arseny Sher wrote:\n>> Our customer also encountered this issue and I've looked into it. The problem is\n>> reproduced well enough using the instructions in the previous message.\n>\n> Is this with\n>\n> commit bac2fae05c7737530a6fe8276cd27d210d25c6ac\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: 2019-09-13 16:36:28 -0300\n>\n> logical decoding: process ASSIGNMENT during snapshot build\n>\n> Most WAL records are ignored in early SnapBuild snapshot build phases.\n> But it's critical to process some of them, so that later messages have\n> the correct transaction state after the snapshot is completely built; in\n> particular, XLOG_XACT_ASSIGNMENT messages are critical in order for\n> sub-transactions to be correctly assigned to their parent transactions,\n> or at least one assert misbehaves, as reported by Ildar Musin.\n>\n> Diagnosed-by: Masahiko Sawada\n> Author: Masahiko Sawada\n> Discussion: https://postgr.es/m/CAONYFtOv+Er1p3WAuwUsy1zsCFrSYvpHLhapC_fMD-zNaRWxYg@mail.gmail.com\n>\n> applied?\n\nYeah, I tried fresh master. See below: skipped xl_xact_assignment is\nbeyond restart_lsn at all (and thus not read), so this doesn't matter.\n\n\n>> The check leading to this ERROR is too strict, it forbids legit behaviours. Say\n>> we have in WAL\n>>\n>> [ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_1> <commit> confirmed_flush_lsn> ]\n>>\n>> - First xl_xact_assignment record is beyond reading, i.e. earlier\n>> restart_lsn, where ready snapshot will be taken from disk.\n>> - After restart_lsn there is some change of a subxact.\n>> - After that, there is second xl_xact_assignment (for another subxact)\n>> revealing relationship between top and first subxact, where this ERROR fires.\n>>\n>> Such transaction won't be streamed because we hadn't seen it in full. It must be\n>> finished before streaming will start, i.e. 
before confirmed_flush_lsn.\n>>\n>> Of course, the easiest fix is to just throw the check out.\n>\n> I don't think that'd actually be a fix, and just hiding a bug.\n\nI don't see a bug here. At least in reproduced scenario I see false\nalert, as explained above: transaction with skipped xl_xact_assignment\nwon't be streamed as it finishes before confirmed_flush_lsn. And I am\npretty sure people encountered in the field the same issue.\n\nIn the end of my mail I have offered a way to relax this check instead\nof removing it to avoid false triggers: serialize/deserialize a snapshot\nonly at xl_running_xacts to know nextXid, add function to snapbuilder to\ncheck whether xact can be streamed or not by looking at its xid, etc,\nsomehow deal with existing serialized snaps which may be logged at\nEND_OF_RECOVERY without nextXid. I don't believe this check is worth\nthese complexities. If you think it does, I can do that though.\n\n>\n>> Snapbuilder enters into SNAPBUILD_CONSISTENT immediately after deserializing the\n>> snapshot. Generally this is incorrect because SNAPBUILD_CONSISTENT means not\n>> just complete snapshot (snapshot by itself in FULL state is just good as in\n>> CONSISTENT), but also reorderbuffer filled with all currently running\n>> xacts. This is painless for decoding sessions with existing slots because they\n>> won't stream anything before confirmed_flush_lsn is reached anyway, at which\n>> point all transactions which hadn't got into reorderbuffer would definitely\n>> finish. However, new slots might be created too early, thus losing (not\n>> decoding) parts of transactions committed after freshly created\n>> confirmed_flush_lsn. 
This can happen under the following extremely unlucky\n>> circumstances:\n>> - New slot creation reserves point in WAL since which it would read it\n>> (GetXLogInsertRecPtr);\n>> - It logs xl_running_xacts to start assembling a snapshot;\n>> - Running decoding session with another slot quickly reads this\n>> xl_running_xacts and serializes its snapshot;\n>> - New slot reads xl_running_xacts and picks this snapshot up, saying that it\n>> is ready to stream henceforth, though its reorderbuffer is empty.\n>\n> Yea, that's a problem :(\n>\n>\n>> Exact reproducing steps:\n>>\n>> -- session 1\n>> create table t (i int);\n>> select pg_create_logical_replication_slot('slot_1', 'test_decoding');\n>>\n>> -- session 2\n>> begin;\n>> insert into t values (1);\n>>\n>> -- session 3, start slot creation\n>> select pg_create_logical_replication_slot('slot_2', 'test_decoding');\n>> -- stop (with gdb or something) it at DecodingContextFindStartpoint(ctx);\n>>\n>> -- session 1\n>> -- xl_running_xacts is dumped by ReplicationSlotReserveWal in previous command, no\n>> -- need to sleep; our snap will be immediately serialized there\n>> SELECT data FROM pg_logical_slot_get_changes('slot_1', NULL, NULL, 'include-xids', '1', 'skip-empty-xacts', '0');\n>>\n>> -- continue slot_2 creation\n>>\n>> -- session 2: insert some more and commit\n>> insert into t values (1);\n>> commit;\n>>\n>> -- now this would find second insert, but not the first one\n>> SELECT data FROM pg_logical_slot_get_changes('slot_2', NULL, NULL, 'include-xids', '1', 'skip-empty-xacts', '0');\n>\n> It'd be good to make this into an isolation test.\n\nYeah, but to get real chance of firing this requires kinda sleep/break\nin the middle of pg_create_logical_replication_slot execution, so I have\nno idea how to do that =(\n\n>\n>> What we can do here?\n>\n> I think the easiest fix might actually be to just ignore serialized\n> snapshots when building the initial snapshot.\n\nThat's an option. 
However, that anyway requires the distinction between\nnew slot creation and streaming from existing slot at snapbuilder level,\nwhich currently doesn't exist and which constitutes most part of my\nfirst patch. If we add that, changing between using and not using\nserialized snapshots in new slot creation is easy (my patch uses it), I\nthink this is not principal.\n\n\n--\nArseny Sher\n\n\n",
"msg_date": "Fri, 25 Oct 2019 09:56:23 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Andres, Álvaro, could you please have a look at this?\n\n--\nArseny Sher\n\n\n",
"msg_date": "Tue, 12 Nov 2019 14:35:10 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Is the resolution of the issue in this thread being tracked elsewhere,\neither in a commit fest or other stream of work?\n\nOn Tue, Dec 17, 2019 at 9:47 AM Arseny Sher <a.sher@postgrespro.ru> wrote:\n\n> Andres, Álvaro, could you please have a look at this?\n>\n> --\n> Arseny Sher\n>\n>\n>\n>\n>",
"msg_date": "Tue, 17 Dec 2019 09:49:11 -0500",
"msg_from": "Dan Katz <dkatz@joor.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nDan Katz <dkatz@joor.com> writes:\n\n> Is the resolution of the issue in this thread being tracked elsewhere,\n> either in a commit fest or other stream of work?\n\nOk, I've created a cf entry:\nhttps://commitfest.postgresql.org/26/2383/\n\nI believe it is the most important to commit at least\n\n0002-Stop-demanding-that-top-xact-must-be-seen-before-sub.patch\n\nfrom my mail above -- as we see, this issue creates problems in the\nfield. Moreover, the patch is trivial and hopefully I've shown that\nERROR triggers were spurious and there is no easy way to relax the\ncheck.\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 17 Dec 2019 18:15:42 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Is there any chance this fix will get into the next minor version of PostgreSQL scheduled for February?",
"msg_date": "Thu, 30 Jan 2020 10:13:23 +0000",
"msg_from": "Maurizio Sambati <maurizio@viralize.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Arseny,\n\nI was hoping you could give me some insights about how this bug might\nappear with multiple replication slots. For example if I have two\nreplication slots would you expect both slots to see the same error, even\nif they were started, consumed or the LSN was confirmed-flushed at\ndifferent times?\n\nDan\n\nOn Tue, Dec 17, 2019 at 10:15 AM Arseny Sher <a.sher@postgrespro.ru> wrote:\n\n>\n> Dan Katz <dkatz@joor.com> writes:\n>\n> > Is the resolution of the issue in this thread being tracked elsewhere,\n> > either in a commit fest or other stream of work?\n>\n> Ok, I've created a cf entry:\n> https://commitfest.postgresql.org/26/2383/\n>\n> I believe it is the most important to commit at least\n>\n> 0002-Stop-demanding-that-top-xact-must-be-seen-before-sub.patch\n>\n> from my mail above -- as we see, this issue creates problems in the\n> field. Moreover, the patch is trivial and hopefully I've shown that\n> ERROR triggers were spurious and there is no easy way to relax the\n> check.\n>\n> --\n> Arseny Sher\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>",
"msg_date": "Thu, 30 Jan 2020 15:09:57 -0500",
"msg_from": "Dan Katz <dkatz@joor.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
        "msg_contents": "Hi,\n\nDan Katz <dkatz@joor.com> writes:\n\n> Arseny,\n>\n> I was hoping you could give me some insights about how this bug might\n> appear with multiple replications slots. For example if I have two\n> replication slots would you expect both slots to see the same error, even\n> if they were started, consumed or the LSN was confirmed-flushed at\n> different times?\n\nWell, to encounter this you must happen to interrupt a decoding session\n(e.g. shut down the server) when restart_lsn (the LSN from which WAL will be read next\ntime) is at an unfortunate position, as described in\nhttps://www.postgresql.org/message-id/87ftjifoql.fsf%40ars-thinkpad\n\nGenerally each slot has its own restart_lsn, so if one decoding session\ngot stuck on this issue, another one won't necessarily fail at the same\ntime. However, restart_lsn can be advanced only to certain points,\nmainly xl_running_xacts records, which are logged every 15 seconds. So if\nall consumers acknowledge changes fast enough, it is quite likely that\nduring shutdown restart_lsn will be the same for all slots -- which\nmeans either all of them will get stuck on further decoding or all of them\nwon't. If not, different slots might have different restart_lsn and\nprobably won't fail at the same time; but encountering this issue even\nonce suggests that your workload makes the possibility of such a problematic\nrestart_lsn perceptible (i.e. many subtransactions). And each\nrestart_lsn probably has approximately the same chance to be 'bad'\n(provided the workload is even).\n\n\nWe need a committer familiar with this code to look here...\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 31 Jan 2020 00:22:46 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Fri, Oct 25, 2019 at 12:26 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Andres Freund <andres@anarazel.de> writes:\n>\n> > Hi,\n> >\n> > On 2019-10-24 12:59:30 +0300, Arseny Sher wrote:\n> >> Our customer also encountered this issue and I've looked into it. The problem is\n> >> reproduced well enough using the instructions in the previous message.\n> >\n> > Is this with\n> >\n> > commit bac2fae05c7737530a6fe8276cd27d210d25c6ac\n> > Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > Date: 2019-09-13 16:36:28 -0300\n> >\n> > logical decoding: process ASSIGNMENT during snapshot build\n> >\n> > Most WAL records are ignored in early SnapBuild snapshot build phases.\n> > But it's critical to process some of them, so that later messages have\n> > the correct transaction state after the snapshot is completely built; in\n> > particular, XLOG_XACT_ASSIGNMENT messages are critical in order for\n> > sub-transactions to be correctly assigned to their parent transactions,\n> > or at least one assert misbehaves, as reported by Ildar Musin.\n> >\n> > Diagnosed-by: Masahiko Sawada\n> > Author: Masahiko Sawada\n> > Discussion: https://postgr.es/m/CAONYFtOv+Er1p3WAuwUsy1zsCFrSYvpHLhapC_fMD-zNaRWxYg@mail.gmail.com\n> >\n> > applied?\n>\n> Yeah, I tried fresh master. See below: skipped xl_xact_assignment is\n> beyond restart_lsn at all (and thus not read), so this doesn't matter.\n>\n>\n> >> The check leading to this ERROR is too strict, it forbids legit behaviours. Say\n> >> we have in WAL\n> >>\n> >> [ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_1> <commit> confirmed_flush_lsn> ]\n> >>\n> >> - First xl_xact_assignment record is beyond reading, i.e. 
earlier\n> >> restart_lsn, where ready snapshot will be taken from disk.\n> >> - After restart_lsn there is some change of a subxact.\n> >> - After that, there is second xl_xact_assignment (for another subxact)\n> >> revealing relationship between top and first subxact, where this ERROR fires.\n> >>\n> >> Such transaction won't be streamed because we hadn't seen it in full. It must be\n> >> finished before streaming will start, i.e. before confirmed_flush_lsn.\n> >>\n> >> Of course, the easiest fix is to just throw the check out.\n> >\n> > I don't think that'd actually be a fix, and just hiding a bug.\n>\n> I don't see a bug here. At least in reproduced scenario I see false\n> alert, as explained above: transaction with skipped xl_xact_assignment\n> won't be streamed as it finishes before confirmed_flush_lsn.\n>\n\nDoes this guarantee come from the fact that we need to wait for such a\ntransaction before reaching a consistent snapshot state? If not, can\nyou explain a bit more what makes you say so?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Feb 2020 12:15:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
        "msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n>> I don't see a bug here. At least in reproduced scenario I see false\n>> alert, as explained above: transaction with skipped xl_xact_assignment\n>> won't be streamed as it finishes before confirmed_flush_lsn.\n>>\n>\n> Does this guarantee come from the fact that we need to wait for such a\n> transaction before reaching a consistent snapshot state? If not, can\n> you explain a bit more what makes you say so?\n\nRight, see FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition -- it exists\nexactly for this purpose: once we have good snapshot, we need to wait\nfor all running xacts to finish to see all xacts we are promising to\nstream in full. This ensures <restart_lsn, confirmed_flush_lsn> pair is\ngood (reading WAL since the former is enough to see all xacts committing\nafter the latter in full) initially, and slot advancement arrangements\nensure it stays good forever (see\nLogicalIncreaseRestartDecodingForSlot).\n\nWell, almost. This is true as long as the initial snapshot construction process\ngoes the long way of building the snapshot by itself. If it happens to\npick up from disk ready snapshot pickled there by another decoding\nsession, it fast path'es to SNAPBUILD_CONSISTENT, which is technically a\nbug as described in\nhttps://www.postgresql.org/message-id/87ftjifoql.fsf%40ars-thinkpad\n\nIn theory, this bug could indeed lead to the 'subtransaction logged without\nprevious top-level txn record' error. In practice, I think its\npossibility is vanishingly small -- the process of slot creation must be\ninterleaved, within a very short window, with another decoder serializing its\nsnapshot (see the exact sequence of steps in the mail above). What is\nmuch more probable (it doesn't involve new slot creation and is relatively\neasily reproducible without sleeps) is a false alert triggered by an unlucky\nposition of restart_lsn.\n\n\nSurely we still must fix it. 
I just mean\n - People definitely encountered false alert, not this bug\n (at least because nobody said this was immediately after slot\n creation).\n - I've no bright ideas how to relax the check to make it proper\n without additional complications and I'm pretty sure this is\n impossible (again, see above for details), so I'd remove it.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 03 Feb 2020 12:20:30 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 2:50 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> >> I don't see a bug here. At least in reproduced scenario I see false\n> >> alert, as explained above: transaction with skipped xl_xact_assignment\n> >> won't be streamed as it finishes before confirmed_flush_lsn.\n> >>\n> >\n> > Does this guarantee come from the fact that we need to wait for such a\n> > transaction before reaching a consistent snapshot state? If not, can\n> > you explain a bit more what makes you say so?\n>\n> Right, see FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition -- it exists\n> exactly for this purpose: once we have good snapshot, we need to wait\n> for all running xacts to finish to see all xacts we are promising to\n> stream in full.\n>\n\nSo, doesn't this mean that it started occurring after the fix done in\ncommit 96b5033e11 [1]? Because before that fix we wouldn't have\nallowed processing XLOG_XACT_ASSIGNMENT records unless we are in\nSNAPBUILD_FULL_SNAPSHOT state. I am not telling the fix in that\ncommit is wrong, but just trying to understand the situation here.\n\n>\n> Well, almost. This is true as long initial snapshot construction process\n> goes the long way of building the snapshot by itself. If it happens to\n> pick up from disk ready snapshot pickled there by another decoding\n> session, it fast path'es to SNAPBUILD_CONSISTENT, which is technically a\n> bug as described in\n> https://www.postgresql.org/message-id/87ftjifoql.fsf%40ars-thinkpad\n>\n\nCan't we deal with this separately? If so, I think let's not mix the\ndiscussions for both as the root cause of both seems different.\n\n\n[1] -\n commit bac2fae05c7737530a6fe8276cd27d210d25c6ac\n Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: 2019-09-13 16:36:28 -0300\n\n logical decoding: process ASSIGNMENT during snapshot build\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Feb 2020 18:24:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
        "msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n> So, doesn't this mean that it started occurring after the fix done in\n> commit 96b5033e11 [1]? Because before that fix we wouldn't have\n> allowed processing XLOG_XACT_ASSIGNMENT records unless we are in\n> SNAPBUILD_FULL_SNAPSHOT state. I am not telling the fix in that\n> commit is wrong, but just trying to understand the situation here.\n\nNope. Consider again the example of WAL above triggering the error:\n\n[ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_2> <commit> <confirmed_flush_lsn> ]\n\nThe decoder starts reading WAL at <restart_lsn>, where it immediately reads\nthe snapshot serialized to disk earlier, which makes it jump to\nSNAPBUILD_CONSISTENT right away. It doesn't read xl_xact_assignment_1,\nbut it reads xl_xact_assignment_2 already in SNAPBUILD_CONSISTENT state,\nso it catches the error regardless of this commit.\n\n>> Well, almost. This is true as long initial snapshot construction process\n>> goes the long way of building the snapshot by itself. If it happens to\n>> pick up from disk ready snapshot pickled there by another decoding\n>> session, it fast path'es to SNAPBUILD_CONSISTENT, which is technically a\n>> bug as described in\n>> https://www.postgresql.org/message-id/87ftjifoql.fsf%40ars-thinkpad\n>>\n>\n> Can't we deal with this separately? If so, I think let's not mix the\n> discussions for both as the root cause of both seems different.\n\nThese issues are related: before removing the check it would be nice to\nensure that there are no bugs it might protect us from (and it turns out\nthere actually is one, though it won't always protect, and though this bug\nhas very small probability). Moreover, they are about more or less the same\nsubject -- avoiding partially decoded xacts -- and once you have dived deep\nenough to deal with one, it is reasonable to deal with the other instead\nof doing that twice. 
But as a practical matter, removing the check is a\nsimple one-liner, and its presence causes people trouble -- so I'd\nsuggest doing that first and then dealing with the rest. I don't think\nstarting a new thread is worthwhile here, but if you think it is, I can\ncreate one.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 03 Feb 2020 16:46:05 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 7:16 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> > So, doesn't this mean that it started occurring after the fix done in\n> > commit 96b5033e11 [1]? Because before that fix we wouldn't have\n> > allowed processing XLOG_XACT_ASSIGNMENT records unless we are in\n> > SNAPBUILD_FULL_SNAPSHOT state. I am not telling the fix in that\n> > commit is wrong, but just trying to understand the situation here.\n>\n> Nope. Consider again example of WAL above triggering the error:\n>\n> [ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_2> <commit> <confirmed_flush_lsn> ]\n>\n> Decoder starting reading WAL at <restart_lsn> where he immediately reads\n> from disk snapshot serialized earlier, which makes it jump to\n> SNAPBUILD_CONSISTENT right away.\n>\n\nSure, I understand that if we get serialized snapshot from disk, this\nproblem can occur and probably we can fix by ignoring serialized\nsnapshots during slot creation or something on those lines. However,\nwhat I am trying to understand is whether this can occur from another\npath as well. I think it might occur without using serialized\nsnapshots as well because we allow decoding xl_xact_assignment record\neven when the snapshot state is not full. Say in your above example,\neven if the snapshot state is not SNAPBUILD_CONSISTENT as we haven't\nused the serialized snapshot, then also, it can lead to the above\nproblem due to decoding of xl_xact_assignment. 
I have not tried to\ngenerate a test case for this, so I could easily be wrong here.\n\nWhat I am trying to get at is if the problem can only occur by using\nserialized snapshots and we fix it by somehow not using them initial\nslot creation, then ideally we don't need to remove the error in\nReorderBufferAssignChild, but if that is not the case, then we need to\ndiscuss other cases as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Feb 2020 12:11:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
        "msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n> On Mon, Feb 3, 2020 at 7:16 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>> Amit Kapila <amit.kapila16@gmail.com> writes:\n>>\n>> > So, doesn't this mean that it started occurring after the fix done in\n>> > commit 96b5033e11 [1]? Because before that fix we wouldn't have\n>> > allowed processing XLOG_XACT_ASSIGNMENT records unless we are in\n>> > SNAPBUILD_FULL_SNAPSHOT state. I am not telling the fix in that\n>> > commit is wrong, but just trying to understand the situation here.\n>>\n>> Nope. Consider again example of WAL above triggering the error:\n>>\n>> [ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_2> <commit> <confirmed_flush_lsn> ]\n>>\n>> Decoder starting reading WAL at <restart_lsn> where he immediately reads\n>> from disk snapshot serialized earlier, which makes it jump to\n>> SNAPBUILD_CONSISTENT right away.\n>>\n>\n> Sure, I understand that if we get serialized snapshot from disk, this\n> problem can occur and probably we can fix by ignoring serialized\n> snapshots during slot creation or something on those lines.\n\nThere is some confusion. I'll try to reword what we have here:\n\n1) Decoding from existing slot (*not* initial snapshot construction)\nstarts up, immediately picks up snapshot at restart_lsn (getting into\nSNAPBUILD_CONSISTENT) and in some xl_xact_assignment learns that it\nhadn't seen in full (no toplevel records) transaction which it is not\neven going to stream -- but still dies with \"subtransaction logged\nwithout...\". That's my example above, and that's what people are\ncomplaining about. 
Here, usage of serialized snapshot and jump to\nSNAPBUILD_CONSISTENT is not just legit, it is essential: in order to be\nable to stream data since confirmed_flush_lsn, we must pick it up as we\nmight not be able to assemble it from scratch in time. I mean, what is\nwrong here is not serialized snapshot usage but the check.\n\n(Lengthy comment to AllocateSnapshotBuilder in my\n0001-Fix-serialized-snapshot-usage-for-new-logical-slots.patch explains\nwhy snapbuilder is not able to do FULL -> CONSISTENT transition on its\nown early enough for decoding from existing slot, so the jump on\nsnapshot pickup is performed to CONSISTENT directly.)\n\nThis happens with or without bac2fae05c.\n\n2) *New* slot creation process picks up serialized snapshot and jumps\nto CONSISTENT without waiting for all running xacts to finish. This is\nwrong and is a bug (of very low probability), as we risk promising to\ndecode xacts which we might not have seen in full. Sometimes it could be\narrested by \"subtransaction logged without...\" check, but not necessarily\n-- e.g. there could be no subxacts at all.\n\n\n> However,\n> what I am trying to understand is whether this can occur from another\n> path as well. I think it might occur without using serialized\n> snapshots as well because we allow decoding xl_xact_assignment record\n> even when the snapshot state is not full. Say in your above example,\n> even if the snapshot state is not SNAPBUILD_CONSISTENT as we haven't\n> used the serialized snapshot, then also, it can lead to the above\n> problem due to decoding of xl_xact_assignment. I have not tried to\n> generate a test case for this, so I could easily be wrong here.\n\nWhat you are suggesting here is 3), which is, well, sort of a form of 1),\nmeaning the \"subxact logged...\" error is also pointlessly triggered, but for\nnew slot creation. With bac2fae0, the decoder might miss the first\nxl_xact_assignment because it is beyond the start of reading WAL but\nencounter the second xl_xact_assignment and die on it due to this check\nbefore even getting FULL.\n\nBut now that I'm thinking about it, I suspect that something similar could happen\neven before bac2fae0. 
Imagine\n\n<start_of_reading_wal> <xl_xact_assignment_1> <SNAPBUILD_FULL> <subxact_change> <xl_xact_assignment_2> <commit> ... <SNAPBUILD_CONSISTENT>\n\nBefore bac2fae0, xl_xact_assignment_1 was ignored, so\nxl_xact_assignment_1 would trigger the error.\n\n> What I am trying to get at is if the problem can only occur by using\n> serialized snapshots and we fix it by somehow not using them initial\n> slot creation, then ideally we don't need to remove the error in\n> ReorderBufferAssignChild, but if that is not the case, then we need to\n> discuss other cases as well.\n\nSo, 1) and 3) mean this is not the case.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 05 Feb 2020 12:04:06 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Wed, Feb 5, 2020 at 2:34 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> > On Mon, Feb 3, 2020 at 7:16 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> >> Amit Kapila <amit.kapila16@gmail.com> writes:\n> >>\n> >> > So, doesn't this mean that it started occurring after the fix done in\n> >> > commit 96b5033e11 [1]? Because before that fix we wouldn't have\n> >> > allowed processing XLOG_XACT_ASSIGNMENT records unless we are in\n> >> > SNAPBUILD_FULL_SNAPSHOT state. I am not telling the fix in that\n> >> > commit is wrong, but just trying to understand the situation here.\n> >>\n> >> Nope. Consider again example of WAL above triggering the error:\n> >>\n> >> [ <xl_xact_assignment_1> <restart_lsn> <subxact_change> <xl_xact_assignment_2> <commit> <confirmed_flush_lsn> ]\n> >>\n> >> Decoder starting reading WAL at <restart_lsn> where he immediately reads\n> >> from disk snapshot serialized earlier, which makes it jump to\n> >> SNAPBUILD_CONSISTENT right away.\n> >>\n> >\n> > Sure, I understand that if we get serialized snapshot from disk, this\n> > problem can occur and probably we can fix by ignoring serialized\n> > snapshots during slot creation or something on those lines.\n>\n> There is some confusion. I'll try to reword what we have here:\n>\n> 1) Decoding from existing slot (*not* initial snapshot construction)\n> starts up, immediately picks up snapshot at restart_lsn (getting into\n> SNAPBUILD_CONSISTENT) and in some xl_xact_assignment learns that it\n> hadn't seen in full (no toplevel records) transaction which it is not\n> even going to stream -- but still dies with \"subtransation logged\n> without...\". That's my example above, and that's what people are\n> complaining about. 
Here, usage of serialized snapshot and jump to\n> SNAPBUILD_CONSISTENT is not just legit, it is essential: or order to be\n> able to stream data since confirmed_flush_lsn, we must pick it up as we\n> might not be able to assemble it from scratch in time. I mean, what is\n> wrong here is not serialized snapshot usage but the check.\n>\n\nI was thinking if we have some way to skip processing of\nxl_xact_assignment for such cases, then it might be better. Say,\nalong with restart_lsn, if we have some way to find the corresponding nextXid\n(below which we don't need to process records). Say, if, during\nSnapBuildProcessRunningXacts, we record the xid of txn we got by\nReorderBufferGetOldestTXN in the slot, then can't we use it to skip such\nrecords.\n\n> (Lengthy comment to AllocateSnapshotBuilder in my\n> 0001-Fix-serialized-snapshot-usage-for-new-logical-slots.patch explains\n> why snapbuilder is not able to do FULL -> CONSISTENT transition on its\n> own early enough for decoding from existing slot, so the jump on\n> snapshot pickup is performed to CONSISTENT directly.)\n>\n> This happens with or without bac2fae05c.\n>\n> 2) *New* slot creationg process picks up serialized snapshot and jumps\n> to CONSISTENT without waiting for all running xacts to finish. This is\n> wrong and is a bug (of very low probability), as we risk promising to\n> decode xacts which we might not seen in full. Sometimes it could be\n> arrested by \"subtransation logged without...\" check, but not necessarily\n> -- e.g. there could be no subxacts at all.\n>\n>\n> > However,\n> > what I am trying to understand is whether this can occur from another\n> > path as well. I think it might occur without using serialized\n> > snapshots as well because we allow decoding xl_xact_assignment record\n> > even when the snapshot state is not full. 
Say in your above example,\n> > even if the snapshot state is not SNAPBUILD_CONSISTENT as we haven't\n> > used the serialized snapshot, then also, it can lead to the above\n> > problem due to decoding of xl_xact_assignment. I have not tried to\n> > generate a test case for this, so I could easily be wrong here.\n>\n> What you are suggesting here is 3), which is, well, sort of form of 1),\n> meaning \"subxact logged...\" error also pointlessly triggered, but for\n> new slot creation. With bac2fae0, decoder might miss first\n> xl_xact_assignment because it is beyond start of reading WAL but\n> encounter second xl_xact_assignment and die on it due to this check\n> before even getting FULL.\n>\n> But now that I'm thinking about it, I suspect that similar could happen\n> even before bac2fae0. Imagine\n>\n> <start_of_reading_wal> <xl_xact_assignment_1> <SNAPBUILD_FULL> <subxact_change> <xl_xact_assignment_2> <commit> ... <SNAPBUILD_CONSISTENT>\n>\n> Before bac2fae0, xl_xact_assignment_1 was ignored, so\n> xl_xact_assignment_1 would trigger the error.\n>\n\n'xl_xact_assignment_1 would trigger the error', I think in this part\nof sentence you mean to say xl_xact_assignment_2 because we won't try\nto decode xl_xact_assignment_1 before bac2fae0. If so, won't we wait\nfor such a transaction to finish while changing the snapshot state\nfrom SNAPBUILD_BUILDING_SNAPSHOT to SNAPBUILD_FULL_SNAPSHOT? 
And if\nthe transaction is finished, ideally, we should not try to decode its\nWAL and/or create its ReorderBufferTxn.\n\n> > What I am trying to get at is if the problem can only occur by using\n> > serialized snapshots and we fix it by somehow not using them initial\n> > slot creation, then ideally we don't need to remove the error in\n> > ReorderBufferAssignChild, but if that is not the case, then we need to\n> > discuss other cases as well.\n>\n> So, 1) and 3) mean this is not the case.\n>\n\nRight, I am thinking that if we can find some way to skip the xact\nassignment for (1) and (3), then that might be a better fix.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 7 Feb 2020 16:29:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
        "msg_contents": "On Fri, Feb 7, 2020 at 4:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 5, 2020 at 2:34 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> >\n>\n> > > What I am trying to get at is if the problem can only occur by using\n> > > serialized snapshots and we fix it by somehow not using them initial\n> > > slot creation, then ideally we don't need to remove the error in\n> > > ReorderBufferAssignChild, but if that is not the case, then we need to\n> > > discuss other cases as well.\n> >\n> > So, 1) and 3) mean this is not the case.\n> >\n>\n> Right, I am thinking that if we can find some way to skip the xact\n> assignment for (1) and (3), then that might be a better fix.\n>\n\nJust to be clear, I am just brainstorming the ideas to see if we can\nfind some better solutions to the problems (1) and (3) described by\nArseny in the above email [1]. At this stage, it is not clear to me\nthat we have a fix simple enough to backpatch apart from what Arseny\nposted in his first email [2] (which is to stop demanding that the top xact\nmust be seen before a subxact in decoding).\n\n[1] - https://www.postgresql.org/message-id/87zhdx76d5.fsf%40ars-thinkpad\n[2] - https://www.postgresql.org/message-id/87ftjifoql.fsf%40ars-thinkpad\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 9 Feb 2020 12:57:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
        "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n\n>> 1) Decoding from existing slot (*not* initial snapshot construction)\n>> starts up, immediately picks up snapshot at restart_lsn (getting into\n>> SNAPBUILD_CONSISTENT) and in some xl_xact_assignment learns that it\n>> hadn't seen in full (no toplevel records) transaction which it is not\n>> even going to stream -- but still dies with \"subtransation logged\n>> without...\". That's my example above, and that's what people are\n>> complaining about. Here, usage of serialized snapshot and jump to\n>> SNAPBUILD_CONSISTENT is not just legit, it is essential: or order to be\n>> able to stream data since confirmed_flush_lsn, we must pick it up as we\n>> might not be able to assemble it from scratch in time. I mean, what is\n>> wrong here is not serialized snapshot usage but the check.\n>>\n>\n> I was thinking if we have some way to skip processing of\n> xl_xact_assignment for such cases, then it might be better. Say,\n> along with restart_lsn, if have some way to find corresponding nextXid\n> (below which we don't need to process records).\n\nI don't believe you can do that without persisting additional\ndata. Basically, what we need is a list of transactions that are running at\nthe point of snapshot serialization *and* already wrote something before\nit -- those we hadn't seen in full and can't decode. We have no such\ndata currently. 
The closest thing we have is xl_running_xacts->nextXid,\nbut\n\n 1) issued xid doesn't necessarily mean the xact actually wrote something,\n so we can't just skip xl_xact_assignment for xid < nextXid, it might\n still be decoded\n 2) snapshot might be serialized not at xl_running_xacts anyway\n\nSurely this thing doesn't deserve changing the persisted data format.\n\n\nSomehow I hadn't realized this earlier, so my comments/commit messages\nin patches above were not accurate here; I've edited them. Also in the\nfirst patch serialized snapshots are no longer used for new slot\ncreation at all, as Andres suggested above. This is not essential, as I\nsaid, but arguably makes things a bit simpler.\n\nI've also found a couple of issues with the slot copying feature; I will post\nabout them in a separate thread.\n\n\n\n\n\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 09 Feb 2020 19:07:50 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Sun, Feb 9, 2020 at 9:37 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> >> 1) Decoding from existing slot (*not* initial snapshot construction)\n> >> starts up, immediately picks up snapshot at restart_lsn (getting into\n> >> SNAPBUILD_CONSISTENT) and in some xl_xact_assignment learns that it\n> >> hadn't seen in full (no toplevel records) transaction which it is not\n> >> even going to stream -- but still dies with \"subtransation logged\n> >> without...\". That's my example above, and that's what people are\n> >> complaining about. Here, usage of serialized snapshot and jump to\n> >> SNAPBUILD_CONSISTENT is not just legit, it is essential: or order to be\n> >> able to stream data since confirmed_flush_lsn, we must pick it up as we\n> >> might not be able to assemble it from scratch in time. I mean, what is\n> >> wrong here is not serialized snapshot usage but the check.\n> >>\n> >\n> > I was thinking if we have some way to skip processing of\n> > xl_xact_assignment for such cases, then it might be better. Say,\n> > along with restart_lsn, if have some way to find corresponding nextXid\n> > (below which we don't need to process records).\n>\n> I don't believe you can that without persisting additional\n> data. Basically, what we need is list of transactions who are running at\n> the point of snapshot serialization *and* already wrote something before\n> it -- those we hadn't seen in full and can't decode. We have no such\n> data currently. 
The closest thing we have is xl_running_xacts->nextXid,\n> but\n>\n> 1) issued xid doesn't necessarily means xact actually wrote something,\n> so we can't just skip xl_xact_assignment for xid < nextXid, it might\n> still be decoded\n> 2) snapshot might be serialized not at xl_running_xacts anyway\n>\n> Surely this thing doesn't deserve changing persisted data format.\n>\n\nI agree that it won't be a good idea to change the persisted data\nformat, especially in back-branches. I don't see any fix which can\navoid this without doing major changes in the code. Apart from this,\nwe have to come up with a solution for point (3) discussed in the\nabove email [1] which again could be change in design. I think we can\nfirst try to proceed with the patch\n0002-Stop-demanding-that-top-xact-must-be-seen-before--v2 and then we\ncan discuss the other patch. I can't see a way to write a test case\nfor this, can you think of any way?\n\nAndres, anyone else, if you have a better idea other than changing the\ncode (removing the expected error) as in\n0002-Stop-demanding-that-top-xact-must-be-seen-before--v2, then\nplease, let us know. You can read the points (1) and (3) in the email\nabove [1] where the below error check will hit for valid cases. We\nhave discussed this in detail, but couldn't come up with anything\nbetter than to remove this check.\n\n@@ -778,9 +778,6 @@ ReorderBufferAssignChild(ReorderBuffer *rb,\nTransactionId xid,\n txn = ReorderBufferTXNByXid(rb, xid, true, &new_top, lsn, true);\n subtxn = ReorderBufferTXNByXid(rb, subxid, true, &new_sub, lsn, false);\n\n- if (new_top && !new_sub)\n- elog(ERROR, \"subtransaction logged without previous top-level txn record\");\n-\n\n[1] - https://www.postgresql.org/message-id/87zhdx76d5.fsf%40ars-thinkpad\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Feb 2020 16:33:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n\n>> I don't believe you can that without persisting additional\n>> data. Basically, what we need is list of transactions who are running at\n>> the point of snapshot serialization *and* already wrote something before\n>> it -- those we hadn't seen in full and can't decode. We have no such\n>> data currently. The closest thing we have is xl_running_xacts->nextXid,\n>> but\n>>\n>> 1) issued xid doesn't necessarily means xact actually wrote something,\n>> so we can't just skip xl_xact_assignment for xid < nextXid, it might\n>> still be decoded\n>> 2) snapshot might be serialized not at xl_running_xacts anyway\n>>\n>> Surely this thing doesn't deserve changing persisted data format.\n>>\n>\n> I agree that it won't be a good idea to change the persisted data\n> format, especially in back-branches. I don't see any fix which can\n> avoid this without doing major changes in the code. Apart from this,\n> we have to come up with a solution for point (3) discussed in the\n> above email [1] which again could be change in design. I think we can\n> first try to proceed with the patch\n> 0002-Stop-demanding-that-top-xact-must-be-seen-before--v2 and then we\n> can discuss the other patch. I can't see a way to write a test case\n> for this, can you think of any way?\n\nYeah, let's finally get it.\n\nAttached is raw version of isolation test triggering false\n'subtransaction logged without...' (case (1)). However, frankly I don't\nsee much value in it, so I'm dubious whether it should be included in\nthe patch.\n\n\n\n\n\n-- cheers, Arseny",
"msg_date": "Mon, 10 Feb 2020 16:04:46 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Mon, Feb 10, 2020 at 6:34 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> >> I don't believe you can that without persisting additional\n> >> data. Basically, what we need is list of transactions who are running at\n> >> the point of snapshot serialization *and* already wrote something before\n> >> it -- those we hadn't seen in full and can't decode. We have no such\n> >> data currently. The closest thing we have is xl_running_xacts->nextXid,\n> >> but\n> >>\n> >> 1) issued xid doesn't necessarily means xact actually wrote something,\n> >> so we can't just skip xl_xact_assignment for xid < nextXid, it might\n> >> still be decoded\n> >> 2) snapshot might be serialized not at xl_running_xacts anyway\n> >>\n> >> Surely this thing doesn't deserve changing persisted data format.\n> >>\n> >\n> > I agree that it won't be a good idea to change the persisted data\n> > format, especially in back-branches. I don't see any fix which can\n> > avoid this without doing major changes in the code. Apart from this,\n> > we have to come up with a solution for point (3) discussed in the\n> > above email [1] which again could be change in design. I think we can\n> > first try to proceed with the patch\n> > 0002-Stop-demanding-that-top-xact-must-be-seen-before--v2 and then we\n> > can discuss the other patch. I can't see a way to write a test case\n> > for this, can you think of any way?\n>\n> Yeah, let's finally get it.\n>\n> Attached is raw version of isolation test triggering false\n> 'subtransaction logged without...' (case (1)).\n>\n\nThis didn't reproduce the desired error for me (tried without a\npatch). 
I think you need to add two more steps (\"s2_checkpoint\"\n\"s2_get_changes\") at the end of the test to set the restart_lsn at the\nappropriate location.\n\n> However, frankly I don't\n> see much value in it, so I'm dubious whether it should be included in\n> the patch.\n>\n\nI think this will surely test some part of the system which was not\ntested before, mainly having some subxacts without top-xact getting\ndecoded even though we don't need to send such a transaction. Can you\nprepare a complete patch (for\nStop-demanding-that-top-xact-must-be-seen-before-sub) having this test\nas part of it?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 Feb 2020 09:29:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n>> Attached is raw version of isolation test triggering false\n>> 'subtransaction logged without...' (case (1)).\n>>\n>\n> This didn't reproduce the desired error for me (tried without a\n> patch). I think you need to add two more steps (\"s2_checkpoint\"\n> \"s2_get_changes\") at the end of the test to set the restart_lsn at the\n> appropriate location.\n\nThat's weird, it reliably fails with expected error for me. There are\nalready two s2_checkpoint's: first establishes potential (broken)\nrestart_lsn (serializes snapshot after first xl_xact_assignment of s0\nxact, but before first record of s1 xact), the second ensures\ns2_get_changes directly following it will actually advance the slot,\nmaking that potential restart_lsn real.\n\nI don't see how adding s2_checkpoint and s2_get_changes helps here. Do\nthey really provoke error in your setup? Could you check with pg_waldump\nwhat's going on?\n\n>\n>> However, frankly I don't\n>> see much value in it, so I'm dubious whether it should be included in\n>> the patch.\n>>\n>\n> I think this will surely test some part of the system which was not\n> tested before, mainly having some subxacts without top-xact getting\n> decoded even though we don't need to send such a transaction. Can you\n> prepare a complete patch (for\n> Stop-demanding-that-top-xact-must-be-seen-before-sub) having this test\n> as part of it?\n\nOk, will do.\n\n\n-- cheers, arseny\n\n\n",
"msg_date": "Tue, 11 Feb 2020 07:32:22 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": ">> I think this will surely test some part of the system which was not\n>> tested before, mainly having some subxacts without top-xact getting\n>> decoded even though we don't need to send such a transaction. Can you\n>> prepare a complete patch (for\n>> Stop-demanding-that-top-xact-must-be-seen-before-sub) having this test\n>> as part of it?\n>\n> Ok, will do.\n\nHere it is.\n\n\n\n\n\n-- cheers, arseny",
"msg_date": "Tue, 11 Feb 2020 08:06:59 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Tue, Feb 11, 2020 at 10:02 AM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> >> Attached is raw version of isolation test triggering false\n> >> 'subtransaction logged without...' (case (1)).\n> >>\n> >\n> > This didn't reproduce the desired error for me (tried without a\n> > patch). I think you need to add two more steps (\"s2_checkpoint\"\n> > \"s2_get_changes\") at the end of the test to set the restart_lsn at the\n> > appropriate location.\n>\n> That's weird, it reliably fails with expected error for me. There are\n> already two s2_checkpoint's: first establishes potential (broken)\n> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n> xact, but before first record of s1 xact), the second ensures\n> s2_get_changes directly following it will actually advance the slot,\n>\n\nIn my case, s2_get_changes doesn't seem to be advancing the restart\nlsn because when it processed running_xact by s2_checkpoint, the slots\nconfirm flush location (slot->data.confirmed_flush) was behind it. As\nconfirmed_flush was behind running_xact of s2_checkpoint, it couldn't\nupdate slot->candidate_restart_lsn (in function\nLogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\nlocation will only be updated at the end of get_changes. This is the\nreason I need extra get_changes call to generate an error.\n\nI will think and investigate this more, but thought of sharing the\ncurrent situation with you. There is something different going on in\nmy system or maybe the nature of test is like that.\n\n> making that potential restart_lsn real.\n>\n> I don't see how adding s2_checkpoint and s2_get_changes helps here. Do\n> they really provoke error in your setup?\n>\n\nYes, I am running each of the steps in test manually by using three\ndifferent terminals.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 Feb 2020 17:15:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n\n>> That's weird, it reliably fails with expected error for me. There are\n>> already two s2_checkpoint's: first establishes potential (broken)\n>> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n>> xact, but before first record of s1 xact), the second ensures\n>> s2_get_changes directly following it will actually advance the slot,\n>>\n>\n> In my case, s2_get_changes doesn't seem to be advancing the restart\n> lsn because when it processed running_xact by s2_checkpoint, the slots\n> confirm flush location (slot->data.confirmed_flush) was behind it. As\n> confirmed_flush was behind running_xact of s2_checkpoint, it couldn't\n> update slot->candidate_restart_lsn (in function\n> LogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\n> location will only be updated at the end of get_changes. This is the\n> reason I need extra get_changes call to generate an error.\n>\n> I will think and investigate this more, but thought of sharing the\n> current situation with you. There is something different going on in\n> my system or maybe the nature of test is like that.\n\nAh, I think I know what's happening -- you have one more\nxl_running_xacts which catches the advancement -- similar issue is\nexplained in the comment in oldest_xmin.spec.\n\nTry attached.\n\n\n\n\n\n-- cheers, arseny",
"msg_date": "Tue, 11 Feb 2020 15:06:02 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Tue, Feb 11, 2020 at 5:36 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> >> That's weird, it reliably fails with expected error for me. There are\n> >> already two s2_checkpoint's: first establishes potential (broken)\n> >> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n> >> xact, but before first record of s1 xact), the second ensures\n> >> s2_get_changes directly following it will actually advance the slot,\n> >>\n> >\n> > In my case, s2_get_changes doesn't seem to be advancing the restart\n> > lsn because when it processed running_xact by s2_checkpoint, the slots\n> > confirm flush location (slot->data.confirmed_flush) was behind it. As\n> > confirmed_flush was behind running_xact of s2_checkpoint, it couldn't\n> > update slot->candidate_restart_lsn (in function\n> > LogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\n> > location will only be updated at the end of get_changes. This is the\n> > reason I need extra get_changes call to generate an error.\n> >\n> > I will think and investigate this more, but thought of sharing the\n> > current situation with you. There is something different going on in\n> > my system or maybe the nature of test is like that.\n>\n> Ah, I think I know what's happening -- you have one more\n> xl_running_xacts which catches the advancement -- similar issue is\n> explained in the comment in oldest_xmin.spec.\n>\nThere is one more inconsistency in the test case which I faced while\ntrying to reproduce. The problem is that, after \"s0_begin\"\n\"s0_first_subxact\", steps the open transaction is the top-transaction\nbecause we have generated the sub-transaction and closed it. Now,\nduring the \"s0_many_subxacts\" step, while scanning the system table\n(e.g. for finding the function) the top-transaction might log the WAL\nfor the hint bits. 
And then we will lose the purpose of the test\nbecause we will get the WAL for the top-transaction after the restart\npoint, and then there will be no error. To fix this I have\nmodified the \"s0_first_subxact\" step as shown below\n\n+step \"s0_first_subxact\" {\n+ DO LANGUAGE plpgsql $$\n+ BEGIN\n+ BEGIN\n+ INSERT INTO harvest VALUES (41);\n+ EXCEPTION WHEN OTHERS THEN RAISE;\n+ END;\n+ END $$;\n+}\nsavepoint s1; -- added extra\nINSERT INTO harvest VALUES (41); -- added extra\n\nBasically, after these two steps, the open transaction will be the\nsub-transaction, not the top-transaction, and that will make sure that\neven if the future steps log some WAL, those will be under the\nsub-transaction, not the top-transaction.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Feb 2020 08:46:22 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 8:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Feb 11, 2020 at 5:36 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> >\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> >\n> > >> That's weird, it reliably fails with expected error for me. There are\n> > >> already two s2_checkpoint's: first establishes potential (broken)\n> > >> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n> > >> xact, but before first record of s1 xact), the second ensures\n> > >> s2_get_changes directly following it will actually advance the slot,\n> > >>\n> > >\n> > > In my case, s2_get_changes doesn't seem to be advancing the restart\n> > > lsn because when it processed running_xact by s2_checkpoint, the slots\n> > > confirm flush location (slot->data.confirmed_flush) was behind it. As\n> > > confirmed_flush was behind running_xact of s2_checkpoint, it couldn't\n> > > update slot->candidate_restart_lsn (in function\n> > > LogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\n> > > location will only be updated at the end of get_changes. This is the\n> > > reason I need extra get_changes call to generate an error.\n> > >\n> > > I will think and investigate this more, but thought of sharing the\n> > > current situation with you. There is something different going on in\n> > > my system or maybe the nature of test is like that.\n> >\n> > Ah, I think I know what's happening -- you have one more\n> > xl_running_xacts which catches the advancement -- similar issue is\n> > explained in the comment in oldest_xmin.spec.\n> >\n\nRight, that is why in my case get_changes were required twice. After\ncalling get_changes as we do in oldest_xmin.spec will make test case\nreliable.\n\n> There is one more inconsistency in the test case which I faced while\n> trying to reproduce. 
The problem is that, after \"s0_begin\"\n> \"s0_first_subxact\", steps the open transaction is the top-transaction\n> because we have generated the sub-transaction and closed it. Now,\n> during the \"s0_many_subxacts\" step, while scanning the system table\n> (e.g. for finding the function) the top-transaction might log the WAL\n> for the hint bits.\n>\n\nI am curious to know how this is happening in your case? Because we\nlog WAL for hint-bits only when checksums or wal_log_hints are enabled\n(see XLogHintBitIsNeeded), which is not the default case?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Feb 2020 09:09:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 12, 2020 at 8:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Feb 11, 2020 at 5:36 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> > >\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > >\n> > > >> That's weird, it reliably fails with expected error for me. There are\n> > > >> already two s2_checkpoint's: first establishes potential (broken)\n> > > >> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n> > > >> xact, but before first record of s1 xact), the second ensures\n> > > >> s2_get_changes directly following it will actually advance the slot,\n> > > >>\n> > > >\n> > > > In my case, s2_get_changes doesn't seem to be advancing the restart\n> > > > lsn because when it processed running_xact by s2_checkpoint, the slots\n> > > > confirm flush location (slot->data.confirmed_flush) was behind it. As\n> > > > confirmed_flush was behind running_xact of s2_checkpoint, it couldn't\n> > > > update slot->candidate_restart_lsn (in function\n> > > > LogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\n> > > > location will only be updated at the end of get_changes. This is the\n> > > > reason I need extra get_changes call to generate an error.\n> > > >\n> > > > I will think and investigate this more, but thought of sharing the\n> > > > current situation with you. There is something different going on in\n> > > > my system or maybe the nature of test is like that.\n> > >\n> > > Ah, I think I know what's happening -- you have one more\n> > > xl_running_xacts which catches the advancement -- similar issue is\n> > > explained in the comment in oldest_xmin.spec.\n> > >\n>\n> Right, that is why in my case get_changes were required twice. 
After\n> calling get_changes as we do in oldest_xmin.spec will make test case\n> reliable.\n>\n> > There is one more inconsistency in the test case which I faced while\n> > trying to reproduce. The problem is that, after \"s0_begin\"\n> > \"s0_first_subxact\", steps the open transaction is the top-transaction\n> > because we have generated the sub-transaction and closed it. Now,\n> > during the \"s0_many_subxacts\" step, while scanning the system table\n> > (e.g. for finding the function) the top-transaction might log the WAL\n> > for the hint bits.\n> >\n>\n> I am curious to know how this is happening in your case? Because we\n> log WAL for hint-bits only when checksums or wal_log_hints are enabled\n> (See (or XLogHintBitIsNeeded) which is not the default case?\n\nYeah, you are right. Actually, wal_log_hints is set in my\nconfiguration, so with the default configuration it should not be a\nproblem.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 12 Feb 2020 09:23:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 12, 2020 at 8:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Feb 11, 2020 at 5:36 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> > >\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > >\n> > > >> That's weird, it reliably fails with expected error for me. There are\n> > > >> already two s2_checkpoint's: first establishes potential (broken)\n> > > >> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n> > > >> xact, but before first record of s1 xact), the second ensures\n> > > >> s2_get_changes directly following it will actually advance the slot,\n> > > >>\n> > > >\n> > > > In my case, s2_get_changes doesn't seem to be advancing the restart\n> > > > lsn because when it processed running_xact by s2_checkpoint, the slots\n> > > > confirm flush location (slot->data.confirmed_flush) was behind it. As\n> > > > confirmed_flush was behind running_xact of s2_checkpoint, it couldn't\n> > > > update slot->candidate_restart_lsn (in function\n> > > > LogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\n> > > > location will only be updated at the end of get_changes. This is the\n> > > > reason I need extra get_changes call to generate an error.\n> > > >\n> > > > I will think and investigate this more, but thought of sharing the\n> > > > current situation with you. There is something different going on in\n> > > > my system or maybe the nature of test is like that.\n> > >\n> > > Ah, I think I know what's happening -- you have one more\n> > > xl_running_xacts which catches the advancement -- similar issue is\n> > > explained in the comment in oldest_xmin.spec.\n> > >\n>\n> Right, that is why in my case get_changes were required twice. 
After\n> calling get_changes as we do in oldest_xmin.spec will make test case\n> reliable.\n>\n\nAttached is a patch where I have modified the comments and slightly\nedited the commit message. This patch was not getting applied in v11\nand branches lower than that, so I prepared a patch for those branches\nas well. I have tested this patch till 9.5 and it works as intended.\n\nCan you also once check the patch and verify it in back-branches?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 12 Feb 2020 13:42:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 1:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 12, 2020 at 9:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Feb 12, 2020 at 8:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Feb 11, 2020 at 5:36 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> > > >\n> > > >\n> > > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > >\n> > > > >> That's weird, it reliably fails with expected error for me. There are\n> > > > >> already two s2_checkpoint's: first establishes potential (broken)\n> > > > >> restart_lsn (serializes snapshot after first xl_xact_assignment of s0\n> > > > >> xact, but before first record of s1 xact), the second ensures\n> > > > >> s2_get_changes directly following it will actually advance the slot,\n> > > > >>\n> > > > >\n> > > > > In my case, s2_get_changes doesn't seem to be advancing the restart\n> > > > > lsn because when it processed running_xact by s2_checkpoint, the slots\n> > > > > confirm flush location (slot->data.confirmed_flush) was behind it. As\n> > > > > confirmed_flush was behind running_xact of s2_checkpoint, it couldn't\n> > > > > update slot->candidate_restart_lsn (in function\n> > > > > LogicalIncreaseRestartDecodingForSlot). I think the confirmed_flush\n> > > > > location will only be updated at the end of get_changes. This is the\n> > > > > reason I need extra get_changes call to generate an error.\n> > > > >\n> > > > > I will think and investigate this more, but thought of sharing the\n> > > > > current situation with you. There is something different going on in\n> > > > > my system or maybe the nature of test is like that.\n> > > >\n> > > > Ah, I think I know what's happening -- you have one more\n> > > > xl_running_xacts which catches the advancement -- similar issue is\n> > > > explained in the comment in oldest_xmin.spec.\n> > > >\n> >\n> > Right, that is why in my case get_changes were required twice. 
After\n> > calling get_changes as we do in oldest_xmin.spec will make test case\n> > reliable.\n> >\n>\n> Attached is a patch where I have modified the comments and slightly\n> edited the commit message. This patch was not getting applied in v11\n> and branches lower than that, so I prepared a patch for those branches\n> as well. I have tested this patch till 9.5 and it works as intended.\n>\n> Can you also once check the patch and verify it in back-branches?\n\nI have checked the patch and it looks fine to me. I have also tested\nit on the back branches and it works fine.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Feb 2020 10:01:50 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Thu, Feb 13, 2020 at 10:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Feb 12, 2020 at 1:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Attached is a patch where I have modified the comments and slightly\n> > edited the commit message. This patch was not getting applied in v11\n> > and branches lower than that, so I prepared a patch for those branches\n> > as well. I have tested this patch till 9.5 and it works as intended.\n> >\n> > Can you also once check the patch and verify it in back-branches?\n>\n> I have checked the patch and it looks fine to me. I have also tested\n> it on the back branches and it works fine.\n>\n\nThanks, I am planning to commit this next week sometime (on Wednesday\n19-Feb) to let others review or share their opinion.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Feb 2020 14:15:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n> Attached is a patch where I have modified the comments and slightly\n> edited the commit message. This patch was not getting applied in v11\n> and branches lower than that, so I prepared a patch for those branches\n> as well. I have tested this patch till 9.5 and it works as intended.\n>\n> Can you also once check the patch and verify it in back-branches?\n\nSorry for the delay. I'm fine with changes. make check down to 9.5 is\nalso happy here.\n\nInteresting thing about wal_log_hints, hasn't occured to me.\n\n\n-- cheers, arseny\n\n\n",
"msg_date": "Fri, 14 Feb 2020 16:34:28 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Fri, Feb 14, 2020 at 7:04 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> > Attached is a patch where I have modified the comments and slightly\n> > edited the commit message. This patch was not getting applied in v11\n> > and branches lower than that, so I prepared a patch for those branches\n> > as well. I have tested this patch till 9.5 and it works as intended.\n> >\n> > Can you also once check the patch and verify it in back-branches?\n>\n> Sorry for the delay. I'm fine with changes. make check down to 9.5 is\n> also happy here.\n>\n\nPushed, let's keep an eye on buildfarm.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Feb 2020 09:26:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Sun, Feb 9, 2020 at 9:37 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Somehow I hadn't realized this earlier, so my comments/commit messages\n> in patches above were not accurate here; I've edited them. Also in the\n> first patch serialized snapshots are not no longer used for new slot\n> creation at all, as Andres suggested above.\n>\n\n+ /*\n+ * Don't use serialized snapshot if we are not sure where all\n+ * currently running xacts will finish (new slot creation).\n+ * (Actually, if we came here through xl_running_xacts, we could perform\n+ * SNAPBUILD_FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition properly,\n+ * but added lines of code would hardly worth the benefit.)\n+ */\n+ if (builder->start_decoding_at == InvalidXLogRecPtr)\n+ return false;\n\nInstead of using start_decoding_at to decide whether to restore\nsnapshot or not, won't it be better to have new variable in SnapBuild\n(say can_use_serialized_snap or something like that) and for this\npurpose?\n\nI think the patch is trying to use a variable that is not meant for\nthe purpose we are using for it, so not sure if it is the right\ndirection for the fix.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Mar 2020 11:56:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n> On Sun, Feb 9, 2020 at 9:37 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> + /*\n> + * Don't use serialized snapshot if we are not sure where all\n> + * currently running xacts will finish (new slot creation).\n> + * (Actually, if we came here through xl_running_xacts, we could perform\n> + * SNAPBUILD_FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition properly,\n> + * but added lines of code would hardly worth the benefit.)\n> + */\n> + if (builder->start_decoding_at == InvalidXLogRecPtr)\n> + return false;\n>\n> Instead of using start_decoding_at to decide whether to restore\n> snapshot or not, won't it be better to have new variable in SnapBuild\n> (say can_use_serialized_snap or something like that) and for this\n> purpose?\n\nstart_decoding_at who is initialized externally at\nAllocateSnapshotBuilder is what actually defines how to handle\nserialized snapshots: if it is valid LSN, snapbuild must trust the\ncaller that WAL reading starts early enough to stream since this LSN, so\nwe deserialize the snap and jump into CONSISTENT. If it is invalid, we\ndon't know the safe streaming point yet, and it remains invalid until we\nlearn full snapshot and then wait for all xacts finishing. So such bool\nwould be a pointless synonym.\n\nMoreover, as cited comment mentions:\n\n> + * (Actually, if we came here through xl_running_xacts, we could perform\n> + * SNAPBUILD_FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition properly,\n> + * but added lines of code would hardly worth the benefit.)\n\nthere is nothing wrong in using the serialized snapshot per se. It's\njust that we must wait for all xacts finishing after getting the\nsnapshot and this is impossible if we don't know who is running. So\ncan_use_serialized_snap would be even somewhat confusing.\n\n\n-- cheers, arseny\n\n\n",
"msg_date": "Mon, 02 Mar 2020 10:41:54 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 1:11 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> > On Sun, Feb 9, 2020 at 9:37 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n> > + /*\n> > + * Don't use serialized snapshot if we are not sure where all\n> > + * currently running xacts will finish (new slot creation).\n> > + * (Actually, if we came here through xl_running_xacts, we could perform\n> > + * SNAPBUILD_FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition properly,\n> > + * but added lines of code would hardly worth the benefit.)\n> > + */\n> > + if (builder->start_decoding_at == InvalidXLogRecPtr)\n> > + return false;\n> >\n> > Instead of using start_decoding_at to decide whether to restore\n> > snapshot or not, won't it be better to have new variable in SnapBuild\n> > (say can_use_serialized_snap or something like that) and for this\n> > purpose?\n>\n> start_decoding_at who is initialized externally at\n> AllocateSnapshotBuilder is what actually defines how to handle\n> serialized snapshots: if it is valid LSN, snapbuild must trust the\n> caller that WAL reading starts early enough to stream since this LSN, so\n> we deserialize the snap and jump into CONSISTENT. If it is invalid, we\n> don't know the safe streaming point yet, and it remains invalid until we\n> learn full snapshot and then wait for all xacts finishing.\n>\n\nI think here you are trying to deduce the meaning. I don't see that\nit can clearly define that don't use serialized snapshots. 
It is not\nclear to me why have you changed the below code, basically why it is\nokay to pass InvalidXLogRecPtr instead of restart_lsn?\n\n@@ -327,7 +327,7 @@ CreateInitDecodingContext(char *plugin,\n ReplicationSlotMarkDirty();\n ReplicationSlotSave();\n\n- ctx = StartupDecodingContext(NIL, restart_lsn, xmin_horizon,\n+ ctx = StartupDecodingContext(NIL, InvalidXLogRecPtr, xmin_horizon,\n need_full_snapshot, false,\n read_page, prepare_write, do_write,\n update_progress);\n\n> So such bool\n> would be a pointless synonym.\n>\n> Moreover, as cited comment mentions:\n>\n> > + * (Actually, if we came here through xl_running_xacts, we could perform\n> > + * SNAPBUILD_FULL_SNAPSHOT -> SNAPBUILD_CONSISTENT transition properly,\n> > + * but added lines of code would hardly worth the benefit.)\n>\n> there is nothing wrong in using the serialized snapshot per se. It's\n> just that we must wait for all xacts finishing after getting the\n> snapshot and this is impossible if we don't know who is running.\n>\n\nI am not denying any such possibility and even if we do something like\nthat it will be for the master branch.\n\n> So\n> can_use_serialized_snap would be even somewhat confusing.\n>\n\nIt is possible that your idea of trying to deduce from\nstart_decoding_at is better, but I am not sure about it if anyone else\nalso thinks that is a good way to do, then fine.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Mar 2020 15:44:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n> I think here you are trying to deduce the meaning. I don't see that\n> it can clearly define that don't use serialized snapshots. It is not\n> clear to me why have you changed the below code, basically why it is\n> okay to pass InvalidXLogRecPtr instead of restart_lsn?\n>\n> @@ -327,7 +327,7 @@ CreateInitDecodingContext(char *plugin,\n> ReplicationSlotMarkDirty();\n> ReplicationSlotSave();\n>\n> - ctx = StartupDecodingContext(NIL, restart_lsn, xmin_horizon,\n> + ctx = StartupDecodingContext(NIL, InvalidXLogRecPtr, xmin_horizon,\n> need_full_snapshot, false,\n> read_page, prepare_write, do_write,\n> update_progress);\n\nBecause when we create the slot we don't demand to stream from some\nspecific point. In fact we just can't, because we don't know since which\nLSN it is actually possible to stream, i.e. when we'd have good snapshot\nand no old (which we haven't seen in full) xacts running. It is up to\nsnapbuild.c to define this point. The previous coding was meaningless:\nwe asked for some random restart_lsn and snapbuild.c would silently\nadvance it to earliest suitable LSN.\n\nOTOH, when we are decoding from existing slot not only we know earliest\npossible point, but to avoid missing xacts we must enforce streaming\nsince this very point despite the snapbuilder being unable (because he\nmight not know which xacts are running at point of the snapshot) to\ncheck its safety.\n\nstart_decoding_at reflects the difference between these scenarios, and\nserialized snapshots handling stems from here.\n\nThanks for looking into this.\n\n\n-- cheers, arseny\n\n\n",
"msg_date": "Mon, 02 Mar 2020 16:37:04 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "On Mon, Mar 2, 2020 at 7:07 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> > I think here you are trying to deduce the meaning. I don't see that\n> > it can clearly define that don't use serialized snapshots. It is not\n> > clear to me why have you changed the below code, basically why it is\n> > okay to pass InvalidXLogRecPtr instead of restart_lsn?\n> >\n> > @@ -327,7 +327,7 @@ CreateInitDecodingContext(char *plugin,\n> > ReplicationSlotMarkDirty();\n> > ReplicationSlotSave();\n> >\n> > - ctx = StartupDecodingContext(NIL, restart_lsn, xmin_horizon,\n> > + ctx = StartupDecodingContext(NIL, InvalidXLogRecPtr, xmin_horizon,\n> > need_full_snapshot, false,\n> > read_page, prepare_write, do_write,\n> > update_progress);\n>\n> Because when we create the slot we don't demand to stream from some\n> specific point. In fact we just can't, because we don't know since which\n> LSN it is actually possible to stream, i.e. when we'd have good snapshot\n> and no old (which we haven't seen in full) xacts running. It is up to\n> snapbuild.c to define this point. The previous coding was meaningless:\n> we asked for some random restart_lsn and snapbuild.c would silently\n> advance it to earliest suitable LSN.\n>\n\nHmm, if this is the case then it should be true even without solving\nthis particular problem and we should be able to make this change.\nLeaving that aside, I think this change can make copy replication slot\nfunctionality to also skip using serialized snapshots with this patch\nwhich is not our intention. Also, it doesn't seem like a good idea to\nignore setting start_decoding_at when we already set\nslot->data.restart_lsn with this value.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Mar 2020 15:56:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
},
{
"msg_contents": "\nAmit Kapila <amit.kapila16@gmail.com> writes:\n\n>> Because when we create the slot we don't demand to stream from some\n>> specific point. In fact we just can't, because we don't know since which\n>> LSN it is actually possible to stream, i.e. when we'd have good snapshot\n>> and no old (which we haven't seen in full) xacts running. It is up to\n>> snapbuild.c to define this point. The previous coding was meaningless:\n>> we asked for some random restart_lsn and snapbuild.c would silently\n>> advance it to earliest suitable LSN.\n>>\n>\n> Hmm, if this is the case then it should be true even without solving\n> this particular problem and we should be able to make this change.\n\nRight.\n\n> Leaving that aside, I think this change can make copy replication slot\n> functionality to also skip using serialized snapshots with this patch\n> which is not our intention.\n\nAs I say at [1] logical slot copying facility is currently anyway broken\nin this regard: restart_lsn is copied, but confirmed_flush isn't, and\nthe right fix, in my view, is to avoid DecodingContextFindStartpoint\nthere altogether (by checking donor's confirmed_flush is valid and\ncopying it) which would render this irrelevant. To speculate, even if\nwanted to go through DecodingContextFindStartpoint for slot copying and\nestablish confirmed_flush on our own, surely we'd need to handle\nserialized snapshots exactly as new slot creation does because dangers\nof getting SNAPBUILD_CONSISTENT too early are the same in both cases.\n\n> Also, it doesn't seem like a good idea to ignore setting\n> start_decoding_at when we already set slot->data.restart_lsn with this\n> value.\n\nWell, these two fields have absolutely different values. BTW I find the\nnaming here somewhat unfortunate, and this phrase suggests that it\nindeed leads to confusion.\n\nSlot's restart_lsn is the LSN since which we start reading WAL and by\nsetting data.restart_lsn we prevent WAL we need from\nrecycling. 
start_decoding_at is the LSN since which we start\n*streaming*, i.e. actually replaying COMMITs. So setting the first one\n(as we must hold WAL) and not the second one (as we don't know the\nstreaming point yet when we start slot creation) is just fine.\n\n\n[1] https://www.postgresql.org/message-id/flat/CA%2Bfd4k70BXLTm-N6q18LrL%3DGbKtwY3-2%2B%2BUVFw05SvFTkZgTyQ%40mail.gmail.com\n\n\n-- cheers, arseny\n\n\n",
"msg_date": "Wed, 04 Mar 2020 16:29:44 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: subtransaction logged without previous top-level txn\n record"
}
] |
[
{
"msg_contents": "Hello,\n\nI pray this email finds you well.\n\nMy name is Grace, a Computer Science undergraduate and a technical writer.\nI have read through the organization's project ideas for writers during\nGoogle Season of Doc and I am excited to say that I am interested to work\non the Introductory Resources doc.\n\nLast semester I used PostgreSQL as my database for a Java class project and\nwhile at it, I taught myself how to work with PostgreSQL from scratch since\nnobody's in class had used it. It was not easy though and that is why I\nwould love to offer my skills and enthusiasm as a writer to create\nintroductory material for beginners so as to assist them in using\nPostgreSQL fast and easily in their class or personal projects. Please\nadvice me on how I can proceed and also model my proposal to suit the\norganization's need.\n\nI am looking forward to hearing from you. Thank you.\n\nKind regards,\nGrace Kahinga\n\nHello,I pray this email finds you well.My name is Grace, a Computer Science undergraduate and a technical writer. I have read through the organization's project ideas for writers during Google Season of Doc and I am excited to say that I am interested to work on the Introductory Resources doc.Last semester I used PostgreSQL as my database for a Java class project and while at it, I taught myself how to work with PostgreSQL from scratch since nobody's in class had used it. It was not easy though and that is why I would love to offer my skills and enthusiasm as a writer to create introductory material for beginners so as to assist them in using PostgreSQL fast and easily in their class or personal projects. Please advice me on how I can proceed and also model my proposal to suit the organization's need.I am looking forward to hearing from you. Thank you.Kind regards,Grace Kahinga",
"msg_date": "Tue, 11 Jun 2019 03:44:58 +0300",
"msg_from": "Grace Kahinga <gracekahinga99@gmail.com>",
"msg_from_op": true,
"msg_subject": "Creating Introductory Resources as GSoD Project"
}
] |
[
{
"msg_contents": "Hi,\n\nI've talked a few times about a bgwriter replacement prototype I'd\nwritten a few years back. That happened somewhere deep in another thread\n[1], and thus not easy to fix.\n\nTomas Vondra asked me for a link, but there was some considerable bitrot\nsince. Attached is a rebased and slightly improved version. It's also\navailable at [2][3].\n\nThe basic observation is that there's some fairly fundamental issues\nwith the current bgwriter implementation:\n\n1) The pacing logic is complicated, but doesn't work well\n2) If most/all buffers have a usagecount, it cannot do anything, because\n it doesn't participate in the clock-sweep\n3) Backends have to re-discover the now clean buffers.\n\n\nThe prototype is much simpler - in my opinion of course. It has a\nringbuffer of buffers it thinks are clean (which might be reused\nconcurrently though). It fills that ringbuffer by performing\nclock-sweep, and if necessary cleaning, usagecount=pincount=0\nbuffers. Backends can then pop buffers from that ringbuffer.\n\nPacing works by bgwriter trying to keep the ringbuffer full, and\nbackends emptying the ringbuffer. If the ringbuffer is less than 1/4\nfull, backends wake up bgwriter using the existing latch mechanism.\n\nThe ringbuffer is a pretty simplistic lockless (but just obstruction\nfree, not lock free) implementation, with a lot of unneccessary\nconstraints.\n\nI've had to improve the current instrumentation for pgwriter\n(i.e. pg_stat_bgwriter) considerably - the details in there imo are not\neven remotely good enough to actually understand the system (nor are the\nnames understandable). That needs to be split into a separate commit,\nand the half dozen different implementations of the counters need to be\nunified.\n\nObviously this is very prototype-stage code. 
But I think it's a good\nstarting point for going forward.\n\nTo enable it, one currently has to set the bgwriter_legacy = false GUC.\n\nSome early benchmarks show that in IO heavy cases there's somewhere\nbetween a very mild regression (close to noise), to a pretty\nconsiderable improvement. To see a benefit one - fairly obviously -\nneeds a workload that is bigger than shared buffers, because otherwise\ncheckpointer is going to do all writes (and should, it can sort them\nperfectly!).\n\nIt's quite possible to saturate what a single bgwriter can write out (as\nit is before the replacement). I'm inclined to think the next solution\nfor that is asynchronous IO, and write-combining, rather than multiple\nbgwriters.\n\nHere's an example pg_stat_bgwriter from the middle of a pgbench run\n(after resetting it a short while before):\n\n┌─[ RECORD 1 ]───────────────┬───────────────────────────────┐\n│ checkpoints_timed │ 1 │\n│ checkpoints_req │ 0 │\n│ checkpoint_write_time │ 179491 │\n│ checkpoint_sync_time │ 266 │\n│ buffers_written_checkpoint │ 172414 │\n│ buffers_written_bgwriter │ 475802 │\n│ buffers_written_backend │ 7140 │\n│ buffers_written_ring │ 0 │\n│ buffers_fsync_checkpointer │ 137 │\n│ buffers_fsync_bgwriter │ 0 │\n│ buffers_fsync_backend │ 0 │\n│ buffers_bgwriter_clean │ 832616 │\n│ buffers_alloc_preclean │ 1306572 │\n│ buffers_alloc_free │ 0 │\n│ buffers_alloc_sweep │ 4639 │\n│ buffers_alloc_ring │ 767 │\n│ buffers_ticks_bgwriter │ 4398290 │\n│ buffers_ticks_backend │ 17098 │\n│ maxwritten_clean │ 17 │\n│ stats_reset │ 2019-06-10 20:17:56.087704-07 │\n└────────────────────────────┴───────────────────────────────┘\n\n\nNote that buffers_written_backend (as buffers_backend before) accounts\nfor file extensions too - which bgwriter can't offload. We should\nreplace that by a non-write (i.e. 
fallocate) anyway.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20160204155458.jrw3crmyscusdqf6%40alap3.anarazel.de\n[2] https://git.postgresql.org/gitweb/?p=users/andresfreund/postgres.git;a=shortlog;h=refs/heads/bgwriter-rewrite\n[3] https://github.com/anarazel/postgres/tree/bgwriter-rewrite",
"msg_date": "Mon, 10 Jun 2019 20:22:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "rebased background worker reimplementation prototype"
},
{
"msg_contents": "Hi,\n\nI've done a bit of benchmarking / testing on this, so let me report some\nbasic results. I haven't done any significant code review, I've simply\nran a bunch of pgbench runs on different systems with different scales.\n\nSystem #1\n---------\n* CPU: Intel i5\n* RAM: 8GB\n* storage: 6 x SATA SSD RAID0 (Intel S3700)\n* autovacuum_analyze_scale_factor = 0.1\n* autovacuum_vacuum_cost_delay = 2\n* autovacuum_vacuum_cost_limit = 1000\n* autovacuum_vacuum_scale_factor = 0.01\n* bgwriter_delay = 100\n* bgwriter_lru_maxpages = 10000\n* checkpoint_timeout = 30min\n* max_wal_size = 64GB\n* shared_buffers = 1GB\n\nSystem #2\n---------\n* CPU: 2x Xeon E5-2620v5\n* RAM: 64GB\n* storage: 3 x 7.2k SATA RAID0, 1x NVMe\n* autovacuum_analyze_scale_factor = 0.1\n* autovacuum_vacuum_cost_delay = 2\n* autovacuum_vacuum_cost_limit = 1000\n* autovacuum_vacuum_scale_factor = 0.01\n* bgwriter_delay = 100\n* bgwriter_lru_maxpages = 10000\n* checkpoint_completion_target = 0.9\n* checkpoint_timeout = 15min\n* max_wal_size = 32GB\n* shared_buffers = 8GB\n\nFor each config I've done tests with three scales - small (fits into\nshared buffers), medium (fits into RAM) and large (at least 2x the RAM).\nAside from the basic metrics (throughput etc.) I've also sampled data\nabout 5% of transactions, to be able to look at latency stats.\n\nThe tests were done on master and patched code (both in the 'legacy' and\nnew mode).\n\nI haven't done any temporal analysis yet (i.e. I'm only looking at global\nsummaries, not tps over time etc).\n\nAttached is a spreadsheet with a summary of the results and a couple of\ncharts. Generally speaking, the patch has minimal impact on throughput, \nespecially when using SSD/NVMe storage. See the attached \"tps\" charts.\n\nWhen running on the 7.2k SATA RAID, the throughput improves with the\nmedium scale - from ~340tps to ~439tps, which is a pretty significant\njump. 
But on the large scale this disappears (in fact, it seems to be a\nbit lower than master/legacy cases). Of course, all this is just from a\nsingle run (although 4h, so noise should even out).\n\nI've also computed latency CDF (from the 5% sample) - I've attached this\nfor the two interesting cases mentioned in the previous paragraph. This\nshows that with the medium scale the latencies move down (with the patch,\nboth in the legacy and \"new\" modes), while on large scale the \"new\" mode\nmoves a bit to the right to higher values).\n\nAnd finally, I've looked at buffer stats, i.e. number of buffers written\nin various ways (checkpoing, bgwriter, backends) etc. Interestingly\nenough, these numbers did not change very much - especially on the flash\nstorage. Maybe that's expected, though.\n\nThe one case where it did change is the \"medium\" scale on SATA storage,\nwhere the throughput improved with the patch. But the change is kinda\nstrange, because the number of buffers evicted by the bgwriter decreased\n(and instead it got evicted by the checkpointer). Which might explain the\nhigher throughput, because checkpointer is probably more efficient.\n\n\nresults\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 12 Jul 2019 15:47:02 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: rebased background worker reimplementation prototype"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-12 15:47:02 +0200, Tomas Vondra wrote:\n> I've done a bit of benchmarking / testing on this, so let me report some\n> basic results. I haven't done any significant code review, I've simply\n> ran a bunch of pgbench runs on different systems with different scales.\n\nThanks!\n\n\n> System #1\n> ---------\n> * CPU: Intel i5\n> * RAM: 8GB\n> * storage: 6 x SATA SSD RAID0 (Intel S3700)\n> * autovacuum_analyze_scale_factor = 0.1\n> * autovacuum_vacuum_cost_delay = 2\n> * autovacuum_vacuum_cost_limit = 1000\n> * autovacuum_vacuum_scale_factor = 0.01\n> * bgwriter_delay = 100\n> * bgwriter_lru_maxpages = 10000\n> * checkpoint_timeout = 30min\n> * max_wal_size = 64GB\n> * shared_buffers = 1GB\n\nWhat's the controller situation here? Can the full SATA3 bandwidth on\nall of those drives be employed concurrently?\n\n\n> System #2\n> ---------\n> * CPU: 2x Xeon E5-2620v5\n> * RAM: 64GB\n> * storage: 3 x 7.2k SATA RAID0, 1x NVMe\n> * autovacuum_analyze_scale_factor = 0.1\n> * autovacuum_vacuum_cost_delay = 2\n> * autovacuum_vacuum_cost_limit = 1000\n> * autovacuum_vacuum_scale_factor = 0.01\n> * bgwriter_delay = 100\n> * bgwriter_lru_maxpages = 10000\n> * checkpoint_completion_target = 0.9\n> * checkpoint_timeout = 15min\n> * max_wal_size = 32GB\n> * shared_buffers = 8GB\n\nWhat type of NVMe disk is this? I'm mostly wondering whether it's fast\nenough that there's no conceivable way that IO scheduling is going to\nmake a meaningful difference, given other bottlenecks in postgres.\n\nIn some preliminary benchmark runs I've seen fairly significant gains on\nSATA and SAS SSDs, as well as spinning rust, but I've not yet\nbenchmarked on a decent NVMe SSD.\n\n\n> For each config I've done tests with three scales - small (fits into\n> shared buffers), medium (fits into RAM) and large (at least 2x the RAM).\n> Aside from the basic metrics (throughput etc.) 
I've also sampled data\n> about 5% of transactions, to be able to look at latency stats.\n> \n> The tests were done on master and patched code (both in the 'legacy' and\n> new mode).\n\n\n\n> I haven't done any temporal analysis yet (i.e. I'm only looking at global\n> summaries, not tps over time etc).\n\nFWIW, I'm working on a tool that generates correlated graphs of OS, PG,\npgbench stats. Especially being able to correlate the kernel's\n'Writeback' stats (grep Writeback: /proc/meminfo) and latency is very\nvaluable. Sampling wait events over time also is worthwhile.\n\n\n> When running on the 7.2k SATA RAID, the throughput improves with the\n> medium scale - from ~340tps to ~439tps, which is a pretty significant\n> jump. But on the large scale this disappears (in fact, it seems to be a\n> bit lower than master/legacy cases). Of course, all this is just from a\n> single run (although 4h, so noise should even out).\n\nAny chance there's an order-of-test factor here? In my tests I found two\nrelated issues very important: 1) the first few tests are slower,\nbecause WAL segments don't yet exist. 2) Some poor bugger of a later\ntest will get hit with anti-wraparound vacuums, even if otherwise not\nnecessary.\n\nThe fact that the master and \"legacy\" numbers differ significantly\ne.g. in the \"xeon sata scale 1000\" latency CDF does make me wonder\nwhether there's an effect like that. While there might be some small\nperformance difference due to different stats message sizes, and a few\nadditional branches, I don't see how it could be that noticable.\n\n\n> I've also computed latency CDF (from the 5% sample) - I've attached this\n> for the two interesting cases mentioned in the previous paragraph. This\n> shows that with the medium scale the latencies move down (with the patch,\n> both in the legacy and \"new\" modes), while on large scale the \"new\" mode\n> moves a bit to the right to higher values).\n\nHm. 
I can't yet explain that.\n\n\n> And finally, I've looked at buffer stats, i.e. number of buffers written\n> in various ways (checkpoing, bgwriter, backends) etc. Interestingly\n> enough, these numbers did not change very much - especially on the flash\n> storage. Maybe that's expected, though.\n\nSome of that is expected, e.g. because file extensions count as backend\nwrites, and are going to be roughly correlate with throughput, and not\nmuch else. But they're more similar than I'd actually expect.\n\nI do see a pretty big difference in the number of bgwriter written\nbackends in the \"new\" case for scale 10000, on the nvme?\n\nFor the SATA SSD case, I wonder if the throughput bottleneck is WAL\nwrites. I see much more noticable differences if I enable\nwal_compression or disable full_page_writes, because otherwise the bulk\nof the volume is WAL data. But even in that case, I see a latency\nstddev reduction with the new bgwriter around checkpoints.\n\n\n> The one case where it did change is the \"medium\" scale on SATA storage,\n> where the throughput improved with the patch. But the change is kinda\n> strange, because the number of buffers evicted by the bgwriter decreased\n> (and instead it got evicted by the checkpointer). Which might explain the\n> higher throughput, because checkpointer is probably more efficient.\n\nWell, one problem with the current bgwriter implementation is that the\nvictim selection isn't good. Because it doesn't perform clock sweep, and\ndoesn't clean buffers with a usagecount, it'll often run until it finds\na dirty buffer that's pretty far ahead of the clock hand, and clean\nthose. But with a random test like pgbench it's somewhat likely that\nthose buffers will get re-dirtied before backends actually get to\nreusing them (that's a problem with the new implementation too, the\nwindow just is smaller). But I'm far from sure that that's the cause here.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 10:53:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: rebased background worker reimplementation prototype"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 10:53:46AM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-07-12 15:47:02 +0200, Tomas Vondra wrote:\n>> I've done a bit of benchmarking / testing on this, so let me report some\n>> basic results. I haven't done any significant code review, I've simply\n>> ran a bunch of pgbench runs on different systems with different scales.\n>\n>Thanks!\n>\n>\n>> System #1\n>> ---------\n>> * CPU: Intel i5\n>> * RAM: 8GB\n>> * storage: 6 x SATA SSD RAID0 (Intel S3700)\n>> * autovacuum_analyze_scale_factor = 0.1\n>> * autovacuum_vacuum_cost_delay = 2\n>> * autovacuum_vacuum_cost_limit = 1000\n>> * autovacuum_vacuum_scale_factor = 0.01\n>> * bgwriter_delay = 100\n>> * bgwriter_lru_maxpages = 10000\n>> * checkpoint_timeout = 30min\n>> * max_wal_size = 64GB\n>> * shared_buffers = 1GB\n>\n>What's the controller situation here? Can the full SATA3 bandwidth on\n>all of those drives be employed concurrently?\n>\n\nThere's just an on-board SATA controller, so it might be a bottleneck.\n\nA single drive can do ~440 MB/s reads sequentially, and the whole RAID0\narray (Linux sw raid) does ~1.6GB/s, so not exactly 6x that. But I don't\nthink we're generating that many writes during the test.\n\n>\n>> System #2\n>> ---------\n>> * CPU: 2x Xeon E5-2620v5\n>> * RAM: 64GB\n>> * storage: 3 x 7.2k SATA RAID0, 1x NVMe\n>> * autovacuum_analyze_scale_factor = 0.1\n>> * autovacuum_vacuum_cost_delay = 2\n>> * autovacuum_vacuum_cost_limit = 1000\n>> * autovacuum_vacuum_scale_factor = 0.01\n>> * bgwriter_delay = 100\n>> * bgwriter_lru_maxpages = 10000\n>> * checkpoint_completion_target = 0.9\n>> * checkpoint_timeout = 15min\n>> * max_wal_size = 32GB\n>> * shared_buffers = 8GB\n>\n>What type of NVMe disk is this? 
I'm mostly wondering whether it's fast\n>enough that there's no conceivable way that IO scheduling is going to\n>make a meaningful difference, given other bottlenecks in postgres.\n>\n>In some preliminary benchmark runs I've seen fairly significant gains on\n>SATA and SAS SSDs, as well as spinning rust, but I've not yet\n>benchmarked on a decent NVMe SSD.\n>\n\nIntel Optane 900P 280GB (model SSDPED1D280GA) [1].\n\n[1] https://ssd.userbenchmark.com/SpeedTest/315555/INTEL-SSDPED1D280GA\n\nI think one of the main improvements in this generation of drives is\ngood performance with low queue depth. See for example [2].\n\n[2] https://www.anandtech.com/show/12136/the-intel-optane-ssd-900p-480gb-review/5\n\nNot sure if that plays a role here, but I've seen this affect prefetch\nand similar things.\n\n>\n>> For each config I've done tests with three scales - small (fits into\n>> shared buffers), medium (fits into RAM) and large (at least 2x the RAM).\n>> Aside from the basic metrics (throughput etc.) I've also sampled data\n>> about 5% of transactions, to be able to look at latency stats.\n>>\n>> The tests were done on master and patched code (both in the 'legacy' and\n>> new mode).\n>\n>\n>\n>> I haven't done any temporal analysis yet (i.e. I'm only looking at global\n>> summaries, not tps over time etc).\n>\n>FWIW, I'm working on a tool that generates correlated graphs of OS, PG,\n>pgbench stats. Especially being able to correlate the kernel's\n>'Writeback' stats (grep Writeback: /proc/meminfo) and latency is very\n>valuable. Sampling wait events over time also is worthwhile.\n>\n\nGood to know, although I don't think it's difficult to fetch the data\nfrom sar and plot it. I might even already have ugly bash scripts doing\nthat, somewhere.\n\n>\n>> When running on the 7.2k SATA RAID, the throughput improves with the\n>> medium scale - from ~340tps to ~439tps, which is a pretty significant\n>> jump. 
But on the large scale this disappears (in fact, it seems to be a\n>> bit lower than master/legacy cases). Of course, all this is just from a\n>> single run (although 4h, so noise should even out).\n>\n>Any chance there's an order-of-test factor here? In my tests I found two\n>related issues very important: 1) the first few tests are slower,\n>because WAL segments don't yet exist. 2) Some poor bugger of a later\n>test will get hit with anti-wraparound vacuums, even if otherwise not\n>necessary.\n>\n\nNot sure - I'll check, but I find it unlikely. I need to repeat the\ntests to have multiple runs.\n\n>The fact that the master and \"legacy\" numbers differ significantly\n>e.g. in the \"xeon sata scale 1000\" latency CDF does make me wonder\n>whether there's an effect like that. While there might be some small\n>performance difference due to different stats message sizes, and a few\n>additional branches, I don't see how it could be that noticable.\n>\n\nThat's about the one case where things like anti-wraparound are pretty\nmuch impossible, because the SATA storage is so slow ...\n\n>\n>> I've also computed latency CDF (from the 5% sample) - I've attached this\n>> for the two interesting cases mentioned in the previous paragraph. This\n>> shows that with the medium scale the latencies move down (with the patch,\n>> both in the legacy and \"new\" modes), while on large scale the \"new\" mode\n>> moves a bit to the right to higher values).\n>\n>Hm. I can't yet explain that.\n>\n>\n>> And finally, I've looked at buffer stats, i.e. number of buffers written\n>> in various ways (checkpoing, bgwriter, backends) etc. Interestingly\n>> enough, these numbers did not change very much - especially on the flash\n>> storage. Maybe that's expected, though.\n>\n>Some of that is expected, e.g. because file extensions count as backend\n>writes, and are going to be roughly correlate with throughput, and not\n>much else. 
But they're more similar than I'd actually expect.\n>\n>I do see a pretty big difference in the number of bgwriter written\n>backends in the \"new\" case for scale 10000, on the nvme?\n>\n\nRight.\n\n>For the SATA SSD case, I wonder if the throughput bottleneck is WAL\n>writes. I see much more noticable differences if I enable\n>wal_compression or disable full_page_writes, because otherwise the bulk\n>of the volume is WAL data. But even in that case, I see a latency\n>stddev reduction with the new bgwriter around checkpoints.\n>\n\nI may try that during the next round of tests.\n\n>\n>> The one case where it did change is the \"medium\" scale on SATA storage,\n>> where the throughput improved with the patch. But the change is kinda\n>> strange, because the number of buffers evicted by the bgwriter decreased\n>> (and instead it got evicted by the checkpointer). Which might explain the\n>> higher throughput, because checkpointer is probably more efficient.\n>\n>Well, one problem with the current bgwriter implementation is that the\n>victim selection isn't good. Because it doesn't perform clock sweep, and\n>doesn't clean buffers with a usagecount, it'll often run until it finds\n>a dirty buffer that's pretty far ahead of the clock hand, and clean\n>those. But with a random test like pgbench it's somewhat likely that\n>those buffers will get re-dirtied before backends actually get to\n>reusing them (that's a problem with the new implementation too, the\n>window just is smaller). But I'm far from sure that that's the cause here.\n>\n\nOK.\n\nTime for more tests, I guess.\n\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 16 Jul 2019 21:16:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: rebased background worker reimplementation prototype"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that pgbench's -R influences not just the computation of lag,\nbut also of latency. That doesn't look right to me, but maybe I'm just\nmissing something?\n\nIt's quite easy to demonstrate when running pgbench with/without\nprogress report at a transaction rate that's around the limit of what\nthe server can do:\n\nandres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S pgbench_10\nprogress: 1.0 s, 37416.3 tps, lat 0.027 ms stddev 0.013\nprogress: 2.0 s, 37345.1 tps, lat 0.027 ms stddev 0.011\nprogress: 3.0 s, 38787.8 tps, lat 0.026 ms stddev 0.009\n...\n\nandres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S -R 37000 pgbench_10\nprogress: 1.0 s, 32792.8 tps, lat 81.795 ms stddev 35.552, lag 81.765 ms\nprogress: 2.0 s, 37770.6 tps, lat 113.194 ms stddev 4.651, lag 113.168 ms\nprogress: 3.0 s, 37006.3 tps, lat 113.905 ms stddev 5.007, lag 113.878 ms\n\nThat's obviously a very different result.\n\nISTM that's because processXactStats() computes latency as:\n\nlatency = INSTR_TIME_GET_MICROSEC(*now) - st->txn_scheduled;\n\nwhich is set differently when throttling is enabled:\n\n\t\t\t\t/*\n\t\t\t\t * When not throttling, this is also the transaction's\n\t\t\t\t * scheduled start time.\n\t\t\t\t */\n\t\t\t\tif (!throttle_delay)\n\t\t\t\t\tst->txn_scheduled = INSTR_TIME_GET_MICROSEC(now);\n\n\nreplacing latency computation with\n\nlatency = INSTR_TIME_GET_MICROSEC(*now) - INSTR_TIME_GET_MICROSEC(st->txn_begin);\n\nimmediately makes the result make more sense:\n\nandres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S -R 37000 pgbench_10\nprogress: 1.0 s, 37141.7 tps, lat 0.026 ms stddev 0.011, lag 1.895 ms\nprogress: 2.0 s, 36805.6 tps, lat 0.026 ms stddev 0.012, lag 0.670 ms\nprogress: 3.0 s, 37033.5 tps, lat 0.026 ms stddev 0.012, lag 1.067 ms\n\nand you still get lag if the rate is too high:\n\nandres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 
1 -j 1 -T 100000 -P1 -r -S -R 80000 pgbench_10\nprogress: 1.0 s, 37628.8 tps, lat 0.026 ms stddev 0.016, lag 287.379 ms\nprogress: 2.0 s, 39651.8 tps, lat 0.025 ms stddev 0.008, lag 790.527 ms\nprogress: 3.0 s, 39254.8 tps, lat 0.025 ms stddev 0.009, lag 1290.833 ms\nprogress: 4.0 s, 38859.5 tps, lat 0.026 ms stddev 0.009, lag 1808.529 ms\nprogress: 5.0 s, 39699.0 tps, lat 0.025 ms stddev 0.008, lag 2307.732 ms\nprogress: 6.0 s, 39297.0 tps, lat 0.025 ms stddev 0.009, lag 2813.291 ms\nprogress: 7.0 s, 39880.6 tps, lat 0.025 ms stddev 0.008, lag 3315.430 ms\n\nFabien, is this a bug, or am I misunderstanding something?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jun 2019 21:56:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "Hi\n\nOn 2019-06-10 21:56:31 -0700, Andres Freund wrote:\n> I noticed that pgbench's -R influences not just the computation of lag,\n> but also of latency. That doesn't look right to me, but maybe I'm just\n> missing something?\n\nI apparently did:\n\n> -P sec\n> --progress=sec\n> \n> Show progress report every sec seconds. The report includes the time\n> since the beginning of the run, the TPS since the last report, and\n> the transaction latency average and standard deviation since the\n> last report. Under throttling (-R), the latency is computed with\n> respect to the transaction scheduled start time, not the actual\n> transaction beginning time, thus it also includes the average\n> schedule lag time.\n\nBut that makes very little sense to me. I see that was changed by Heikki\nin\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=02e3bcc661598275a75a8649b48602dca7aeec79\n> Change the way latency is calculated with pgbench --rate option.\n> \n> The reported latency values now include the \"schedule lag\" time, that is,\n> the time between the transaction's scheduled start time and the time it\n> actually started. This relates better to a model where requests arrive at a\n> certain rate, and we are interested in the response time to the end user or\n> application, rather than the response time of the database itself.\n> \n> Also, when --rate is used, include the schedule lag time in the log output.\n> \n> The --rate option is new in 9.4, so backpatch to 9.4. It seems better to\n> make this change in 9.4, while we're still in the beta period, than ship a\n> 9.4 version that calculates the values differently than 9.5.\n\nI find that pretty unconvincing. Report a new field, sure. 
But what's\nthe point of changing an *established* field, just due to rate limiting?\nAt the very least that ought to be commented upon in the code as well.\n\nDoesn't this mean that latency and lag are quite redundant, just more\nobscure, due to this change?\n\n\t\tlatency = INSTR_TIME_GET_MICROSEC(*now) - st->txn_scheduled;\n\t\tlag = INSTR_TIME_GET_MICROSEC(st->txn_begin) - st->txn_scheduled;\n\nI guess I can just subtract lag from latency to get to the non-throttled\nlatency. But that is, uh, odd.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jun 2019 22:09:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "\nHello Andres,\n\n> I noticed that pgbench's -R influences not just the computation of lag,\n> but also of latency. That doesn't look right to me, but maybe I'm just\n> missing something?\n>\n> It's quite easy to demonstrate when running pgbench with/without\n> progress report at a transaction rate that's around the limit of what\n> the server can do:\n>\n> andres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S pgbench_10\n> progress: 1.0 s, 37416.3 tps, lat 0.027 ms stddev 0.013\n> progress: 2.0 s, 37345.1 tps, lat 0.027 ms stddev 0.011\n> progress: 3.0 s, 38787.8 tps, lat 0.026 ms stddev 0.009\n> ...\n>\n> andres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S -R 37000 pgbench_10\n> progress: 1.0 s, 32792.8 tps, lat 81.795 ms stddev 35.552, lag 81.765 ms\n> progress: 2.0 s, 37770.6 tps, lat 113.194 ms stddev 4.651, lag 113.168 ms\n> progress: 3.0 s, 37006.3 tps, lat 113.905 ms stddev 5.007, lag 113.878 ms\n\n[...]\n\n> Fabien, is this a bug, or am I misunderstanding something?\n\nThis behavior under -R is fully voluntary, and the result above just show \nthat the database cannot really keep up with the load, which is simply the \ncase, so for me it is okay to show bad figures.\n\nThe idea under throttling is to model a client which would want the result \nof a query at a certain point in time, say a query for a web page which is \nbeing generated, which is the scheduled time. It is the when the client \nknows it wants an answer. If it is not processed immediately, that is bad \nfor its client perceived latency.\n\nWhether this is due to lag (i.e. the server is loaded and cannot start to \nprocess the answer) or because the server is slow to answer is irrelevant, \nthe client is waiting, the web page is not generated, the system is slow. 
\nSo latency under -R is really \"client latency\", not only query latency, as \nit is documented.\n\nYou can offset the lag to get the query latency only, but from a client \nperspective the fact that the system does not keep up with the \nscheduled load is the main information, thus this is what is displayed. \nThe bad figures reflect a bad behavior wrt handling the load. For me it is \nwhat should be wanted under -R. Maybe it could be more clearly documented, \nbut for me this is the right behavior, and it is what I wanted to measure with \nthrottling.\n\nUnder this performance model, the client would give up its requests after \nsome time, hence the available --latency-limit option.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 11 Jun 2019 08:36:55 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 08:36:55 +0200, Fabien COELHO wrote:\n> > I noticed that pgbench's -R influences not just the computation of lag,\n> > but also of latency. That doesn't look right to me, but maybe I'm just\n> > missing something?\n> > \n> > It's quite easy to demonstrate when running pgbench with/without\n> > progress report at a transaction rate that's around the limit of what\n> > the server can do:\n> > \n> > andres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S pgbench_10\n> > progress: 1.0 s, 37416.3 tps, lat 0.027 ms stddev 0.013\n> > progress: 2.0 s, 37345.1 tps, lat 0.027 ms stddev 0.011\n> > progress: 3.0 s, 38787.8 tps, lat 0.026 ms stddev 0.009\n> > ...\n> > \n> > andres@alap4:~/src/postgresql$ pgbench -n -M prepared -c 1 -j 1 -T 100000 -P1 -r -S -R 37000 pgbench_10\n> > progress: 1.0 s, 32792.8 tps, lat 81.795 ms stddev 35.552, lag 81.765 ms\n> > progress: 2.0 s, 37770.6 tps, lat 113.194 ms stddev 4.651, lag 113.168 ms\n> > progress: 3.0 s, 37006.3 tps, lat 113.905 ms stddev 5.007, lag 113.878 ms\n> \n> [...]\n> \n> > Fabien, is this a bug, or am I misunderstanding something?\n> \n> This behavior under -R is fully voluntary, and the result above just show\n> that the database cannot really keep up with the load, which is simply the\n> case, so for me it is okay to show bad figures.\n\nI mean, you just turned one named value, into a different one, without\nrenaming it. And the new meaning under -R, is basically the same as one\nthat's already there (lag). Also note that it also can actually keep up\nin the above example.\n\n\n> The idea under throttling is to model a client which would want the result\n> of a query at a certain point in time, say a query for a web page which is\n> being generated, which is the scheduled time. It is the when the client\n> knows it wants an answer. If it is not processed immediately, that is bad\n> for its client perceived latency.\n\n> Whether this is due to lag (i.e. 
the server is loaded and cannot start to\n> process the answer) or because the server is slow to answer is irrelevant,\n> the client is waiting, the web page is not generated, the system is slow. So\n> latency under -R is really \"client latency\", not only query latency, as it\n> is documented.\n\nWhat does that have to do with incorporating the same data into both lag\nand latency? I just fail to see what the point is, except to make it\nunnecessarily harder to compare postgres' behaviour under both a\nthrottled and push-it-to-the-breaking point loads.\n\nHow long individual transactions take, and how much variance there is in\nthat, is something *crucial* to optimize for. *Especially* when the\nmachine/load is provisioned in a way to not overload the machine.\n\nHow is e.g.\nprogress: 1.6 s, 0.0 tps, lat 0.000 ms stddev 0.000, lag 0.000 ms\nprogress: 2.0 s, 103546.5 tps, lat 1584.161 ms stddev 35.589, lag 1582.043 ms\nprogress: 3.0 s, 108535.2 tps, lat 1347.619 ms stddev 101.782, lag 1346.170 ms\nprogress: 4.0 s, 108528.8 tps, lat 996.603 ms stddev 106.052, lag 995.159 ms\nprogress: 5.0 s, 109468.8 tps, lat 633.464 ms stddev 108.483, lag 632.030 ms\nprogress: 6.0 s, 110606.7 tps, lat 252.923 ms stddev 110.391, lag 251.505 ms\nprogress: 7.0 s, 84253.3 tps, lat 6.829 ms stddev 15.067, lag 6.423 ms\nprogress: 8.0 s, 80470.7 tps, lat 0.142 ms stddev 0.079, lag 0.017 ms\nprogress: 9.0 s, 80104.2 tps, lat 0.142 ms stddev 0.081, lag 0.017 ms\nprogress: 10.0 s, 80277.0 tps, lat 0.152 ms stddev 0.150, lag 0.017 ms\n\nthe lat column adds basically nothing over the lag column here.\n\nmore useful than:\nprogress: 1.3 s, 0.0 tps, lat 0.000 ms stddev 0.000, lag 0.000 ms\nprogress: 2.0 s, 116315.6 tps, lat 1.425 ms stddev 1.440, lag 1087.076 ms\nprogress: 3.0 s, 113526.2 tps, lat 1.390 ms stddev 0.408, lag 709.908 ms\nprogress: 4.0 s, 111816.4 tps, lat 1.407 ms stddev 0.399, lag 302.866 ms\nprogress: 5.0 s, 88061.9 tps, lat 0.543 ms stddev 0.652, lag 16.526 ms\nprogress: 6.0 
s, 80045.4 tps, lat 0.128 ms stddev 0.079, lag 0.017 ms\nprogress: 7.0 s, 79636.3 tps, lat 0.124 ms stddev 0.073, lag 0.016 ms\nprogress: 8.0 s, 80535.3 tps, lat 0.125 ms stddev 0.073, lag 0.016 ms\n\nwhere I can see that the transactions are now actually fast enough.\nObviously this is a toy example, but this really makes -R close to\nuseless to me. I often want to switch from an unthrottled to a 90% load,\nand improve the outlier behaviour - but that outlier behaviour is\nhidden due to this redefinition of lat (as the issue is now reported\nover a much longer period of time, as it includes lag).\n\nI think we should just restore lat to a sane behaviour under -R, and if\nyou want to have lat + lag as a separate column in -R mode, then let's\ndo that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 00:12:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "On 11/06/2019 10:12, Andres Freund wrote:\n> On 2019-06-11 08:36:55 +0200, Fabien COELHO wrote:\n>> This behavior under -R is fully voluntary, and the result above just show\n>> that the database cannot really keep up with the load, which is simply the\n>> case, so for me it is okay to show bad figures.\n> \n> I mean, you just turned one named value, into a different one, without\n> renaming it. And the new meaning under -R, is basically the same as one\n> that's already there (lag). Also note that it also can actually keep up\n> in the above example.\n\nIt's not fair to say that its meaning was changed. Before 9.4, there was \nno -R option. As Fabien said, the reported latency is the latency seen \nby the imaginary user of the system, and without -R, there's no lag so \nit's the same number. See also how it works with --latency-limit. The \nlimit is on the reported latency, which includes the lag.\n\nYeah, I can see that the server-observed transaction latency would often \nbe more useful than what's printed now. But changing the current meaning \ndoesn't seem like a good idea.\n\n>> The idea under throttling is to model a client which would want the result\n>> of a query at a certain point in time, say a query for a web page which is\n>> being generated, which is the scheduled time. It is the when the client\n>> knows it wants an answer. If it is not processed immediately, that is bad\n>> for its client perceived latency.\n> \n>> Whether this is due to lag (i.e. the server is loaded and cannot start to\n>> process the answer) or because the server is slow to answer is irrelevant,\n>> the client is waiting, the web page is not generated, the system is slow. So\n>> latency under -R is really \"client latency\", not only query latency, as it\n>> is documented.\n> \n> What does that have to do with incorporating the same data into both lag\n> and latency? 
I just fail to see what the point is, except to make it\n> unnecessarily harder to compare postgres' behaviour under both a\n> throttled and push-it-to-the-breaking point loads.\n> \n> How long individual transactions take, and how much variance there is in\n> that, is something *crucial* to optimize for. *Especially* when the\n> machine/load is provisioned in a way to not overload the machine.\n> \n> How is e.g.\n> progress: 1.6 s, 0.0 tps, lat 0.000 ms stddev 0.000, lag 0.000 ms\n> progress: 2.0 s, 103546.5 tps, lat 1584.161 ms stddev 35.589, lag 1582.043 ms\n> progress: 3.0 s, 108535.2 tps, lat 1347.619 ms stddev 101.782, lag 1346.170 ms\n> progress: 4.0 s, 108528.8 tps, lat 996.603 ms stddev 106.052, lag 995.159 ms\n> progress: 5.0 s, 109468.8 tps, lat 633.464 ms stddev 108.483, lag 632.030 ms\n> progress: 6.0 s, 110606.7 tps, lat 252.923 ms stddev 110.391, lag 251.505 ms\n> progress: 7.0 s, 84253.3 tps, lat 6.829 ms stddev 15.067, lag 6.423 ms\n> progress: 8.0 s, 80470.7 tps, lat 0.142 ms stddev 0.079, lag 0.017 ms\n> progress: 9.0 s, 80104.2 tps, lat 0.142 ms stddev 0.081, lag 0.017 ms\n> progress: 10.0 s, 80277.0 tps, lat 0.152 ms stddev 0.150, lag 0.017 ms\n> \n> the lat column adds basically nothing over the lag column here.\n> \n> more useful than:\n> progress: 1.3 s, 0.0 tps, lat 0.000 ms stddev 0.000, lag 0.000 ms\n> progress: 2.0 s, 116315.6 tps, lat 1.425 ms stddev 1.440, lag 1087.076 ms\n> progress: 3.0 s, 113526.2 tps, lat 1.390 ms stddev 0.408, lag 709.908 ms\n> progress: 4.0 s, 111816.4 tps, lat 1.407 ms stddev 0.399, lag 302.866 ms\n> progress: 5.0 s, 88061.9 tps, lat 0.543 ms stddev 0.652, lag 16.526 ms\n> progress: 6.0 s, 80045.4 tps, lat 0.128 ms stddev 0.079, lag 0.017 ms\n> progress: 7.0 s, 79636.3 tps, lat 0.124 ms stddev 0.073, lag 0.016 ms\n> progress: 8.0 s, 80535.3 tps, lat 0.125 ms stddev 0.073, lag 0.016 ms\n> \n> where I can see that the transactions are now actually fast enough.\n> Obviously this is a toy example, but this really 
make -R close to\n> useless to me. I often want to switch from a unthrottled to a 90% load,\n> and improve the outlier beheaviour - but that outlier behaviour is\n> hidden due to this redefinition of lat (as the issue is now reported\n> over a much longer period of time, as it includes lag).\n\nThe outlier behavior seems very visible in both of the above. The system \ncompletely stalled for about 1-2 seconds. And then it takes a few \nseconds to process the backlog and catch up.\n\nFor testing the server under full load, like during that catch up \nperiod, testing without -R seems better. Or perhaps you'd want to use \nthe --latency-limit option? You said that the transactions are now \"fast \nenough\", so that might be a better fit for what you're trying to model.\n\n> I think we should just restore lat to a sane behaviour under -R, and if\n> you want to have lat + lag as a separate column in -R mode, then let's\n> do that.\n\nIt seems like a bad idea to change the meaning of the value after the \nfact. Would be good to at least rename it, to avoid confusion. Maybe \nthat's not too important for the interactive -P reports, but got to be \nmindful about the numbers logged in the log file, at least.\n\nIf you change it, please also consider how it plays together with \n--latency-limit.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 11 Jun 2019 11:31:15 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 11:31:15 +0300, Heikki Linnakangas wrote:\n> It's not fair to say that its meaning was changed. Before 9.4, there was no\n> -R option.\n\nWell, my point is that -R changed the existing meaning of a field, and\nthat that's not nice.\n\n\n> Yeah, I can see that the server-observed transaction latency would often be\n> more useful than what's printed now. But changing the current meaning\n> doesn't seem like a good idea.\n\nWell, then a *new* column should have been added for that value under\n-R. Although admittedly 'lat' is not a very good descriptor for the non\n-R behaviour.\n\nBut anyway, to go forward, I think we should replace 'lat' with a\n'txtime' (or similar) column that is not affected by -R. And then, under\n-R only, add a new 'txlat' column, that shows the 'current' meaning of\nlat under -R. Not convinced the names are right, but you get the gist.\n\n\n> As Fabien said, the reported latency is the latency seen by the\n> imaginary user of the system, and without -R, there's no lag so it's the\n> same number. See also how it works with --latency-limit. The limit is on the\n> reported latency, which includes the lag.\n\nWell, that's how it works in a lot of scenarios (e.g. interactive\nscenarios, where users give up). But there's also a lot where the amount\nof work doesn't decrease due to lateness, it just queues up (e.g. many\nbatch / queue processing workloads).\n\n\n> The outlier behavior seems very visible in both of the above. The system\n> completely stalled for about 1-2 seconds. And then it takes a few seconds to\n> process the backlog and catch up.\n\nBut that was just because I was showing a simplistic\nexample. E.g. 
here's a log of a vacuum finishing, and then another\nstarting a few seconds later (both vacuums lasting a fair while):\n\nprogress: 139.0 s, 2438.4 tps, txtime 13.033 ms stddev 3.830, lat 17530.784 ms stddev 590.153, lag 17517.751 ms\nprogress: 140.0 s, 2489.0 tps, txtime 12.911 ms stddev 3.642, lat 17752.862 ms stddev 600.661, lag 17739.952 ms\nprogress: 141.0 s, 2270.0 tps, txtime 14.021 ms stddev 4.965, lat 17973.805 ms stddev 594.784, lag 17959.784 ms\nprogress: 142.0 s, 1408.0 tps, txtime 22.848 ms stddev 5.365, lat 18417.808 ms stddev 632.729, lag 18394.960 ms\nprogress: 143.0 s, 3001.0 tps, txtime 10.724 ms stddev 4.318, lat 18796.971 ms stddev 617.462, lag 18786.247 ms\nprogress: 144.0 s, 4678.0 tps, txtime 6.823 ms stddev 2.136, lat 18503.253 ms stddev 669.072, lag 18496.431 ms\nprogress: 145.0 s, 4577.0 tps, txtime 7.001 ms stddev 1.526, lat 18108.596 ms stddev 689.843, lag 18101.596 ms\nprogress: 146.0 s, 2596.0 tps, txtime 12.261 ms stddev 3.060, lat 17961.623 ms stddev 683.498, lag 17949.363 ms\nprogress: 147.0 s, 2654.0 tps, txtime 12.072 ms stddev 3.282, lat 18120.009 ms stddev 685.074, lag 18107.938 ms\nprogress: 148.0 s, 3471.0 tps, txtime 9.240 ms stddev 3.702, lat 18251.712 ms stddev 676.572, lag 18242.472 ms\nprogress: 149.0 s, 3056.0 tps, txtime 10.468 ms stddev 5.131, lat 18058.950 ms stddev 675.334, lag 18048.482 ms\nprogress: 150.0 s, 2319.0 tps, txtime 13.778 ms stddev 3.762, lat 18305.101 ms stddev 688.186, lag 18291.323 ms\nprogress: 151.0 s, 2355.0 tps, txtime 13.567 ms stddev 3.891, lat 18586.073 ms stddev 691.656, lag 18572.506 ms\nprogress: 152.0 s, 2321.0 tps, txtime 13.742 ms stddev 3.708, lat 18835.985 ms stddev 709.580, lag 18822.244 ms\nprogress: 153.0 s, 2360.0 tps, txtime 13.604 ms stddev 3.533, lat 19121.166 ms stddev 709.682, lag 19107.562 ms\n\nThe period inbetween where no vacuum was running is imo considerably\nharder to spot looking at 'lat'. I guess you can argue that one can just\nlook at tps instead. 
But for other rate limited cases that doesn't work\nas well:\n\nprogress: 121.0 s, 961.0 tps, txtime 3.452 ms stddev 0.947, lat 3.765 ms stddev 1.268, lag 0.313 ms\nprogress: 122.0 s, 979.0 tps, txtime 5.388 ms stddev 8.737, lat 7.378 ms stddev 11.137, lag 1.990 ms\nprogress: 123.0 s, 1078.8 tps, txtime 3.679 ms stddev 1.278, lat 4.322 ms stddev 3.216, lag 0.643 ms\nprogress: 124.0 s, 1082.2 tps, txtime 5.575 ms stddev 9.790, lat 8.716 ms stddev 15.317, lag 3.141 ms\nprogress: 125.0 s, 990.0 tps, txtime 3.489 ms stddev 1.148, lat 3.817 ms stddev 1.456, lag 0.328 ms\nprogress: 126.0 s, 955.0 tps, txtime 9.284 ms stddev 15.362, lat 14.210 ms stddev 22.084, lag 4.926 ms\nprogress: 127.0 s, 960.0 tps, txtime 11.951 ms stddev 11.222, lat 17.732 ms stddev 21.066, lag 5.781 ms\nprogress: 128.0 s, 945.9 tps, txtime 11.702 ms stddev 17.590, lat 23.791 ms stddev 45.327, lag 12.089 ms\nprogress: 129.0 s, 1013.1 tps, txtime 19.871 ms stddev 19.407, lat 42.530 ms stddev 39.582, lag 22.659 ms\nprogress: 130.0 s, 1004.7 tps, txtime 12.748 ms stddev 7.864, lat 19.827 ms stddev 20.084, lag 7.079 ms\nprogress: 131.0 s, 1025.2 tps, txtime 9.005 ms stddev 16.524, lat 18.491 ms stddev 29.864, lag 9.485 ms\nprogress: 132.0 s, 1015.9 tps, txtime 3.366 ms stddev 0.885, lat 3.640 ms stddev 1.182, lag 0.274 ms\nprogress: 133.0 s, 1013.1 tps, txtime 4.749 ms stddev 8.485, lat 6.828 ms stddev 12.520, lag 2.079 ms\nprogress: 134.0 s, 1026.0 tps, txtime 4.362 ms stddev 2.556, lat 4.879 ms stddev 3.158, lag 0.517 ms\n\nhere e.g. there was a noticeable slowdown, where looking at tps doesn't\nhelp (because we're still able to meet the tps goal).\n\n\nI still don't quite get when I would want to look at lat, when I have\nlag. They're always going to be close by.\n\n\n> For testing the server under full load, like during that catch up period,\n> testing without -R seems better.\n\nOne area in which postgres is pretty weak, although less bad than we\nused to be, is predictable latency. 
Most production servers\naren't run under the highest possible throughput, therefore optimizing\njitter under loaded but not breakneck speeds is important.\n\nAnd to be able to localize where such latency is introduced, it's\nimportant to see the precise moment things got slower / where\nperformance recovered.\n\n\n> > I think we should just restore lat to a sane behaviour under -R, and if\n> > you want to have lat + lag as a separate column in -R mode, then let's\n> > do that.\n> \n> It seems like a bad idea to change the meaning of the value after the fact.\n> Would be good to at least rename it, to avoid confusion. Maybe that's not\n> too important for the interactive -P reports, but got to be mindful about\n> the numbers logged in the log file, at least.\n\nGiven that 'lat' is currently in the file, and I would bet the majority\nof users of pgbench use it without -R, I'm not convinced that's\nsomething to care about. Most are going to interpret it the way it's\ncomputed without -R.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:24:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "\nHello Andres,\n\n> On 2019-06-11 11:31:15 +0300, Heikki Linnakangas wrote:\n>> It's not fair to say that its meaning was changed. Before 9.4, there was no\n>> -R option.\n>\n> Well, my point is that -R changed the existing meaning of a field,\n\nI do not think it does, because the client and transaction latencies are \nactually the same without -R, so nothing is changed.\n\nWhen the two concepts started to differ, I choose the one interpretion \nthat I thought best, and for me the whole point of running pgbench with -R \nis to look at the client latency, i.e. to answer to the question \"is the \ndatabase keeping up with the scheduled load\", so this is the one \ndisplayed.\n\n> and that that's not nice.\n\nWell, that is clearly not what you would have expected.\n\n>> Yeah, I can see that the server-observed transaction latency would often be\n>> more useful than what's printed now. But changing the current meaning\n>> doesn't seem like a good idea.\n>\n> Well, then a *new* column should have been added for that value under\n> -R. Although admittedly 'lat' is not a very good descriptor for the non\n> -R behaviour.\n>\n> But anyway, to go forward, I think we should replace 'lat' with a\n> 'txtime' (or similar) column that is not affected by -R. And then, under\n> -R only, add a new 'txlat' column, that shows the 'current' meaning of\n> lat under -R. Not convinced the names are right, but you get the gist.\n\nI do not have a strong opinion. \"lat\" says latency in a short form \nconstrained by a one line output. What precise latency is displayed is \nexplained in the doc.\n\n>> As Fabien said, the reported latency is the latency seen by the\n>> imaginary user of the system, and without -R, there's no lag so it's the\n>> same number. See also how it works with --latency-limit. The limit is on the\n>> reported latency, which includes the lag.\n>\n> Well, that's how it works in a lot of scenarios (e.g. interactive\n> scenarios, where users give up). 
But there's also a lot where the amount\n> of work doesn't decrease due to lateness, it just queues up (e.g. many\n> batch / queue processing workloads).\n\nSure. In which case --latency-limit should not be used.\n\n>> The outlier behavior seems very visible in both of the above. The system\n>> completely stalled for about 1-2 seconds. And then it takes a few seconds to\n>> process the backlog and catch up.\n>\n> But that was just because I was showing a simplistic example. E.g. \n> here's a log of a vacuum finishing, and then another starting a few \n> seconds later (both vacuums lasting a fair while):\n>\n> progress: 139.0 s, 2438.4 tps, txtime 13.033 ms stddev 3.830, lat 17530.784 ms stddev 590.153, lag 17517.751 ms\n> progress: 140.0 s, 2489.0 tps, txtime 12.911 ms stddev 3.642, lat 17752.862 ms stddev 600.661, lag 17739.952 ms\n> progress: 141.0 s, 2270.0 tps, txtime 14.021 ms stddev 4.965, lat 17973.805 ms stddev 594.784, lag 17959.784 ms\n> progress: 142.0 s, 1408.0 tps, txtime 22.848 ms stddev 5.365, lat 18417.808 ms stddev 632.729, lag 18394.960 ms\n> progress: 143.0 s, 3001.0 tps, txtime 10.724 ms stddev 4.318, lat 18796.971 ms stddev 617.462, lag 18786.247 ms\n> progress: 144.0 s, 4678.0 tps, txtime 6.823 ms stddev 2.136, lat 18503.253 ms stddev 669.072, lag 18496.431 ms\n> progress: 145.0 s, 4577.0 tps, txtime 7.001 ms stddev 1.526, lat 18108.596 ms stddev 689.843, lag 18101.596 ms\n> progress: 146.0 s, 2596.0 tps, txtime 12.261 ms stddev 3.060, lat 17961.623 ms stddev 683.498, lag 17949.363 ms\n> progress: 147.0 s, 2654.0 tps, txtime 12.072 ms stddev 3.282, lat 18120.009 ms stddev 685.074, lag 18107.938 ms\n> progress: 148.0 s, 3471.0 tps, txtime 9.240 ms stddev 3.702, lat 18251.712 ms stddev 676.572, lag 18242.472 ms\n> progress: 149.0 s, 3056.0 tps, txtime 10.468 ms stddev 5.131, lat 18058.950 ms stddev 675.334, lag 18048.482 ms\n> progress: 150.0 s, 2319.0 tps, txtime 13.778 ms stddev 3.762, lat 18305.101 ms stddev 688.186, lag 18291.323 ms\n> 
progress: 151.0 s, 2355.0 tps, txtime 13.567 ms stddev 3.891, lat 18586.073 ms stddev 691.656, lag 18572.506 ms\n> progress: 152.0 s, 2321.0 tps, txtime 13.742 ms stddev 3.708, lat 18835.985 ms stddev 709.580, lag 18822.244 ms\n> progress: 153.0 s, 2360.0 tps, txtime 13.604 ms stddev 3.533, lat 19121.166 ms stddev 709.682, lag 19107.562 ms\n>\n> The period inbetween where no vacuum was running is imo considerably\n> harder to spot looking at 'lat'.\n\nISTM that the signal is pretty clear in whether the lag increases or \ndecreases. Basically the database is 18 seconds behind its load, which is \nvery bad if a user is waiting.\n\n> I guess you can argue that one can just look at tps instead. But for \n> other rate limited cases that doesn't work as well:\n>\n> progress: 121.0 s, 961.0 tps, txtime 3.452 ms stddev 0.947, lat 3.765 ms stddev 1.268, lag 0.313 ms\n> progress: 122.0 s, 979.0 tps, txtime 5.388 ms stddev 8.737, lat 7.378 ms stddev 11.137, lag 1.990 ms\n> progress: 123.0 s, 1078.8 tps, txtime 3.679 ms stddev 1.278, lat 4.322 ms stddev 3.216, lag 0.643 ms\n> progress: 124.0 s, 1082.2 tps, txtime 5.575 ms stddev 9.790, lat 8.716 ms stddev 15.317, lag 3.141 ms\n> progress: 125.0 s, 990.0 tps, txtime 3.489 ms stddev 1.148, lat 3.817 ms stddev 1.456, lag 0.328 ms\n> progress: 126.0 s, 955.0 tps, txtime 9.284 ms stddev 15.362, lat 14.210 ms stddev 22.084, lag 4.926 ms\n> progress: 127.0 s, 960.0 tps, txtime 11.951 ms stddev 11.222, lat 17.732 ms stddev 21.066, lag 5.781 ms\n> progress: 128.0 s, 945.9 tps, txtime 11.702 ms stddev 17.590, lat 23.791 ms stddev 45.327, lag 12.089 ms\n> progress: 129.0 s, 1013.1 tps, txtime 19.871 ms stddev 19.407, lat 42.530 ms stddev 39.582, lag 22.659 ms\n\nAlthough the tps seems stable, something is clearly amiss because the lag \nand stddev (both lat & txtime) are increasing, and also the txtime. ISTM \nthat the information is there, but needs to be interpreted, and this is \nnever trivial.\n\n> here e.g. 
the was a noticable slowdown, where looking at tps doesn't\nhelp (because we're still able to meet the tps goal).\n\nSure, that is why there is other information on the line.\n\n> I still don't quite get when I would want to look at lat, when I have\n> lag. They're always going to be close by.\n\nMore or less. Under -R \"lag + tx = lat\", they are close if tx is small. \nThe reason I chose to show lat is because I think it is the most \nimportant figure, and I added lag to show what part it represented in the \noverall performance. I do not think that showing all 3 is very useful \nbecause it makes the lines too long.\n\n>> For testing the server under full load, like during that catch up period,\n>> testing without -R seems better.\n>\n> One area in which postgres is pretty weak, although less bad than we\n> used to be, is in is predicatable latency. Most production servers\n> aren't run under the highest possible throughput, therefore optimizing\n> jitter under loaded but not breakneck speeds is important.\n\nYes, I completely agree. I spent a lot of time working on the checkpointer \nto improve its behavior and reduce large client side latency spikes that \ncould occur even with a moderate load.\n\n> And to be able to localize where such latency is introduced, it's\n> important to see the precise moment things got slower / where\n> performance recovered.\n\nFor me it is more complicated. The tx could stay low but the system could \nlag far behind, unable to catch up. In your first trace above, txtime \nseems low enough (~ 10-15 ms), but the system is lagging 18000 ms behind.\n\n>>> I think we should just restore lat to a sane behaviour under -R, and if\n>>> you want to have lat + lag as a separate column in -R mode, then let's\n>>> do that.\n>>\n>> It seems like a bad idea to change the meaning of the value after the fact.\n>> Would be good to at least rename it, to avoid confusion. 
Maybe that's not\n>> too important for the interactive -P reports, but got to be mindful about\n>> the numbers logged in the log file, at least.\n>\n> Given that 'lat' is currently in the file, and I would bet the majority\n> of users of pgbench use it without -R, I'm not convinced that's\n> something to care about. Most are going to interpret it the way it's\n> computed without -R.\n\nAs said above, without -R we have \"lat = txtime\" and \"lag = 0\". It is more \na matter of your intuitive expectations than an issue with what is displayed under \n-R, which for me is currently the most important figure from a \nbenchmarking (client oriented) perspective.\n\nI do not think that there is an issue to fix, but you can do as you feel.\n\nMaybe there could be an option to show the transaction time (that I would \nshorten as \"tx\") instead of \"lat\", for people who want to focus more on \nserver performance than on client perceived performance. I can add such an \noption, maybe for the next CF or the one afterwards, although I have \nalready sent quite a few small patches about pgbench.\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 12 Jun 2019 08:23:33 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "On 12/06/2019 02:24, Andres Freund wrote:\n> But anyway, to go forward, I think we should replace 'lat' with a\n> 'txtime' (or similar) column that is not affected by -R. And then, under\n> -R only, add a new 'txlat' column, that shows the 'current' meaning of\n> lat under -R. Not convinced the names are right, but you get the gist.\n\nI'm OK with that.\n\n>> For testing the server under full load, like during that catch up period,\n>> testing without -R seems better.\n> \n> One area in which postgres is pretty weak, although less bad than we\n> used to be, is in is predicatable latency. Most production servers\n> aren't run under the highest possible throughput, therefore optimizing\n> jitter under loaded but not breakneck speeds is important.\n> \n> And to be able to localize where such latency is introduced, it's\n> important to see the precise moment things got slower / where\n> performance recovered.\n\nI agree with all that. I'm still not convinced the changes you're \nproposing will help much, but if you would find it useful, I can't argue \nwith that.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 12 Jun 2019 09:31:39 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "On 12/06/2019 09:23, Fabien COELHO wrote:\n>> But that was just because I was showing a simplistic example. E.g.\n>> here's a log of a vacuum finishing, and then another starting a few\n>> seconds later (both vacuums lasting a fair while):\n>>\n>> progress: 139.0 s, 2438.4 tps, txtime 13.033 ms stddev 3.830, lat 17530.784 ms stddev 590.153, lag 17517.751 ms\n>> progress: 140.0 s, 2489.0 tps, txtime 12.911 ms stddev 3.642, lat 17752.862 ms stddev 600.661, lag 17739.952 ms\n>> progress: 141.0 s, 2270.0 tps, txtime 14.021 ms stddev 4.965, lat 17973.805 ms stddev 594.784, lag 17959.784 ms\n>> progress: 142.0 s, 1408.0 tps, txtime 22.848 ms stddev 5.365, lat 18417.808 ms stddev 632.729, lag 18394.960 ms\n>> progress: 143.0 s, 3001.0 tps, txtime 10.724 ms stddev 4.318, lat 18796.971 ms stddev 617.462, lag 18786.247 ms\n>> progress: 144.0 s, 4678.0 tps, txtime 6.823 ms stddev 2.136, lat 18503.253 ms stddev 669.072, lag 18496.431 ms\n>> progress: 145.0 s, 4577.0 tps, txtime 7.001 ms stddev 1.526, lat 18108.596 ms stddev 689.843, lag 18101.596 ms\n>> progress: 146.0 s, 2596.0 tps, txtime 12.261 ms stddev 3.060, lat 17961.623 ms stddev 683.498, lag 17949.363 ms\n>> progress: 147.0 s, 2654.0 tps, txtime 12.072 ms stddev 3.282, lat 18120.009 ms stddev 685.074, lag 18107.938 ms\n>> progress: 148.0 s, 3471.0 tps, txtime 9.240 ms stddev 3.702, lat 18251.712 ms stddev 676.572, lag 18242.472 ms\n>> progress: 149.0 s, 3056.0 tps, txtime 10.468 ms stddev 5.131, lat 18058.950 ms stddev 675.334, lag 18048.482 ms\n>> progress: 150.0 s, 2319.0 tps, txtime 13.778 ms stddev 3.762, lat 18305.101 ms stddev 688.186, lag 18291.323 ms\n>> progress: 151.0 s, 2355.0 tps, txtime 13.567 ms stddev 3.891, lat 18586.073 ms stddev 691.656, lag 18572.506 ms\n>> progress: 152.0 s, 2321.0 tps, txtime 13.742 ms stddev 3.708, lat 18835.985 ms stddev 709.580, lag 18822.244 ms\n>> progress: 153.0 s, 2360.0 tps, txtime 13.604 ms stddev 3.533, lat 19121.166 ms stddev 709.682, lag 19107.562 
ms\n>>\n>> The period inbetween where no vacuum was running is imo considerably\n>> harder to spot looking at 'lat'.\n> \n> ISTM that the signal is pretty clear in whether the lag increases or\n> decreases. Basically the database is 18 seconds behind its load, which is\n> very bad if a user is waiting.\n\nThat was my thought too, when looking at this example. When there is \nalready a long queue of transactions, the txtime of individual \ntransactions doesn't matter much. The most important thing under that \ncondition is how fast the system can dissolve the queue (or how fast it \nbuilds up even more). So the derivative of the lag or lat seems like the \nmost important figure. We don't print exactly that, but it's roughly the \nsame as the TPS. Jitter experienced by the user matters too, i.e. stddev \nof 'lat'.\n\nTo illustrate this, imagine that the server magically detected that \nthere's a long queue of transactions. It would be beneficial to go into \n\"batch mode\", where it collects incoming transactions into larger \nbatches. The effect of this imaginary batch mode is that the TPS rate \nincreases by 50%, but the txtime also increases by 1000%, and becomes \nhighly variable. Would that be a good tradeoff? I would say yes. The \nuser is experiencing an 18 s delay anyway, and the increase in txtime \nwould be insignificant compared to that, but the queue would be busted \nmore quickly.\n\nOf course, there is no such batch mode in PostgreSQL, and I wouldn't \nsuggest trying to implement anything like that. In a different kind of \napplication, you would rather maintain a steady txtime when the server \nis at full load, even if it means a lower overall TPS rate. And that \nfeels like a more important goal than just TPS. I think we all agree on \nthat. To simulate that kind of an application, though, you probably \ndon't want to use -R, or you would use it with --latency-limit. 
Except \nclearly Andres is trying to do just that, which is why I'm still a bit \nconfused :-).\n\n- Heikki\n\n\n",
"msg_date": "Wed, 12 Jun 2019 10:09:02 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
},
{
"msg_contents": "On 2019-Jun-12, Heikki Linnakangas wrote:\n\n> That was my thought too, when looking at this example. When there is already\n> a long queue of transactions, the txtime of individual transactions doesn't\n> matter much. The most important thing under that condition is how fast the\n> system can dissolve the queue (or how fast it builds up even more). So the\n> derivative of the lag or lat seems like the most important figure. We don't\n> print exactly that, but it's roughly the same as the TPS. Jitter experienced\n> by the user matters too, i.e. stddev of 'lat'.\n\nIt's funny that you mention taking the derivative of lat or lag, because\nthat suggests that these numbers should not be merely printed on the\nscreen but rather produced in a way that's easy for a database to\nconsume. Then you can just write the raw numbers and provide a set of\npre-written queries that generate whatever numbers the user desires.\nWe already have that ... but we don't provide any help on actually using\nthose log files -- there aren't instructions on how to import that into\na table, or what queries could be useful.\n\nMaybe that's a useful direction to move towards? I think the console\noutput is good for getting a gut-feeling of the test, but not for actual\ndata analysis.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 16:18:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench rate limiting changes transaction latency computation"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on fixing [1] I noticed that 2dedf4d9a899 \"Integrate\nrecovery.conf into postgresql.conf\" added two non-rethrowing PG_CATCH\nuses. That's not OK. See\n\nhttps://www.postgresql.org/message-id/1676.1548726280%40sss.pgh.pa.us\nhttps://postgr.es/m/20190206160958.GA22304%40alvherre.pgsql\netc.\n\nstatic bool\ncheck_recovery_target_time(char **newval, void **extra, GucSource source)\n{\n if (strcmp(*newval, \"\") != 0)\n {\n TimestampTz time;\n TimestampTz *myextra;\n MemoryContext oldcontext = CurrentMemoryContext;\n\n /* reject some special values */\n if (strcmp(*newval, \"epoch\") == 0 ||\n strcmp(*newval, \"infinity\") == 0 ||\n strcmp(*newval, \"-infinity\") == 0 ||\n strcmp(*newval, \"now\") == 0 ||\n strcmp(*newval, \"today\") == 0 ||\n strcmp(*newval, \"tomorrow\") == 0 ||\n strcmp(*newval, \"yesterday\") == 0)\n {\n return false;\n }\n\n PG_TRY();\n {\n time = DatumGetTimestampTz(DirectFunctionCall3(timestamptz_in,\n CStringGetDatum(*newval),\n ObjectIdGetDatum(InvalidOid),\n Int32GetDatum(-1)));\n }\n PG_CATCH();\n {\n ErrorData *edata;\n\n /* Save error info */\n MemoryContextSwitchTo(oldcontext);\n edata = CopyErrorData();\n FlushErrorState();\n\n /* Pass the error message */\n GUC_check_errdetail(\"%s\", edata->message);\n FreeErrorData(edata);\n return false;\n }\n PG_END_TRY();\n\nsame in check_recovery_target_lsn.\n\nI'll add an open item.\n\nGreetings,\n\nAndres Freund\n\n[1] CALDaNm1KXK9gbZfY-p_peRFm_XrBh1OwQO1Kk6Gig0c0fVZ2uw@mail.gmail.com\n\n\n",
"msg_date": "Mon, 10 Jun 2019 23:11:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "Hello\n\n> That's not OK.\n\nHmm. Did you mean catching only the needed errors by errcode? Something like the attached?\n\nregards, Sergei",
"msg_date": "Tue, 11 Jun 2019 17:29:41 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n>> That's not OK.\n\n> hmm. Did you mean catching only needed errors by errcode? Something like attached?\n\nNo, he means you can't EVER catch an error and not re-throw it, unless\nyou do a full (sub)transaction abort and cleanup instead of re-throwing.\nWe've been around on this repeatedly because people want to believe they\ncan take shortcuts. (See e.g. discussions for the jsonpath stuff.)\nIt doesn't reliably work to do so, and we have a project policy against\ntrying, and this code should never have been committed in this state.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jun 2019 10:49:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 10:49:28 -0400, Tom Lane wrote:\n> It doesn't reliably work to do so, and we have a project policy against\n> trying, and this code should never have been committed in this state.\n\nI'll also note that I complained about this specific instance being\nintroduced all the way back in 2013 and then again 2016:\n\nhttps://www.postgresql.org/message-id/20131118172748.GG20305%40awork2.anarazel.de\n\nOn 2013-11-18 18:27:48 +0100, Andres Freund wrote:\n> * Why the PG_TRY/PG_CATCH in check_recovery_target_time? Besides being\n> really strangely formatted (multiline :? inside a function?) it\n> doesn't a) seem to be correct to ignore potential memory allocation\n> errors by just switching back into the context that just errored out,\n> and continue to work there b) forgo cleanup by just continuing as if\n> nothing happened. That's unlikely to be acceptable.\n> * You access recovery_target_name[0] unconditionally, although it's\n\nhttps://www.postgresql.org/message-id/20140123133424.GD29782%40awork2.anarazel.de\n\n\nOn 2016-11-12 08:09:49 -0800, Andres Freund wrote:\n> > +static bool\n> > +check_recovery_target_time(char **newval, void **extra, GucSource source)\n> > +{\n> > +\tTimestampTz time;\n> > +\tTimestampTz *myextra;\n> > +\tMemoryContext oldcontext = CurrentMemoryContext;\n> > +\n> > +\tPG_TRY();\n> > +\t{\n> > +\t\ttime = (strcmp(*newval, \"\") == 0) ?\n> > +\t\t\t0 :\n> > +\t\t\tDatumGetTimestampTz(DirectFunctionCall3(timestamptz_in,\n> > +\t\t\t\t\t\t\t\t\t\t\t\t\tCStringGetDatum(*newval),\n> > +\t\t\t\t\t\t\t\t\t\t\t\t\tObjectIdGetDatum(InvalidOid),\n> > +\t\t\t\t\t\t\t\t\t\t\t\t\tInt32GetDatum(-1)));\n> > +\t}\n> > +\tPG_CATCH();\n> > +\t{\n> > +\t\tErrorData *edata;\n> > +\n> > +\t\t/* Save error info */\n> > +\t\tMemoryContextSwitchTo(oldcontext);\n> > +\t\tedata = CopyErrorData();\n> > +\t\tFlushErrorState();\n> > +\n> > +\t\t/* Pass the error message */\n> > +\t\tGUC_check_errdetail(\"%s\", edata->message);\n> > 
+\t\tFreeErrorData(edata);\n> > +\t\treturn false;\n> > +\t}\n> > +\tPG_END_TRY();\n> \n> Hm, I'm not happy to do that kind of thing. What if there's ever any\n> database interaction added to timestamptz_in?\n> \n> It's also problematic because the parsing of timestamps depends on the\n> timezone GUC - which might not yet have taken effect...\n\n\nI don't have particularly polite words about this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 08:35:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On 2019-06-11 08:11, Andres Freund wrote:\n> While working on fixing [1] I noticed that 2dedf4d9a899 \"Integrate\n> recovery.conf into postgresql.conf\" added two non-rethrowing PG_CATCH\n> uses. That's not OK.\n\nRight. Here is a patch that addresses this by copying the relevant code\nfrom pg_lsn_in() and timestamptz_in() directly into the check hooks.\nIt's obviously a bit unfortunate not to be able to share that code, but\nit's not actually that much.\n\nI haven't figured out the time zone issue yet, but I guess the solution\nmight involve moving some of the code from check_recovery_target_time()\nto assign_recovery_target_time().\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Jun 2019 13:16:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 01:16:54PM +0200, Peter Eisentraut wrote:\n> Right. Here is a patch that addresses this by copying the relevant code\n> from pg_lsn_in() and timestamptz_in() directly into the check hooks.\n> It's obviously a bit unfortunate not to be able to share that code,\n> but it's not actually that much.\n\n+ len1 = strspn(str, \"0123456789abcdefABCDEF\");\n+ if (len1 < 1 || len1 > MAXPG_LSNCOMPONENT || str[len1] != '/')\n+ return false;\n+\n+ len2 = strspn(str + len1 + 1, \"0123456789abcdefABCDEF\");\n+ if (len2 < 1 || len2 > MAXPG_LSNCOMPONENT || str[len1 + 1 + len2] != '\\0')\n+ return false;\nSpeaking about pg_lsn. We have introduced it to reduce the amount of\nduplication when mapping an LSN to text, so I am not much a fan of\nthis patch which adds again a duplication. You also lose some error\ncontext as you get the same type of error when parsing the first or\nthe second part of the LSN. Couldn't you refactor the whole so as an\nerror string is present as in GUC_check_errdetail()? I would just put\na wrapper in pg_lsn.c, like pg_lsn_parse() which returns uint64.\n\nThe same remark applies to the timestamp_in portion..\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 15:55:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On 2019-06-13 08:55, Michael Paquier wrote:\n> Speaking about pg_lsn. We have introduced it to reduce the amount of\n> duplication when mapping an LSN to text, so I am not much a fan of\n> this patch which adds again a duplication. You also lose some error\n> context as you get the same type of error when parsing the first or\n> the second part of the LSN. Couldn't you refactor the whole so as an\n> error string is present as in GUC_check_errdetail()?\n\nThere isn't really much more detail to be had. pg_lsn_in() just reports\n\"invalid input syntax for type pg_lsn\", and with the current patch the\nGUC system would report something like 'invalid value for parameter\n\"recovery_target_time\"'.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 09:04:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On 2019-06-12 13:16, Peter Eisentraut wrote:\n> I haven't figured out the time zone issue yet, but I guess the solution\n> might involve moving some of the code from check_recovery_target_time()\n> to assign_recovery_target_time().\n\nI think that won't work either. What we need to do is postpone the\ninterpretation of the timestamp string until after all the GUC\nprocessing is done. So check_recovery_target_time() would just do some\nbasic parsing checks, but stores the string. Then when we need the\nrecovery_target_time_value we do the final parsing. Then we can be sure\nthat the time zone is all set.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 20 Jun 2019 15:42:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-20 15:42:14 +0200, Peter Eisentraut wrote:\n> On 2019-06-12 13:16, Peter Eisentraut wrote:\n> > I haven't figured out the time zone issue yet, but I guess the solution\n> > might involve moving some of the code from check_recovery_target_time()\n> > to assign_recovery_target_time().\n> \n> I think that won't work either. What we need to do is postpone the\n> interpretation of the timestamp string until after all the GUC\n> processing is done. So check_recovery_target_time() would just do some\n> basic parsing checks, but stores the string. Then when we need the\n> recovery_target_time_value we do the final parsing. Then we can be sure\n> that the time zone is all set.\n\nThat sounds right to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 20 Jun 2019 09:05:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On 2019-06-20 18:05, Andres Freund wrote:\n> Hi,\n> \n> On 2019-06-20 15:42:14 +0200, Peter Eisentraut wrote:\n>> On 2019-06-12 13:16, Peter Eisentraut wrote:\n>>> I haven't figured out the time zone issue yet, but I guess the solution\n>>> might involve moving some of the code from check_recovery_target_time()\n>>> to assign_recovery_target_time().\n>>\n>> I think that won't work either. What we need to do is postpone the\n>> interpretation of the timestamp string until after all the GUC\n>> processing is done. So check_recovery_target_time() would just do some\n>> basic parsing checks, but stores the string. Then when we need the\n>> recovery_target_time_value we do the final parsing. Then we can be sure\n>> that the time zone is all set.\n> \n> That sounds right to me.\n\nUpdated patch for that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 23 Jun 2019 19:21:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On Sun, Jun 23, 2019 at 07:21:02PM +0200, Peter Eisentraut wrote:\n> Updated patch for that.\n\nI have been looking at this patch set. Patch 0001 looks good to me.\nYou are removing all traces of a set of timestamp keywords not\nsupported anymore, and no objections from my side for this cleanup.\n\n+#define MAXPG_LSNCOMPONENT 8\n+\n static bool\n check_recovery_target_lsn(char **newval, void **extra, GucSource source)\nLet's avoid the duplication for the declarations. I would suggest to\nmove the definitions of MAXPG_LSNLEN and MAXPG_LSNCOMPONENT to\npg_lsn.h. Funny part, I was actually in need of this definition a\ncouple of days ago for a LSN string in a frontend tool. I would\nsuggest renames at the same time:\n- PG_LSN_LEN\n- PG_LSN_COMPONENT\n\nI think that should have a third definition for\n\"0123456789abcdefABCDEF\", say PG_LSN_CHARACTERS, and we could have one\nmore for the separator '/'.\n\nAvoiding the duplication between pg_lsn.c and guc.c is proving to be\nrather ugly and reduces the readability within pg_lsn.c, so please let\nme withdraw my previous objection. (Looked at that part.)\n\n- if (strcmp(*newval, \"epoch\") == 0 ||\n- strcmp(*newval, \"infinity\") == 0 ||\n- strcmp(*newval, \"-infinity\") == 0 ||\nWhy do you remove these? They should still be rejected because they\nmake no sense as recovery targets, no?\n\nIt may be worth mentioning that AdjustTimestampForTypmod() is not\nduplicated because we don't care about the typmod in this case.\n--\nMichael",
"msg_date": "Mon, 24 Jun 2019 13:06:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On 2019-06-24 06:06, Michael Paquier wrote:\n> - if (strcmp(*newval, \"epoch\") == 0 ||\n> - strcmp(*newval, \"infinity\") == 0 ||\n> - strcmp(*newval, \"-infinity\") == 0 ||\n> Why do you remove these? They should still be rejected because they\n> make no sense as recovery targets, no?\n\nYeah but the new code already rejects those anyway. Note how\ntimestamptz_in() has explicit switch cases to accept those, and we\ndidn't carry those over into check_recovery_time().\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 24 Jun 2019 23:27:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On Mon, Jun 24, 2019 at 11:27:26PM +0200, Peter Eisentraut wrote:\n> Yeah but the new code already rejects those anyway. Note how\n> timestamptz_in() has explicit switch cases to accept those, and we\n> didn't carry those over into check_recovery_time().\n\nDitto. I was not paying much attention to the code. Your patch\nindeed rejects anything else than DTK_DATE. So we are good here,\nsorry for the noise.\n--\nMichael",
"msg_date": "Tue, 25 Jun 2019 10:07:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "This has been committed.\n\nOn 2019-06-24 06:06, Michael Paquier wrote:\n> I have been looking at this patch set. Patch 0001 looks good to me.\n> You are removing all traces of a set of timestamp keywords not\n> supported anymore, and no objections from my side for this cleanup.\n> \n> +#define MAXPG_LSNCOMPONENT 8\n> +\n> static bool\n> check_recovery_target_lsn(char **newval, void **extra, GucSource source)\n> Let's avoid the duplication for the declarations. I would suggest to\n> move the definitions of MAXPG_LSNLEN and MAXPG_LSNCOMPONENT to\n> pg_lsn.h. Funny part, I was actually in need of this definition a\n> couple of days ago for a LSN string in a frontend tool. I would\n> suggest renames at the same time:\n> - PG_LSN_LEN\n> - PG_LSN_COMPONENT\n\nI ended up rewriting this by extracting the parsing code into\npg_lsn_in_internal() and having both pg_lsn_in() and\ncheck_recovery_target_lsn() call it. This mirrors similar\narrangements in float.c.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Jun 2019 11:06:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On Sun, Jun 30, 2019 at 11:06:58AM +0200, Peter Eisentraut wrote:\n> I ended up rewriting this by extracting the parsing code into\n> pg_lsn_in_internal() and having both pg_lsn_in() and\n> check_recovery_target_lsn() calling it. This mirrors similar\n> arrangements in float.c\n\nThe refactoring looks good to me (including what you have just fixed\nwith PG_RETURN_LSN). Thanks for considering it.\n--\nMichael",
"msg_date": "Sun, 30 Jun 2019 21:35:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
},
{
"msg_contents": "On Sun, Jun 30, 2019 at 09:35:52PM +0900, Michael Paquier wrote:\n> The refactoring looks good to me (including what you have just fixed\n> with PG_RETURN_LSN). Thanks for considering it.\n\nThis issue was still listed as an open item for v12, so I have removed\nit.\n--\nMichael",
"msg_date": "Fri, 5 Jul 2019 12:30:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: check_recovery_target_lsn() does a PG_CATCH without a throw"
}
] |
[
{
"msg_contents": "Hi all,\n\nAttached is a patch to speed up the performance of truncates of relations.\nThis is also my first time contributing my own patch,\nand I'd greatly appreciate your feedback and advice.\n\n\nA. Summary\n\nWhenever we truncate relations, we scan the shared buffers three times\n(once per fork), which can be time-consuming. This patch improves\nthe performance of relation truncates by initially marking the\npages-to-be-truncated of relation forks, then simultaneously\ntruncating them, resulting in improved performance of VACUUM and\nautovacuum operations and of their recovery.\n\n\nB. Patch Details\nThe following functions were modified:\n\n\n1. FreeSpaceMapTruncateRel() and visibilitymap_truncate()\n\na. CURRENT HEAD: These functions truncate the FSM pages and unused VM pages.\n\nb. PATCH: Both functions only mark the pages to truncate and return a block number.\n\n- We used to call smgrtruncate() in these functions, but these calls are now moved inside RelationTruncate() and smgr_redo().\n\n- The tentative new names are: MarkFreeSpaceMapTruncateRel() and visibilitymap_mark_truncate(). Feel free to suggest better names.\n\n\n2. RelationTruncate()\n\na. HEAD: Truncate FSM and VM first, then write WAL, and lastly truncate main fork.\n\nb. PATCH: Now we mark FSM and VM pages first, write WAL, mark MAIN fork pages, then truncate all forks (MAIN, FSM, VM) simultaneously.\n\n\n3. smgr_redo()\n\na. HEAD: Truncate main fork and the relation during XLOG replay, create fake rel cache for FSM and VM, truncate FSM, truncate VM, then free fake rel cache.\n\nb. PATCH: Mark main fork dirty buffers, create fake rel cache, mark fsm and vm buffers, truncate marked pages of relation forks simultaneously, truncate relation during XLOG replay, then free fake rel cache.\n\n\n4. 
smgrtruncate(), DropRelFileNodeBuffers()\n\n- input arguments are changed to arrays of fork numbers and block numbers, plus int nforks (the size of the forkNum array)\n\n- truncates the pages of relation forks simultaneously\n\n\n5. smgrdounlinkfork()\nI modified the function because it calls DropRelFileNodeBuffers. However, this is dead code that can be removed.\nI did not remove it for now because that is for the community to decide, not me.\n\n\nC. Performance Test\n\nI set up synchronous streaming replication between a master and a standby.\n\nIn postgresql.conf:\nautovacuum = off\nwal_level = replica\nmax_wal_senders = 5\nwal_keep_segments = 16\nmax_locks_per_transaction = 10000\n#shared_buffers = 8GB\n#shared_buffers = 24GB\n\nObjective: Measure VACUUM execution time; varying shared_buffers size.\n\n1. Create table (ex. 10,000 tables). Insert data to tables.\n2. DELETE FROM TABLE (ex. all rows of 10,000 tables)\n3. psql -c \"\\timing on\" (measures total execution of SQL queries)\n4. VACUUM (whole db)\n\nIf you want to test with a large number of relations,\nyou may use the stored functions I used here:\nhttp://bit.ly/reltruncates\n\n\nD. Results\n\nHEAD results\n1) 128MB shared_buffers = 48.885 seconds\n2) 8GB shared_buffers = 5 min 30.695 s\n3) 24GB shared_buffers = 14 min 13.598 s\n\nPATCH results\n1) 128MB shared_buffers = 42.736 s\n2) 8GB shared_buffers = 2 min 26.464 s\n3) 24GB shared_buffers = 5 min 35.848 s\n\nThe performance significantly improved compared to HEAD,\nespecially for large shared buffers.\n\n---\nI would appreciate hearing your thoughts, comments, and advice.\nThank you in advance.\n\n\nRegards,\nKirk Jamison",
"msg_date": "Tue, 11 Jun 2019 07:34:35 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On 6/11/19 9:34 AM, Jamison, Kirk wrote:\n> Hi all,\n> \n> Attached is a patch to speed up the performance of truncates of relations.\n> \n\nThanks for working on this!\n\n> \n> *C. **Performance Test*\n> \n> I setup a synchronous streaming replication between a master-standby.\n> \n> In postgresql.conf:\n> autovacuum = off\n> wal_level = replica\n> max_wal_senders = 5\n> wal_keep_segments = 16\n> max_locks_per_transaction = 10000\n> #shared_buffers = 8GB\n> #shared_buffers = 24GB\n> \n> Objective: Measure VACUUM execution time; varying shared_buffers size.\n> \n> 1. Create table (ex. 10,000 tables). Insert data to tables.\n> 2. DELETE FROM TABLE (ex. all rows of 10,000 tables)\n> 3. psql -c \"\\timing on\" (measures total execution of SQL queries)\n> 4. VACUUM (whole db)\n> \n> If you want to test with large number of relations,\n> \n> you may use the stored functions I used here:\n> http://bit.ly/reltruncates\n\nYou should post these functions in this thread for the archives ;)\n\n> \n> *D. **Results*\n> \n> HEAD results\n> \n> 1) 128MB shared_buffers = 48.885 seconds\n> 2) 8GB shared_buffers = 5 min 30.695 s\n> 3) 24GB shared_buffers = 14 min 13.598 s\n> \n> PATCH results\n> \n> 1) 128MB shared_buffers = 42.736 s\n> 2) 8GB shared_buffers = 2 min 26.464 s\n> 3) 24GB shared_buffers = 5 min 35.848 s\n> \n> The performance significantly improved compared to HEAD,\n> especially for large shared buffers.\n> \n\nFrom a user POV, the main issue with relation truncation is that it can block\nqueries on standby server during truncation replay.\n\nIt could be interesting if you can test this case and give results of your patch.\nMaybe by performing read queries on standby server and counting wait_event with\npg_wait_sampling?\n\nRegards,\n\n-- \nAdrien",
"msg_date": "Tue, 11 Jun 2019 12:22:59 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 07:34:35AM +0000, Jamison, Kirk wrote:\n>Hi all,\n>\n>Attached is a patch to speed up the performance of truncates of relations.\n>This is also my first time to contribute my own patch,\n>and I'd gladly appreciate your feedback and advice.\n>\n\nThanks for the patch. Please add it to the commitfest app, so that we\ndon't forget about it: https://commitfest.postgresql.org/23/\n\n>\n>A. Summary\n>\n>Whenever we truncate relations, it scans the shared buffers thrice\n>(one per fork) which can be time-consuming. This patch improves\n>the performance of relation truncates by initially marking the\n>pages-to-be-truncated of relation forks, then simultaneously\n>truncating them, resulting to an improved performance in VACUUM,\n>autovacuum operations and their recovery performance.\n>\n\nOK, so essentially the whole point is to scan the buffers only once, for\nall forks at the same time (instead of three times).\n\n>\n>B. Patch Details\n>The following functions were modified:\n>\n>\n>1. FreeSpaceMapTruncateRel() and visibilitymap_truncate()\n>\n>a. CURRENT HEAD: These functions truncate the FSM pages and unused VM pages.\n>\n>b. PATCH: Both functions only mark the pages to truncate and return a block number.\n>\n>- We used to call smgrtruncate() in these functions, but these are now moved inside the RelationTruncate() and smgr_redo().\n>\n>- The tentative renaming of the functions are: MarkFreeSpaceMapTruncateRel() and visibilitymap_mark_truncate(). Feel free to suggest better names.\n>\n>\n>2. RelationTruncate()\n>\n>a. HEAD: Truncate FSM and VM first, then write WAL, and lastly truncate main fork.\n>\n>b. PATCH: Now we mark FSM and VM pages first, write WAL, mark MAIN fork pages, then truncate all forks (MAIN, FSM, VM) simultaneously.\n>\n>\n>3. smgr_redo()\n>\n>a. HEAD: Truncate main fork and the relation during XLOG replay, create fake rel cache for FSM and VM, truncate FSM, truncate VM, then free fake rel cache.\n>\n>b. 
PATCH: Mark main fork dirty buffers, create fake rel cache, mark fsm and vm buffers, truncate marked pages of relation forks simultaneously, truncate relation during XLOG replay, then free fake rel cache.\n>\n>\n>4. smgrtruncate(), DropRelFileNodeBuffers()\n>\n>- input arguments are changed to array of forknum and block numbers, int nforks (size of forkNum array)\n>\n>- truncates the pages of relation forks simultaneously\n>\n>\n>5. smgrdounlinkfork()\n>I modified the function because it calls DropRelFileNodeBuffers. However, this is a dead code that can be removed.\n>I did not remove it for now because that's not for me but the community to decide.\n>\n\nYou really don't need to extract the changes like this - such changes\nare generally obvious from the diff.\n\nYou only need to explain things that are not obvious from the code\nitself, e.g. non-trivial design decisions, etc.\n\n>\n>C. Performance Test\n>\n>I setup a synchronous streaming replication between a master-standby.\n>\n>In postgresql.conf:\n>autovacuum = off\n>wal_level = replica\n>max_wal_senders = 5\n>wal_keep_segments = 16\n>max_locks_per_transaction = 10000\n>#shared_buffers = 8GB\n>#shared_buffers = 24GB\n>\n>Objective: Measure VACUUM execution time; varying shared_buffers size.\n>\n>1. Create table (ex. 10,000 tables). Insert data to tables.\n>2. DELETE FROM TABLE (ex. all rows of 10,000 tables)\n>3. psql -c \"\\timing on\" (measures total execution of SQL queries)\n>4. VACUUM (whole db)\n>\n>If you want to test with large number of relations,\n>you may use the stored functions I used here:\n>http://bit.ly/reltruncates\n>\n>\n>D. 
Results\n>\n>HEAD results\n>1) 128MB shared_buffers = 48.885 seconds\n>2) 8GB shared_buffers = 5 min 30.695 s\n>3) 24GB shared_buffers = 14 min 13.598 s\n>\n>PATCH results\n>1) 128MB shared_buffers = 42.736 s\n>2) 8GB shared_buffers = 2 min 26.464 s\n>3) 24GB shared_buffers = 5 min 35.848 s\n>\n>The performance significantly improved compared to HEAD,\n>especially for large shared buffers.\n>\n\nRight, that seems nice. And it matches the expected 1:3 speedup, at\nleast for the larger shared_buffers cases.\n\nYears ago I've implemented an optimization for many DROP TABLE commands\nin a single transaction - instead of scanning buffers for each relation,\nthe code now accumulates a small number of relations into an array, and\nthen does a bsearch for each buffer.\n\nWould something like that be applicable/useful here? That is, if we do\nmultiple TRUNCATE commands in a single transaction, can we optimize it\nlike this?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 12 Jun 2019 01:09:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On 2019-Jun-12, Tomas Vondra wrote:\n\n> Years ago I've implemented an optimization for many DROP TABLE commands\n> in a single transaction - instead of scanning buffers for each relation,\n> the code now accumulates a small number of relations into an array, and\n> then does a bsearch for each buffer.\n\ncommit 279628a0a7cf582f7dfb68e25b7b76183dd8ff2f:\n Accelerate end-of-transaction dropping of relations\n \n When relations are dropped, at end of transaction we need to remove the\n files and clean the buffer pool of buffers containing pages of those\n relations. Previously we would scan the buffer pool once per relation\n to clean up buffers. When there are many relations to drop, the\n repeated scans make this process slow; so we now instead pass a list of\n relations to drop and scan the pool once, checking each buffer against\n the passed list. When the number of relations is larger than a\n threshold (which as of this patch is being set to 20 relations) we sort\n the array before starting, and bsearch the array; when it's smaller, we\n simply scan the array linearly each time, because that's faster. The\n exact optimal threshold value depends on many factors, but the\n difference is not likely to be significant enough to justify making it\n user-settable.\n \n This has been measured to be a significant win (a 15x win when dropping\n 100,000 relations; an extreme case, but reportedly a real one).\n \n Author: Tomas Vondra, some tweaks by me\n Reviewed by: Robert Haas, Shigeru Hanada, Andres Freund, �lvaro Herrera\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 19:13:43 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\r\n> Years ago I've implemented an optimization for many DROP TABLE commands\r\n> in a single transaction - instead of scanning buffers for each relation,\r\n> the code now accumulates a small number of relations into an array, and\r\n> then does a bsearch for each buffer.\r\n> \r\n> Would something like that be applicable/useful here? That is, if we do\r\n> multiple TRUNCATE commands in a single transaction, can we optimize it\r\n> like this?\r\n\r\nUnfortunately not. VACUUM and autovacuum handles each table in a different transaction.\r\n\r\nBTW, what we really want to do is to keep the failover time within 10 seconds. The customer periodically TRUNCATEs tens of thousands of tables. If failover unluckily happens immediately after those TRUNCATEs, the recovery on the standby could take much longer. But your past improvement seems likely to prevent that problem, if the customer TRUNCATEs tables in the same transaction.\r\n\r\nOn the other hand, it's now highly possible that the customer can only TRUNCATE a single table in a transaction, thus run as many transactions as the TRUNCATEd tables. So, we also want to speed up each TRUNCATE by touching only the buffers for the table, not scanning the whole shared buffers. Andres proposed one method that uses a radix tree, but we don't have an idea how to do it yet.\r\n\r\nSpeeding up each TRUNCATE and its recovery is a different topic. The patch proposed here is one possible improvement to shorten the failover time.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 12 Jun 2019 03:24:49 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki\n<tsunakawa.takay@jp.fujitsu.com> wrote:\n>\n> From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\n> > Years ago I've implemented an optimization for many DROP TABLE commands\n> > in a single transaction - instead of scanning buffers for each relation,\n> > the code now accumulates a small number of relations into an array, and\n> > then does a bsearch for each buffer.\n> >\n> > Would something like that be applicable/useful here? That is, if we do\n> > multiple TRUNCATE commands in a single transaction, can we optimize it\n> > like this?\n>\n> Unfortunately not. VACUUM and autovacuum handles each table in a different transaction.\n\nWe do RelationTruncate() also when we truncate heaps that are created\nin the current transactions or has a new relfilenodes in the current\ntransaction. So I think there is a room for optimization Thomas\nsuggested, although I'm not sure it's a popular use case.\n\nI've not look at this patch deeply but in DropRelFileNodeBuffer I\nthink we can get the min value of all firstDelBlock and use it as the\nlower bound of block number that we're interested in. That way we can\nskip checking the array during scanning the buffer pool.\n\n-extern void smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum,\nbool isRedo);\n+extern void smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum,\n+ bool isRedo,\nint nforks);\n-extern void smgrtruncate(SMgrRelation reln, ForkNumber forknum,\n- BlockNumber nblocks);\n+extern void smgrtruncate(SMgrRelation reln, ForkNumber *forknum,\n+ BlockNumber *nblocks,\nint nforks);\n\nDon't we use each elements of nblocks for each fork? That is, each\nfork uses an element at its fork number in the nblocks array and sets\nInvalidBlockNumber for invalid slots, instead of passing the valid\nnumber of elements. 
That way the following code that exist at many places,\n\n blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);\n if (BlockNumberIsValid(blocks[nforks]))\n {\n forks[nforks] = VISIBILITYMAP_FORKNUM;\n nforks++;\n }\n\nwould become\n\n blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel, nblocks);\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 12 Jun 2019 16:28:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tuesday, June 11, 2019 7:23 PM (GMT+9), Adrien Nayrat wrote:\r\n\r\n> > Attached is a patch to speed up the performance of truncates of relations.\r\n> \r\n> Thanks for working on this!\r\n\r\nThank you also for taking a look at my thread. \r\n\r\n> > If you want to test with large number of relations,\r\n> > you may use the stored functions I used here:\r\n> > http://bit.ly/reltruncates\r\n> \r\n> You should post these functions in this thread for the archives ;)\r\nThis is noted. Pasting it below: \r\n\r\ncreate or replace function create_tables(numtabs int)\r\nreturns void as $$\r\ndeclare query_string text;\r\nbegin\r\n for i in 1..numtabs loop\r\n query_string := 'create table tab_' || i::text || ' (a int);';\r\n execute query_string;\r\n end loop;\r\nend;\r\n$$ language plpgsql;\r\n\r\ncreate or replace function delfrom_tables(numtabs int)\r\nreturns void as $$\r\ndeclare query_string text;\r\nbegin\r\n for i in 1..numtabs loop\r\n query_string := 'delete from tab_' || i::text;\r\n execute query_string;\r\n end loop;\r\nend;\r\n$$ language plpgsql;\r\n\r\ncreate or replace function insert_tables(numtabs int)\r\nreturns void as $$\r\ndeclare query_string text;\r\nbegin\r\n for i in 1..numtabs loop\r\n query_string := 'insert into tab_' || i::text || ' VALUES (5);' ;\r\n execute query_string;\r\n end loop;\r\nend;\r\n$$ language plpgsql;\r\n\r\n\r\n> From a user POW, the main issue with relation truncation is that it can block\r\n> queries on standby server during truncation replay.\r\n> \r\n> It could be interesting if you can test this case and give results of your\r\n> path.\r\n> Maybe by performing read queries on standby server and counting wait_event\r\n> with pg_wait_sampling?\r\n\r\nThanks for the suggestion. 
I tried using the extension pg_wait_sampling,\r\nBut I wasn't sure that I could replicate the problem of blocked queries on standby server.\r\nCould you advise?\r\nHere's what I did for now, similar to my previous test with hot standby setup,\r\nbut with additional read queries of wait events on standby server.\r\n\r\n128MB shared_buffers\r\nSELECT create_tables(10000);\r\nSELECT insert_tables(10000);\r\nSELECT delfrom_tables(10000);\r\n\r\n[Before VACUUM]\r\nStandby: SELECT the following view from pg_stat_waitaccum\r\n\r\nwait_event_type | wait_event | calls | microsec\r\n-----------------+-----------------+-------+----------\r\n Client | ClientRead | 2 | 20887759\r\n IO | DataFileRead | 175 | 2788\r\n IO | RelationMapRead | 4 | 26\r\n IO | SLRURead | 2 | 38\r\n\r\nPrimary: Execute VACUUM (induces relation truncates)\r\n\r\n[After VACUUM]\r\nStandby:\r\n wait_event_type | wait_event | calls | microsec\r\n-----------------+-----------------+-------+----------\r\n Client | ClientRead | 7 | 77662067\r\n IO | DataFileRead | 284 | 4523\r\n IO | RelationMapRead | 10 | 51\r\n IO | SLRURead | 3 | 57\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Wed, 12 Jun 2019 08:29:44 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "From: Masahiko Sawada [mailto:sawada.mshk@gmail.com]\r\n> We do RelationTruncate() also when we truncate heaps that are created\r\n> in the current transactions or has a new relfilenodes in the current\r\n> transaction. So I think there is a room for optimization Thomas\r\n> suggested, although I'm not sure it's a popular use case.\r\n\r\nRight, and I don't think of a use case that motivates the opmitizaion, too.\r\n\r\n\r\n> I've not look at this patch deeply but in DropRelFileNodeBuffer I\r\n> think we can get the min value of all firstDelBlock and use it as the\r\n> lower bound of block number that we're interested in. That way we can\r\n> skip checking the array during scanning the buffer pool.\r\n\r\nThat sounds reasonable, although I haven't examined the code, either.\r\n\r\n\r\n> Don't we use each elements of nblocks for each fork? That is, each\r\n> fork uses an element at its fork number in the nblocks array and sets\r\n> InvalidBlockNumber for invalid slots, instead of passing the valid\r\n> number of elements. That way the following code that exist at many places,\r\n\r\nI think the current patch tries to reduce the loop count in DropRelFileNodeBuffers() by passing the number of target forks.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n \r\n",
"msg_date": "Thu, 13 Jun 2019 05:57:50 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:\r\n> On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki\r\n> <tsunakawa.takay@jp.fujitsu.com> wrote:\r\n> >\r\n> > From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\r\n> > > Years ago I've implemented an optimization for many DROP TABLE\r\n> > > commands in a single transaction - instead of scanning buffers for\r\n> > > each relation, the code now accumulates a small number of relations\r\n> > > into an array, and then does a bsearch for each buffer.\r\n> > >\r\n> > > Would something like that be applicable/useful here? That is, if we\r\n> > > do multiple TRUNCATE commands in a single transaction, can we\r\n> > > optimize it like this?\r\n> >\r\n> > Unfortunately not. VACUUM and autovacuum handles each table in a different\r\n> transaction.\r\n> \r\n> We do RelationTruncate() also when we truncate heaps that are created in the\r\n> current transactions or has a new relfilenodes in the current transaction.\r\n> So I think there is a room for optimization Thomas suggested, although I'm\r\n> not sure it's a popular use case.\r\n\r\nI couldn't think of a use case too.\r\n\r\n> I've not look at this patch deeply but in DropRelFileNodeBuffer I think we\r\n> can get the min value of all firstDelBlock and use it as the lower bound of\r\n> block number that we're interested in. 
That way we can skip checking the array\r\n> during scanning the buffer pool.\r\n\r\nI'll take note of this suggestion.\r\nCould you help me expound more on this idea, skipping the internal loop by\r\ncomparing the min and buffer descriptor (bufHdr)?\r\n\r\nIn the current patch, I've implemented the following in DropRelFileNodeBuffers:\r\n\tfor (i = 0; i < NBuffers; i++)\r\n\t{\r\n\t\t...\r\n\t\tbuf_state = LockBufHdr(bufHdr);\r\n\t\tfor (k = 0; k < nforks; k++)\r\n\t\t{\r\n\t\t\tif (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n\t\t\t\tbufHdr->tag.forkNum == forkNum[k] &&\r\n\t\t\t\tbufHdr->tag.blockNum >= firstDelBlock[k])\r\n\t\t\t{\r\n\t\t\t\tInvalidateBuffer(bufHdr); /* releases spinlock */\r\n\t\t\t\tbreak;\r\n\t\t\t}\r\n\r\n> Don't we use each elements of nblocks for each fork? That is, each fork uses\r\n> an element at its fork number in the nblocks array and sets InvalidBlockNumber\r\n> for invalid slots, instead of passing the valid number of elements. That way\r\n> the following code that exist at many places,\r\n> \r\n> blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);\r\n> if (BlockNumberIsValid(blocks[nforks]))\r\n> {\r\n> forks[nforks] = VISIBILITYMAP_FORKNUM;\r\n> nforks++;\r\n> }\r\n> \r\n> would become\r\n> \r\n> blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel,\r\n> nblocks);\r\n\r\nIn the patch, we want to truncate all forks' blocks simultaneously, so\r\nwe optimize the invalidation of buffers and reduce the number of loops\r\nusing those values.\r\nThe suggestion above would have to remove the forks array and its\r\nforksize (nforks), is it correct? But I think we’d need the fork array\r\nand nforks to execute the truncation all at once.\r\nIf I'm missing something, I'd really appreciate your further comments.\r\n\r\n--\r\nThank you everyone for taking a look at my thread.\r\nI've also already added this patch to the CommitFest app.\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Thu, 13 Jun 2019 09:30:00 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 6:30 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:\n> > On Wed, Jun 12, 2019 at 12:25 PM Tsunakawa, Takayuki\n> > <tsunakawa.takay@jp.fujitsu.com> wrote:\n> > >\n> > > From: Tomas Vondra [mailto:tomas.vondra@2ndquadrant.com]\n> > > > Years ago I've implemented an optimization for many DROP TABLE\n> > > > commands in a single transaction - instead of scanning buffers for\n> > > > each relation, the code now accumulates a small number of relations\n> > > > into an array, and then does a bsearch for each buffer.\n> > > >\n> > > > Would something like that be applicable/useful here? That is, if we\n> > > > do multiple TRUNCATE commands in a single transaction, can we\n> > > > optimize it like this?\n> > >\n> > > Unfortunately not. VACUUM and autovacuum handles each table in a different\n> > transaction.\n> >\n> > We do RelationTruncate() also when we truncate heaps that are created in the\n> > current transactions or has a new relfilenodes in the current transaction.\n> > So I think there is a room for optimization Thomas suggested, although I'm\n> > not sure it's a popular use case.\n>\n> I couldn't think of a use case too.\n>\n> > I've not look at this patch deeply but in DropRelFileNodeBuffer I think we\n> > can get the min value of all firstDelBlock and use it as the lower bound of\n> > block number that we're interested in. That way we can skip checking the array\n> > during scanning the buffer pool.\n>\n> I'll take note of this suggestion.\n> Could you help me expound more on this idea, skipping the internal loop by\n> comparing the min and buffer descriptor (bufHdr)?\n>\n\nYes. 
For example,\n\n BlockNumber minBlock = InvalidBlockNumber;\n(snip)\n /* Get lower bound block number we're interested in */\n for (i = 0; i < nforks; i++)\n {\n if (!BlockNumberIsValid(minBlock) ||\n minBlock > firstDelBlock[i])\n minBlock = firstDelBlock[i];\n }\n\n for (i = 0; i < NBuffers; i++)\n {\n(snip)\n buf_state = LockBufHdr(bufHdr);\n\n /* check with the lower bound and skip the loop */\n if (bufHdr->tag.blockNum < minBlock)\n {\n UnlockBufHdr(bufHdr, buf_state);\n continue;\n }\n\n for (k = 0; k < nforks; k++)\n {\n if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n bufHdr->tag.forkNum == forkNum[k] &&\n bufHdr->tag.blockNum >= firstDelBlock[k])\n\nBut since we acquire the buffer header lock after all and the number\nof the internal loops is small (at most 3 for now) the benefit will\nnot be big.\n\n> In the current patch, I've implemented the following in DropRelFileNodeBuffers:\n> for (i = 0; i < NBuffers; i++)\n> {\n> ...\n> buf_state = LockBufHdr(bufHdr);\n> for (k = 0; k < nforks; k++)\n> {\n> if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\n> bufHdr->tag.forkNum == forkNum[k] &&\n> bufHdr->tag.blockNum >= firstDelBlock[k])\n> {\n> InvalidateBuffer(bufHdr); /* releases spinlock */\n> break;\n> }\n>\n> > Don't we use each elements of nblocks for each fork? That is, each fork uses\n> > an element at its fork number in the nblocks array and sets InvalidBlockNumber\n> > for invalid slots, instead of passing the valid number of elements. 
That way\n> > the following code that exist at many places,\n> >\n> > blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);\n> > if (BlockNumberIsValid(blocks[nforks]))\n> > {\n> > forks[nforks] = VISIBILITYMAP_FORKNUM;\n> > nforks++;\n> > }\n> >\n> > would become\n> >\n> > blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel,\n> > nblocks);\n>\n> In the patch, we want to truncate all forks' blocks simultaneously, so\n> we optimize the invalidation of buffers and reduce the number of loops\n> using those values.\n> The suggestion above would have to remove the forks array and its\n> forksize (nforks), is it correct? But I think we’d need the fork array\n> and nforks to execute the truncation all at once.\n\nI meant that each forks can use the its forknumber'th element of\nfirstDelBlock[]. For example, if firstDelBlock = {1000,\nInvalidBlockNumber, 20, InvalidBlockNumber}, we can invalid buffers\npertaining both greater than block number 1000 of main and greater\nthan block number 20 of vm. Since firstDelBlock[FSM_FORKNUM] ==\nInvalidBlockNumber we don't invalid buffers of fsm.\n\nAs Tsunakawa-san mentioned, since your approach would reduce the loop\ncount your idea might be better than mine which always takes 4 loop\ncounts.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 13 Jun 2019 20:01:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "From: Masahiko Sawada [mailto:sawada.mshk@gmail.com]\r\n> for (i = 0; i < NBuffers; i++)\r\n> {\r\n> (snip)\r\n> buf_state = LockBufHdr(bufHdr);\r\n> \r\n> /* check with the lower bound and skip the loop */\r\n> if (bufHdr->tag.blockNum < minBlock)\r\n> {\r\n> UnlockBufHdr(bufHdr, buf_state);\r\n> continue;\r\n> }\r\n> \r\n> for (k = 0; k < nforks; k++)\r\n> {\r\n> if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n> bufHdr->tag.forkNum == forkNum[k] &&\r\n> bufHdr->tag.blockNum >= firstDelBlock[k])\r\n> \r\n> But since we acquire the buffer header lock after all and the number\r\n> of the internal loops is small (at most 3 for now) the benefit will\r\n> not be big.\r\n\r\nYeah, so I think we can just compare the block number without locking the buffer header here.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Fri, 14 Jun 2019 00:10:07 +0000",
"msg_from": "\"Tsunakawa, Takayuki\" <tsunakawa.takay@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "Hi Sawada-san,\r\n\r\nOn Thursday, June 13, 2019 8:01 PM, Masahiko Sawada wrote:\r\n> On Thu, Jun 13, 2019 at 6:30 PM Jamison, Kirk <k.jamison@jp.fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Wednesday, June 12, 2019 4:29 PM (GMT+9), Masahiko Sawada wrote:\r\n> > > ...\r\n> > > I've not look at this patch deeply but in DropRelFileNodeBuffer I\r\n> > > think we can get the min value of all firstDelBlock and use it as\r\n> > > the lower bound of block number that we're interested in. That way\r\n> > > we can skip checking the array during scanning the buffer pool.\r\n> >\r\n> > I'll take note of this suggestion.\r\n> > Could you help me expound more on this idea, skipping the internal\r\n> > loop by comparing the min and buffer descriptor (bufHdr)?\r\n> >\r\n> \r\n> Yes. For example,\r\n> \r\n> BlockNumber minBlock = InvalidBlockNumber;\r\n> (snip)\r\n> /* Get lower bound block number we're interested in */\r\n> for (i = 0; i < nforks; i++)\r\n> {\r\n> if (!BlockNumberIsValid(minBlock) ||\r\n> minBlock > firstDelBlock[i])\r\n> minBlock = firstDelBlock[i];\r\n> }\r\n> \r\n> for (i = 0; i < NBuffers; i++)\r\n> {\r\n> (snip)\r\n> buf_state = LockBufHdr(bufHdr);\r\n> \r\n> /* check with the lower bound and skip the loop */\r\n> if (bufHdr->tag.blockNum < minBlock)\r\n> {\r\n> UnlockBufHdr(bufHdr, buf_state);\r\n> continue;\r\n> }\r\n> \r\n> for (k = 0; k < nforks; k++)\r\n> {\r\n> if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&\r\n> bufHdr->tag.forkNum == forkNum[k] &&\r\n> bufHdr->tag.blockNum >= firstDelBlock[k])\r\n> \r\n> But since we acquire the buffer header lock after all and the number of the\r\n> internal loops is small (at most 3 for now) the benefit will not be big.\r\n\r\nThank you very much for your kind and detailed explanation.\r\nI'll still consider your suggestions in the next patch and optimize it more\r\nso that we could possibly not need to acquire the LockBufHdr anymore.\r\n\r\n\r\n> > > Don't we use each elements of nblocks for 
each fork? That is, each\r\n> > > fork uses an element at its fork number in the nblocks array and\r\n> > > sets InvalidBlockNumber for invalid slots, instead of passing the\r\n> > > valid number of elements. That way the following code that exist at\r\n> > > many places,\r\n> > >\r\n> > > blocks[nforks] = visibilitymap_mark_truncate(rel, nblocks);\r\n> > > if (BlockNumberIsValid(blocks[nforks]))\r\n> > > {\r\n> > > forks[nforks] = VISIBILITYMAP_FORKNUM;\r\n> > > nforks++;\r\n> > > }\r\n> > >\r\n> > > would become\r\n> > >\r\n> > > blocks[VISIBILITYMAP_FORKNUM] = visibilitymap_mark_truncate(rel,\r\n> > > nblocks);\r\n> >\r\n> > In the patch, we want to truncate all forks' blocks simultaneously, so\r\n> > we optimize the invalidation of buffers and reduce the number of loops\r\n> > using those values.\r\n> > The suggestion above would have to remove the forks array and its\r\n> > forksize (nforks), is it correct? But I think we’d need the fork array\r\n> > and nforks to execute the truncation all at once.\r\n> \r\n> I meant that each forks can use the its forknumber'th element of\r\n> firstDelBlock[]. For example, if firstDelBlock = {1000, InvalidBlockNumber,\r\n> 20, InvalidBlockNumber}, we can invalid buffers pertaining both greater than\r\n> block number 1000 of main and greater than block number 20 of vm. Since\r\n> firstDelBlock[FSM_FORKNUM] == InvalidBlockNumber we don't invalid buffers\r\n> of fsm.\r\n> \r\n> As Tsunakawa-san mentioned, since your approach would reduce the loop count\r\n> your idea might be better than mine which always takes 4 loop counts.\r\n\r\nUnderstood. Thank you again for the kind and detailed explanations. \r\nI'll reconsider these approaches.\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Fri, 14 Jun 2019 01:27:10 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "Hi all,\r\n\r\nAttached is the v2 of the patch. I added the optimization that Sawada-san\r\nsuggested for DropRelFileNodeBuffers, although I did not acquire the lock\r\nwhen comparing the minBlock and target block. \r\n\r\nThere's actually a comment written in the source code that we could\r\npre-check buffer tag for forkNum and blockNum, but given that FSM and VM\r\nblocks are small compared to main fork's, the additional benefit of doing so \r\nwould be small.\r\n\r\n>* We could check forkNum and blockNum as well as the rnode, but the\r\n>* incremental win from doing so seems small.\r\n\r\nI personally think it's alright not to include the suggested pre-checking.\r\nIf that's the case, we can just follow the patch v1 version.\r\n\r\nThoughts?\r\n\r\nComments and reviews from other parts of the patch are also very much welcome.\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Mon, 17 Jun 2019 08:01:04 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On 6/12/19 10:29 AM, Jamison, Kirk wrote:\n> \n>> From a user POW, the main issue with relation truncation is that it can block\n>> queries on standby server during truncation replay.\n>>\n>> It could be interesting if you can test this case and give results of your\n>> path.\n>> Maybe by performing read queries on standby server and counting wait_event\n>> with pg_wait_sampling?\n> \n> Thanks for the suggestion. I tried using the extension pg_wait_sampling,\n> But I wasn't sure that I could replicate the problem of blocked queries on standby server.\n> Could you advise?\n> Here's what I did for now, similar to my previous test with hot standby setup,\n> but with additional read queries of wait events on standby server.\n> \n> 128MB shared_buffers\n> SELECT create_tables(10000);\n> SELECT insert_tables(10000);\n> SELECT delfrom_tables(10000);\n> \n> [Before VACUUM]\n> Standby: SELECT the following view from pg_stat_waitaccum\n> \n> wait_event_type | wait_event | calls | microsec\n> -----------------+-----------------+-------+----------\n> Client | ClientRead | 2 | 20887759\n> IO | DataFileRead | 175 | 2788\n> IO | RelationMapRead | 4 | 26\n> IO | SLRURead | 2 | 38\n> \n> Primary: Execute VACUUM (induces relation truncates)\n> \n> [After VACUUM]\n> Standby:\n> wait_event_type | wait_event | calls | microsec\n> -----------------+-----------------+-------+----------\n> Client | ClientRead | 7 | 77662067\n> IO | DataFileRead | 284 | 4523\n> IO | RelationMapRead | 10 | 51\n> IO | SLRURead | 3 | 57\n> \n\n(Sorry for the delay, I forgot to answer you)\n\nAs far as I remember, you should see \"relation\" wait events (type lock) on\nstandby server. This is due to startup process acquiring AccessExclusiveLock for\nthe truncation and other backend waiting to acquire a lock to read the table.\n\nOn primary server, vacuum is able to cancel truncation:\n\n/*\n * We need full exclusive lock on the relation in order to do\n * truncation. 
If we can't get it, give up rather than waiting --- we\n * don't want to block other backends, and we don't want to deadlock\n * (which is quite possible considering we already hold a lower-grade\n * lock).\n */\nvacrelstats->lock_waiter_detected = false;\nlock_retry = 0;\nwhile (true)\n{\n if (ConditionalLockRelation(onerel, AccessExclusiveLock))\n break;\n\n /*\n * Check for interrupts while trying to (re-)acquire the exclusive\n * lock.\n */\n CHECK_FOR_INTERRUPTS();\n\n if (++lock_retry > (VACUUM_TRUNCATE_LOCK_TIMEOUT /\n VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL))\n {\n /*\n * We failed to establish the lock in the specified number of\n * retries. This means we give up truncating.\n */\n vacrelstats->lock_waiter_detected = true;\n ereport(elevel,\n (errmsg(\"\\\"%s\\\": stopping truncate due to conflicting lock request\",\n RelationGetRelationName(onerel))));\n return;\n }\n\n pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);\n}\n\n\nTo maximize chances to reproduce we can use big shared_buffers. But I am afraid\nit is not easy to perform reproducible tests to compare results. Unfortunately I\ndon't have servers to perform tests.\n\nRegards,",
"msg_date": "Wed, 26 Jun 2019 11:09:58 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Wednesday, June 26, 2019 6:10 PM(GMT+9), Adrien Nayrat wrote:\r\n> As far as I remember, you should see \"relation\" wait events (type lock) on\r\n> standby server. This is due to startup process acquiring AccessExclusiveLock\r\n> for the truncation and other backend waiting to acquire a lock to read the\r\n> table.\r\n\r\nHi Adrien, thank you for taking time to reply.\r\n\r\nI understand that RelationTruncate() can block read-only queries on\r\nstandby during redo. However, it's difficult for me to reproduce the \r\ntest case where I need to catch that wait for relation lock, because\r\none has to execute SELECT within the few milliseconds of redoing the\r\ntruncation of one table.\r\n\r\nInstead, I just measured the whole recovery time, smgr_redo(),\r\nto show the recovery improvement compared to head. Please refer below.\r\n\r\n[Recovery Test]\r\nI used the same stored functions and configurations in the previous email\r\n& created \"test\" db.\r\n\r\n$ createdb test\r\n$ psql -d test\r\n\r\n1. [Primary] Create 10,000 relations.\r\n\ttest=# SELECT create_tables(10000);\r\n\r\n2. [P] Insert one row in each table.\r\n\ttest=# SELECT insert_tables(10000);\r\n\r\n3. [P] Delete row of each table.\r\n\ttest=# SELECT delfrom_tables(10000);\r\n\r\n4. [Standby] WAL application is stopped at Standby server.\r\n\ttest=# SELECT pg_wal_replay_pause();\r\n\r\n5. [P] VACUUM is executed at Primary side, and measure its execution time.\t\t\r\n\ttest=# \\timing on\r\n\ttest=# VACUUM;\r\n\r\n\tAlternatively, you may use:\r\n\t$ time psql -d test -c 'VACUUM;'\r\n\t(Note: WAL has not replayed on standby because it's been paused.)\r\n\r\n6. [P] Wait until VACUUM has finished execution. Then, stop primary server. \r\n\ttest=# pg_ctl stop -w\r\n\r\n7. [S] Resume WAL replay, then promote standby (failover).\r\nI used a shell script to execute recovery & promote standby server\r\nbecause it's kinda difficult to measure recovery time. 
Please refer to the script below.\r\n- \"SELECT pg_wal_replay_resume();\" is executed and the WAL application is resumed.\r\n- \"pg_ctl promote\" to promote standby.\r\n- The time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\" is measured.\r\n\r\nshell script:\r\n\r\nPGDT=/path_to_storage_directory/\r\n\r\nif [ \"$1\" = \"resume\" ]; then\r\n\tpsql -c \"SELECT pg_wal_replay_resume();\" test\r\n\tdate +%Y/%m/%d_%H:%M:%S.%3N\r\n\tpg_ctl promote -D ${PGDT}\r\n\tset +x\r\n\tdate +%Y/%m/%d_%H:%M:%S.%3N\r\n\twhile [ 1 ]\r\n\tdo\r\n\t\tRS=`psql -Atc \"select pg_is_in_recovery();\" test`\t\t\r\n\t\tif [ ${RS} = \"f\" ]; then\r\n\t\t\tbreak\r\n\t\tfi\r\n\tdone\r\n\tdate +%Y/%m/%d_%H:%M:%S.%3N\r\n\tset -x\r\n\texit 0\r\nfi\r\n\r\n\r\n[Test Results]\r\nshared_buffers = 24GB\r\n\r\n1. HEAD\r\n(wal replay resumed)\r\n2019/07/01_08:48:50.326\r\nserver promoted\r\n2019/07/01_08:49:50.482\r\n2019/07/01_09:02:41.051\r\n\r\n Recovery Time:\r\n 13 min 50.725 s -> Time difference from WAL replay to complete recovery\r\n 12 min 50.569 s -> Time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\"\r\n\r\n2. PATCH\r\n(wal replay resumed)\r\n2019/07/01_07:34:26.766\r\nserver promoted\r\n2019/07/01_07:34:57.790\r\n2019/07/01_07:34:57.809\r\n\r\n Recovery Time:\t\r\n 31.043 s -> Time difference from WAL replay to complete recovery\r\n 00.019 s -> Time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\"\r\n \r\n[Conclusion]\r\nThe recovery time significantly improved compared to head\r\nfrom 13 minutes to 30 seconds.\r\n\r\nAny thoughts?\r\nI'd really appreciate your comments/feedback about the patch and/or test.\r\n\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Mon, 1 Jul 2019 10:55:49 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 5:01 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> Hi all,\n>\n> Attached is the v2 of the patch. I added the optimization that Sawada-san\n> suggested for DropRelFileNodeBuffers, although I did not acquire the lock\n> when comparing the minBlock and target block.\n>\n> There's actually a comment written in the source code that we could\n> pre-check buffer tag for forkNum and blockNum, but given that FSM and VM\n> blocks are small compared to main fork's, the additional benefit of doing so\n> would be small.\n>\n> >* We could check forkNum and blockNum as well as the rnode, but the\n> >* incremental win from doing so seems small.\n>\n> I personally think it's alright not to include the suggested pre-checking.\n> If that's the case, we can just follow the patch v1 version.\n>\n> Thoughts?\n>\n> Comments and reviews from other parts of the patch are also very much welcome.\n>\n\nThank you for updating the patch. Here is the review comments for v2 patch.\n\n---\n- * visibilitymap_truncate - truncate the visibility map\n+ * visibilitymap_mark_truncate - mark the about-to-be-truncated VM\n+ *\n+ * Formerly, this function truncates VM relation forks. Instead, this just\n+ * marks the dirty buffers.\n *\n * The caller must hold AccessExclusiveLock on the relation, to ensure that\n * other backends receive the smgr invalidation event that this function sends\n * before they access the VM again.\n *\n\nI don't think we should describe about the previous behavior here.\nRather we need to describe what visibilitymap_mark_truncate does and\nwhat it returns to the caller.\n\nI'm not sure that visibilitymap_mark_truncate function name is\nappropriate here since it actually truncate map bits, not only\nmarking. Perhaps we can still use visibilitymap_truncate or change to\nvisibilitymap_truncate_prepare, or something? 
Anyway, this function\ntruncates only tail bits in the last remaining map page and we can\nhave a rule that the caller must call smgrtruncate() later to actually\ntruncate pages.\n\nThe comment of second paragraph is now out of date since this function\nno longer sends smgr invalidation message.\n\nIs it worth to leave the current visibilitymap_truncate function as a\nshortcut function, instead of replacing? That way we don't need to\nchange pg_truncate_visibility_map function.\n\nThe same comments are true for MarkFreeSpaceMapTruncateRel.\n\n---\n+ ForkNumber forks[MAX_FORKNUM];\n+ BlockNumber blocks[MAX_FORKNUM];\n+ BlockNumber new_nfsmblocks = InvalidBlockNumber; /* FSM blocks */\n+ BlockNumber newnblocks = InvalidBlockNumber; /* VM blocks */\n+ int nforks = 0;\n\nI think that we can have new_nfsmblocks and new_nvmblocks and wipe out\nthe comments.\n\n---\n- /* Truncate the FSM first if it exists */\n+ /*\n+ * We used to truncate FSM and VM forks here. Now we only mark the\n+ * dirty buffers of all forks about-to-be-truncated if they exist.\n+ */\n+\n\nAgain, I think we need the description of current behavior rather than\nthe history, except the case where the history is important.\n\n---\n- /*\n- * Make an XLOG entry reporting the file truncation.\n- */\n+ /* Make an XLOG entry reporting the file truncation */\n\nUnnecessary change.\n\n---\n+ /*\n+ * We might as well update the local smgr_fsm_nblocks and\nsmgr_vm_nblocks\n+ * setting. smgrtruncate sent an smgr cache inval message,\nwhich will cause\n+ * other backends to invalidate their copy of smgr_fsm_nblocks and\n+ * smgr_vm_nblocks, and this one too at the next command\nboundary. But this\n+ * ensures it isn't outright wrong until then.\n+ */\n+ if (rel->rd_smgr)\n+ {\n+ rel->rd_smgr->smgr_fsm_nblocks = new_nfsmblocks;\n+ rel->rd_smgr->smgr_vm_nblocks = newnblocks;\n+ }\n\nnew_nfsmblocks and newnblocks could be InvalidBlockNumber when the\nforks are already enough small. 
So the above code sets incorrect\nvalues to smgr_{fsm,vm}_nblocks.\n\nAlso, I wonder if we can do the above code in smgrtruncate. Otherwise\nwe always need to set smgr_{fsm,vm}_nblocks after smgrtruncate, which\nis inconvenient.\n\n---\n+ /* Update the local smgr_fsm_nblocks and\nsmgr_vm_nblocks setting */\n+ if (rel->rd_smgr)\n+ {\n+ rel->rd_smgr->smgr_fsm_nblocks = new_nfsmblocks;\n+ rel->rd_smgr->smgr_vm_nblocks = newnblocks;\n+ }\n\nThe same as above. And we need to set smgr_{fsm,vm}_nblocks in spite\nof freeing the fake relcache soon?\n\n---\n+ /* Get the lower bound of target block number we're interested in */\n+ for (i = 0; i < nforks; i++)\n+ {\n+ if (!BlockNumberIsValid(minBlock) ||\n+ minBlock > firstDelBlock[i])\n+ minBlock = firstDelBlock[i];\n+ }\n\nMaybe we can declare 'i' in the for statement (i.e. for (int i = 0;\n...)) at every outer loop in this function. And in the inner loop we\ncan use 'j'.\n\n---\n-DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber forkNum,\n- BlockNumber firstDelBlock)\n+DropRelFileNodeBuffers(RelFileNodeBackend rnode, ForkNumber *forkNum,\n+ BlockNumber *firstDelBlock,\nint nforks)\n\nI think it's better to declare *forkNum and nforks side by side for\nreadability. That is, we can have it as follows.\n\nDropRelFileNodeBuffers (RelFileNodeBackend rnode, ForkNumber *forkNum,\nint nforks, BlockNumber *firstDelBlock)\n\n\n---\n-smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo)\n+smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum, bool isRedo,\nint nforks)\n\nSame as above. The order of reln, *forknum, nforks, isRedo would be better.\n\n---\n@@ -383,6 +383,10 @@ pg_truncate_visibility_map(PG_FUNCTION_ARGS)\n {\n Oid relid = PG_GETARG_OID(0);\n Relation rel;\n+ ForkNumber forks[MAX_FORKNUM];\n+ BlockNumber blocks[MAX_FORKNUM];\n+ BlockNumber newnblocks = InvalidBlockNumber;\n+ int nforks = 0;\n\nWhy do we need the array of forks and blocks here? 
I think it's enough\nto have one fork and one block number.\n\n---\nThe comment of smgrdounlinkfork function needs to be updated. We now\ncan remove multiple forks.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 2 Jul 2019 16:58:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On 7/1/19 12:55 PM, Jamison, Kirk wrote:\n> On Wednesday, June 26, 2019 6:10 PM(GMT+9), Adrien Nayrat wrote:\n>> As far as I remember, you should see \"relation\" wait events (type lock) on\n>> standby server. This is due to startup process acquiring AccessExclusiveLock\n>> for the truncation and other backend waiting to acquire a lock to read the\n>> table.\n> \n> Hi Adrien, thank you for taking time to reply.\n> \n> I understand that RelationTruncate() can block read-only queries on\n> standby during redo. However, it's difficult for me to reproduce the \n> test case where I need to catch that wait for relation lock, because\n> one has to execute SELECT within the few milliseconds of redoing the\n> truncation of one table.\n\nYes, that why your test by measuring vacuum execution time is better as it is\nmore reproductible.\n\n> \n> Instead, I just measured the whole recovery time, smgr_redo(),\n> to show the recovery improvement compared to head. Please refer below.\n> \n> [Recovery Test]\n> I used the same stored functions and configurations in the previous email\n> & created \"test\" db.\n> \n> $ createdb test\n> $ psql -d test\n> \n> 1. [Primary] Create 10,000 relations.\n> \ttest=# SELECT create_tables(10000);\n> \n> 2. [P] Insert one row in each table.\n> \ttest=# SELECT insert_tables(10000);\n> \n> 3. [P] Delete row of each table.\n> \ttest=# SELECT delfrom_tables(10000);\n> \n> 4. [Standby] WAL application is stopped at Standby server.\n> \ttest=# SELECT pg_wal_replay_pause();\n> \n> 5. [P] VACUUM is executed at Primary side, and measure its execution time.\t\t\n> \ttest=# \\timing on\n> \ttest=# VACUUM;\n> \n> \tAlternatively, you may use:\n> \t$ time psql -d test -c 'VACUUM;'\n> \t(Note: WAL has not replayed on standby because it's been paused.)\n> \n> 6. [P] Wait until VACUUM has finished execution. Then, stop primary server. \n> \ttest=# pg_ctl stop -w\n> \n> 7. 
[S] Resume WAL replay, then promote standby (failover).\n> I used a shell script to execute recovery & promote standby server\n> because it's kinda difficult to measure recovery time. Please refer to the script below.\n> - \"SELECT pg_wal_replay_resume();\" is executed and the WAL application is resumed.\n> - \"pg_ctl promote\" to promote standby.\n> - The time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\" is measured.\n> \n> shell script:\n> \n> PGDT=/path_to_storage_directory/\n> \n> if [ \"$1\" = \"resume\" ]; then\n> \tpsql -c \"SELECT pg_wal_replay_resume();\" test\n> \tdate +%Y/%m/%d_%H:%M:%S.%3N\n> \tpg_ctl promote -D ${PGDT}\n> \tset +x\n> \tdate +%Y/%m/%d_%H:%M:%S.%3N\n> \twhile [ 1 ]\n> \tdo\n> \t\tRS=`psql -Atc \"select pg_is_in_recovery();\" test`\t\t\n> \t\tif [ ${RS} = \"f\" ]; then\n> \t\t\tbreak\n> \t\tfi\n> \tdone\n> \tdate +%Y/%m/%d_%H:%M:%S.%3N\n> \tset -x\n> \texit 0\n> fi\n> \n> \n> [Test Results]\n> shared_buffers = 24GB\n> \n> 1. HEAD\n> (wal replay resumed)\n> 2019/07/01_08:48:50.326\n> server promoted\n> 2019/07/01_08:49:50.482\n> 2019/07/01_09:02:41.051\n> \n> Recovery Time:\n> 13 min 50.725 s -> Time difference from WAL replay to complete recovery\n> 12 min 50.569 s -> Time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\"\n> \n> 2. 
PATCH\n> (wal replay resumed)\n> 2019/07/01_07:34:26.766\n> server promoted\n> 2019/07/01_07:34:57.790\n> 2019/07/01_07:34:57.809\n> \n> Recovery Time:\t\n> 31.043 s -> Time difference from WAL replay to complete recovery\n> 00.019 s -> Time difference of \"select pg_is_in_recovery();\" from \"t\" to \"f\"\n> \n> [Conclusion]\n> The recovery time significantly improved compared to head\n> from 13 minutes to 30 seconds.\n> \n> Any thoughts?\n> I'd really appreciate your comments/feedback about the patch and/or test.\n> \n> \n\nThanks for the time you spent on this test, it is a huge win!\nAlthough creating 10k tables and deleting tuples is not a common use case, it is\nstill good to know how your patch performs.\nI will try to look deeper into your patch, but my knowledge of postgres\ninternals is limited :)\n\n-- \nAdrien",
"msg_date": "Wed, 3 Jul 2019 11:39:36 +0200",
"msg_from": "Adrien Nayrat <adrien.nayrat@anayrat.info>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tuesday, July 2, 2019 4:59 PM (GMT+9), Masahiko Sawada wrote:\r\n> Thank you for updating the patch. Here is the review comments for v2 patch.\r\n\r\nThank you so much for review!\r\nI indicated the addressed parts below and attached the updated patch.\r\n\r\n---\r\nvisibilitymap.c: visibilitymap_truncate()\r\n\r\n> I don't think we should describe about the previous behavior here.\r\n> Rather we need to describe what visibilitymap_mark_truncate does and what\r\n> it returns to the caller.\r\n>\r\n> I'm not sure that visibilitymap_mark_truncate function name is appropriate\r\n> here since it actually truncate map bits, not only marking. Perhaps we can\r\n> still use visibilitymap_truncate or change to\r\n> visibilitymap_truncate_prepare, or something? Anyway, this function\r\n> truncates only tail bits in the last remaining map page and we can have a\r\n> rule that the caller must call smgrtruncate() later to actually truncate\r\n> pages.\r\n> \r\n> The comment of second paragraph is now out of date since this function no\r\n> longer sends smgr invalidation message.\r\n\r\n(1) I updated function name to \"visibilitymap_truncate_prepare()\" as suggested.\r\nI think that parameter name fits, unless other reviewers suggest a better name.\r\nI also updated its description more accurately: describing current behavior,\r\ncaller must eventually call smgrtruncate() to actually truncate vm pages,\r\nand removed the outdated description.\r\n\r\n\r\n> Is it worth to leave the current visibilitymap_truncate function as a shortcut\r\n> function, instead of replacing? That way we don't need to change\r\n> pg_truncate_visibility_map function.\r\n\r\n(2) Yeah, it's kinda displeasing that I had to add lines in pg_truncate_visibility_map.\r\nBy any chance, re: shortcut function, you meant to retain the function\r\nvisibilitymap_truncate() and just add another visibilitymap_truncate_prepare(),\r\nisn't it? 
I'm not sure if it's worth the additional lines of adding\r\nanother function in visibilitymap.c, that's why I just updated the function itself\r\nwhich just adds 10 lines to pg_truncate_visibility_map anyway.\r\nHmm. If it's not wrong to do it this way, then I will retain this change.\r\nOTOH, if pg_visibility.c *must* not be modified, then I'll follow your advice.\r\n\r\n\r\n----\r\npg_visibility.c: pg_truncate_visibility_map()\r\n\r\n> @@ -383,6 +383,10 @@ pg_truncate_visibility_map(PG_FUNCTION_ARGS)\r\n> {\r\n> Oid relid = PG_GETARG_OID(0);\r\n> Relation rel;\r\n> + ForkNumber forks[MAX_FORKNUM];\r\n> + BlockNumber blocks[MAX_FORKNUM];\r\n> + BlockNumber newnblocks = InvalidBlockNumber;\r\n> + int nforks = 0;\r\n> \r\n> Why do we need the array of forks and blocks here? I think it's enough to\r\n> have one fork and one block number.\r\n\r\n(3) Thanks for the catch. Updated.\r\n\r\n\r\n----\r\nfreespace.c: MarkFreeSpaceMapTruncateRel()\r\n\r\n> The same comments are true for MarkFreeSpaceMapTruncateRel.\r\n\r\n> + BlockNumber new_nfsmblocks = InvalidBlockNumber; /* FSM\r\n> blocks */\r\n> + BlockNumber newnblocks = InvalidBlockNumber; /* VM\r\n> blocks */\r\n> + int nforks = 0;\r\n> \r\n> I think that we can have new_nfsmblocks and new_nvmblocks and wipe out the\r\n> comments.\r\n\r\n(4) I updated the description accordingly, describing only the current behavior.\r\nThe caller must eventually call smgrtruncate() to actually truncate fsm pages.\r\nI also removed the outdated description and irrelevant comments.\r\n\r\n\r\n------\r\nstorage.c: RelationTruncate()\r\n\r\n> + * We might as well update the local smgr_fsm_nblocks and\r\n> smgr_vm_nblocks\r\n> + * setting. smgrtruncate sent an smgr cache inval message,\r\n> which will cause\r\n> + * other backends to invalidate their copy of smgr_fsm_nblocks and\r\n> + * smgr_vm_nblocks, and this one too at the next command\r\n> boundary. 
But this\r\n> + * ensures it isn't outright wrong until then.\r\n> + */\r\n> + if (rel->rd_smgr)\r\n> + {\r\n> + rel->rd_smgr->smgr_fsm_nblocks = new_nfsmblocks;\r\n> + rel->rd_smgr->smgr_vm_nblocks = newnblocks;\r\n> + }\r\n> \r\n> new_nfsmblocks and newnblocks could be InvalidBlockNumber when the forks are\r\n> already enough small. So the above code sets incorrect values to\r\n> smgr_{fsm,vm}_nblocks.\r\n> \r\n> Also, I wonder if we can do the above code in smgrtruncate. Otherwise we always\r\n> need to set smgr_{fsm,vm}_nblocks after smgrtruncate, which is inconvenient.\r\n\r\n(5) \r\nIn my patch, did you mean that there's a possibility that these values\r\nwere already set to InvalidBlockNumber even before I did the setting, is it? \r\nI'm not sure if IIUC, the point of the above code is to make sure that\r\nsmgr_{fsm,vm}_nblocks are not InvalidBlockNumber until the next command\r\nboundary, and while we don't reach that boundary yet, we make sure\r\nthese values are valid within that window. Is my understanding correct?\r\nMaybe following your advice that putting it inside the smgrtruncate loop\r\nwill make these values correct.\r\nFor example, below?\r\n\r\nvoid smgrtruncate\r\n{\r\n\t...\r\n\tCacheInvalidateSmgr(reln->smgr_rnode);\r\n\r\n\t/* Do the truncation */\r\n\tfor (i = 0; i < nforks; i++)\r\n\t{\r\n\t\tsmgrsw[reln->smgr_which].smgr_truncate(reln, forknum[i], nblocks[i]);\r\n\r\n\t\tif (forknum[i] == FSM_FORKNUM)\r\n\t\t\treln->smgr_fsm_nblocks = nblocks[i];\r\n\t\tif (forknum[i] == VISIBILITYMAP_FORKNUM)\r\n\t\t\treln->smgr_vm_nblocks = nblocks[i];\r\n\t}\r\n\r\nAnother problem I have is where I should call FreeSpaceMapVacuumRange to \r\naccount for truncation of fsm pages. 
I also realized the upper bound\r\nnew_nfsmblocks might be incorrect in this case.\r\nThis is the cause why regression test fails in my updated patch...\r\n+\t * Update upper-level FSM pages to account for the truncation.\r\n+\t * This is important because the just-truncated pages were likely\r\n+\t * marked as all-free, and would be preferentially selected.\r\n+\t */\r\n+\tFreeSpaceMapVacuumRange(rel->rd_smgr, new_nfsmblocks, InvalidBlockNumber);\r\n\r\n\r\n-----------\r\nstorage.c: smgr_redo()\r\n\r\n> + /* Update the local smgr_fsm_nblocks and\r\n> smgr_vm_nblocks setting */\r\n> + if (rel->rd_smgr)\r\n> + {\r\n> + rel->rd_smgr->smgr_fsm_nblocks = new_nfsmblocks;\r\n> + rel->rd_smgr->smgr_vm_nblocks = newnblocks;\r\n> + }\r\n> \r\n> The save as above. And we need to set smgr_{fsm,vm}_nblocks in spite of freeing\r\n> the fake relcache soon?\r\n\r\n(6) You're right. It's unnecessary in this case.\r\nIf I also put the smgr_{fsm,vm}_nblocks setting inside the smgrtruncate\r\nas you suggested above, it will still be set after truncation? Hmm.\r\nPerhaps it's ok, because in the current source code it also does the setting\r\nwhenever we call visibilitymap_truncate, FreeSpaceMapTruncateRel during redo.\r\n\r\n\r\n-----------\r\nbufmgr.c: DropRelFileNodeBuffers()\r\n\r\n> + /* Get the lower bound of target block number we're interested in\r\n> */\r\n> + for (i = 0; i < nforks; i++)\r\n> + {\r\n> + if (!BlockNumberIsValid(minBlock) ||\r\n> + minBlock > firstDelBlock[i])\r\n> + minBlock = firstDelBlock[i];\r\n> + }\r\n> \r\n> Maybe we can declare 'i' in the for statement (i.e. for (int i = 0;\r\n> ...)) at every outer loops in this functions. And in the inner loop we can\r\n> use 'j'.\r\n\r\n(7) Agree. 
Updated.\r\n\r\n> I think it's better to declare *forkNum and nforks side by side for readability.\r\n> That is, we can have it as follows.\r\n> \r\n> DropRelFileNodeBuffers (RelFileNodeBackend rnode, ForkNumber *forkNum, int\r\n> nforks, BlockNumber *firstDelBlock)\r\n\r\n(8) Agree. I updated DropRelFileNodeBuffers, smgrtruncate and \r\nsmgrdounlinkfork accordingly.\r\n\r\n---------\r\nsmgr.c: smgrdounlinkfork()\r\n\r\n> -smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo)\r\n> +smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum, bool isRedo,\r\n> int nforks)\r\n> \r\n> Same as above. The order of reln, *forknum, nforks, isRedo would be better.\r\n> \r\n> The comment of smgrdounlinkfork function needs to be updated. We now can\r\n> remove multiple forks.\r\n\r\n(9) Agree. Updated accordingly.\r\n\r\n\r\nI updated the patch based from comments,\r\nbut it still fails the regression test as indicated in (5) above.\r\nKindly verify if I correctly addressed the other parts as what you intended.\r\n\r\nThanks again for the review!\r\nI'll update the patch again after further comments.\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Thu, 4 Jul 2019 11:35:31 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "Hi,\r\n\r\n> I updated the patch based from comments, but it still fails the regression\r\n> test as indicated in (5) above.\r\n> Kindly verify if I correctly addressed the other parts as what you intended.\r\n> \r\n> Thanks again for the review!\r\n> I'll update the patch again after further comments.\r\n\r\nI updated the patch which is similar to V3 of the patch,\r\nbut addressing my problem in (5) in the previous email regarding FreeSpaceMapVacuumRange.\r\nIt seems to pass the regression test now. Kindly check for validation.\r\nThank you!\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Fri, 5 Jul 2019 03:03:25 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Fri, Jul 5, 2019 at 3:03 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n> I updated the patch which is similar to V3 of the patch,\n> but addressing my problem in (5) in the previous email regarding FreeSpaceMapVacuumRange.\n> It seems to pass the regression test now. Kindly check for validation.\n\nHi Kirk,\n\nFYI there are a couple of compiler errors reported:\n\nWindows compiler:\n\ncontrib/pg_visibility/pg_visibility.c(400): error C2143: syntax error\n: missing ')' before '{'\n[C:\\projects\\postgresql\\pg_visibility.vcxproj]\n\nGCC:\n\nstorage.c: In function ‘RelationTruncate’:\nstorage.c:238:14: error: variable ‘newnblocks’ set but not used\n[-Werror=unused-but-set-variable]\n BlockNumber newnblocks = InvalidBlockNumber;\n ^\nstorage.c:237:14: error: variable ‘new_nfsmblocks’ set but not used\n[-Werror=unused-but-set-variable]\n BlockNumber new_nfsmblocks = InvalidBlockNumber;\n ^\nstorage.c: In function ‘smgr_redo’:\nstorage.c:634:15: error: variable ‘newnblocks’ set but not used\n[-Werror=unused-but-set-variable]\n BlockNumber newnblocks = InvalidBlockNumber;\n ^\nstorage.c:633:15: error: variable ‘new_nfsmblocks’ set but not used\n[-Werror=unused-but-set-variable]\n BlockNumber new_nfsmblocks = InvalidBlockNumber;\n ^\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 00:17:41 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "Hi Thomas,\r\n\r\nThanks for checking.\r\n\r\n> On Fri, Jul 5, 2019 at 3:03 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\r\n> > I updated the patch which is similar to V3 of the patch, but\r\n> > addressing my problem in (5) in the previous email regarding\r\n> FreeSpaceMapVacuumRange.\r\n> > It seems to pass the regression test now. Kindly check for validation.\r\n> \r\n> Hi Kirk,\r\n> \r\n> FYI there are a couple of compiler errors reported:\r\n\r\nAttached is the updated patch (V5) fixing the compiler errors.\r\n\r\nComments and reviews about the patch/tests are very much welcome.\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Tue, 9 Jul 2019 02:12:18 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "Hi,\r\n\r\nI repeated the earlier recovery performance test and found that I had made a\r\nwrong measurement.\r\nUsing the same steps indicated in the previous email (24GB shared_buffers for my case),\r\nthe recovery time still significantly improved compared to head\r\nfrom \"13 minutes\" to \"4 minutes 44 seconds\" //not 30 seconds.\r\nIt's expected because the measurement of vacuum execution time (no failover)\r\nwhich I reported in the first email is about the same (although 5 minutes):\r\n> HEAD results\r\n> 3) 24GB shared_buffers = 14 min 13.598 s\r\n> PATCH results\r\n> 3) 24GB shared_buffers = 5 min 35.848 s\r\n\r\n\r\nReattaching the patch here again. The V5 of the patch fixed the compile error\r\nmentioned before and mainly addressed the comments/advice of Sawada-san.\r\n- updated more accurate comments describing only current behavior, not history\r\n- updated function name: visibilitymap_truncate_prepare()\r\n- moved the setting of values for smgr_{fsm,vm}_nblocks inside the smgrtruncate()\r\n\r\nI'd be grateful if anyone could provide comments, advice, or insights.\r\nThank you again in advance.\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Wed, 24 Jul 2019 00:58:24 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 9:58 AM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> Hi,\n>\n> I repeated the recovery performance test before, and found out that I made a\n> wrong measurement.\n> Using the same steps indicated in the previous email (24GB shared_buffers for my case),\n> the recovery time still significantly improved compared to head\n> from \"13 minutes\" to \"4 minutes 44 seconds\" //not 30 seconds.\n> It's expected because the measurement of vacuum execution time (no failover)\n> which I reported in the first email is about the same (although 5 minutes):\n> > HEAD results\n> > 3) 24GB shared_buffers = 14 min 13.598 s\n> > PATCH results\n> > 3) 24GB shared_buffers = 5 min 35.848 s\n>\n>\n> Reattaching the patch here again. The V5 of the patch fixed the compile error\n> mentioned before and mainly addressed the comments/advice of Sawada-san.\n> - updated more accurate comments describing only current behavior, not history\n> - updated function name: visibilitymap_truncate_prepare()\n> - moved the setting of values for smgr_{fsm,vm}_nblocks inside the smgrtruncate()\n>\n> I'd be grateful if anyone could provide comments, advice, or insights.\n> Thank you again in advance.\n\nThanks for the patch!\n\n-smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo)\n+smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum, int nforks,\nbool isRedo)\n\nsmgrdounlinkfork() is dead code. Per the discussion [1], this unused\nfunction was left intentionally. But it's still dead code since 2012,\nso I'd like to remove it. Or, even if we decide to keep that function\nfor some reasons, I don't think that we need to update that so that\nit can unlink multiple forks at once. 
So, what about keeping\nsmgrdounlinkfork() as it is?\n\n[1]\nhttps://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us\n\n+ for (int i = 0; i < nforks; i++)\n\nThe variable \"i\" should not be declared in for loop\nper PostgreSQL coding style.\n\n+ /* Check with the lower bound block number and skip the loop */\n+ if (bufHdr->tag.blockNum < minBlock)\n+ continue; /* skip checking the buffer pool scan */\n\nBecause of the above code, the following source comment in bufmgr.c\nshould be updated.\n\n* We could check forkNum and blockNum as well as the rnode, but the\n* incremental win from doing so seems small.\n\nAnd, first of all, is this check really useful for performance?\nSince firstDelBlock for FSM fork is usually small,\nminBlock would also be small. So I'm not sure how much\nthis is helpful for performance.\n\nWhen relation is completely truncated at all (i.e., the number of block\nto delete first is zero), can RelationTruncate() and smgr_redo() just\ncall smgrdounlinkall() like smgrDoPendingDeletes() does, instead of\ncalling MarkFreeSpaceMapTruncateRel(), visibilitymap_truncate_prepare()\nand smgrtruncate()? ISTM that smgrdounlinkall() is faster and simpler.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 3 Sep 2019 21:44:25 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tuesday, September 3, 2019 9:44 PM (GMT+9), Fujii Masao wrote:\r\n> Thanks for the patch!\r\n\r\nThank you as well for the review!\r\n\r\n> -smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo)\r\n> +smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum, int nforks,\r\n> bool isRedo)\r\n> \r\n> smgrdounlinkfork() is dead code. Per the discussion [1], this unused function\r\n> was left intentionally. But it's still dead code since 2012, so I'd like to\r\n> remove it. Or, even if we decide to keep that function for some reasons, I\r\n> don't think that we need to update that so that it can unlink multiple forks\r\n> at once. So, what about keeping\r\n> smgrdounlinkfork() as it is?\r\n> \r\n> [1]\r\n> https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us \r\n\r\nI also mentioned it from my first post if we can just remove this dead code.\r\nIf not, it would require to modify the function because it would also\r\nneed nforks as input argument when calling DropRelFileNodeBuffers. I kept my\r\nchanges in the latest patch.\r\nSo should I remove the function now or keep my changes?\r\n\r\n\r\n> + for (int i = 0; i < nforks; i++)\r\n> \r\n> The variable \"i\" should not be declared in for loop per PostgreSQL coding\r\n> style.\r\n\r\nFixed.\r\n\r\n\r\n> + /* Check with the lower bound block number and skip the loop */ if\r\n> + (bufHdr->tag.blockNum < minBlock) continue; /* skip checking the\r\n> + buffer pool scan */\r\n> \r\n> Because of the above code, the following source comment in bufmgr.c should\r\n> be updated.\r\n> \r\n> * We could check forkNum and blockNum as well as the rnode, but the\r\n> * incremental win from doing so seems small.\r\n> \r\n> And, first of all, is this check really useful for performance?\r\n> Since firstDelBlock for FSM fork is usually small, minBlock would also be\r\n> small. 
So I'm not sure how much this is helpful for performance.\r\n\r\nThis was a suggestion from Sawada-san in the previous email,\r\nbut he also thought that the performance benefit might be small,\r\nso I just removed the related code block in this patch.\r\n\r\n\r\n> When relation is completely truncated at all (i.e., the number of block to\r\n> delete first is zero), can RelationTruncate() and smgr_redo() just call\r\n> smgrdounlinkall() like smgrDoPendingDeletes() does, instead of calling\r\n> MarkFreeSpaceMapTruncateRel(), visibilitymap_truncate_prepare() and\r\n> smgrtruncate()? ISTM that smgrdounlinkall() is faster and simpler.\r\n\r\nI haven't applied this in my patch yet.\r\nIf my understanding is correct, smgrdounlinkall() is used for deleting\r\nrelation forks. However, we only truncate (not delete) relations\r\nin RelationTruncate() and smgr_redo(). I'm not sure if it's correct to\r\nuse it here. Could you expound more on your idea of using smgrdounlinkall?\r\n\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Thu, 5 Sep 2019 08:53:03 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On 2019-Sep-05, Jamison, Kirk wrote:\n\n> I also mentioned it from my first post if we can just remove this dead code.\n> If not, it would require to modify the function because it would also\n> need nforks as input argument when calling DropRelFileNodeBuffers. I kept my\n> changes in the latest patch.\n> So should I remove the function now or keep my changes?\n\nPlease add a preliminary patch that removes the function. Dead code is\ngood, as long as it is gone. We can get it pushed ahead of the rest of\nthis.\n\nWhat does it mean to \"mark\" a dirty page in FSM? We don't have the\nconcept of marking pages as far as I know (and I don't see that the\npatch does any sort of mark). Do you mean to find where it is and\nreturn its block number? If so, I wonder how this handles concurrent\ntable extension: are we keeping some sort of lock that prevents it?\n(... or would we lose any newly added pages that receive tuples while\nthis truncation is ongoing?)\n\nI think the new API of smgrtruncate() is fairly confusing. Would it be\nbetter to define a new struct { ForkNum forknum; BlockNumber blkno; }\nand pass an array of those?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 6 Sep 2019 10:51:04 -0400",
"msg_from": "Alvaro Herrera from 2ndQuadrant <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Friday, September 6, 2019 11:51 PM (GMT+9), Alvaro Herrera wrote:\r\n\r\nHi Alvaro,\r\nThank you very much for the review!\r\n\r\n> On 2019-Sep-05, Jamison, Kirk wrote:\r\n> \r\n> > I also mentioned it from my first post if we can just remove this dead code.\r\n> > If not, it would require to modify the function because it would also\r\n> > need nforks as input argument when calling DropRelFileNodeBuffers. I\r\n> > kept my changes in the latest patch.\r\n> > So should I remove the function now or keep my changes?\r\n> \r\n> Please add a preliminary patch that removes the function. Dead code is good,\r\n> as long as it is gone. We can get it pushed ahead of the rest of this.\r\n\r\nAlright. I've attached a separate patch removing the smgrdounlinkfork.\r\n\r\n\r\n> What does it mean to \"mark\" a dirty page in FSM? We don't have the concept\r\n> of marking pages as far as I know (and I don't see that the patch does any\r\n> sort of mark). Do you mean to find where it is and return its block number?\r\n\r\nYes. \"Mark\" is probably not a proper way to describe it, so I temporarily\r\nchanged it to \"locate\" and renamed the function to FreeSpaceMapLocateBlock().\r\nIf anyone could suggest a more appropriate name, that'd be appreciated.\r\n\r\n\r\n> If so, I wonder how this handles concurrent table extension: are we keeping\r\n> some sort of lock that prevents it?\r\n> (... 
or would we lose any newly added pages that receive tuples while this\r\n> truncation is ongoing?)\r\n\r\nI moved the the description about acquiring AccessExclusiveLock\r\nfrom FreeSpaceMapLocateBlock() and visibilitymap_truncate_prepare() to the\r\nsmgrtruncate description instead.\r\nIIUC, in lazy_truncate_heap() we still acquire AccessExclusiveLock for the relation\r\nbefore calling RelationTruncate(), which then calls smgrtruncate().\r\nWhile holding the exclusive lock, the following are also called to check\r\nif rel has not extended and verify that end pages contain no tuples while\r\nwe were vacuuming (with non-exclusive lock).\r\n new_rel_pages = RelationGetNumberOfBlocks(onerel);\r\n new_rel_pages = count_nondeletable_pages(onerel, vacrelstats);\r\nI assume that the above would update the correct number of pages.\r\nWe then release the exclusive lock as soon as we have truncated the pages.\r\n\r\n\r\n> I think the new API of smgrtruncate() is fairly confusing. Would it be better\r\n> to define a new struct { ForkNum forknum; BlockNumber blkno; } and pass an\r\n> array of those?\r\n\r\nThis is for readbility, right? However, I think there's no need to define a\r\nnew structure for it, so I kept my changes.\r\n smgrtruncate(SMgrRelation reln, ForkNumber *forknum, int nforks, BlockNumber *nblocks).\r\nI also declared *forkNum and nforks next to each other as suggested by Sawada-san.\r\n\r\n\r\nWhat do you think about these changes?\r\n\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Mon, 9 Sep 2019 06:52:03 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Friday, September 6, 2019 11:51 PM (GMT+9), Alvaro Herrera wrote:\n>\n> Hi Alvaro,\n> Thank you very much for the review!\n>\n> > On 2019-Sep-05, Jamison, Kirk wrote:\n> >\n> > > I also mentioned it from my first post if we can just remove this dead code.\n> > > If not, it would require to modify the function because it would also\n> > > need nforks as input argument when calling DropRelFileNodeBuffers. I\n> > > kept my changes in the latest patch.\n> > > So should I remove the function now or keep my changes?\n> >\n> > Please add a preliminary patch that removes the function. Dead code is good,\n> > as long as it is gone. We can get it pushed ahead of the rest of this.\n>\n> Alright. I've attached a separate patch removing the smgrdounlinkfork.\n\nPer the past discussion, some people want to keep this \"dead\" function\nfor some reasons. So, in my opinion, it's better to just enclose the function\nwith #if NOT_USED and #endif, to keep the function itself as it is, and then\nto start new discussion on hackers about the removal of that separatedly\nfrom this patch.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 13 Sep 2019 20:35:55 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 5:53 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Tuesday, September 3, 2019 9:44 PM (GMT+9), Fujii Masao wrote:\n> > Thanks for the patch!\n>\n> Thank you as well for the review!\n>\n> > -smgrdounlinkfork(SMgrRelation reln, ForkNumber forknum, bool isRedo)\n> > +smgrdounlinkfork(SMgrRelation reln, ForkNumber *forknum, int nforks,\n> > bool isRedo)\n> >\n> > smgrdounlinkfork() is dead code. Per the discussion [1], this unused function\n> > was left intentionally. But it's still dead code since 2012, so I'd like to\n> > remove it. Or, even if we decide to keep that function for some reasons, I\n> > don't think that we need to update that so that it can unlink multiple forks\n> > at once. So, what about keeping\n> > smgrdounlinkfork() as it is?\n> >\n> > [1]\n> > https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us\n>\n> I also mentioned it from my first post if we can just remove this dead code.\n> If not, it would require to modify the function because it would also\n> need nforks as input argument when calling DropRelFileNodeBuffers. I kept my\n> changes in the latest patch.\n> So should I remove the function now or keep my changes?\n>\n>\n> > + for (int i = 0; i < nforks; i++)\n> >\n> > The variable \"i\" should not be declared in for loop per PostgreSQL coding\n> > style.\n>\n> Fixed.\n>\n>\n> > + /* Check with the lower bound block number and skip the loop */ if\n> > + (bufHdr->tag.blockNum < minBlock) continue; /* skip checking the\n> > + buffer pool scan */\n> >\n> > Because of the above code, the following source comment in bufmgr.c should\n> > be updated.\n> >\n> > * We could check forkNum and blockNum as well as the rnode, but the\n> > * incremental win from doing so seems small.\n> >\n> > And, first of all, is this check really useful for performance?\n> > Since firstDelBlock for FSM fork is usually small, minBlock would also be\n> > small. 
So I'm not sure how much this is helpful for performance.\n>\n> This was a suggestion from Sawada-san in the previous email,\n> but he also thought that the performance benefit might be small..\n> so I just removed the related code block in this patch.\n>\n>\n> > When relation is completely truncated at all (i.e., the number of block to\n> > delete first is zero), can RelationTruncate() and smgr_redo() just call\n> > smgrdounlinkall() like smgrDoPendingDeletes() does, instead of calling\n> > MarkFreeSpaceMapTruncateRel(), visibilitymap_truncate_prepare() and\n> > smgrtruncate()? ISTM that smgrdounlinkall() is faster and simpler.\n>\n> I haven't applied this in my patch yet.\n> If my understanding is correct, smgrdounlinkall() is used for deleting\n> relation forks. However, we only truncate (not delete) relations\n> in RelationTruncate() and smgr_redo(). I'm not sure if it's correct to\n> use it here. Could you expound more your idea on using smgrdounlinkall?\n\nMy this comment is pointless, so please ignore it. Sorry for noise..\n\nHere are other comments for the latest patch:\n\n+ block = visibilitymap_truncate_prepare(rel, 0);\n+ if (BlockNumberIsValid(block))\n+ fork = VISIBILITYMAP_FORKNUM;\n+\n+ smgrtruncate(rel->rd_smgr, &fork, 1, &block);\n\nIf visibilitymap_truncate_prepare() returns InvalidBlockNumber,\nsmgrtruncate() should not be called.\n\n+ FreeSpaceMapVacuumRange(rel, first_removed_nblocks, InvalidBlockNumber);\n\nFreeSpaceMapVacuumRange() should be called only when FSM exists,\nlike the original code does?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 13 Sep 2019 20:38:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On 2019-Sep-13, Fujii Masao wrote:\n\n> On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n\n> > > Please add a preliminary patch that removes the function. Dead code is good,\n> > > as long as it is gone. We can get it pushed ahead of the rest of this.\n> >\n> > Alright. I've attached a separate patch removing the smgrdounlinkfork.\n> \n> Per the past discussion, some people want to keep this \"dead\" function\n> for some reasons. So, in my opinion, it's better to just enclose the function\n> with #if NOT_USED and #endif, to keep the function itself as it is, and then\n> to start new discussion on hackers about the removal of that separatedly\n> from this patch.\n\nI searched for anybody requesting to keep the function. I couldn't find\nanything. Tom said in 2012:\nhttps://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us\n\n> As committed, the smgrdounlinkfork case is actually dead code; it's\n> never called from anywhere. I left it in place just in case we want\n> it someday.\n\nbut if no use has appeared in 7 years, I say it's time to kill it.\n\nIn absence of objections, I'll commit a patch to remove it later today.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Sep 2019 09:51:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Sep-13, Fujii Masao wrote:\n>\n> > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> > > > Please add a preliminary patch that removes the function. Dead code is good,\n> > > > as long as it is gone. We can get it pushed ahead of the rest of this.\n> > >\n> > > Alright. I've attached a separate patch removing the smgrdounlinkfork.\n> >\n> > Per the past discussion, some people want to keep this \"dead\" function\n> > for some reasons. So, in my opinion, it's better to just enclose the function\n> > with #if NOT_USED and #endif, to keep the function itself as it is, and then\n> > to start new discussion on hackers about the removal of that separatedly\n> > from this patch.\n>\n> I searched for anybody requesting to keep the function. I couldn't find\n> anything. Tom said in 2012:\n> https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us\n\nYes. And I found Andres.\nhttps://www.postgresql.org/message-id/20180621174129.hogefyopje4xaznu@alap3.anarazel.de\n\n> > As committed, the smgrdounlinkfork case is actually dead code; it's\n> > never called from anywhere. I left it in place just in case we want\n> > it someday.\n>\n> but if no use has appeared in 7 years, I say it's time to kill it.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 13 Sep 2019 22:05:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\r\n> On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\r\n> wrote:\r\n> >\r\n> > On 2019-Sep-13, Fujii Masao wrote:\r\n> >\r\n> > > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk <k.jamison@jp.fujitsu.com>\r\n> wrote:\r\n> >\r\n> > > > > Please add a preliminary patch that removes the function. Dead\r\n> > > > > code is good, as long as it is gone. We can get it pushed ahead of\r\n> the rest of this.\r\n> > > >\r\n> > > > Alright. I've attached a separate patch removing the smgrdounlinkfork.\r\n> > >\r\n> > > Per the past discussion, some people want to keep this \"dead\"\r\n> > > function for some reasons. So, in my opinion, it's better to just\r\n> > > enclose the function with #if NOT_USED and #endif, to keep the\r\n> > > function itself as it is, and then to start new discussion on\r\n> > > hackers about the removal of that separatedly from this patch.\r\n> >\r\n> > I searched for anybody requesting to keep the function. I couldn't\r\n> > find anything. Tom said in 2012:\r\n> > https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us\r\n> \r\n> Yes. And I found Andres.\r\n> https://www.postgresql.org/message-id/20180621174129.hogefyopje4xaznu@al\r\n> ap3.anarazel.de\r\n> \r\n> > > As committed, the smgrdounlinkfork case is actually dead code; it's\r\n> > > never called from anywhere. 
I left it in place just in case we want\r\n> > > it someday.\r\n> >\r\n> > but if no use has appeared in 7 years, I say it's time to kill it.\r\n> \r\n> +1\r\n\r\nThe consensus is we remove it, right?\r\nRe-attaching the patch that removes the deadcode: smgrdounlinkfork().\r\n\r\n---\r\nI've also fixed Fujii-san's comments below in the latest attached speedup truncate rel patch (v8).\r\n> Here are other comments for the latest patch:\r\n> \r\n> + block = visibilitymap_truncate_prepare(rel, 0); if\r\n> + (BlockNumberIsValid(block)) fork = VISIBILITYMAP_FORKNUM;\r\n> +\r\n> + smgrtruncate(rel->rd_smgr, &fork, 1, &block);\r\n> \r\n> If visibilitymap_truncate_prepare() returns InvalidBlockNumber,\r\n> smgrtruncate() should not be called.\r\n> \r\n> + FreeSpaceMapVacuumRange(rel, first_removed_nblocks,\r\n> + InvalidBlockNumber);\r\n\r\nThank you again for the review!\r\n\r\nRegards,\r\nKirk Jamison",
"msg_date": "Tue, 17 Sep 2019 01:44:12 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 01:44:12AM +0000, Jamison, Kirk wrote:\n> On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\n>> On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n>> wrote:\n>>>> As committed, the smgrdounlinkfork case is actually dead code; it's\n>>>> never called from anywhere. I left it in place just in case we want\n>>>> it someday.\n>>>\n>>> but if no use has appeared in 7 years, I say it's time to kill it.\n>> \n>> +1\n> \n> The consensus is we remove it, right?\n\nYes. Just adding my +1 to nuke the function.\n--\nMichael",
"msg_date": "Tue, 17 Sep 2019 14:25:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 10:44 AM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\n> > On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> > wrote:\n> > >\n> > > On 2019-Sep-13, Fujii Masao wrote:\n> > >\n> > > > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk <k.jamison@jp.fujitsu.com>\n> > wrote:\n> > >\n> > > > > > Please add a preliminary patch that removes the function. Dead\n> > > > > > code is good, as long as it is gone. We can get it pushed ahead of\n> > the rest of this.\n> > > > >\n> > > > > Alright. I've attached a separate patch removing the smgrdounlinkfork.\n> > > >\n> > > > Per the past discussion, some people want to keep this \"dead\"\n> > > > function for some reasons. So, in my opinion, it's better to just\n> > > > enclose the function with #if NOT_USED and #endif, to keep the\n> > > > function itself as it is, and then to start new discussion on\n> > > > hackers about the removal of that separatedly from this patch.\n> > >\n> > > I searched for anybody requesting to keep the function. I couldn't\n> > > find anything. Tom said in 2012:\n> > > https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.us\n> >\n> > Yes. And I found Andres.\n> > https://www.postgresql.org/message-id/20180621174129.hogefyopje4xaznu@al\n> > ap3.anarazel.de\n> >\n> > > > As committed, the smgrdounlinkfork case is actually dead code; it's\n> > > > never called from anywhere. 
I left it in place just in case we want\n> > > > it someday.\n> > >\n> > > but if no use has appeared in 7 years, I say it's time to kill it.\n> >\n> > +1\n>\n> The consensus is we remove it, right?\n> Re-attaching the patch that removes the deadcode: smgrdounlinkfork().\n>\n> ---\n> I've also fixed Fujii-san's comments below in the latest attached speedup truncate rel patch (v8).\n\nThanks for updating the patch!\n\n+ block = visibilitymap_truncate_prepare(rel, 0);\n+ if (BlockNumberIsValid(block))\n {\n- xl_smgr_truncate xlrec;\n+ fork = VISIBILITYMAP_FORKNUM;\n+ smgrtruncate(rel->rd_smgr, &fork, 1, &block);\n+\n+ if (RelationNeedsWAL(rel))\n+ {\n+ xl_smgr_truncate xlrec;\n\nI don't think this fix is right. Originally, WAL is generated\neven in the case where visibilitymap_truncate_prepare() returns\nInvalidBlockNumber. But the patch unexpectedly changed the logic\nso that WAL is not generated in that case.\n\n+ if (fsm)\n+ FreeSpaceMapVacuumRange(rel, first_removed_nblocks,\n+ InvalidBlockNumber);\n\nThis code means that FreeSpaceMapVacuumRange() is called if FSM exists\neven if FreeSpaceMapLocateBlock() returns InvalidBlockNumber.\nThis seems not right. Originally, FreeSpaceMapVacuumRange() is not called\nin the case where InvalidBlockNumber is returned.\n\nSo I updated the patch based on yours and fixed the above issues.\nAttached. Could you review this one? If there is no issue in that,\nI'm thinking to commit that.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 18 Sep 2019 20:37:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tue, Sep 17, 2019 at 2:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 17, 2019 at 01:44:12AM +0000, Jamison, Kirk wrote:\n> > On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\n> >> On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\n> >> wrote:\n> >>>> As committed, the smgrdounlinkfork case is actually dead code; it's\n> >>>> never called from anywhere. I left it in place just in case we want\n> >>>> it someday.\n> >>>\n> >>> but if no use has appeared in 7 years, I say it's time to kill it.\n> >>\n> >> +1\n> >\n> > The consensus is we remove it, right?\n>\n> Yes. Just adding my +1 to nuke the function.\n\nOkay, so committed.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 18 Sep 2019 21:09:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Wednesday, September 18, 2019 8:38 PM, Fujii Masao wrote:\r\n> On Tue, Sep 17, 2019 at 10:44 AM Jamison, Kirk <k.jamison@jp.fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\r\n> > > On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera\r\n> > > <alvherre@2ndquadrant.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > On 2019-Sep-13, Fujii Masao wrote:\r\n> > > >\r\n> > > > > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk\r\n> > > > > <k.jamison@jp.fujitsu.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > > > > Please add a preliminary patch that removes the function.\r\n> > > > > > > Dead code is good, as long as it is gone. We can get it\r\n> > > > > > > pushed ahead of\r\n> > > the rest of this.\r\n> > > > > >\r\n> > > > > > Alright. I've attached a separate patch removing the\r\n> smgrdounlinkfork.\r\n> > > > >\r\n> > > > > Per the past discussion, some people want to keep this \"dead\"\r\n> > > > > function for some reasons. So, in my opinion, it's better to\r\n> > > > > just enclose the function with #if NOT_USED and #endif, to keep\r\n> > > > > the function itself as it is, and then to start new discussion\r\n> > > > > on hackers about the removal of that separatedly from this patch.\r\n> > > >\r\n> > > > I searched for anybody requesting to keep the function. I\r\n> > > > couldn't find anything. Tom said in 2012:\r\n> > > > https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.u\r\n> > > > s\r\n> > >\r\n> > > Yes. And I found Andres.\r\n> > > https://www.postgresql.org/message-id/20180621174129.hogefyopje4xazn\r\n> > > u@al\r\n> > > ap3.anarazel.de\r\n> > >\r\n> > > > > As committed, the smgrdounlinkfork case is actually dead code;\r\n> > > > > it's never called from anywhere. 
I left it in place just in\r\n> > > > > case we want it someday.\r\n> > > >\r\n> > > > but if no use has appeared in 7 years, I say it's time to kill it.\r\n> > >\r\n> > > +1\r\n> >\r\n> > The consensus is we remove it, right?\r\n> > Re-attaching the patch that removes the deadcode: smgrdounlinkfork().\r\n> >\r\n> > ---\r\n> > I've also fixed Fujii-san's comments below in the latest attached speedup\r\n> truncate rel patch (v8).\r\n> \r\n> Thanks for updating the patch!\r\n> \r\n> + block = visibilitymap_truncate_prepare(rel, 0); if\r\n> + (BlockNumberIsValid(block))\r\n> {\r\n> - xl_smgr_truncate xlrec;\r\n> + fork = VISIBILITYMAP_FORKNUM;\r\n> + smgrtruncate(rel->rd_smgr, &fork, 1, &block);\r\n> +\r\n> + if (RelationNeedsWAL(rel))\r\n> + {\r\n> + xl_smgr_truncate xlrec;\r\n> \r\n> I don't think this fix is right. Originally, WAL is generated even in the\r\n> case where visibilitymap_truncate_prepare() returns InvalidBlockNumber. But\r\n> the patch unexpectedly changed the logic so that WAL is not generated in that\r\n> case.\r\n> \r\n> + if (fsm)\r\n> + FreeSpaceMapVacuumRange(rel, first_removed_nblocks,\r\n> + InvalidBlockNumber);\r\n> \r\n> This code means that FreeSpaceMapVacuumRange() is called if FSM exists even\r\n> if FreeSpaceMapLocateBlock() returns InvalidBlockNumber.\r\n> This seems not right. Originally, FreeSpaceMapVacuumRange() is not called\r\n> in the case where InvalidBlockNumber is returned.\r\n> \r\n> So I updated the patch based on yours and fixed the above issues.\r\n> Attached. Could you review this one? If there is no issue in that, I'm thinking\r\n> to commit that.\r\n\r\nOops. Thanks for the catch to correct my fix and revision of some descriptions.\r\nI also noticed you reordered the truncation of forks, by which main fork will be\r\ntruncated first instead of FSM. 
I'm not sure if the order matters now given that\r\nwe're truncating the forks simultaneously, so I'm ok with that change.\r\n\r\nJust one minor comment:\r\n+ * Return the number of blocks of new FSM after it's truncated.\r\n\r\n\"after it's truncated\" is quite confusing. \r\nHow about, \"as a result of previous truncation\" or just end the sentence after new FSM?\r\n\r\n\r\nThank you for committing the other patch as well!\r\n\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Thu, 19 Sep 2019 00:42:09 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Thu, Sep 19, 2019 at 9:42 AM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Wednesday, September 18, 2019 8:38 PM, Fujii Masao wrote:\n> > On Tue, Sep 17, 2019 at 10:44 AM Jamison, Kirk <k.jamison@jp.fujitsu.com>\n> > wrote:\n> > >\n> > > On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\n> > > > On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera\n> > > > <alvherre@2ndquadrant.com>\n> > > > wrote:\n> > > > >\n> > > > > On 2019-Sep-13, Fujii Masao wrote:\n> > > > >\n> > > > > > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk\n> > > > > > <k.jamison@jp.fujitsu.com>\n> > > > wrote:\n> > > > >\n> > > > > > > > Please add a preliminary patch that removes the function.\n> > > > > > > > Dead code is good, as long as it is gone. We can get it\n> > > > > > > > pushed ahead of\n> > > > the rest of this.\n> > > > > > >\n> > > > > > > Alright. I've attached a separate patch removing the\n> > smgrdounlinkfork.\n> > > > > >\n> > > > > > Per the past discussion, some people want to keep this \"dead\"\n> > > > > > function for some reasons. So, in my opinion, it's better to\n> > > > > > just enclose the function with #if NOT_USED and #endif, to keep\n> > > > > > the function itself as it is, and then to start new discussion\n> > > > > > on hackers about the removal of that separatedly from this patch.\n> > > > >\n> > > > > I searched for anybody requesting to keep the function. I\n> > > > > couldn't find anything. Tom said in 2012:\n> > > > > https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.pa.u\n> > > > > s\n> > > >\n> > > > Yes. And I found Andres.\n> > > > https://www.postgresql.org/message-id/20180621174129.hogefyopje4xazn\n> > > > u@al\n> > > > ap3.anarazel.de\n> > > >\n> > > > > > As committed, the smgrdounlinkfork case is actually dead code;\n> > > > > > it's never called from anywhere. 
I left it in place just in\n> > > > > > case we want it someday.\n> > > > >\n> > > > > but if no use has appeared in 7 years, I say it's time to kill it.\n> > > >\n> > > > +1\n> > >\n> > > The consensus is we remove it, right?\n> > > Re-attaching the patch that removes the deadcode: smgrdounlinkfork().\n> > >\n> > > ---\n> > > I've also fixed Fujii-san's comments below in the latest attached speedup\n> > truncate rel patch (v8).\n> >\n> > Thanks for updating the patch!\n> >\n> > + block = visibilitymap_truncate_prepare(rel, 0); if\n> > + (BlockNumberIsValid(block))\n> > {\n> > - xl_smgr_truncate xlrec;\n> > + fork = VISIBILITYMAP_FORKNUM;\n> > + smgrtruncate(rel->rd_smgr, &fork, 1, &block);\n> > +\n> > + if (RelationNeedsWAL(rel))\n> > + {\n> > + xl_smgr_truncate xlrec;\n> >\n> > I don't think this fix is right. Originally, WAL is generated even in the\n> > case where visibilitymap_truncate_prepare() returns InvalidBlockNumber. But\n> > the patch unexpectedly changed the logic so that WAL is not generated in that\n> > case.\n> >\n> > + if (fsm)\n> > + FreeSpaceMapVacuumRange(rel, first_removed_nblocks,\n> > + InvalidBlockNumber);\n> >\n> > This code means that FreeSpaceMapVacuumRange() is called if FSM exists even\n> > if FreeSpaceMapLocateBlock() returns InvalidBlockNumber.\n> > This seems not right. Originally, FreeSpaceMapVacuumRange() is not called\n> > in the case where InvalidBlockNumber is returned.\n> >\n> > So I updated the patch based on yours and fixed the above issues.\n> > Attached. Could you review this one? If there is no issue in that, I'm thinking\n> > to commit that.\n>\n> Oops. Thanks for the catch to correct my fix and revision of some descriptions.\n> I also noticed you reordered the truncation of forks, by which main fork will be\n> truncated first instead of FSM. 
I'm not sure if the order matters now given that\n> we're truncating the forks simultaneously, so I'm ok with that change.\n\nI changed that order so that DropRelFileNodeBuffers() can scan shared_buffers\nmore efficiently. Usually the number of buffers for MAIN fork is larger than\nthe others, in shared_buffers. So it's better to compare MAIN fork first for\nperformance, during full scan of shared_buffers.\n\n> Just one minor comment:\n> + * Return the number of blocks of new FSM after it's truncated.\n>\n> \"after it's truncated\" is quite confusing.\n> How about, \"as a result of previous truncation\" or just end the sentence after new FSM?\n\nThanks for the comment!\nI adopted the latter and committed the patch. Thanks!\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 24 Sep 2019 17:40:44 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Speedup truncates of relation forks"
},
{
"msg_contents": "On Tuesday, September 24, 2019 5:41 PM (GMT+9), Fujii Masao wrote:\r\n> On Thu, Sep 19, 2019 at 9:42 AM Jamison, Kirk <k.jamison@jp.fujitsu.com>\r\n> wrote:\r\n> >\r\n> > On Wednesday, September 18, 2019 8:38 PM, Fujii Masao wrote:\r\n> > > On Tue, Sep 17, 2019 at 10:44 AM Jamison, Kirk\r\n> > > <k.jamison@jp.fujitsu.com>\r\n> > > wrote:\r\n> > > >\r\n> > > > On Friday, September 13, 2019 10:06 PM (GMT+9), Fujii Masao wrote:\r\n> > > > > On Fri, Sep 13, 2019 at 9:51 PM Alvaro Herrera\r\n> > > > > <alvherre@2ndquadrant.com>\r\n> > > > > wrote:\r\n> > > > > >\r\n> > > > > > On 2019-Sep-13, Fujii Masao wrote:\r\n> > > > > >\r\n> > > > > > > On Mon, Sep 9, 2019 at 3:52 PM Jamison, Kirk\r\n> > > > > > > <k.jamison@jp.fujitsu.com>\r\n> > > > > wrote:\r\n> > > > > >\r\n> > > > > > > > > Please add a preliminary patch that removes the function.\r\n> > > > > > > > > Dead code is good, as long as it is gone. We can get it\r\n> > > > > > > > > pushed ahead of\r\n> > > > > the rest of this.\r\n> > > > > > > >\r\n> > > > > > > > Alright. I've attached a separate patch removing the\r\n> > > smgrdounlinkfork.\r\n> > > > > > >\r\n> > > > > > > Per the past discussion, some people want to keep this \"dead\"\r\n> > > > > > > function for some reasons. So, in my opinion, it's better to\r\n> > > > > > > just enclose the function with #if NOT_USED and #endif, to\r\n> > > > > > > keep the function itself as it is, and then to start new\r\n> > > > > > > discussion on hackers about the removal of that separatedly from\r\n> this patch.\r\n> > > > > >\r\n> > > > > > I searched for anybody requesting to keep the function. I\r\n> > > > > > couldn't find anything. Tom said in 2012:\r\n> > > > > > https://www.postgresql.org/message-id/1471.1339106082@sss.pgh.\r\n> > > > > > pa.u\r\n> > > > > > s\r\n> > > > >\r\n> > > > > Yes. 
And I found Andres.\r\n> > > > > https://www.postgresql.org/message-id/20180621174129.hogefyopje4\r\n> > > > > xazn\r\n> > > > > u@al\r\n> > > > > ap3.anarazel.de\r\n> > > > >\r\n> > > > > > > As committed, the smgrdounlinkfork case is actually dead\r\n> > > > > > > code; it's never called from anywhere. I left it in place\r\n> > > > > > > just in case we want it someday.\r\n> > > > > >\r\n> > > > > > but if no use has appeared in 7 years, I say it's time to kill it.\r\n> > > > >\r\n> > > > > +1\r\n> > > >\r\n> > > > The consensus is we remove it, right?\r\n> > > > Re-attaching the patch that removes the deadcode: smgrdounlinkfork().\r\n> > > >\r\n> > > > ---\r\n> > > > I've also fixed Fujii-san's comments below in the latest attached\r\n> > > > speedup\r\n> > > truncate rel patch (v8).\r\n> > >\r\n> > > Thanks for updating the patch!\r\n> > >\r\n> > > + block = visibilitymap_truncate_prepare(rel, 0); if\r\n> > > + (BlockNumberIsValid(block))\r\n> > > {\r\n> > > - xl_smgr_truncate xlrec;\r\n> > > + fork = VISIBILITYMAP_FORKNUM;\r\n> > > + smgrtruncate(rel->rd_smgr, &fork, 1, &block);\r\n> > > +\r\n> > > + if (RelationNeedsWAL(rel))\r\n> > > + {\r\n> > > + xl_smgr_truncate xlrec;\r\n> > >\r\n> > > I don't think this fix is right. Originally, WAL is generated even\r\n> > > in the case where visibilitymap_truncate_prepare() returns\r\n> > > InvalidBlockNumber. But the patch unexpectedly changed the logic so\r\n> > > that WAL is not generated in that case.\r\n> > >\r\n> > > + if (fsm)\r\n> > > + FreeSpaceMapVacuumRange(rel, first_removed_nblocks,\r\n> > > + InvalidBlockNumber);\r\n> > >\r\n> > > This code means that FreeSpaceMapVacuumRange() is called if FSM\r\n> > > exists even if FreeSpaceMapLocateBlock() returns InvalidBlockNumber.\r\n> > > This seems not right. 
Originally, FreeSpaceMapVacuumRange() is not\r\n> > > called in the case where InvalidBlockNumber is returned.\r\n> > >\r\n> > > So I updated the patch based on yours and fixed the above issues.\r\n> > > Attached. Could you review this one? If there is no issue in that,\r\n> > > I'm thinking to commit that.\r\n> >\r\n> > Oops. Thanks for the catch to correct my fix and revision of some\r\n> descriptions.\r\n> > I also noticed you reordered the truncation of forks, by which main\r\n> > fork will be truncated first instead of FSM. I'm not sure if the order\r\n> > matters now given that we're truncating the forks simultaneously, so I'm\r\n> ok with that change.\r\n> \r\n> I changed that order so that DropRelFileNodeBuffers() can scan shared_buffers\r\n> more efficiently. Usually the number of buffers for MAIN fork is larger than\r\n> the others, in shared_buffers. So it's better to compare MAIN fork first for\r\n> performance, during full scan of shared_buffers.\r\n> \r\n> > Just one minor comment:\r\n> > + * Return the number of blocks of new FSM after it's truncated.\r\n> >\r\n> > \"after it's truncated\" is quite confusing.\r\n> > How about, \"as a result of previous truncation\" or just end the sentence\r\n> after new FSM?\r\n> \r\n> Thanks for the comment!\r\n> I adopted the latter and committed the patch. Thanks!\r\n\r\nThank you very much Fujii-san for taking time to review\r\nas well as for committing this patch!\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Tue, 24 Sep 2019 23:57:16 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Speedup truncates of relation forks"
}
] |
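The fork-truncation logic settled on in the thread above can be sketched as a small toy model (plain Python for illustration, not PostgreSQL's actual C code; the function name `plan_truncation` and the tuple representation are invented here). It captures the three points agreed in the thread: MAIN fork first so a full shared_buffers scan compares the most common fork tag first, forks whose prepare step returned InvalidBlockNumber are skipped, and WAL is generated whenever the relation needs it — independent of the FSM/VM prepare results, which was the bug Fujii-san caught:

```python
INVALID_BLOCK = None  # stand-in for PostgreSQL's InvalidBlockNumber


def plan_truncation(new_main_blocks, fsm_block, vm_block, needs_wal):
    """Toy model: collect forks for one smgrtruncate-style call.

    - MAIN fork is listed first (DropRelFileNodeBuffers scans
      shared_buffers once; most buffers belong to MAIN).
    - FSM/VM forks are skipped when their *_truncate_prepare step
      found nothing to cut (returned InvalidBlockNumber).
    - The WAL decision depends only on whether the relation needs
      WAL, never on the FSM/VM prepare results.
    """
    forks = [("main", new_main_blocks)]
    if fsm_block is not INVALID_BLOCK:
        forks.append(("fsm", fsm_block))
    if vm_block is not INVALID_BLOCK:
        forks.append(("vm", vm_block))
    return forks, needs_wal


# Even when neither FSM nor VM shrinks, WAL must still be written:
forks, wal = plan_truncation(100, INVALID_BLOCK, INVALID_BLOCK, True)
```

A call with a valid FSM block, `plan_truncation(100, 7, INVALID_BLOCK, True)`, would add `("fsm", 7)` after the MAIN entry, keeping the single-call, MAIN-first ordering.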
[
{
"msg_contents": "Due to the nature of a reported security vulnerability, we are planning an out-of-cycle release\nfor 2019-06-20. This will include all fixes since the last cumulative update as well as the 12 Beta 2 release.\n\nPlease make an effort to commit all bug fixes for the supported versions (9.4-11) and the 12 beta before\nthis weekend so we can include them in the release.\n\nThanks,\n\nJonathan\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:35:36 +0900",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Release scheduled for 2019-06-20"
}
] |
[
{
    "msg_contents": "Greetings.\n\nTrying to build a pg extension, I got this error:\n```\npglogical_monitoring.o:pglogical_monitoring.c:(.rdata$.refptr.ReplicationSlotCtl[.refptr.ReplicationSlotCtl]+0x0): undefined reference to `ReplicationSlotCtl'\ncollect2: error: ld returned 1 exit status\n```\n\nBut according to https://commitfest.postgresql.org/16/1390/ it should\nbe marked with PGDLLIMPORT.\n\nHowever, checking the sources, the declaration has no such marking\n(https://github.com/postgres/postgres/blob/fff2a7d7bd09db38e1bafc1303c29b10a9805dc0/src/include/replication/slot.h#L172):\n\n```\nextern ReplicationSlotCtlData *ReplicationSlotCtl;\n```\n\nAm I correct, or am I missing something?\nCraig, this was supposed to be your proposal and patch about\nReplicationSlotCtl. Am I right?\n\n-- \nKind regards,\n Pavlo mailto:pavlo.golub@cybertec.at\n\n\n\n",
"msg_date": "Tue, 11 Jun 2019 15:19:43 +0300",
"msg_from": "Pavlo Golub <pavlo.golub@cybertec.at>",
"msg_from_op": true,
"msg_subject": "ReplicationSlotCtl: undefined reference"
},
{
"msg_contents": "Pavlo Golub <pavlo.golub@cybertec.at> writes:\n> Trying to build pg extension I've got error:\n> ```\n> pglogical_monitoring.o:pglogical_monitoring.c:(.rdata$.refptr.ReplicationSlotCtl[.refptr.ReplicationSlotCtl]+0x0): undefined reference to `ReplicationSlotCtl'\n> collect2: error: ld returned 1 exit status\n> ```\n\n> But according to https://commitfest.postgresql.org/16/1390/ it should\n> be marked with PGDLLIMPORT.\n\nThat last bit never actually got pushed, it seems. Done now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 10:56:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ReplicationSlotCtl: undefined reference"
},
{
    "msg_contents": "On Jun 13, 2019 17:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pavlo Golub <pavlo.golub@cybertec.at> writes:\n>> Trying to build pg extension I've got error:\n>> ```\n>> pglogical_monitoring.o:pglogical_monitoring.c:(.rdata$.refptr.ReplicationSlotCtl[.refptr.ReplicationSlotCtl]+0x0): undefined reference to `ReplicationSlotCtl'\n>> collect2: error: ld returned 1 exit status\n>> ```\n>>\n>> But according to https://commitfest.postgresql.org/16/1390/ it should\n>> be marked with PGDLLIMPORT.\n>\n> That last bit never actually got pushed, it seems. Done now.\n>\n> \t\t\tregards, tom lane\n\nThanks Tom. Really appreciate that!\n",
"msg_date": "Thu, 13 Jun 2019 18:15:42 +0300",
"msg_from": "Pavlo Golub <pavlo.golub@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: ReplicationSlotCtl: undefined reference"
}
] |
[
{
"msg_contents": "Hi,\n\nSkink a few days ago started failing [1][2] with errors like:\n\n==2732== Conditional jump or move depends on uninitialised value(s)\n==2732== at 0x4C612E3: ??? (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x4C621FA: RAND_DRBG_generate (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x4C63620: ??? (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x4C61A09: RAND_DRBG_instantiate (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x4C62937: ??? (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x4C62CC9: RAND_DRBG_get0_public (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x4C62CEF: ??? (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n==2732== by 0x69C6AF: pg_strong_random (pg_strong_random.c:135)\n==2732== by 0x4A2841: InitProcessGlobals (postmaster.c:2581)\n==2732== by 0x661137: InitStandaloneProcess (miscinit.c:322)\n==2732== by 0x2943CF: AuxiliaryProcessMain (bootstrap.c:209)\n==2732== by 0x4005D1: main (main.c:220)\n==2732== Uninitialised value was created by a stack allocation\n==2732== at 0x4C633B0: ??? 
(in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)\n\nand then a lot of followup error.\n\nReproduced that locally to get a nicer trace:\n==7146== Conditional jump or move depends on uninitialised value(s)\n==7146== at 0x4B122E3: inc_128 (drbg_ctr.c:32)\n==7146== by 0x4B122E3: drbg_ctr_generate (drbg_ctr.c:330)\n==7146== by 0x4B131FA: RAND_DRBG_generate (drbg_lib.c:638)\n==7146== by 0x4B14620: rand_drbg_get_entropy (rand_lib.c:172)\n==7146== by 0x4B12A09: RAND_DRBG_instantiate (drbg_lib.c:338)\n==7146== by 0x4B13937: drbg_setup (drbg_lib.c:892)\n==7146== by 0x4B13CC9: RAND_DRBG_get0_public (drbg_lib.c:1120)\n==7146== by 0x4B13CC9: RAND_DRBG_get0_public (drbg_lib.c:1109)\n==7146== by 0x4B13CEF: drbg_bytes (drbg_lib.c:963)\n==7146== by 0x87BD60: pg_strong_random (pg_strong_random.c:139)\n==7146== by 0x5CAFFC: InitProcessGlobals (postmaster.c:2581)\n==7146== by 0x81AAD7: InitStandaloneProcess (miscinit.c:322)\n==7146== by 0x681A61: PostgresMain (postgres.c:3732)\n==7146== by 0x4E3D86: main (main.c:224)\n==7146== Uninitialised value was created by a stack allocation\n==7146== at 0x4B143B0: rand_drbg_get_nonce (rand_lib.c:231)\n\nreading through the code lead me to figure out that that's due to a\nrecent openssl change:\nhttps://github.com/openssl/openssl/commit/b3d113ed2993801ee643126118ccf6592ad18ef7\nas explained in\nhttps://github.com/openssl/openssl/issues/8460\nand fixed since in\nhttps://github.com/openssl/openssl/commit/15d7e7997e219fc5fef3f6003cc6bd7b2e7379d4\n\nFor reasons I do not understand the \"cosmetic change\" was backpatched\ninto 1.1.1 And the fix for the cosmetic change, made on master at the\nend of March, was only backpatched to 1.1.1 *after* the 1.1.1c release\nwas made in late May. 
I mean, huh.\n\nThat release was then installed on skink recently:\n2019-06-01 06:39:20 upgrade libssl1.1:amd64 1.1.1b-2 1.1.1c-1\n\nAnd finally the reason for the issue only being visible on master: I am\nstupid, and matched branch names like REL9_6 instead of REL9_6_STABLE\netc to enable openssl (9.4 doesn't work against current openssl).\n\n\nI can't think of a better way to fix skink for now than just disabling\nopenssl for skink, until 1.1.1d is released.\n\nGreetings,\n\nAndres Freund\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skink&br=HEAD\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2019-06-10%2001%3A36%3A12\n\n\n",
"msg_date": "Tue, 11 Jun 2019 13:51:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "openssl valgrind failures on skink are due to openssl issue"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> For reasons I do not understand the \"cosmetic change\" was backpatched\n> into 1.1.1 And the fix for the cosmetic change, made on master at the\n> end of March, was only backpatched to 1.1.1 *after* the 1.1.1c release\n> was made in late May. I mean, huh.\n\nBleah. Not that we've not made equally dumb mistakes :-(\n\n> I can't think of a better way to fix skink for now than just disabling\n> openssl for skink, until 1.1.1d is released.\n\nCouldn't you install a local valgrind exclusion matching this stack trace?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:55:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 16:55:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I can't think of a better way to fix skink for now than just disabling\n> > openssl for skink, until 1.1.1d is released.\n> \n> Couldn't you install a local valgrind exclusion matching this stack trace?\n\nUnfortunately no. The error spreads through significant parts of openssl\n*and* postgres, because it taints the returned random value, which then\nis used in a number of places. We could try to block all of those, but\nthat seems fairly painful. And one, to my knowledge, cannot do valgrind\nsuppressions based on the source of uninitialized memory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jun 2019 14:07:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 02:07:29PM -0700, Andres Freund wrote:\n> On 2019-06-11 16:55:28 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I can't think of a better way to fix skink for now than just disabling\n>>> openssl for skink, until 1.1.1d is released.\n\nThanks for digging into the details of that! I was wondering if we\ndid something wrong on our side but the backtraces were weird.\n--\nMichael",
"msg_date": "Wed, 12 Jun 2019 16:50:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
"msg_contents": "Hi,\n\nOn 2019-06-11 14:07:29 -0700, Andres Freund wrote:\n> On 2019-06-11 16:55:28 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I can't think of a better way to fix skink for now than just disabling\n> > > openssl for skink, until 1.1.1d is released.\n> > \n> > Couldn't you install a local valgrind exclusion matching this stack trace?\n> \n> Unfortunately no. The error spreads through significant parts of openssl\n> *and* postgres, because it taints the returned random value, which then\n> is used in a number of places. We could try to block all of those, but\n> that seems fairly painful. And one, to my knowledge, cannot do valgrind\n> suppressions based on the source of uninitialized memory.\n\nWhat we could do is add a suppression like:\n\n{\n broken-openssl-accesses-random\n Memcheck:Cond\n ...\n fun:pg_strong_random\n fun:InitProcessGlobals\n fun:PostmasterMain\n fun:main\n}\n\n(alternatively one suppression for each RAND_status, RAND_poll,\nRAND_bytes(), to avoid suppressing all of pg_strong_random itself)\n \nand then prevent spread of the uninitialized memory by adding a\n\t\tVALGRIND_MAKE_MEM_DEFINED(buf, len);\nafter a successful RAND_bytes() call.\n\nI tested that that quiesces the problem locally. Probably not worth\npushing something like that though?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jun 2019 16:08:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
    "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> What we could do is add a suppression like:\n\n> {\n> broken-openssl-accesses-random\n> Memcheck:Cond\n> ...\n> fun:pg_strong_random\n> fun:InitProcessGlobals\n> fun:PostmasterMain\n> fun:main\n> }\n\n> (alternatively one suppression for each RAND_status, RAND_poll,\n> RAND_bytes(), to avoid suppressing all of pg_strong_random itself)\n \n> and then prevent spread of the uninitialized memory by adding a\n> \t\tVALGRIND_MAKE_MEM_DEFINED(buf, len);\n> after a successful RAND_bytes() call.\n\n> I tested that that quiesces the problem locally. Probably not worth\n> pushing something like that though?\n\nYeah, that seems awfully aggressive to be pushing to machines that\ndon't have the problem. Did you get any sense of how fast the\nopenssl fix is going to show up?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jun 2019 19:25:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
    "msg_contents": "Hi,\n\nOn 2019-06-18 19:25:12 -0400, Tom Lane wrote:\n> Did you get any sense of how fast the openssl fix is going to show up?\n\nIt's merged to both branches that contain the broken code. Now we need\nto wait for the next set of openssl releases, and then for distros to\npick that up. Based on the past release cadence\nhttps://www.openssl.org/news/openssl-1.1.1-notes.html\nthat seems to be likely to happen within 2-3 months.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jun 2019 16:34:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
    "msg_contents": "On Tue, Jun 18, 2019 at 04:34:07PM -0700, Andres Freund wrote:\n> It's merged to both branches that contain the broken code. Now we need\n> to wait for the next set of openssl releases, and then for distros to\n> pick that up. Based on the past release cadence\n> https://www.openssl.org/news/openssl-1.1.1-notes.html\n> that seems to be likely to happen within 2-3 months.\n\nIf that's for the buildfarm coverage, I would be of the opinion to\nwait a bit. Another possibility is that you could compile your own\nversion of OpenSSL with the patch included, say only 1.1.1c with the\npatch. Still, wouldn't that cause the plpython tests to complain, as the\nsystem's python may still link to the system's OpenSSL which is\nbroken?\n\nAnother possibility would be to move back to 1.1.1b for the time\nbeing...\n--\nMichael",
"msg_date": "Wed, 19 Jun 2019 11:28:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
"msg_contents": "On 2019-06-19 11:28:16 +0900, Michael Paquier wrote:\n> On Tue, Jun 18, 2019 at 04:34:07PM -0700, Andres Freund wrote:\n> > It's merged to both branches that contain the broken code. Now we need\n> > to wait for the next set of openssl releases, and then for distros to\n> > pick that up. Based on the past release cadence\n> > https://www.openssl.org/news/openssl-1.1.1-notes.html\n> > that seems to be likely to happen within 2-3 months.\n> \n> If that's for the buildfarm coverage. I would be of the opinion to\n> wait a bit.\n\nYea. For now I've just disabled ssl support on skink, but that has its\nown disadvantages.\n\n> Another possibility is that you could compile your own\n> version of OpenSSL with the patch included, say only 1.1.1c with the\n> patch.\n\nReally, I can do that?\n\n\n",
"msg_date": "Tue, 18 Jun 2019 19:44:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
},
{
"msg_contents": "On Tue, Jun 18, 2019 at 07:44:26PM -0700, Andres Freund wrote:\n> Really, I can do that?\n\nHere is some of the stuff I use, just for the reference:\n./Configure linux-x86_64 --prefix=$HOME/stable/openssl/1.1.1/\n./config --prefix=$HOME/stable/openssl/1.1.1 shared\n--\nMichael",
"msg_date": "Wed, 19 Jun 2019 11:54:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: openssl valgrind failures on skink are due to openssl issue"
}
] |
[
{
"msg_contents": "Hi all,\n\nPaul and I have been hacking recently to implement parallel grouping\nsets, and here we have two implementations.\n\nImplementation 1\n================\n\nAttached is the patch and also there is a github branch [1] for this\nwork.\n\nParallel aggregation has already been supported in PostgreSQL and it is\nimplemented by aggregating in two stages. First, each worker performs an\naggregation step, producing a partial result for each group of which\nthat process is aware. Second, the partial results are transferred to\nthe leader via the Gather node. Finally, the leader merges the partial\nresults and produces the final result for each group.\n\nWe are implementing parallel grouping sets in the same way. The only\ndifference is that in the final stage, the leader performs a grouping\nsets aggregation, rather than a normal aggregation.\n\nThe plan looks like:\n\n# explain (costs off, verbose) select c1, c2, avg(c3) from t2 group by\ngrouping sets((c1,c2), (c1), (c2,c3));\n QUERY PLAN\n---------------------------------------------------------\n Finalize MixedAggregate\n Output: c1, c2, avg(c3), c3\n Hash Key: t2.c2, t2.c3\n Group Key: t2.c1, t2.c2\n Group Key: t2.c1\n -> Gather Merge\n Output: c1, c2, c3, (PARTIAL avg(c3))\n Workers Planned: 2\n -> Sort\n Output: c1, c2, c3, (PARTIAL avg(c3))\n Sort Key: t2.c1, t2.c2\n -> Partial HashAggregate\n Output: c1, c2, c3, PARTIAL avg(c3)\n Group Key: t2.c1, t2.c2, t2.c3\n -> Parallel Seq Scan on public.t2\n Output: c1, c2, c3\n(16 rows)\n\nAs the partial aggregation can be performed in parallel, we can expect a\nspeedup if the number of groups seen by the Finalize Aggregate node is\nsome less than the number of input rows.\n\nFor example, for the table provided in the test case within the patch,\nrunning the above query in my Linux box:\n\n# explain analyze select c1, c2, avg(c3) from t2 group by grouping\nsets((c1,c2), (c1), (c2,c3)); -- without patch\n Planning Time: 0.123 ms\n Execution Time: 
9459.362 ms\n\n# explain analyze select c1, c2, avg(c3) from t2 group by grouping\nsets((c1,c2), (c1), (c2,c3)); -- with patch\n Planning Time: 0.204 ms\n Execution Time: 1077.654 ms\n\nBut sometimes we may not benefit from this patch. For example, in the\nworst-case scenario the number of groups seen by the Finalize Aggregate\nnode could be as many as the number of input rows which were seen by all\nworker processes in the Partial Aggregate stage. This is prone to\nhappening with this patch, because the group key for Partial Aggregate\nis all the columns involved in the grouping sets, such as in the above\nquery, it is (c1, c2, c3).\n\nSo, we have been working on another way to implement parallel grouping\nsets.\n\nImplementation 2\n================\n\nThis work can be found in github branch [2]. As it contains some hacky\ncodes and a list of TODO items, this is far from a patch. So please\nconsider it as a PoC.\n\nThe idea is instead of performing grouping sets aggregation in Finalize\nAggregate, we perform it in Partial Aggregate.\n\nThe plan looks like:\n\n# explain (costs off, verbose) select c1, c2, avg(c3) from t2 group by\ngrouping sets((c1,c2), (c1));\n QUERY PLAN\n--------------------------------------------------------------\n Finalize GroupAggregate\n Output: c1, c2, avg(c3), (gset_id)\n Group Key: t2.c1, t2.c2, (gset_id)\n -> Gather Merge\n Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n Workers Planned: 2\n -> Sort\n Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n Sort Key: t2.c1, t2.c2, (gset_id)\n -> Partial HashAggregate\n Output: c1, c2, gset_id, PARTIAL avg(c3)\n Hash Key: t2.c1, t2.c2\n Hash Key: t2.c1\n -> Parallel Seq Scan on public.t2\n Output: c1, c2, c3\n(15 rows)\n\nWith this method, there is a problem, i.e., in the final stage of\naggregation, the leader does not have a way to distinguish which tuple\ncomes from which grouping set, which turns out to be needed by leader\nfor merging the partial results.\n\nFor instance, suppose we have a 
table t(c1, c2, c3) containing one row\n(1, NULL, 3), and we are selecting agg(c3) group by grouping sets\n((c1,c2), (c1)). Then the leader would get two tuples via Gather node\nfor that row, both are (1, NULL, agg(3)), one is from group by (c1,c2)\nand one is from group by (c1). If the leader cannot tell that the\ntwo tuples are from two different grouping sets, it will merge them\nincorrectly.\n\nSo we add a hidden column 'gset_id', representing grouping set id, to\nthe targetlist of Partial Aggregate node, as well as to the group key\nfor Finalize Aggregate node. So only tuples coming from the same\ngrouping set can get merged in the final stage of aggregation.\n\nWith this method, for grouping sets with multiple rollups, to simplify\nthe implementation, we generate a separate aggregation path for each\nrollup, and then append them for the final path.\n\nReferences:\n[1] https://github.com/greenplum-db/postgres/tree/parallel_groupingsets\n[2] https://github.com/greenplum-db/postgres/tree/parallel_groupingsets_2\n\nAny comments and feedback are welcome.\n\nThanks\nRichard",
"msg_date": "Wed, 12 Jun 2019 10:58:44 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Parallel grouping sets"
},
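The gset_id problem described in the message above can be demonstrated with a toy finalize-step merge (illustrative Python, not the actual executor code; the function name `finalize_merge` and the label-style gset_id values are invented here, and sum() is used instead of avg() so the mis-merge is visible in the result):

```python
from collections import defaultdict


def finalize_merge(partials, with_gset_id):
    """Toy model of the leader's finalize step for implementation 2.

    Each partial is (gset_id, key_columns, partial_sum) produced by a
    worker for one grouping set. Without gset_id in the merge key,
    partials from *different* grouping sets that happen to expose the
    same key values (e.g. because of a NULL column) get conflated.
    """
    merged = defaultdict(int)
    for gset_id, key, partial_sum in partials:
        merge_key = (gset_id, key) if with_gset_id else key
        merged[merge_key] += partial_sum
    return dict(merged)


# The example from the message: one row (c1=1, c2=NULL, c3=3) with
# grouping sets ((c1, c2), (c1)). Both grouping sets emit a partial
# tuple whose visible key columns are identical: (1, NULL).
partials = [
    ("gs_c1_c2", (1, None), 3),  # from GROUP BY (c1, c2)
    ("gs_c1",    (1, None), 3),  # from GROUP BY (c1)
]

correct = finalize_merge(partials, with_gset_id=True)   # two groups, 3 each
wrong   = finalize_merge(partials, with_gset_id=False)  # one bogus group, 6
```

With the hidden gset_id column in the group key, the leader keeps the two grouping sets' results apart; without it, the two sums are incorrectly added together.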
{
    "msg_contents": "On Wed, 12 Jun 2019 at 14:59, Richard Guo <riguo@pivotal.io> wrote:\n> Implementation 1\n\n> Parallel aggregation has already been supported in PostgreSQL and it is\n> implemented by aggregating in two stages. First, each worker performs an\n> aggregation step, producing a partial result for each group of which\n> that process is aware. Second, the partial results are transferred to\n> the leader via the Gather node. Finally, the leader merges the partial\n> results and produces the final result for each group.\n>\n> We are implementing parallel grouping sets in the same way. The only\n> difference is that in the final stage, the leader performs a grouping\n> sets aggregation, rather than a normal aggregation.\n\nHi Richard,\n\nI think it was you and I that discussed #1 at the unconference at PGCon 2\nweeks ago. The good thing about #1 is that it can be implemented as\nplanner-only changes just by adding some additional paths and some\ncosting. #2 will be useful when we're unable to reduce the number of\ninputs to the final aggregate node by doing the initial grouping.\nHowever, since #1 is easier, I'd suggest going with it first,\nsince it's the path of least resistance. #1 should be fine as long as\nyou properly cost the parallel agg and don't choose it when the number\nof groups going into the final agg isn't reduced by the partial agg\nnode. Which brings me to:\n\nYou'll need to do further work with the dNumGroups value. 
Since you're\ngrouping by all the columns/exprs in the grouping sets you'll need the\nnumber of groups to be an estimate of that.\n\nHere's a quick test I did that shows the problem:\n\ncreate table abc(a int, b int, c int);\ninsert into abc select a,b,1 from generate_Series(1,1000)\na,generate_Series(1,1000) b;\ncreate statistics abc_a_b_stats (ndistinct) on a,b from abc;\nanalyze abc;\n\n-- Here the Partial HashAggregate really should estimate that there\nwill be 1 million rows.\nexplain analyze select a,b,sum(c) from abc group by grouping sets ((a),(b));\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Finalize HashAggregate (cost=14137.67..14177.67 rows=2000 width=16)\n(actual time=1482.746..1483.203 rows=2000 loops=1)\n Hash Key: a\n Hash Key: b\n -> Gather (cost=13697.67..14117.67 rows=4000 width=16) (actual\ntime=442.140..765.931 rows=1000000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial HashAggregate (cost=12697.67..12717.67 rows=2000\nwidth=16) (actual time=402.917..526.045 rows=333333 loops=3)\n Group Key: a, b\n -> Parallel Seq Scan on abc (cost=0.00..9572.67\nrows=416667 width=12) (actual time=0.036..50.275 rows=333333 loops=3)\n Planning Time: 0.140 ms\n Execution Time: 1489.734 ms\n(11 rows)\n\nbut really, likely the parallel plan should not be chosen in this case\nsince we're not really reducing the number of groups going into the\nfinalize aggregate node. That'll need to be factored into the costing\nso that we don't choose the parallel plan when we're not going to\nreduce the work in the finalize aggregate node. I'm unsure exactly how\nthat'll look. Logically, I think the choice parallelize or not to\nparallelize needs to be if (cost_partial_agg + cost_gather +\ncost_final_agg < cost_agg) { do it in parallel } else { do it in\nserial }. 
If you build both a serial and parallel set of paths then\nyou should see which one is cheaper without actually constructing an\n\"if\" test like the one above.\n\nHere's a simple group by with the same group by clause items as you\nhave in the plan above that does get the estimated number of groups\nperfectly. The plan above should have the same estimate.\n\nexplain analyze select a,b,sum(c) from abc group by a,b;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=132154.34..152154.34 rows=1000000 width=16)\n(actual time=404.304..1383.343 rows=1000000 loops=1)\n Group Key: a, b\n -> Sort (cost=132154.34..134654.34 rows=1000000 width=12) (actual\ntime=404.291..620.774 rows=1000000 loops=1)\n Sort Key: a, b\n Sort Method: external merge Disk: 21584kB\n -> Seq Scan on abc (cost=0.00..15406.00 rows=1000000\nwidth=12) (actual time=0.017..100.299 rows=1000000 loops=1)\n Planning Time: 0.115 ms\n Execution Time: 1412.034 ms\n(8 rows)\n\nAlso, in the tests:\n\n> insert into gstest select 1,10,100 from generate_series(1,1000000)i;\n> insert into gstest select 1,10,200 from generate_series(1,1000000)i;\n> insert into gstest select 1,20,30 from generate_series(1,1000000)i;\n> insert into gstest select 2,30,40 from generate_series(1,1000000)i;\n> insert into gstest select 2,40,50 from generate_series(1,1000000)i;\n> insert into gstest select 3,50,60 from generate_series(1,1000000)i;\n> insert into gstest select 1,NULL,000000 from generate_series(1,1000000)i;\n> analyze gstest;\n\nYou'll likely want to reduce the number of rows being used just to\nstop the regression tests becoming slow on older machines. I think\nsome of the other parallel aggregate tests use must fewer rows than\nwhat you're using there. You might be able to use the standard set of\nregression test tables too, tenk, tenk1 etc. 
That'll save the test\nhaving to build and populate one of its own.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 16:29:23 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
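The costing rule David states above — do it in parallel only when cost_partial_agg + cost_gather + cost_final_agg beats the serial aggregate — can be sketched as a tiny comparison (illustrative Python; the function name and the concrete cost numbers are invented, and in the real planner building both paths and letting add_path() keep the cheaper one has the same effect without an explicit "if"):

```python
def cheapest_grouping_plan(cost_agg, cost_partial_agg,
                           cost_gather, cost_final_agg):
    """Pick serial vs. parallel grouping by total path cost."""
    parallel_total = cost_partial_agg + cost_gather + cost_final_agg
    return "parallel" if parallel_total < cost_agg else "serial"


# When partial aggregation barely reduces the group count (as in the
# abc example above), gather + finalize dominate and serial wins:
no_reduction = cheapest_grouping_plan(100.0, 70.0, 25.0, 40.0)

# When partial aggregation collapses many rows per group, parallel wins:
big_reduction = cheapest_grouping_plan(100.0, 45.0, 10.0, 15.0)
```

The point is that no separate heuristic is needed: correctly costing the Partial Aggregate (which requires the right dNumGroups estimate) is what prevents the bad parallel plan from being chosen.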
{
"msg_contents": "On Thu, Jun 13, 2019 at 12:29 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Wed, 12 Jun 2019 at 14:59, Richard Guo <riguo@pivotal.io> wrote:\n> > Implementation 1\n>\n> > Parallel aggregation has already been supported in PostgreSQL and it is\n> > implemented by aggregating in two stages. First, each worker performs an\n> > aggregation step, producing a partial result for each group of which\n> > that process is aware. Second, the partial results are transferred to\n> > the leader via the Gather node. Finally, the leader merges the partial\n> > results and produces the final result for each group.\n> >\n> > We are implementing parallel grouping sets in the same way. The only\n> > difference is that in the final stage, the leader performs a grouping\n> > sets aggregation, rather than a normal aggregation.\n>\n> Hi Richard,\n>\n> I think it was you an I that discussed #1 at unconference at PGCon 2\n> weeks ago. The good thing about #1 is that it can be implemented as\n> planner-only changes just by adding some additional paths and some\n> costing. #2 will be useful when we're unable to reduce the number of\n> inputs to the final aggregate node by doing the initial grouping.\n> However, since #1 is easier, then I'd suggest going with it first,\n> since it's the path of least resistance. #1 should be fine as long as\n> you properly cost the parallel agg and don't choose it when the number\n> of groups going into the final agg isn't reduced by the partial agg\n> node. Which brings me to:\n>\n\nHi David,\n\nYes. Thank you for the discussion at PGCon. I learned a lot from that.\nAnd glad to meet you here. :)\n\nI agree with you on going with #1 first.\n\n\n>\n> You'll need to do further work with the dNumGroups value. Since you're\n> grouping by all the columns/exprs in the grouping sets you'll need the\n> number of groups to be an estimate of that.\n>\n\nExactly. 
The v1 patch estimates number of partial groups incorrectly, as\nit calculates the number of groups for each grouping set and then add\nthem for dNumPartialPartialGroups, while we actually should calculate\nthe number of groups for all the columns in the grouping sets. I have\nfixed this issue in v2 patch.\n\n\n>\n> Here's a quick test I did that shows the problem:\n>\n> create table abc(a int, b int, c int);\n> insert into abc select a,b,1 from generate_Series(1,1000)\n> a,generate_Series(1,1000) b;\n> create statistics abc_a_b_stats (ndistinct) on a,b from abc;\n> analyze abc;\n>\n> -- Here the Partial HashAggregate really should estimate that there\n> will be 1 million rows.\n> explain analyze select a,b,sum(c) from abc group by grouping sets\n> ((a),(b));\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Finalize HashAggregate (cost=14137.67..14177.67 rows=2000 width=16)\n> (actual time=1482.746..1483.203 rows=2000 loops=1)\n> Hash Key: a\n> Hash Key: b\n> -> Gather (cost=13697.67..14117.67 rows=4000 width=16) (actual\n> time=442.140..765.931 rows=1000000 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Partial HashAggregate (cost=12697.67..12717.67 rows=2000\n> width=16) (actual time=402.917..526.045 rows=333333 loops=3)\n> Group Key: a, b\n> -> Parallel Seq Scan on abc (cost=0.00..9572.67\n> rows=416667 width=12) (actual time=0.036..50.275 rows=333333 loops=3)\n> Planning Time: 0.140 ms\n> Execution Time: 1489.734 ms\n> (11 rows)\n>\n> but really, likely the parallel plan should not be chosen in this case\n> since we're not really reducing the number of groups going into the\n> finalize aggregate node. That'll need to be factored into the costing\n> so that we don't choose the parallel plan when we're not going to\n> reduce the work in the finalize aggregate node. I'm unsure exactly how\n> that'll look. 
Logically, I think the choice parallelize or not to\n> parallelize needs to be if (cost_partial_agg + cost_gather +\n> cost_final_agg < cost_agg) { do it in parallel } else { do it in\n> serial }. If you build both a serial and parallel set of paths then\n> you should see which one is cheaper without actually constructing an\n> \"if\" test like the one above.\n>\n\nBoth the serial and parallel set of paths would be built and the cheaper\none will be selected. So we don't need the 'if' test.\n\nWith v2 patch, the parallel plan will not be chosen for the above query:\n\n# explain analyze select a,b,sum(c) from abc group by grouping sets\n((a),(b));\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=20406.00..25426.00 rows=2000 width=16) (actual\ntime=935.048..935.697 rows=2000 loops=1)\n Hash Key: a\n Hash Key: b\n -> Seq Scan on abc (cost=0.00..15406.00 rows=1000000 width=12) (actual\ntime=0.041..170.906 rows=1000000 loops=1)\n Planning Time: 0.240 ms\n Execution Time: 935.978 ms\n(6 rows)\n\n\n>\n> Here's a simple group by with the same group by clause items as you\n> have in the plan above that does get the estimated number of groups\n> perfectly. 
The plan above should have the same estimate.\n>\n> explain analyze select a,b,sum(c) from abc group by a,b;\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=132154.34..152154.34 rows=1000000 width=16)\n> (actual time=404.304..1383.343 rows=1000000 loops=1)\n> Group Key: a, b\n> -> Sort (cost=132154.34..134654.34 rows=1000000 width=12) (actual\n> time=404.291..620.774 rows=1000000 loops=1)\n> Sort Key: a, b\n> Sort Method: external merge Disk: 21584kB\n> -> Seq Scan on abc (cost=0.00..15406.00 rows=1000000\n> width=12) (actual time=0.017..100.299 rows=1000000 loops=1)\n> Planning Time: 0.115 ms\n> Execution Time: 1412.034 ms\n> (8 rows)\n>\n> Also, in the tests:\n>\n> > insert into gstest select 1,10,100 from generate_series(1,1000000)i;\n> > insert into gstest select 1,10,200 from generate_series(1,1000000)i;\n> > insert into gstest select 1,20,30 from generate_series(1,1000000)i;\n> > insert into gstest select 2,30,40 from generate_series(1,1000000)i;\n> > insert into gstest select 2,40,50 from generate_series(1,1000000)i;\n> > insert into gstest select 3,50,60 from generate_series(1,1000000)i;\n> > insert into gstest select 1,NULL,000000 from generate_series(1,1000000)i;\n> > analyze gstest;\n>\n> You'll likely want to reduce the number of rows being used just to\n> stop the regression tests becoming slow on older machines. I think\n> some of the other parallel aggregate tests use must fewer rows than\n> what you're using there. You might be able to use the standard set of\n> regression test tables too, tenk, tenk1 etc. That'll save the test\n> having to build and populate one of its own.\n>\n\nYes, that makes sense. Table size has been reduced in v2 patch.\nCurrently I do not use the standard regression test tables as I'd like\nto customize the table with some specific data for correctness\nverification. 
But we may switch to the standard test table later.\n\nAlso in v2 patch, I've fixed two additional issues. One is about the sort\nkey for sort-based grouping sets in Partial Aggregate, which should be\nall the columns in parse->groupClause. The other one is about\nGroupingFunc. Since Partial Aggregate will not handle multiple grouping\nsets at once, it does not need to evaluate GroupingFunc. So GroupingFunc\nis removed from the targetlists of Partial Aggregate.\n\nThanks\nRichard",
"msg_date": "Thu, 13 Jun 2019 18:24:04 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 10:58:44AM +0800, Richard Guo wrote:\n>Hi all,\n>\n>Paul and I have been hacking recently to implement parallel grouping\n>sets, and here we have two implementations.\n>\n>Implementation 1\n>================\n>\n>Attached is the patch and also there is a github branch [1] for this\n>work.\n>\n>Parallel aggregation has already been supported in PostgreSQL and it is\n>implemented by aggregating in two stages. First, each worker performs an\n>aggregation step, producing a partial result for each group of which\n>that process is aware. Second, the partial results are transferred to\n>the leader via the Gather node. Finally, the leader merges the partial\n>results and produces the final result for each group.\n>\n>We are implementing parallel grouping sets in the same way. The only\n>difference is that in the final stage, the leader performs a grouping\n>sets aggregation, rather than a normal aggregation.\n>\n>The plan looks like:\n>\n># explain (costs off, verbose) select c1, c2, avg(c3) from t2 group by\n>grouping sets((c1,c2), (c1), (c2,c3));\n> QUERY PLAN\n>---------------------------------------------------------\n> Finalize MixedAggregate\n> Output: c1, c2, avg(c3), c3\n> Hash Key: t2.c2, t2.c3\n> Group Key: t2.c1, t2.c2\n> Group Key: t2.c1\n> -> Gather Merge\n> Output: c1, c2, c3, (PARTIAL avg(c3))\n> Workers Planned: 2\n> -> Sort\n> Output: c1, c2, c3, (PARTIAL avg(c3))\n> Sort Key: t2.c1, t2.c2\n> -> Partial HashAggregate\n> Output: c1, c2, c3, PARTIAL avg(c3)\n> Group Key: t2.c1, t2.c2, t2.c3\n> -> Parallel Seq Scan on public.t2\n> Output: c1, c2, c3\n>(16 rows)\n>\n>As the partial aggregation can be performed in parallel, we can expect a\n>speedup if the number of groups seen by the Finalize Aggregate node is\n>some less than the number of input rows.\n>\n>For example, for the table provided in the test case within the patch,\n>running the above query in my Linux box:\n>\n># explain analyze select c1, c2, avg(c3) from 
t2 group by grouping\n>sets((c1,c2), (c1), (c2,c3)); -- without patch\n> Planning Time: 0.123 ms\n> Execution Time: 9459.362 ms\n>\n># explain analyze select c1, c2, avg(c3) from t2 group by grouping\n>sets((c1,c2), (c1), (c2,c3)); -- with patch\n> Planning Time: 0.204 ms\n> Execution Time: 1077.654 ms\n>\n\nVery nice. That's pretty much exactly how I imagined it'd work.\n\n>But sometimes we may not benefit from this patch. For example, in the\n>worst-case scenario the number of groups seen by the Finalize Aggregate\n>node could be as many as the number of input rows which were seen by all\n>worker processes in the Partial Aggregate stage. This is prone to\n>happening with this patch, because the group key for Partial Aggregate\n>is all the columns involved in the grouping sets, such as in the above\n>query, it is (c1, c2, c3).\n>\n>So, we have been working on another way to implement parallel grouping\n>sets.\n>\n>Implementation 2\n>================\n>\n>This work can be found in github branch [2]. As it contains some hacky\n>codes and a list of TODO items, this is far from a patch. 
So please\n>consider it as a PoC.\n>\n>The idea is instead of performing grouping sets aggregation in Finalize\n>Aggregate, we perform it in Partial Aggregate.\n>\n>The plan looks like:\n>\n># explain (costs off, verbose) select c1, c2, avg(c3) from t2 group by\n>grouping sets((c1,c2), (c1));\n> QUERY PLAN\n>--------------------------------------------------------------\n> Finalize GroupAggregate\n> Output: c1, c2, avg(c3), (gset_id)\n> Group Key: t2.c1, t2.c2, (gset_id)\n> -> Gather Merge\n> Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n> Workers Planned: 2\n> -> Sort\n> Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n> Sort Key: t2.c1, t2.c2, (gset_id)\n> -> Partial HashAggregate\n> Output: c1, c2, gset_id, PARTIAL avg(c3)\n> Hash Key: t2.c1, t2.c2\n> Hash Key: t2.c1\n> -> Parallel Seq Scan on public.t2\n> Output: c1, c2, c3\n>(15 rows)\n>\n\nOK, I'm not sure I understand the point of this - can you give an\nexample which is supposed to benefit from this? Where does the speedup\ncame from? \n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 14 Jun 2019 01:45:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 11:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Wed, Jun 12, 2019 at 10:58:44AM +0800, Richard Guo wrote:\n\n> ># explain (costs off, verbose) select c1, c2, avg(c3) from t2 group by\n> >grouping sets((c1,c2), (c1));\n> > QUERY PLAN\n> >--------------------------------------------------------------\n> > Finalize GroupAggregate\n> > Output: c1, c2, avg(c3), (gset_id)\n> > Group Key: t2.c1, t2.c2, (gset_id)\n> > -> Gather Merge\n> > Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n> > Workers Planned: 2\n> > -> Sort\n> > Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n> > Sort Key: t2.c1, t2.c2, (gset_id)\n> > -> Partial HashAggregate\n> > Output: c1, c2, gset_id, PARTIAL avg(c3)\n> > Hash Key: t2.c1, t2.c2\n> > Hash Key: t2.c1\n> > -> Parallel Seq Scan on public.t2\n> > Output: c1, c2, c3\n> >(15 rows)\n> >\n>\n> OK, I'm not sure I understand the point of this - can you give an\n> example which is supposed to benefit from this? Where does the speedup\n> came from?\n\nI think this is a bad example since the first grouping set is a\nsuperset of the 2nd. If those were independent and each grouping set\nproduced a reasonable number of groups then it may be better to do it\nthis way instead of grouping by all exprs in all grouping sets in the\nfirst phase, as is done by #1. To do #2 would require that we tag\nthe aggregate state with the grouping set that belong to, which seem\nto be what gset_id is in Richard's output.\n\nIn my example upthread the first phase of aggregation produced a group\nper input row. Method #2 would work better for that case since it\nwould only produce 2000 groups instead of 1 million.\n\nLikely both methods would be good to consider, but since #1 seems much\neasier than #2, then to me it seems to make sense to start there.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 12:02:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 12:02:52PM +1200, David Rowley wrote:\n>On Fri, 14 Jun 2019 at 11:45, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> On Wed, Jun 12, 2019 at 10:58:44AM +0800, Richard Guo wrote:\n>\n>> ># explain (costs off, verbose) select c1, c2, avg(c3) from t2 group by\n>> >grouping sets((c1,c2), (c1));\n>> > QUERY PLAN\n>> >--------------------------------------------------------------\n>> > Finalize GroupAggregate\n>> > Output: c1, c2, avg(c3), (gset_id)\n>> > Group Key: t2.c1, t2.c2, (gset_id)\n>> > -> Gather Merge\n>> > Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n>> > Workers Planned: 2\n>> > -> Sort\n>> > Output: c1, c2, (gset_id), (PARTIAL avg(c3))\n>> > Sort Key: t2.c1, t2.c2, (gset_id)\n>> > -> Partial HashAggregate\n>> > Output: c1, c2, gset_id, PARTIAL avg(c3)\n>> > Hash Key: t2.c1, t2.c2\n>> > Hash Key: t2.c1\n>> > -> Parallel Seq Scan on public.t2\n>> > Output: c1, c2, c3\n>> >(15 rows)\n>> >\n>>\n>> OK, I'm not sure I understand the point of this - can you give an\n>> example which is supposed to benefit from this? Where does the speedup\n>> came from?\n>\n>I think this is a bad example since the first grouping set is a\n>superset of the 2nd. If those were independent and each grouping set\n>produced a reasonable number of groups then it may be better to do it\n>this way instead of grouping by all exprs in all grouping sets in the\n>first phase, as is done by #1. To do #2 would require that we tag\n>the aggregate state with the grouping set that belong to, which seem\n>to be what gset_id is in Richard's output.\n>\n\nAha! So if we have grouping sets (a,b) and (c,d), then with the first\napproach we'd do partial aggregate on (a,b,c,d) - which may produce\nquite a few distinct groups, making it inefficient. But with the second\napproach, we'd do just (a,b) and (c,d) and mark the rows with gset_id.\n\nNeat!\n\n>In my example upthread the first phase of aggregation produced a group\n>per input row. 
Method #2 would work better for that case since it\n>would only produce 2000 groups instead of 1 million.\n>\n>Likely both methods would be good to consider, but since #1 seems much\n>easier than #2, then to me it seems to make sense to start there.\n>\n\nYep. Thanks for the explanation.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 14 Jun 2019 02:44:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 10:58 AM Richard Guo <riguo@pivotal.io> wrote:\n\n> Hi all,\n>\n> Paul and I have been hacking recently to implement parallel grouping\n> sets, and here we have two implementations.\n>\n> Implementation 1\n> ================\n>\n> Attached is the patch and also there is a github branch [1] for this\n> work.\n>\n\nRebased with the latest master.\n\nThanks\nRichard",
"msg_date": "Tue, 30 Jul 2019 15:50:32 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 03:50:32PM +0800, Richard Guo wrote:\n>On Wed, Jun 12, 2019 at 10:58 AM Richard Guo <riguo@pivotal.io> wrote:\n>\n>> Hi all,\n>>\n>> Paul and I have been hacking recently to implement parallel grouping\n>> sets, and here we have two implementations.\n>>\n>> Implementation 1\n>> ================\n>>\n>> Attached is the patch and also there is a github branch [1] for this\n>> work.\n>>\n>\n>Rebased with the latest master.\n>\n\nHi Richard,\n\nthanks for the rebased patch. I think the patch is mostly fine (at least I\ndon't see any serious issues). A couple minor comments:\n\n1) I think get_number_of_groups() would deserve a short explanation why\nit's OK to handle (non-partial) grouping sets and regular GROUP BY in the\nsame branch. Before these cases were clearly separated, now it seems a bit\nmixed up and it may not be immediately obvious why it's OK.\n\n2) There are new regression tests, but they are not added to any schedule\n(parallel or serial), and so are not executed as part of \"make check\". I\nsuppose this is a mistake.\n\n3) The regression tests do check plan and results like this:\n\n EXPLAIN (COSTS OFF, VERBOSE) SELECT ...;\n SELECT ... ORDER BY 1, 2, 3;\n\nwhich however means that the query might easily use a different plan than\nwhat's verified in the eplain (thanks to the additional ORDER BY clause).\nSo I think this should explain and execute the same query.\n\n(In this case the plans seems to be the same, but that may easily change\nin the future, and we could miss it here, failing to verify the results.)\n\n4) It might be a good idea to check the negative case too, i.e. a query on\ndata set that we should not parallelize (because the number of partial\ngroups would be too high).\n\n\nDo you have any plans to hack on the second approach too? AFAICS those two\napproaches are complementary (address different data sets / queries), and\nit would be nice to have both. 
One of the things I've been wondering is if\nwe need to invent gset_id as a new concept, or if we could simply use the\nexisting GROUPING() function - that uniquely identifies the grouping set.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Tue, 30 Jul 2019 17:05:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 11:05 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Tue, Jul 30, 2019 at 03:50:32PM +0800, Richard Guo wrote:\n> >On Wed, Jun 12, 2019 at 10:58 AM Richard Guo <riguo@pivotal.io> wrote:\n> >\n> >> Hi all,\n> >>\n> >> Paul and I have been hacking recently to implement parallel grouping\n> >> sets, and here we have two implementations.\n> >>\n> >> Implementation 1\n> >> ================\n> >>\n> >> Attached is the patch and also there is a github branch [1] for this\n> >> work.\n> >>\n> >\n> >Rebased with the latest master.\n> >\n>\n> Hi Richard,\n>\n> thanks for the rebased patch. I think the patch is mostly fine (at least I\n> don't see any serious issues). A couple minor comments:\n>\n\nHi Tomas,\n\nThank you for reviewing this patch.\n\n\n>\n> 1) I think get_number_of_groups() would deserve a short explanation why\n> it's OK to handle (non-partial) grouping sets and regular GROUP BY in the\n> same branch. Before these cases were clearly separated, now it seems a bit\n> mixed up and it may not be immediately obvious why it's OK.\n>\n\nAdded a short comment in get_number_of_groups() explaining the behavior\nwhen doing partial aggregation for grouping sets.\n\n\n>\n> 2) There are new regression tests, but they are not added to any schedule\n> (parallel or serial), and so are not executed as part of \"make check\". I\n> suppose this is a mistake.\n>\n\nYes, thanks. Added the new regression test in parallel_schedule and\nserial_schedule.\n\n\n>\n> 3) The regression tests do check plan and results like this:\n>\n> EXPLAIN (COSTS OFF, VERBOSE) SELECT ...;\n> SELECT ... 
ORDER BY 1, 2, 3;\n>\n> which however means that the query might easily use a different plan than\n> what's verified in the eplain (thanks to the additional ORDER BY clause).\n> So I think this should explain and execute the same query.\n>\n> (In this case the plans seems to be the same, but that may easily change\n> in the future, and we could miss it here, failing to verify the results.)\n>\n\nThank you for pointing this out. Fixed it in V4 patch.\n\n\n>\n> 4) It might be a good idea to check the negative case too, i.e. a query on\n> data set that we should not parallelize (because the number of partial\n> groups would be too high).\n>\n\nYes, agree. Added a negative case.\n\n\n>\n>\n> Do you have any plans to hack on the second approach too? AFAICS those two\n> approaches are complementary (address different data sets / queries), and\n> it would be nice to have both. One of the things I've been wondering is if\n> we need to invent gset_id as a new concept, or if we could simply use the\n> existing GROUPING() function - that uniquely identifies the grouping set.\n>\n>\nYes, I'm planning to hack on the second approach in short future. I'm\nalso reconsidering the gset_id stuff since it brings a lot of complexity\nfor the second approach. I agree with you that we can try GROUPING()\nfunction to see if it can replace gset_id.\n\nThanks\nRichard",
"msg_date": "Wed, 31 Jul 2019 16:06:30 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi Richard & Tomas:\n\nI followed the idea of the second approach to add a gset_id in the\ntargetlist of the first stage of\ngrouping sets and uses it to combine the aggregate in final stage. gset_id\nstuff is still kept\nbecause of GROUPING() cannot uniquely identify a grouping set, grouping\nsets may contain\nduplicated set, eg: group by grouping sets((c1, c2), (c1,c2)).\n\nThere are some differences to implement the second approach comparing to\nthe original idea from\nRichard, gset_id is not used as additional group key in the final stage,\ninstead, we use it to\ndispatch the input tuple to the specified grouping set directly and then do\nthe aggregate.\nOne advantage of this is that we can handle multiple rollups with better\nperformance without APPEND node.\n\nthe plan now looks like:\n\ngpadmin=# explain select c1, c2 from gstest group by grouping\nsets(rollup(c1, c2), rollup(c3));\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Finalize MixedAggregate (cost=1000.00..73108.57 rows=8842 width=12)\n Dispatched by: (GROUPINGSETID())\n Hash Key: c1, c2\n Hash Key: c1\n Hash Key: c3\n Group Key: ()\n Group Key: ()\n -> Gather (cost=1000.00..71551.48 rows=17684 width=16)\n Workers Planned: 2\n -> Partial MixedAggregate (cost=0.00..68783.08 rows=8842\nwidth=16)\n Hash Key: c1, c2\n Hash Key: c1\n Hash Key: c3\n Group Key: ()\n Group Key: ()\n -> Parallel Seq Scan on gstest (cost=0.00..47861.33\nrows=2083333 width=12)\n(16 rows)\n\ngpadmin=# set enable_hashagg to off;\ngpadmin=# explain select c1, c2 from gstest group by grouping\nsets(rollup(c1, c2), rollup(c3));\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Finalize GroupAggregate (cost=657730.66..663207.45 rows=8842 width=12)\n Dispatched by: (GROUPINGSETID())\n Group Key: c1, c2\n Sort Key: c1\n Group Key: c1\n Group Key: ()\n Group Key: ()\n Sort Key: c3\n 
Group Key: c3\n -> Sort (cost=657730.66..657774.87 rows=17684 width=16)\n Sort Key: c1, c2\n -> Gather (cost=338722.94..656483.04 rows=17684 width=16)\n Workers Planned: 2\n -> Partial GroupAggregate (cost=337722.94..653714.64\nrows=8842 width=16)\n Group Key: c1, c2\n Group Key: c1\n Group Key: ()\n Group Key: ()\n Sort Key: c3\n Group Key: c3\n -> Sort (cost=337722.94..342931.28 rows=2083333\nwidth=12)\n Sort Key: c1, c2\n -> Parallel Seq Scan on gstest\n (cost=0.00..47861.33 rows=2083333 width=12)\n\nReferences:\n[1] https://github.com/greenplum-db/postgres/tree/parallel_groupingsets\n<https://github.com/greenplum-db/postgres/tree/parallel_groupingsets_3>_3\n\nOn Wed, Jul 31, 2019 at 4:07 PM Richard Guo <riguo@pivotal.io> wrote:\n\n> On Tue, Jul 30, 2019 at 11:05 PM Tomas Vondra <\n> tomas.vondra@2ndquadrant.com> wrote:\n>\n>> On Tue, Jul 30, 2019 at 03:50:32PM +0800, Richard Guo wrote:\n>> >On Wed, Jun 12, 2019 at 10:58 AM Richard Guo <riguo@pivotal.io> wrote:\n>> >\n>> >> Hi all,\n>> >>\n>> >> Paul and I have been hacking recently to implement parallel grouping\n>> >> sets, and here we have two implementations.\n>> >>\n>> >> Implementation 1\n>> >> ================\n>> >>\n>> >> Attached is the patch and also there is a github branch [1] for this\n>> >> work.\n>> >>\n>> >\n>> >Rebased with the latest master.\n>> >\n>>\n>> Hi Richard,\n>>\n>> thanks for the rebased patch. I think the patch is mostly fine (at least I\n>> don't see any serious issues). A couple minor comments:\n>>\n>\n> Hi Tomas,\n>\n> Thank you for reviewing this patch.\n>\n>\n>>\n>> 1) I think get_number_of_groups() would deserve a short explanation why\n>> it's OK to handle (non-partial) grouping sets and regular GROUP BY in the\n>> same branch. 
Before these cases were clearly separated, now it seems a bit\n>> mixed up and it may not be immediately obvious why it's OK.\n>>\n>\n> Added a short comment in get_number_of_groups() explaining the behavior\n> when doing partial aggregation for grouping sets.\n>\n>\n>>\n>> 2) There are new regression tests, but they are not added to any schedule\n>> (parallel or serial), and so are not executed as part of \"make check\". I\n>> suppose this is a mistake.\n>>\n>\n> Yes, thanks. Added the new regression test in parallel_schedule and\n> serial_schedule.\n>\n>\n>>\n>> 3) The regression tests do check plan and results like this:\n>>\n>> EXPLAIN (COSTS OFF, VERBOSE) SELECT ...;\n>> SELECT ... ORDER BY 1, 2, 3;\n>>\n>> which however means that the query might easily use a different plan than\n>> what's verified in the eplain (thanks to the additional ORDER BY clause).\n>> So I think this should explain and execute the same query.\n>>\n>> (In this case the plans seems to be the same, but that may easily change\n>> in the future, and we could miss it here, failing to verify the results.)\n>>\n>\n> Thank you for pointing this out. Fixed it in V4 patch.\n>\n>\n>>\n>> 4) It might be a good idea to check the negative case too, i.e. a query on\n>> data set that we should not parallelize (because the number of partial\n>> groups would be too high).\n>>\n>\n> Yes, agree. Added a negative case.\n>\n>\n>>\n>>\n>> Do you have any plans to hack on the second approach too? AFAICS those two\n>> approaches are complementary (address different data sets / queries), and\n>> it would be nice to have both. One of the things I've been wondering is if\n>> we need to invent gset_id as a new concept, or if we could simply use the\n>> existing GROUPING() function - that uniquely identifies the grouping set.\n>>\n>>\n> Yes, I'm planning to hack on the second approach in short future. I'm\n> also reconsidering the gset_id stuff since it brings a lot of complexity\n> for the second approach. 
I agree with you that we can try GROUPING()\n> function to see if it can replace gset_id.\n>\n> Thanks\n> Richard\n>",
"msg_date": "Mon, 30 Sep 2019 17:41:23 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi Hackers,\n\nRichard pointed out that he got incorrect results with the patch I\nattached; there were bugs somewhere.\nI have fixed them now and attached the newest version, please refer to [1] for\nthe fix.\n\nThanks,\nPengzhou\n\nReferences:\n[1] https://github.com/greenplum-db/postgres/tree/parallel_groupingsets_3\n\nOn Mon, Sep 30, 2019 at 5:41 PM Pengzhou Tang <ptang@pivotal.io> wrote:\n\n> Hi Richard & Tomas:\n>\n> I followed the idea of the second approach to add a gset_id in the\n> targetlist of the first stage of\n> grouping sets and uses it to combine the aggregate in final stage. gset_id\n> stuff is still kept\n> because of GROUPING() cannot uniquely identify a grouping set, grouping\n> sets may contain\n> duplicated set, eg: group by grouping sets((c1, c2), (c1,c2)).\n>\n> There are some differences to implement the second approach comparing to\n> the original idea from\n> Richard, gset_id is not used as additional group key in the final stage,\n> instead, we use it to\n> dispatch the input tuple to the specified grouping set directly and then\n> do the aggregate.\n> One advantage of this is that we can handle multiple rollups with better\n> performance without APPEND node.\n>\n> the plan now looks like:\n>\n> gpadmin=# explain select c1, c2 from gstest group by grouping\n> sets(rollup(c1, c2), rollup(c3));\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------\n> Finalize MixedAggregate (cost=1000.00..73108.57 rows=8842 width=12)\n> Dispatched by: (GROUPINGSETID())\n> Hash Key: c1, c2\n> Hash Key: c1\n> Hash Key: c3\n> Group Key: ()\n> Group Key: ()\n> -> Gather (cost=1000.00..71551.48 rows=17684 width=16)\n> Workers Planned: 2\n> -> Partial MixedAggregate (cost=0.00..68783.08 rows=8842\n> width=16)\n> Hash Key: c1, c2\n> Hash Key: c1\n> Hash Key: c3\n> Group Key: ()\n> Group Key: ()\n> -> Parallel Seq Scan on 
gstest (cost=0.00..47861.33\n> rows=2083333 width=12)\n> (16 rows)\n>\n> gpadmin=# set enable_hashagg to off;\n> gpadmin=# explain select c1, c2 from gstest group by grouping\n> sets(rollup(c1, c2), rollup(c3));\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------\n> Finalize GroupAggregate (cost=657730.66..663207.45 rows=8842 width=12)\n> Dispatched by: (GROUPINGSETID())\n> Group Key: c1, c2\n> Sort Key: c1\n> Group Key: c1\n> Group Key: ()\n> Group Key: ()\n> Sort Key: c3\n> Group Key: c3\n> -> Sort (cost=657730.66..657774.87 rows=17684 width=16)\n> Sort Key: c1, c2\n> -> Gather (cost=338722.94..656483.04 rows=17684 width=16)\n> Workers Planned: 2\n> -> Partial GroupAggregate (cost=337722.94..653714.64\n> rows=8842 width=16)\n> Group Key: c1, c2\n> Group Key: c1\n> Group Key: ()\n> Group Key: ()\n> Sort Key: c3\n> Group Key: c3\n> -> Sort (cost=337722.94..342931.28 rows=2083333\n> width=12)\n> Sort Key: c1, c2\n> -> Parallel Seq Scan on gstest\n> (cost=0.00..47861.33 rows=2083333 width=12)\n>\n> References:\n> [1] https://github.com/greenplum-db/postgres/tree/parallel_groupingsets\n> <https://github.com/greenplum-db/postgres/tree/parallel_groupingsets_3>_3\n>\n> On Wed, Jul 31, 2019 at 4:07 PM Richard Guo <riguo@pivotal.io> wrote:\n>\n>> On Tue, Jul 30, 2019 at 11:05 PM Tomas Vondra <\n>> tomas.vondra@2ndquadrant.com> wrote:\n>>\n>>> On Tue, Jul 30, 2019 at 03:50:32PM +0800, Richard Guo wrote:\n>>> >On Wed, Jun 12, 2019 at 10:58 AM Richard Guo <riguo@pivotal.io> wrote:\n>>> >\n>>> >> Hi all,\n>>> >>\n>>> >> Paul and I have been hacking recently to implement parallel grouping\n>>> >> sets, and here we have two implementations.\n>>> >>\n>>> >> Implementation 1\n>>> >> ================\n>>> >>\n>>> >> Attached is the patch and also there is a github branch [1] for this\n>>> >> work.\n>>> >>\n>>> >\n>>> >Rebased with the latest master.\n>>> >\n>>>\n>>> Hi Richard,\n>>>\n>>> thanks for the rebased 
patch. I think the patch is mostly fine (at least\n>>> I\n>>> don't see any serious issues). A couple minor comments:\n>>>\n>>\n>> Hi Tomas,\n>>\n>> Thank you for reviewing this patch.\n>>\n>>\n>>>\n>>> 1) I think get_number_of_groups() would deserve a short explanation why\n>>> it's OK to handle (non-partial) grouping sets and regular GROUP BY in the\n>>> same branch. Before these cases were clearly separated, now it seems a\n>>> bit\n>>> mixed up and it may not be immediately obvious why it's OK.\n>>>\n>>\n>> Added a short comment in get_number_of_groups() explaining the behavior\n>> when doing partial aggregation for grouping sets.\n>>\n>>\n>>>\n>>> 2) There are new regression tests, but they are not added to any schedule\n>>> (parallel or serial), and so are not executed as part of \"make check\". I\n>>> suppose this is a mistake.\n>>>\n>>\n>> Yes, thanks. Added the new regression test in parallel_schedule and\n>> serial_schedule.\n>>\n>>\n>>>\n>>> 3) The regression tests do check plan and results like this:\n>>>\n>>> EXPLAIN (COSTS OFF, VERBOSE) SELECT ...;\n>>> SELECT ... ORDER BY 1, 2, 3;\n>>>\n>>> which however means that the query might easily use a different plan than\n>>> what's verified in the eplain (thanks to the additional ORDER BY clause).\n>>> So I think this should explain and execute the same query.\n>>>\n>>> (In this case the plans seems to be the same, but that may easily change\n>>> in the future, and we could miss it here, failing to verify the results.)\n>>>\n>>\n>> Thank you for pointing this out. Fixed it in V4 patch.\n>>\n>>\n>>>\n>>> 4) It might be a good idea to check the negative case too, i.e. a query\n>>> on\n>>> data set that we should not parallelize (because the number of partial\n>>> groups would be too high).\n>>>\n>>\n>> Yes, agree. Added a negative case.\n>>\n>>\n>>>\n>>>\n>>> Do you have any plans to hack on the second approach too? 
AFAICS those\n>>> two\n>>> approaches are complementary (address different data sets / queries), and\n>>> it would be nice to have both. One of the things I've been wondering is\n>>> if\n>>> we need to invent gset_id as a new concept, or if we could simply use the\n>>> existing GROUPING() function - that uniquely identifies the grouping set.\n>>>\n>>>\n>> Yes, I'm planning to hack on the second approach in short future. I'm\n>> also reconsidering the gset_id stuff since it brings a lot of complexity\n>> for the second approach. I agree with you that we can try GROUPING()\n>> function to see if it can replace gset_id.\n>>\n>> Thanks\n>> Richard\n>>\n>",
"msg_date": "Thu, 28 Nov 2019 19:07:22 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Thu, Nov 28, 2019 at 07:07:22PM +0800, Pengzhou Tang wrote:\n> Richard pointed out that he get incorrect results with the patch I\n> attached, there are bugs somewhere,\n> I fixed them now and attached the newest version, please refer to [1] for\n> the fix.\n\nMr Robot is reporting that the latest patch fails to build at least on\nWindows. Could you please send a rebase? For now, I have moved the\npatch to the next CF, waiting on author.\n--\nMichael",
"msg_date": "Sun, 1 Dec 2019 11:02:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Sun, Dec 1, 2019 at 10:03 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Nov 28, 2019 at 07:07:22PM +0800, Pengzhou Tang wrote:\n> > Richard pointed out that he get incorrect results with the patch I\n> > attached, there are bugs somewhere,\n> > I fixed them now and attached the newest version, please refer to [1] for\n> > the fix.\n>\n> Mr Robot is reporting that the latest patch fails to build at least on\n> Windows. Could you please send a rebase? I have moved for now the\n> patch to next CF, waiting on author.\n\n\nThanks for reporting this issue. Here is the rebase.\n\nThanks\nRichard",
"msg_date": "Wed, 8 Jan 2020 15:24:21 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "I realized that there are two patches in this thread that implement\ntwo different approaches, which causes confusion. So I\ndecided to update this thread with only one patch, i.e. the patch for\n'Implementation 1' as described in the first email, and then move the\nother patch to a separate thread.\n\nWith this idea, here is the patch for 'Implementation 1' that is rebased\nwith the latest master.\n\nThanks\nRichard\n\nOn Wed, Jan 8, 2020 at 3:24 PM Richard Guo <riguo@pivotal.io> wrote:\n\n>\n> On Sun, Dec 1, 2019 at 10:03 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>> On Thu, Nov 28, 2019 at 07:07:22PM +0800, Pengzhou Tang wrote:\n>> > Richard pointed out that he get incorrect results with the patch I\n>> > attached, there are bugs somewhere,\n>> > I fixed them now and attached the newest version, please refer to [1]\n>> for\n>> > the fix.\n>>\n>> Mr Robot is reporting that the latest patch fails to build at least on\n>> Windows. Could you please send a rebase? I have moved for now the\n>> patch to next CF, waiting on author.\n>\n>\n> Thanks for reporting this issue. Here is the rebase.\n>\n> Thanks\n> Richard\n>",
"msg_date": "Sun, 19 Jan 2020 16:52:40 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 2:23 PM Richard Guo <riguo@pivotal.io> wrote:\n>\n> I realized that there are two patches in this thread that are\n> implemented according to different methods, which causes confusion.\n>\n\nThe two ideas seem to be different. Is the second approach [1]\ninferior in any case as compared to the first approach? Can we keep\nboth approaches for parallel grouping sets, and if so, how? If not, then\nwon't the code from the first approach be useless once we commit the second\napproach?\n\n\n[1] - https://www.postgresql.org/message-id/CAN_9JTwtTTnxhbr5AHuqVcriz3HxvPpx1JWE--DCSdJYuHrLtA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Jan 2020 16:17:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Thu, Jan 23, 2020 at 2:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 19, 2020 at 2:23 PM Richard Guo <riguo@pivotal.io> wrote:\n> >\n> > I realized that there are two patches in this thread that are\n> > implemented according to different methods, which causes confusion.\n> >\n>\n> Both the idea seems to be different. Is the second approach [1]\n> inferior for any case as compared to the first approach? Can we keep\n> both approaches for parallel grouping sets, if so how? If not, then\n> won't the code by the first approach be useless once we commit second\n> approach?\n>\n>\n> [1] - https://www.postgresql.org/message-id/CAN_9JTwtTTnxhbr5AHuqVcriz3HxvPpx1JWE--DCSdJYuHrLtA%40mail.gmail.com\n>\n\nI glanced over both patches. Just the opposite, I have a hunch that v3\nis always better than v5. Here's my 6-minute understanding of both.\n\nv5 (the one with a simple partial aggregate) works by pushing a little\nbit of partial aggregate onto workers, and perform grouping aggregate\nabove gather. This has two interesting outcomes: we can execute\nunmodified partial aggregate on the workers, and execute almost\nunmodified rollup aggreegate once the trans values are gathered. A\nparallel plan for a query like\n\nSELECT count(*) FROM foo GROUP BY GROUPING SETS (a), (b), (c), ();\n\ncan be\n\nFinalize GroupAggregate\n Output: count(*)\n Group Key: a\n Group Key: b\n Group Key: c\n Group Key: ()\n Gather Merge\n Partial GroupAggregate\n Output: PARTIAL count(*)\n Group Key: a, b, c\n Sort\n Sort Key: a, b, c\n Parallel Seq Scan on foo\n\n\nv3 (\"the one with grouping set id\") really turns the plan from a tree to\na multiplexed pipe: we can execute grouping aggregate on the workers,\nbut only partially. When we emit the trans values, also tag the tuple\nwith a group id. After gather, finalize the aggregates with a modified\ngrouping aggregate. 
Unlike a non-split grouping aggregate, the finalize\ngrouping aggregate does not \"flow\" the results from one rollup to the\nnext one. Instead, each group only advances on partial inputs tagged for\nthe group.\n\nFinalize HashAggregate\n Output: count(*)\n Dispatched by: (GroupingSetID())\n Group Key: a\n Group Key: b\n Group Key: c\n Gather\n Partial GroupAggregate\n Output: PARTIAL count(*), GroupingSetID()\n Group Key: a\n Sort Key: b\n Group Key: b\n Sort Key: c\n Group Key: c\n Sort\n Sort Key: a\n Parallel Seq Scan on foo\n\nNote that for the first approach to be viable, the partial aggregate\n*has to* use a group key that's the union of all grouping sets. In cases\nwhere individual columns have a low cardinality but joint cardinality is\nhigh (say columns a, b, c each has 16 distinct values, but they are\nindependent, so there are 4096 distinct values on (a,b,c)), this results\nin fairly high traffic through the shm tuple queue.\n\nCheers,\nJesse\n\n\n",
"msg_date": "Fri, 24 Jan 2020 14:51:56 -0800",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Sat, Jan 25, 2020 at 4:22 AM Jesse Zhang <sbjesse@gmail.com> wrote:\n>\n> On Thu, Jan 23, 2020 at 2:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Jan 19, 2020 at 2:23 PM Richard Guo <riguo@pivotal.io> wrote:\n> > >\n> > > I realized that there are two patches in this thread that are\n> > > implemented according to different methods, which causes confusion.\n> > >\n> >\n> > Both the idea seems to be different. Is the second approach [1]\n> > inferior for any case as compared to the first approach? Can we keep\n> > both approaches for parallel grouping sets, if so how? If not, then\n> > won't the code by the first approach be useless once we commit second\n> > approach?\n> >\n> >\n> > [1] - https://www.postgresql.org/message-id/CAN_9JTwtTTnxhbr5AHuqVcriz3HxvPpx1JWE--DCSdJYuHrLtA%40mail.gmail.com\n> >\n>\n> I glanced over both patches. Just the opposite, I have a hunch that v3\n> is always better than v5.\n>\n\nThis is what I also understood after reading this thread. So, my\nquestion is why not just review v3 and commit something on those lines\neven though it would take a bit more time. It is possible that if we\ndecide to go with v5, we can make it happen earlier, but later when we\ntry to get v3, the code committed as part of v5 might not be of any\nuse or if it is useful, then in which cases?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 Jan 2020 16:01:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi Jesse,\n\nThanks for reviewing these two patches.\n\nOn Sat, Jan 25, 2020 at 6:52 AM Jesse Zhang <sbjesse@gmail.com> wrote:\n\n>\n> I glanced over both patches. Just the opposite, I have a hunch that v3\n> is always better than v5. Here's my 6-minute understanding of both.\n>\n> v5 (the one with a simple partial aggregate) works by pushing a little\n> bit of partial aggregate onto workers, and perform grouping aggregate\n> above gather. This has two interesting outcomes: we can execute\n> unmodified partial aggregate on the workers, and execute almost\n> unmodified rollup aggreegate once the trans values are gathered. A\n> parallel plan for a query like\n>\n> SELECT count(*) FROM foo GROUP BY GROUPING SETS (a), (b), (c), ();\n>\n> can be\n>\n> Finalize GroupAggregate\n> Output: count(*)\n> Group Key: a\n> Group Key: b\n> Group Key: c\n> Group Key: ()\n> Gather Merge\n> Partial GroupAggregate\n> Output: PARTIAL count(*)\n> Group Key: a, b, c\n> Sort\n> Sort Key: a, b, c\n> Parallel Seq Scan on foo\n>\n\nYes, this is the idea of v5 patch.\n\n\n\n> v3 (\"the one with grouping set id\") really turns the plan from a tree to\n> a multiplexed pipe: we can execute grouping aggregate on the workers,\n> but only partially. When we emit the trans values, also tag the tuple\n> with a group id. After gather, finalize the aggregates with a modified\n> grouping aggregate. Unlike a non-split grouping aggregate, the finalize\n> grouping aggregate does not \"flow\" the results from one rollup to the\n> next one. 
Instead, each group only advances on partial inputs tagged for\n> the group.\n>\n> Finalize HashAggregate\n> Output: count(*)\n> Dispatched by: (GroupingSetID())\n> Group Key: a\n> Group Key: b\n> Group Key: c\n> Gather\n> Partial GroupAggregate\n> Output: PARTIAL count(*), GroupingSetID()\n> Group Key: a\n> Sort Key: b\n> Group Key: b\n> Sort Key: c\n> Group Key: c\n> Sort\n> Sort Key: a\n> Parallel Seq Scan on foo\n>\n\nYes, this is what v3 patch does.\n\nWe (Pengzhou and I) had an offline discussion on this plan and we have\nsome other idea. Since we have tagged 'GroupingSetId' for each tuple\nproduced by partial aggregate, why not then perform a normal grouping\nsets aggregation in the final phase, with the 'GroupingSetId' included\nin the group keys? The plan looks like:\n\n# explain (costs off, verbose)\nselect c1, c2, c3, avg(c3) from gstest group by grouping\nsets((c1,c2),(c1),(c2,c3));\n QUERY PLAN\n------------------------------------------------------------------\n Finalize GroupAggregate\n Output: c1, c2, c3, avg(c3)\n Group Key: (gset_id), gstest.c1, gstest.c2, gstest.c3\n -> Sort\n Output: c1, c2, c3, (gset_id), (PARTIAL avg(c3))\n Sort Key: (gset_id), gstest.c1, gstest.c2, gstest.c3\n -> Gather\n Output: c1, c2, c3, (gset_id), (PARTIAL avg(c3))\n Workers Planned: 4\n -> Partial HashAggregate\n Output: c1, c2, c3, gset_id, PARTIAL avg(c3)\n Hash Key: gstest.c1, gstest.c2\n Hash Key: gstest.c1\n Hash Key: gstest.c2, gstest.c3\n -> Parallel Seq Scan on public.gstest\n Output: c1, c2, c3\n\nThis plan should be able to give the correct results. We are still\nthinking if it is a better plan than the 'multiplexed pipe' plan as in\nv3. Inputs of thoughts here would be appreciated.\n\n\n> Note that for the first approach to be viable, the partial aggregate\n> *has to* use a group key that's the union of all grouping sets. 
In cases\n> where individual columns have a low cardinality but joint cardinality is\n> high (say columns a, b, c each has 16 distinct values, but they are\n> independent, so there are 4096 distinct values on (a,b,c)), this results\n> in fairly high traffic through the shm tuple queue.\n>\n\nYes, you are right. This is the case mentioned by David earlier in [1].\nIn this case, ideally the parallel plan would fail when competing with\nnon-parallel plan in add_path() and so not be chosen.\n\n[1] -\nhttps://www.postgresql.org/message-id/CAKJS1f8Q9muALhkapbnO3bPUgAmZkWq9tM_crk8o9=JiiOPWsg@mail.gmail.com\n\nThanks\nRichard",
"msg_date": "Mon, 3 Feb 2020 16:07:33 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi Amit,\n\nThanks for reviewing these two patches.\n\nOn Sat, Jan 25, 2020 at 6:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> This is what I also understood after reading this thread. So, my\n> question is why not just review v3 and commit something on those lines\n> even though it would take a bit more time. It is possible that if we\n> decide to go with v5, we can make it happen earlier, but later when we\n> try to get v3, the code committed as part of v5 might not be of any\n> use or if it is useful, then in which cases?\n>\n\nYes, approach #2 (v3) would be generally better than approach #1 (v5) in\nperformance. I started with approach #1 because it is much easier.\n\nIf we decide to go with approach #2, I think we can now concentrate on\nv3 patch.\n\nFor v3 patch, we have some other idea, which is to perform a normal\ngrouping sets aggregation in the final phase, with 'GroupingSetId'\nincluded in the group keys (as described in the previous email). With\nthis idea, we can avoid a lot of hacky codes in current v3 patch.\n\nThanks\nRichard\n\nHi Amit,Thanks for reviewing these two patches.On Sat, Jan 25, 2020 at 6:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\nThis is what I also understood after reading this thread. So, my\nquestion is why not just review v3 and commit something on those lines\neven though it would take a bit more time. It is possible that if we\ndecide to go with v5, we can make it happen earlier, but later when we\ntry to get v3, the code committed as part of v5 might not be of any\nuse or if it is useful, then in which cases?Yes, approach #2 (v3) would be generally better than approach #1 (v5) inperformance. 
I started with approach #1 because it is much easier.If we decide to go with approach #2, I think we can now concentrate onv3 patch.For v3 patch, we have some other idea, which is to perform a normalgrouping sets aggregation in the final phase, with 'GroupingSetId'included in the group keys (as described in the previous email). Withthis idea, we can avoid a lot of hacky codes in current v3 patch.ThanksRichard",
"msg_date": "Mon, 3 Feb 2020 17:27:22 +0800",
"msg_from": "Richard Guo <riguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 12:07 AM Richard Guo <riguo@pivotal.io> wrote:\n>\n> Hi Jesse,\n>\n> Thanks for reviewing these two patches.\nI enjoyed it!\n\n>\n> On Sat, Jan 25, 2020 at 6:52 AM Jesse Zhang <sbjesse@gmail.com> wrote:\n>>\n>>\n>> I glanced over both patches. Just the opposite, I have a hunch that v3\n>> is always better than v5. Here's my 6-minute understanding of both.\n>>\n>> v3 (\"the one with grouping set id\") really turns the plan from a tree to\n>> a multiplexed pipe: we can execute grouping aggregate on the workers,\n>> but only partially. When we emit the trans values, also tag the tuple\n>> with a group id. After gather, finalize the aggregates with a modified\n>> grouping aggregate. Unlike a non-split grouping aggregate, the finalize\n>> grouping aggregate does not \"flow\" the results from one rollup to the\n>> next one. Instead, each group only advances on partial inputs tagged for\n>> the group.\n>>\n>\n> Yes, this is what v3 patch does.\n>\n> We (Pengzhou and I) had an offline discussion on this plan and we have\n> some other idea. Since we have tagged 'GroupingSetId' for each tuple\n> produced by partial aggregate, why not then perform a normal grouping\n> sets aggregation in the final phase, with the 'GroupingSetId' included\n> in the group keys? 
The plan looks like:\n>\n> # explain (costs off, verbose)\n> select c1, c2, c3, avg(c3) from gstest group by grouping sets((c1,c2),(c1),(c2,c3));\n> QUERY PLAN\n> ------------------------------------------------------------------\n> Finalize GroupAggregate\n> Output: c1, c2, c3, avg(c3)\n> Group Key: (gset_id), gstest.c1, gstest.c2, gstest.c3\n> -> Sort\n> Output: c1, c2, c3, (gset_id), (PARTIAL avg(c3))\n> Sort Key: (gset_id), gstest.c1, gstest.c2, gstest.c3\n> -> Gather\n> Output: c1, c2, c3, (gset_id), (PARTIAL avg(c3))\n> Workers Planned: 4\n> -> Partial HashAggregate\n> Output: c1, c2, c3, gset_id, PARTIAL avg(c3)\n> Hash Key: gstest.c1, gstest.c2\n> Hash Key: gstest.c1\n> Hash Key: gstest.c2, gstest.c3\n> -> Parallel Seq Scan on public.gstest\n> Output: c1, c2, c3\n>\n> This plan should be able to give the correct results. We are still\n> thinking if it is a better plan than the 'multiplexed pipe' plan as in\n> v3. Inputs of thoughts here would be appreciated.\n\nHa, I believe you meant to say a \"normal aggregate\", because what's\nperformed above gather is no longer \"grouping sets\", right?\n\nThe group key idea is clever in that it helps \"discriminate\" tuples by\ntheir grouping set id. I haven't completely thought this through, but my\nhunch is that this leaves some money on the table, for example, won't it\nalso lead to more expensive (and unnecessary) sorting and hashing? The\ngroupings with a few partials are now sharing the same tuplesort with\nthe groupings with a lot of groups even though we only want to tell\ngrouping 1 *apart from* grouping 10, not necessarily that grouping 1\nneeds to come before grouping 10. That's why I like the multiplexed pipe\n/ \"dispatched by grouping set id\" idea: we only pay for sorting (or\nhashing) within each grouping. That said, I'm open to the criticism that\nkeeping multiple tuplesort and agg hash tables running is expensive in\nitself, memory-wise ...\n\nCheers,\nJesse\n\n\n",
"msg_date": "Mon, 3 Feb 2020 11:53:56 -0800",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Thanks to reviewing those patches.\n\nHa, I believe you meant to say a \"normal aggregate\", because what's\n> performed above gather is no longer \"grouping sets\", right?\n>\n> The group key idea is clever in that it helps \"discriminate\" tuples by\n> their grouping set id. I haven't completely thought this through, but my\n> hunch is that this leaves some money on the table, for example, won't it\n> also lead to more expensive (and unnecessary) sorting and hashing? The\n> groupings with a few partials are now sharing the same tuplesort with\n> the groupings with a lot of groups even though we only want to tell\n> grouping 1 *apart from* grouping 10, not neccessarily that grouping 1\n> needs to come before grouping 10. That's why I like the multiplexed pipe\n> / \"dispatched by grouping set id\" idea: we only pay for sorting (or\n> hashing) within each grouping. That said, I'm open to the criticism that\n> keeping multiple tuplesort and agg hash tabes running is expensive in\n> itself, memory-wise ...\n>\n> Cheers,\n> Jesse\n\n\nThat's something we need to testing, thanks. Meanwhile, for the approach to\nuse \"normal aggregate\" with grouping set id, one concern is that it cannot\nuse\nMixed Hashed which means if a grouping sets contain both non-hashable or\nnon-sortable sets, it will fallback to one-phase aggregate.\n\nThanks to reviewing those patches.Ha, I believe you meant to say a \"normal aggregate\", because what's\nperformed above gather is no longer \"grouping sets\", right?\n\nThe group key idea is clever in that it helps \"discriminate\" tuples by\ntheir grouping set id. I haven't completely thought this through, but my\nhunch is that this leaves some money on the table, for example, won't it\nalso lead to more expensive (and unnecessary) sorting and hashing? 
The\ngroupings with a few partials are now sharing the same tuplesort with\nthe groupings with a lot of groups even though we only want to tell\ngrouping 1 *apart from* grouping 10, not neccessarily that grouping 1\nneeds to come before grouping 10. That's why I like the multiplexed pipe\n/ \"dispatched by grouping set id\" idea: we only pay for sorting (or\nhashing) within each grouping. That said, I'm open to the criticism that\nkeeping multiple tuplesort and agg hash tabes running is expensive in\nitself, memory-wise ...\n\nCheers,\nJesseThat's something we need to testing, thanks. Meanwhile, for the approach touse \"normal aggregate\" with grouping set id, one concern is that it cannot useMixed Hashed which means if a grouping sets contain both non-hashable ornon-sortable sets, it will fallback to one-phase aggregate.",
"msg_date": "Mon, 10 Feb 2020 11:37:19 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "To summarize the current state of parallel grouping sets, we now have\ntwo available implementations for it.\n\n1) Each worker performs an aggregation step, producing a partial result\nfor each group of which that process is aware. Then the partial results\nare gathered to the leader, which then performs a grouping sets\naggregation, as in patch [1].\n\nThis implementation is not very efficient sometimes, because the group\nkey for Partial Aggregate has to be all the columns involved in the\ngrouping sets.\n\n2) Each worker performs a grouping sets aggregation on its partial\ndata, and tags 'GroupingSetId' for each tuple produced by partial\naggregate. Then the partial results are gathered to the leader, and the\nleader performs a modified grouping aggregate, which dispatches the\npartial results into different pipe according to 'GroupingSetId', as in\npatch [2], or instead as another method, the leader performs a normal\naggregation, with 'GroupingSetId' included in the group keys, as\ndiscussed in [3].\n\nThe second implementation would be generally better than the first one\nin performance, and we have decided to concentrate on it.\n\n[1]\nhttps://www.postgresql.org/message-id/CAN_9JTx3NM12ZDzEYcOVLFiCBvwMHyM0gENvtTpKBoOOgcs=kw@mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAN_9JTwtTTnxhbr5AHuqVcriz3HxvPpx1JWE--DCSdJYuHrLtA@mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/CAN_9JTwtzttEmdXvMbJqXt=51kXiBTCKEPKq6kk2PZ6Xz6m5ig@mail.gmail.com\n\nThanks\nRichard\n\n>\n\nTo summarize the current state of parallel grouping sets, we now havetwo available implementations for it.1) Each worker performs an aggregation step, producing a partial resultfor each group of which that process is aware. 
Then the partial resultsare gathered to the leader, which then performs a grouping setsaggregation, as in patch [1].This implementation is not very efficient sometimes, because the groupkey for Partial Aggregate has to be all the columns involved in thegrouping sets.2) Each worker performs a grouping sets aggregation on its partialdata, and tags 'GroupingSetId' for each tuple produced by partialaggregate. Then the partial results are gathered to the leader, and theleader performs a modified grouping aggregate, which dispatches thepartial results into different pipe according to 'GroupingSetId', as inpatch [2], or instead as another method, the leader performs a normalaggregation, with 'GroupingSetId' included in the group keys, asdiscussed in [3].The second implementation would be generally better than the first onein performance, and we have decided to concentrate on it.[1] https://www.postgresql.org/message-id/CAN_9JTx3NM12ZDzEYcOVLFiCBvwMHyM0gENvtTpKBoOOgcs=kw@mail.gmail.com[2] https://www.postgresql.org/message-id/CAN_9JTwtTTnxhbr5AHuqVcriz3HxvPpx1JWE--DCSdJYuHrLtA@mail.gmail.com[3] https://www.postgresql.org/message-id/CAN_9JTwtzttEmdXvMbJqXt=51kXiBTCKEPKq6kk2PZ6Xz6m5ig@mail.gmail.comThanksRichard",
"msg_date": "Mon, 24 Feb 2020 18:27:07 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi there,\n\nWe want to update our work on the parallel groupingsets, the attached\npatchset implements parallel grouping sets with the strategy proposed in\nhttps://www.postgresql.org/message-id/CAG4reARMcyn+X8gGRQEZyt32NoHc9MfznyPsg_C_V9G+dnQ15Q@mail.gmail.com\n\nIt contains some refinement of our code and adds LLVM support. It also\ncontains a few patches refactoring the grouping sets code to make the\nparallel grouping sets implementation cleaner.\n\nLike simple parallel aggregate, we separate the process of grouping sets\ninto two stages:\n\n*The partial stage: *\nthe partial stage is much the same as the current grouping sets\nimplementation, the differences are:\n- In the partial stage, like in regular parallel aggregation, only partial\n aggregate results (e.g. transvalues) are produced.\n- The output of the partial stage includes a grouping set ID to allow for\n disambiguation during the final stage\n\nThe optimizations of the existing grouping sets implementation are\npreserved during the partial stage, like:\n- Grouping sets that can be combined in one rollup are still grouped\n together (for group agg).\n- Hashaggs can be performed concurrently with the first group agg.\n- All hash transitions can be done in one expression state.\n\n*The final stage*:\nIn the final stage, the partial aggregate results are combined according to\nthe grouping set id. None of the optimizations of the partial stage can be\nleveraged in the final stage. So all rollups are extracted and each rollup\ncontains only one grouping set, each aggregate phase processes a single\ngrouping set. 
In this stage, tuples are multiplexed into the different\nphases\naccording to the grouping set id before we actually aggregate them.\n\nAn alternative approach to the final stage implementation that we considered\nwas using a single AGG with grouping clause: gsetid + all grouping columns.\nIn the end, we decided against it because it doesn't support mixed\naggregation:\nfirstly, once the grouping columns are a mix of unsortable and unhashable\ncolumns, it cannot produce a path in the final stage; secondly, mixed\naggregation\nis the cheapest path in some cases and this way cannot support it.\nMeanwhile,\nif the union of all the grouping columns is large, this parallel implementation\nwill\nincur undue costs.\n\n\nThe patches included in this patchset are as follows:\n\n0001-All-grouping-sets-do-their-own-sorting.patch\n\nThis is a refactoring patch for the existing code. It moves the phase 0 SORT\ninto the AGG instead of assuming that the input is already sorted.\n\nPostgres used to add a SORT path explicitly beneath the AGG for sort group\naggregate. Grouping sets paths also add a SORT path for the first sort\naggregate phase, but the following sort aggregate phases do their own sorting\nusing a tuplesort. This commit unifies the way grouping sets paths do\nsorting:\nall sort aggregate phases now do their own sorting using tuplesort.\n\nWe did this refactoring to support the final stage of parallel grouping\nsets.\nAdding a SORT path underneath the AGG in the final stage is wasteful. 
With\nthis patch, all non-hashed aggregate phases can do their own sorting after\nthe tuples are redirected.\n\nUnpatched:\ntpch=# explain (costs off) select count(*) from customer group by grouping\nsets (c_custkey, c_name);\n QUERY PLAN\n----------------------------------\n GroupAggregate\n Group Key: c_custkey\n Sort Key: c_name\n Group Key: c_name\n -> Sort\n Sort Key: c_custkey\n -> Seq Scan on customer\n\nPatched:\ntpch=# explain (costs off) select count(*) from customer group by grouping\nsets (c_custkey, c_name);\n QUERY PLAN\n----------------------------\n GroupAggregate\n Sort Key: c_custkey\n Group Key: c_custkey\n Sort Key: c_name\n Group Key: c_name\n -> Seq Scan on customer\n\n\n0002-fix-a-numtrans-bug.patch\n\nBugfix for the additional size of the hash table for hash aggregate;\nthe additional\nsize is always zero.\nhttps://www.postgresql.org/message-id/CAG4reATfHUFVek4Hj6t2oDMqW%3DK02JBWLbURNSpftPhL5XrNRQ%40mail.gmail.com\n\n0003-Reorganise-the-aggregate-phases.patch\n\nPlanner used to organize the grouping sets in [HASHED]->[SORTED] order.\nHASHED aggregates were always located before SORTED aggregate. And\nExecInitAgg() organized the aggregate phases in [HASHED]->[SORTED] order.\nAll HASHED grouping sets are squeezed into phase 0 when executing the\nAGG node. For AGG_HASHED or AGG_MIXED strategies, however, the executor\nwill start from executing phase 1-3 assuming they are all groupaggs and then\nreturn to phase 0 to execute hashaggs if it is AGG_MIXED.\n\nWhen adding support for parallel grouping sets, this was a big barrier.\nFirstly, we needed complicated logic to locate the first sort rollup/phase\nand\nhandle the special order for a different strategy in many places.\n\nSecondly, squeezing all hashed grouping sets to phase 0 doesn't work for the\nfinal stage. We can't put all transition functions into one expression\nstate in the\nfinal stage. 
ExecEvalExpr() is optimized to evaluate all the hashed grouping\nsets for the same tuple, however, each input to the final stage is a trans\nvalue,\nso you inherently should not evaluate more than one grouping set for the\nsame input.\n\nThis commit organizes the grouping sets in a more natural way:\n[SORTED]->[HASHED].\n\nThe executor now starts execution from phase 0 for all strategies, the\nHASHED\nsets are no longer squeezed into a single phase. Instead, a HASHED set has\nits\nown phase and we use other ways to put all hash transitions in one\nexpression\nstate for the partial stage.\n\nThis commit also moves 'sort_in' from the AggState to the AggStatePerPhase*\nstructure, this helps to handle more complicated cases necessitated by the\nintroduction of parallel grouping sets. For example, we might need to add a\ntuplestore 'store_in' to store partial aggregates results for PLAIN sets\nthen.\n\nIt also gives us a chance to keep the first TupleSortState, so we do not do\na resort\nwhen rescanning.\n\n0004-Parallel-grouping-sets.patch\n\nThis is the main logic. Patch 0001 and 0003 allow it to be pretty simple.\n\nHere is an example plan with the patch applied:\ntpch=# explain (costs off) select sum(l_quantity) as sum_qty, count(*) as\ncount_order from lineitem group by grouping sets((l_returnflag,\nl_linestatus), (), l_suppkey);\n QUERY PLAN\n----------------------------------------------------\n Finalize MixedAggregate\n Filtered by: (GROUPINGSETID())\n Sort Key: l_suppkey\n Group Key: l_suppkey\n Group Key: ()\n Hash Key: l_returnflag, l_linestatus\n -> Gather\n Workers Planned: 7\n -> Partial MixedAggregate\n Sort Key: l_suppkey\n Group Key: l_suppkey\n Group Key: ()\n Hash Key: l_returnflag, l_linestatus\n -> Parallel Seq Scan on lineitem\n(14 rows)\n\nWe have done some performance tests as well using a groupingsets-enhanced\nsubset of TPCH. TPCH didn't contain grouping sets queries, so we changed all\n\"group by\" clauses to \"group by rollup\" clauses. 
We chose 14 queries for the\ntest.\n\nWe noticed no performance regressions. 3 queries showed performance\nimprovements\ndue to parallelism (tpch scale is 10 and max_parallel_workers_per_gather\nis 8):\n\n1.sql: 16150.780 ms vs 116093.550 ms\n13.sql: 5288.635 ms vs 19541.981 ms\n18.sql: 52985.084 ms vs 67980.856 ms\n\nThanks,\nPengzhou & Melanie & Jesse",
"msg_date": "Sat, 14 Mar 2020 11:01:33 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi,\n\nunfortunately this got a bit broken by the disk-based hash aggregation,\ncommitted today, and so it needs a rebase. I've started looking at the\npatch before that, and I have it rebased on e00912e11a9e (i.e. the\ncommit before the one that breaks it).\n\nAttached is the rebased patch series (now broken), with a couple of\ncommits with some minor cosmetic changes I propose to make (easier than\nexplaining it on a list, it's mostly about whitespace, comments etc).\nFeel free to reject the changes, it's up to you.\n\nI'll continue doing the review, but it'd be good to have a fully rebased\nversion.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 19 Mar 2020 03:09:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Thank you for reviewing this patch.\n\nOn Thu, Mar 19, 2020 at 10:09 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> unfortunately this got a bit broken by the disk-based hash aggregation,\n> committed today, and so it needs a rebase. I've started looking at the\n> patch before that, and I have it rebased on e00912e11a9e (i.e. the\n> commit before the one that breaks it).\n\n\nI spent the day looking into the details of the hash spill patch and\nfinally managed to rebase it successfully. I tested the first 5 patches and\nthey all passed installcheck; the 0006-parallel-xxx patch is not tested yet,\nand I also need to make hash spill work in the final stage of parallel\ngrouping sets, will do that tomorrow.\n\nThe conflicts are mainly located in the handling of hash spill for grouping\nsets; the 0004-reorganise-xxxx patch also makes the hash table refill stage\neasier and can avoid the null check in that stage.\n\n> Attached is the rebased patch series (now broken), with a couple of\n> commits with some minor cosmetic changes I propose to make (easier than\n> explaining it on a list, it's mostly about whitespace, comments etc).\n> Feel free to reject the changes, it's up to you.\n\nThanks, I will enhance the comments and take care of the whitespace.\n\n> I'll continue doing the review, but it'd be good to have a fully rebased\n> version.\n\nVery much appreciated.\n\nThanks,\nPengzhou",
"msg_date": "Fri, 20 Mar 2020 00:38:30 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "Hi Tomas,\n\nI rebased the code and resolved the comments you attached; some unresolved\ncomments are explained in 0002-fixes.patch, please take a look.\n\nI also made hash spill work for parallel grouping sets; the plan\nlooks like:\n\ngpadmin=# explain select g100, g10, sum(g::numeric), count(*), max(g::text)\nfrom gstest_p group by cube (g100,g10);\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Finalize MixedAggregate (cost=1000.00..7639.95 rows=1111 width=80)\n Filtered by: (GROUPINGSETID())\n Group Key: ()\n Hash Key: g100, g10\n Hash Key: g100\n Hash Key: g10\n Planned Partitions: 4\n -> Gather (cost=1000.00..6554.34 rows=7777 width=84)\n Workers Planned: 7\n -> Partial MixedAggregate (cost=0.00..4776.64 rows=1111 width=84)\n Group Key: ()\n Hash Key: g100, g10\n Hash Key: g100\n Hash Key: g10\n Planned Partitions: 4\n -> Parallel Seq Scan on gstest_p (cost=0.00..1367.71\nrows=28571 width=12)\n(16 rows)\n\nThanks,\nPengzhou\n\nOn Thu, Mar 19, 2020 at 10:09 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> Hi,\n>\n> unfortunately this got a bit broken by the disk-based hash aggregation,\n> committed today, and so it needs a rebase. I've started looking at the\n> patch before that, and I have it rebased on e00912e11a9e (i.e. the\n> commit before the one that breaks it).\n>\n> Attached is the rebased patch series (now broken), with a couple of\n> commits with some minor cosmetic changes I propose to make (easier than\n> explaining it on a list, it's mostly about whitespace, comments etc).\n> Feel free to reject the changes, it's up to you.\n>\n> I'll continue doing the review, but it'd be good to have a fully rebased\n> version.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>",
"msg_date": "Fri, 20 Mar 2020 19:57:02 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 07:57:02PM +0800, Pengzhou Tang wrote:\n>Hi Tomas,\n>\n>I rebased the code and resolved the comments you attached, some unresolved\n>comments are explained in 0002-fixes.patch, please take a look.\n>\n>I also make the hash spill working for parallel grouping sets, the plan\n>looks like:\n>\n>gpadmin=# explain select g100, g10, sum(g::numeric), count(*), max(g::text)\n>from gstest_p group by cube (g100,g10);\n> QUERY PLAN\n>-------------------------------------------------------------------------------------------\n> Finalize MixedAggregate (cost=1000.00..7639.95 rows=1111 width=80)\n> Filtered by: (GROUPINGSETID())\n> Group Key: ()\n> Hash Key: g100, g10\n> Hash Key: g100\n> Hash Key: g10\n> Planned Partitions: 4\n> -> Gather (cost=1000.00..6554.34 rows=7777 width=84)\n> Workers Planned: 7\n> -> Partial MixedAggregate (cost=0.00..4776.64 rows=1111 width=84)\n> Group Key: ()\n> Hash Key: g100, g10\n> Hash Key: g100\n> Hash Key: g10\n> Planned Partitions: 4\n> -> Parallel Seq Scan on gstest_p (cost=0.00..1367.71\n>rows=28571 width=12)\n>(16 rows)\n>\n\nHmmm, OK. I think there's some sort of memory leak, though. I've tried\nrunning a simple grouping set query on catalog_sales table from TPC-DS\nscale 100GB test. The query is pretty simple:\n\n select count(*) from catalog_sales\n group by cube (cs_warehouse_sk, cs_ship_mode_sk, cs_call_center_sk);\n\nwith a partial MixedAggregate plan (attached). When executed, it however\nallocates more and more memory, and eventually gets killed by an OOM\nkiller. 
This is on a machine with 8GB of RAM, work_mem=4MB (and 4\nparallel workers).\n\nThe memory context stats from a running process before it gets killed by\nOOM look like this\n\n TopMemoryContext: 101560 total in 6 blocks; 7336 free (6 chunks); 94224 used\n TopTransactionContext: 73816 total in 4 blocks; 11624 free (0 chunks); 62192 used\n ExecutorState: 1375731712 total in 174 blocks; 5391392 free (382 chunks); 1370340320 used\n HashAgg meta context: 315784 total in 10 blocks; 15400 free (2 chunks); 300384 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n ...\n\nThat's 1.3GB allocated in ExecutorState - that doesn't seem right.\n\nFWIW there are only very few groups (each attribute has fewer than 30\ndistinct values), so there are only ~1000 groups. On master it works\njust fine, of course.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 24 Mar 2020 04:13:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": ">\n>\n> The memory context stats from a running process before it gets killed by\n> OOM look like this\n>\n> TopMemoryContext: 101560 total in 6 blocks; 7336 free (6 chunks); 94224\n> used\n> TopTransactionContext: 73816 total in 4 blocks; 11624 free (0\n> chunks); 62192 used\n> ExecutorState: 1375731712 total in 174 blocks; 5391392 free (382\n> chunks); 1370340320 used\n> HashAgg meta context: 315784 total in 10 blocks; 15400 free (2\n> chunks); 300384 used\n> ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264\n> used\n> ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264\n> used\n> ExprContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264\n> used\n> ...\n>\n> That's 1.3GB allocated in ExecutorState - that doesn't seem right.\n>\n> FWIW there are only very few groups (each attribute has fewer than 30\n> distinct values, so there's only about ~1000 groups. On master it works\n> just fine, of course.\n>\n>\nThanks a lot; the patch had a memory leak in lookup_hash_entries: it used\na list_concat there, causing a 64-byte leak for every tuple. I have fixed\nthat.\n\nAlso, I resolved conflicts and rebased the code.\n\nThanks,\nPengzhou",
"msg_date": "Wed, 25 Mar 2020 22:35:32 +0800",
"msg_from": "Pengzhou Tang <ptang@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
},
{
"msg_contents": "> On 25 Mar 2020, at 15:35, Pengzhou Tang <ptang@pivotal.io> wrote:\n\n> Thanks a lot, the patch has a memory leak in the lookup_hash_entries, it uses a list_concat there\n> and causes a 64-byte leak for every tuple, has fixed that.\n> \n> Also, resolved conflicts and rebased the code.\n\nWhile there hasn't been a review of this version, it no longer applies to HEAD.\nThere was also considerable discussion in a (virtual) hallway-track session\nduring PGCon which reviewed the approach (for lack of a better description),\ndeeming that nodeAgg.c needs a refactoring before complicating it further.\nBased on that, and an off-list discussion with Melanie who had picked up the\npatch, I'm marking this Returned with Feedback.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 12 Jul 2020 22:30:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Parallel grouping sets"
}
] |
[
{
"msg_contents": "Hi all,\n\nA long-running vacuum can sometimes be cancelled by the administrator, and\nautovacuums can be cancelled by concurrent processes. Even if it\nretries after cancellation, since it always restarts from the first\nblock of the table, it could vacuum blocks again that were already\nvacuumed last time. We have the visibility map to skip scanning\nall-visible blocks, but in cases where the table is large and often\nmodified, we're more likely to reclaim more garbage from the blocks we\ndid not process last time than by scanning again from the first block.\n\nSo I'd like to propose making vacuum save its progress and resume\nvacuuming based on it. The mechanism I'm thinking of is simple: vacuum\nperiodically reports the current block number to the stats collector.\nIf the table has indexes, it reports it after each heap vacuum pass,\nwhereas it reports it every certain number of blocks (e.g. 1024 blocks =\n8MB) if there are no indexes. We can see that value in a new column\nvacuum_resume_block of\npg_stat_all_tables. I'm going to add one vacuum command option RESUME\nand one new reloption vacuum_resume. If the option is true, vacuum\nfetches the block number from the stats collector before starting and\nstarts vacuuming from that block. I wonder if we could make it true by\ndefault for autovacuums, but it must be false for aggressive vacuum.\n\nIf we start vacuuming from a block other than the first, we can update\nneither relfrozenxid nor relminmxid. And we might not be able to update\neven the relation statistics.\n\nComments and feedback are very welcome.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 12 Jun 2019 13:30:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> Long-running vacuum could be sometimes cancelled by administrator. And\n> autovacuums could be cancelled by concurrent processes. Even if it\n> retries after cancellation, since it always restart from the first\n> block of table it could vacuums blocks again that we vacuumed last\n> time. We have visibility map to skip scanning all-visible blocks but\n> in case where the table is large and often modified, we're more likely\n> to reclaim more garbage from blocks other than we processed last time\n> than scanning from the first block.\n>\n> So I'd like to propose to make vacuums save its progress and resume\n> vacuuming based on it. The mechanism I'm thinking is simple; vacuums\n> periodically report the current block number to the stats collector.\n> If table has indexes, reports it after heap vacuum whereas reports it\n> every certain amount of blocks (e.g. 1024 blocks = 8MB) if no indexes.\n> We can see that value on new column vacuum_resume_block of\n> pg_stat_all_tables. I'm going to add one vacuum command option RESUME\n> and one new reloption vacuum_resume. If the option is true vacuums\n> fetch the block number from stats collector before starting and start\n> vacuuming from that block. I wonder if we could make it true by\n> default for autovacuums but it must be false when aggressive vacuum.\n>\n> If we start to vacuum from not first block, we can update neither\n> relfrozenxid nor relfrozenxmxid. And we might not be able to update\n> even relation statistics.\n>\n\nAttached the first version of patch. And registered this item to the\nnext commit fest.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Tue, 16 Jul 2019 20:56:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Tue, 16 Jul 2019 at 13:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jun 12, 2019 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi all,\n> >\n> > Long-running vacuum could be sometimes cancelled by administrator. And\n> > autovacuums could be cancelled by concurrent processes. Even if it\n> > retries after cancellation, since it always restart from the first\n> > block of table it could vacuums blocks again that we vacuumed last\n> > time. We have visibility map to skip scanning all-visible blocks but\n> > in case where the table is large and often modified, we're more likely\n> > to reclaim more garbage from blocks other than we processed last time\n> > than scanning from the first block.\n> >\n> > So I'd like to propose to make vacuums save its progress and resume\n> > vacuuming based on it. The mechanism I'm thinking is simple; vacuums\n> > periodically report the current block number to the stats collector.\n> > If table has indexes, reports it after heap vacuum whereas reports it\n> > every certain amount of blocks (e.g. 1024 blocks = 8MB) if no indexes.\n> > We can see that value on new column vacuum_resume_block of\n> > pg_stat_all_tables. I'm going to add one vacuum command option RESUME\n> > and one new reloption vacuum_resume. If the option is true vacuums\n> > fetch the block number from stats collector before starting and start\n> > vacuuming from that block. I wonder if we could make it true by\n> > default for autovacuums but it must be false when aggressive vacuum.\n> >\n> > If we start to vacuum from not first block, we can update neither\n> > relfrozenxid nor relfrozenxmxid. And we might not be able to update\n> > even relation statistics.\n> >\n\nSounds like an interesting idea, but does it really help? 
Because if\nvacuum was interrupted previously, wouldn't it already know the dead\ntuples, etc in the next run quite quickly, as the VM, FSM is already\nupdated for the page in the previous run.\n\nA few minor things I noticed in the first look,\n+/*\n+ * When a table has no indexes, save the progress every 8GB so that we can\n+ * resume vacuum from the middle of table. When table has indexes we save it\n+ * after the second heap pass finished.\n+ */\n+#define VACUUM_RESUME_BLK_INTERVAL 1024 /* 8MB */\nDiscrepancy with the memory unit here.\n\n/* No found valid saved block number, resume from the first block */\nCan be better framed.\n\n--\nRegards,\nRafia Sabih\n\n\n",
"msg_date": "Thu, 8 Aug 2019 15:42:33 +0200",
"msg_from": "Rafia Sabih <rafia.pghackers@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 10:42 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n>\n> On Tue, 16 Jul 2019 at 13:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jun 12, 2019 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Hi all,\n> > >\n> > > Long-running vacuum could be sometimes cancelled by administrator. And\n> > > autovacuums could be cancelled by concurrent processes. Even if it\n> > > retries after cancellation, since it always restart from the first\n> > > block of table it could vacuums blocks again that we vacuumed last\n> > > time. We have visibility map to skip scanning all-visible blocks but\n> > > in case where the table is large and often modified, we're more likely\n> > > to reclaim more garbage from blocks other than we processed last time\n> > > than scanning from the first block.\n> > >\n> > > So I'd like to propose to make vacuums save its progress and resume\n> > > vacuuming based on it. The mechanism I'm thinking is simple; vacuums\n> > > periodically report the current block number to the stats collector.\n> > > If table has indexes, reports it after heap vacuum whereas reports it\n> > > every certain amount of blocks (e.g. 1024 blocks = 8MB) if no indexes.\n> > > We can see that value on new column vacuum_resume_block of\n> > > pg_stat_all_tables. I'm going to add one vacuum command option RESUME\n> > > and one new reloption vacuum_resume. If the option is true vacuums\n> > > fetch the block number from stats collector before starting and start\n> > > vacuuming from that block. I wonder if we could make it true by\n> > > default for autovacuums but it must be false when aggressive vacuum.\n> > >\n> > > If we start to vacuum from not first block, we can update neither\n> > > relfrozenxid nor relfrozenxmxid. And we might not be able to update\n> > > even relation statistics.\n> > >\n>\n> Sounds like an interesting idea, but does it really help? 
Because if\n> vacuum was interrupted previously, wouldn't it already know the dead\n> tuples, etc in the next run quite quickly, as the VM, FSM is already\n> updated for the page in the previous run.\n\nSince tables are modified even during vacuum, if vacuum runs again\nafter an interruption it could need to vacuum parts of the table again\nthat have already been cleaned by the last vacuum. But the rest of the\ntable is likely to have more garbage in many cases. Therefore I think\nthis would be helpful, especially in cases where the table is large\nand heavily updated. Even if the table has not been dirtied since the\nlast vacuum, it can skip already-vacuumed pages by looking at the VM or\nthe last vacuumed block. I think it doesn't make things worse than\ntoday's vacuum in many cases.\n\n>\n> A few minor things I noticed in the first look,\n\nThanks for reviewing the patch.\n\n> +/*\n> + * When a table has no indexes, save the progress every 8GB so that we can\n> + * resume vacuum from the middle of table. When table has indexes we save it\n> + * after the second heap pass finished.\n> + */\n> +#define VACUUM_RESUME_BLK_INTERVAL 1024 /* 8MB */\n> Discrepancy with the memory unit here.\n>\n\nFixed.\n\n> /* No found valid saved block number, resume from the first block */\n> Can be better framed.\n\nFixed.\n\nAttached is the updated version of the patch.\n\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Mon, 19 Aug 2019 10:38:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Monday, August 19, 2019 10:39 AM (GMT+9), Masahiko Sawada wrote:\r\n> Fixed.\r\n> \r\n> Attached the updated version patch.\r\n\r\nHi Sawada-san,\r\n\r\nI haven't tested it with heavily updated large tables, but I think the patch\r\nis reasonable as it helps to shorten the execution time of vacuum by removing\r\nthe redundant vacuuming and prioritizing reclaiming the garbage instead.\r\nI'm not sure if it's commonly reported to have problems even when we repeat\r\nvacuuming the already-vacuumed blocks, but I think it's a reasonable improvement.\r\n\r\nI skimmed the patch and have few comments. If they deem fit, feel free to\r\nfollow, but it's also ok if you don't.\r\n1.\r\n>+ <entry>Block number to resume vacuuming from</entry>\r\nPerhaps you could drop \"from\".\r\n\r\n2.\r\n>+ <xref linkend=\"pg-stat-all-tables-view\"/>. This behavior is helpful\r\n>+ when to resume vacuuming from interruption and cancellation.The default\r\nwhen resuming vacuum run from interruption and cancellation.\r\nThere should also be space between period and \"The\".\r\n\r\n3.\r\n>+ set to true. 
This option is ignored if either the <literal>FULL</literal>,\r\n>+ the <literal>FREEZE</literal> or <literal>DISABLE_PAGE_SKIPPING</literal>\r\n>+ option is used.\r\n..if either of the <literal>FULL</literal>, <literal>FREEZE</literal>, or <literal>DISABLE_PAGE_SKIPPING</literal> options is used.\r\n\r\n4.\r\n>+\t\t\t\tnext_fsm_block_to_vacuum,\r\n>+\t\t\t\tnext_block_to_resume;\r\nClearer one would be \"next_block_to_resume_vacuum\"?\r\nYou may disregard if that's too long.\r\n\r\n5.\r\n>+\tAssert(start_blkno <= nblocks);\t/* both are the same iif it's empty */\r\niif -> if there are no blocks / if nblocks is 0\r\n\r\n6.\r\n>+\t * If not found a valid saved block number, resume from the\r\n>+\t * first block.\r\n>+\t */\r\n>+\tif (!found ||\r\n>+\t\ttabentry->vacuum_resume_block >= RelationGetNumberOfBlocks(onerel))\r\nThis describes when vacuum_resume_block > RelationGetNumberOfBlocks.., isn't it?\r\nPerhaps a better framing would be\r\n\"If the saved block number is found invalid,...\",\r\n\r\n7.\r\n>+\tbool\t\tvacuum_resume;\t\t/* enables vacuum to resuming from last\r\n>+\t\t\t\t\t\t\t\t\t * vacuumed block. */\r\nresuming --> resume\r\n\r\n\r\nRegards,\r\nKirk Jamison\r\n",
"msg_date": "Tue, 27 Aug 2019 05:55:18 +0000",
"msg_from": "\"Jamison, Kirk\" <k.jamison@jp.fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Tue, Aug 27, 2019 at 2:55 PM Jamison, Kirk <k.jamison@jp.fujitsu.com> wrote:\n>\n> On Monday, August 19, 2019 10:39 AM (GMT+9), Masahiko Sawada wrote:\n> > Fixed.\n> >\n> > Attached the updated version patch.\n>\n> Hi Sawada-san,\n>\n> I haven't tested it with heavily updated large tables, but I think the patch\n> is reasonable as it helps to shorten the execution time of vacuum by removing\n> the redundant vacuuming and prioritizing reclaiming the garbage instead.\n> I'm not sure if it's commonly reported to have problems even when we repeat\n> vacuuming the already-vacuumed blocks, but I think it's a reasonable improvement.\n>\n> I skimmed the patch and have few comments. If they deem fit, feel free to\n> follow, but it's also ok if you don't.\n\nThank you for reviewing this patch! I've attached the updated patch,\nincorporating all your comments and some improvements.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center",
"msg_date": "Thu, 29 Aug 2019 16:36:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "Apparently this patch now has a duplicate OID. Please do use random\nOIDs >8000 as suggested by the unused_oids script.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 25 Sep 2019 17:53:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Thu, Sep 26, 2019 at 1:53 AM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n> Apparently this patch now has a duplicate OID. Please do use random\n> OIDs >8000 as suggested by the unused_oids script.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n>\nI have updated the patch using OIDs > 8000\n\n\n-- \nIbrar Ahmed",
"msg_date": "Thu, 31 Oct 2019 20:34:01 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Thu, Aug 8, 2019 at 9:42 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> Sounds like an interesting idea, but does it really help? Because if\n> vacuum was interrupted previously, wouldn't it already know the dead\n> tuples, etc in the next run quite quickly, as the VM, FSM is already\n> updated for the page in the previous run.\n\n+1. I don't deny that a patch like this could sometimes save\nsomething, but it doesn't seem like it would save all that much all\nthat often. If your autovacuum runs are being frequently cancelled,\nthat's going to be a big problem, I think. And as Rafia says, even\nthough you might do a little extra work reclaiming garbage from\nsubsequently-modified pages toward the beginning of the table, it\nwould be unusual if they'd *all* been modified. Plus, if they've\nrecently been modified, they're more likely to be in cache.\n\nI think this patch really needs a test scenario or demonstration of\nsome kind to prove that it produces a measurable benefit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 1 Nov 2019 13:10:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Sat, 2 Nov 2019 at 02:10, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Aug 8, 2019 at 9:42 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > Sounds like an interesting idea, but does it really help? Because if\n> > vacuum was interrupted previously, wouldn't it already know the dead\n> > tuples, etc in the next run quite quickly, as the VM, FSM is already\n> > updated for the page in the previous run.\n>\n> +1. I don't deny that a patch like this could sometimes save\n> something, but it doesn't seem like it would save all that much all\n> that often. If your autovacuum runs are being frequently cancelled,\n> that's going to be a big problem, I think.\n\nI've observed the case where user wants to cancel a very long running\nautovacuum (sometimes for anti-wraparound) for doing DDL or something\nmaintenance works. If the table is very large autovacuum could take a\nlong time and they might not reclaim garbage enough.\n\n> And as Rafia says, even\n> though you might do a little extra work reclaiming garbage from\n> subsequently-modified pages toward the beginning of the table, it\n> would be unusual if they'd *all* been modified. Plus, if they've\n> recently been modified, they're more likely to be in cache.\n>\n> I think this patch really needs a test scenario or demonstration of\n> some kind to prove that it produces a measurable benefit.\n\nOkay. A simple test could be that we cancel a long running vacuum on a\nlarge table that is being updated and rerun vacuum. And then we see\nthe garbage on that table. I'll test it.\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 5 Nov 2019 15:57:07 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "\n+\tVACOPT_RESUME = 1 << 8\t\t/* resume from the previous point */\n\nI think this unused ENUM value is not needed.\n\nRegards,\n\nYu Kimura\n\n\n\n",
"msg_date": "Thu, 07 Nov 2019 18:09:00 +0900",
"msg_from": "btkimurayuzk <btkimurayuzk@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On Tue, 5 Nov 2019 at 15:57, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sat, 2 Nov 2019 at 02:10, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Aug 8, 2019 at 9:42 AM Rafia Sabih <rafia.pghackers@gmail.com> wrote:\n> > > Sounds like an interesting idea, but does it really help? Because if\n> > > vacuum was interrupted previously, wouldn't it already know the dead\n> > > tuples, etc in the next run quite quickly, as the VM, FSM is already\n> > > updated for the page in the previous run.\n> >\n> > +1. I don't deny that a patch like this could sometimes save\n> > something, but it doesn't seem like it would save all that much all\n> > that often. If your autovacuum runs are being frequently cancelled,\n> > that's going to be a big problem, I think.\n>\n> I've observed the case where user wants to cancel a very long running\n> autovacuum (sometimes for anti-wraparound) for doing DDL or something\n> maintenance works. If the table is very large autovacuum could take a\n> long time and they might not reclaim garbage enough.\n>\n> > And as Rafia says, even\n> > though you might do a little extra work reclaiming garbage from\n> > subsequently-modified pages toward the beginning of the table, it\n> > would be unusual if they'd *all* been modified. Plus, if they've\n> > recently been modified, they're more likely to be in cache.\n> >\n> > I think this patch really needs a test scenario or demonstration of\n> > some kind to prove that it produces a measurable benefit.\n>\n> Okay. A simple test could be that we cancel a long running vacuum on a\n> large table that is being updated and rerun vacuum. And then we see\n> the garbage on that table. I'll test it.\n>\n\nAttached the updated version patch.\n\nI've measured the effect by this patch. In the test, I simulate the\ncase where autovacuum running on the table that is being updated is\ncanceled in the middle of vacuum, and then rerun (or resume)\nautovacuum on the table. 
Since the vacuum resume block is saved after\nheap vacuum, I set maintenance_work_mem so that vacuum on that table\nneeds heap vacuum twice or more. In other words, maintenance_work_mem\nis used up during autovacuum more than once. The detailed steps\nare:\n\n1. Make the table dirty for 15 min\n2. Run vacuum with vacuum delays\n3. After the first heap vacuum, cancel it\n4. Rerun vacuum (or, with the patch, resume vacuum)\nFrom step #2 through step #4 the table is being updated in the background. I\nused pgbench and the \\random command, so the table is updated uniformly.\n\nI've measured the dead tuple percentage of the table. In these tests,\nhow long step #4 took and how much garbage was collected at step #4 are\nimportant.\n\n1. Canceled vacuum after processing about 20% of the table at step #2.\n1-1. HEAD\nAfter making it dirty (after step #1): 6.96%\nAfter cancellation (after step #3): 6.13%\n\nAt step #4, vacuum reduced it to 4.01% and took 12m 49s. The vacuum\nefficiency is 0.16%/m (2.12% down in 12.8min).\n\n1-2. Patched (resume vacuum)\nAfter making it dirty (after step #1): 6.92%\nAfter cancellation (after step #3): 5.84%\n\nAt step #4, vacuum reduced it to 4.32% and took 10m 26s. The vacuum\nefficiency is 0.14%/m.\n\n------\n2. Canceled vacuum after processing about 40% of the table at step #2.\n2-1. HEAD\nAfter making it dirty (after step #1): 6.97%\nAfter cancellation (after step #3): 4.56%\n\nAt step #4, vacuum reduced it to 1.91% and took 8m 15s. The vacuum\nefficiency is 0.32%/m.\n\n2-2. Patched (resume vacuum)\nAfter making it dirty (after step #1): 6.97%\nAfter cancellation (after step #3): 4.46%\n\nAt step #4, vacuum reduced it to 1.94% and took 6m 30s. The vacuum\nefficiency is 0.38%/m.\n\n-----\n3. Canceled vacuum after processing about 70% of the table at step #2.\n3-1. HEAD\nAfter making it dirty (after step #1): 6.97%\nAfter cancellation (after step #3): 4.73%\n\nAt step #4, vacuum reduced it to 2.32% and took 8m 11s. The vacuum\nefficiency is 0.29%/m.\n\n3-2. 
Patched (resume vacuum)\nAfter making it dirty (after step #1): 6.96%\nAfter cancellation (after step #3): 4.73%\n\nAt step #4, vacuum reduced it to 3.25% and took 4m 12s. The vacuum\nefficiency is 0.35%/m.\n\nAccording to those results, it seems that the closer to the tail of the\ntable we resume vacuum from, the better the efficiency. Since the\ntable is being updated uniformly even during autovacuum, it was more\nefficient to restart autovacuum from the last position rather than from\nthe beginning of the table. I think those results show some of the\nbenefit of this patch, but I'm concerned that it might be difficult for\nusers to know when to use this option. In practice the efficiency completely\ndepends on the dispersion of updated pages, and that test made pages\ndirty uniformly, which is not a common situation. So probably if we\nwant this feature, I think we should automatically enable resuming\nwhen we can basically be sure that resuming is better. For example, we\nremember both the last vacuumed block and how many vacuum-able pages\nseem to exist from there, and we decide to resume vacuum if we can\nexpect to process many more pages.\n\nRegards\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 28 Feb 2020 22:56:40 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: not tested\nDocumentation: not tested\n\nPlease fix the regression test cases.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 05 Mar 2020 16:10:16 +0000",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
},
{
"msg_contents": "On 2/28/20 8:56 AM, Masahiko Sawada wrote:\n> \n> According to those results, it's thought that the more we resume\n> vacuum from the tail of the table, the efficiency is good. Since the\n> table is being updated uniformly even during autovacuum it was more\n> efficient to restart autovacuum from last position rather than from\n> the beginning of the table. I think that results shows somewhat the\n> benefit of this patch but I'm concerned that it might be difficult for\n> users when to use this option. In practice the efficiency completely\n> depends on the dispersion of updated pages, and that test made pages\n> dirty uniformly, which is not a common situation. So probably if we\n> want this feature, I think we should automatically enable resuming\n> when we can basically be sure that resuming is better. For example, we\n> remember both the last vacuumed block and how many vacuum-able pages\n> seems to exist from there, and we decide to resume vacuum if we can\n> expect to process more many pages.\n\nI have to say I'm a bit confused by the point of this patch. I get that \nstarting in progress is faster but that's only true because the entire \ntable is not being vacuumed?\n\nIf as you say:\n\n > If we start to vacuum from not first block, we can update neither\n > relfrozenxid nor relfrozenxmxid. And we might not be able to update\n > even relation statistics.\n\nThen we'll still need to vacuum the entire table before we can be sure \nthe oldest xid has been removed/frozen. If we could do those updates on \na resume then that would change my thoughts on the feature a lot.\n\nWhat am I missing?\n\nI'm marking this Returned with Feedback due to concerns expressed up-thread \n(and mine) and because the patch has been Waiting on Author for nearly \nthe entire CF.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 8 Apr 2020 10:00:22 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Resume vacuum and autovacuum from interruption and cancellation"
}
] |
[
{
"msg_contents": "Hello,\n\nIf tables have a lot of rows with large objects (>1,000,000) that are\nremoved throughout the day, it would be useful to know how many\nLOs are going to be removed.\n\nThe first patch prints the number of large objects going to be removed;\nthe second patch prints how many LOs were removed, in percent.\n\nCan anyone please review?\n\nPlease cc, I am not subscribed to the list.\n\nRegards,\nTimur",
"msg_date": "Wed, 12 Jun 2019 12:20:44 +0600",
"msg_from": "Timur Birsh <taem@linukz.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "12.06.2019, 14:31, \"Timur Birsh\" <taem@linukz.org>:\n> Please cc, I am not subscribed to the list.\n\nI have subscribed to the mailing list, there is no need to cc me.\n\nThank you.\n\n\n",
"msg_date": "Thu, 13 Jun 2019 10:49:46 +0600",
"msg_from": "Timur Birsh <taem@linukz.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jun 13, 2019 at 10:49:46AM +0600, Timur Birsh wrote:\n> 12.06.2019, 14:31, \"Timur Birsh\" <taem@linukz.org>:\n>> Please cc, I am not subscribed to the list.\n> \n> I have subscribed to the mailing list, there is no need to cc me.\n\nWelcome. Nice to see that you have subscribed to the lists.\n\nPlease note that we have some guidelines regarding the way patches are\nsubmitted:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\nBased on what I can see with your patch, things are in good shape on\nthis side.\n\nNow, if you want to get review for your patch, you should register it\nin what we call the commit fest app, which is here: \nhttps://commitfest.postgresql.org/23/\n\nCommit fests happen every two months for a duration of one month, and\nthe next one which will begin the development cycle of v13 begins on\nthe 1st of July. As a basic rule, it is expected that for one patch\nsubmitted, you should review another patch of equal difficulty to keep\nsome balance in the force.\n\nRegarding the patch, there is an argument to be made for reporting a\nrate as well as the actual numbers of deleted and to-delete items.\n\n+ if (param->verbose)\n+ {\n+ snprintf(buf, BUFSIZE, \"SELECT count(*) FROM vacuum_l\");\n+ res = PQexec(conn, buf);\nThat part is costly.\n\nThanks!\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 15:11:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "Hello Michael,\n\n13.06.2019, 12:11, \"Michael Paquier\" <michael@paquier.xyz>:\n> Welcome. Nice to see that you have subscribed to the lists.\n\nThank you for your explanations!\n\n> Now, if you want to get review for your patch, you should register it\n> in what we call the commit fest app, which is here:\n> https://commitfest.postgresql.org/23/\n\nDone. Please see https://commitfest.postgresql.org/23/2148/\n\n> Commit fests happen every two months for a duration of one month, and\n> the next one which will begin the development cycle of v13 begins on\n> the 1st of July. As a basic rule, it is expected that for one patch\n> submitted, you should review another patch of equal difficulty to keep\n> some balance in the force.\n\nOk.\n\n> Regarding the patch, there is an argument to be made for reporting a\n> rate as well as the actual numbers of deleted and to-delete items.\n>\n> + if (param->verbose)\n> + {\n> + snprintf(buf, BUFSIZE, \"SELECT count(*) FROM vacuum_l\");\n> + res = PQexec(conn, buf);\n> That part is costly.\n\nJust to be sure: does a new command line argument need to be added for\nreporting the numbers? Should it imply the --verbose argument?\n\nThanks,\nTimur\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:25:38 +0600",
"msg_from": "Timur Birsh <taem@linukz.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 01:25:38PM +0600, Timur Birsh wrote:\n> Just to be sure, a new command line argument needs to be added for\n> reporting the numbers? Should it implies --verbose argument?\n\nNope. I mean that running a SELECT count(*) can be costly for many\nitems.\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 16:57:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "13.06.2019, 13:57, \"Michael Paquier\" <michael@paquier.xyz>:\n> On Thu, Jun 13, 2019 at 01:25:38PM +0600, Timur Birsh wrote:\n>> Just to be sure, a new command line argument needs to be added for\n>> reporting the numbers? Should it implies --verbose argument?\n>\n> Nope. I mean that running a SELECT count(*) can be costly for many\n> items.\n\nUnderstood, thanks.\n\nI found a way to get the number of LOs that will be removed without\nthe SELECT count(*): PQcmdTuples(). Please find attached patch v2.\nI fixed some indentation in the variable declaration blocks.\n\nThere is a database with tables that have a lot of tuples with large objects:\n\n# select count(*) from pg_largeobject_metadata;\n count\n----------\n 44707424\n(1 row)\n\nAn application that uses this database deletes and adds a lot of rows from\ntime to time; it happens that more than 10,000,000 orphaned LOs remain in the\ndatabase. Removing such a number of items takes a long time.\nI guess it would be helpful to know how many LOs are going to be removed and\nto report the deleted percentage.\n\nThanks,\nTimur",
"msg_date": "Fri, 14 Jun 2019 10:48:41 +0600",
"msg_from": "Timur Birsh <taem@linukz.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "\tTimur Birsh wrote:\n\n> Please find attached patch v2.\n> I fixed some indentation in the variable declaration blocks.\n\nThe tab width should be 4. Please have a look at\nhttps://www.postgresql.org/docs/current/source-format.html\nIt also explains why opportunistic reformatting is futile, anyway:\n\n \"Your code will get run through pgindent before the next release, so\n there's no point in making it look nice under some other set of\n formatting conventions. A good rule of thumb for patches is “make\n the new code look like the existing code around it”.\"\n\n> An application that uses this database from time to time deletes and\n> adds a lot of rows, it happens that more than 10,000,000 orphaned\n> LOs remain in the database. Removing such a number of items takes a\n> long time.\n\nIt might be useful to display the progress report in the loop, but\nit appears that even when there's nothing to remove, vacuumlo is\nlikely to take a long time, because of the method it uses:\n\n#1. it builds a temp table with the OIDs of all large objects.\n\n#2. for each non-system OID column in the db, it deletes from the temp\n table each value existing under that column, assuming that it's a\n reference to a large object (incidentally if you have OID columns\n that don't refer to large objects in your schemas, they get\n dragged into this. Also in case of OID reuse and bad luck they may\n permanently block the removal of some orphaned large objects).\n\n#3. it creates a holdable cursor to iterate on the temp table.\n\n#4. 
finally it calls lo_unlink() on each remaining OID in batched\n transactions.\n\nThe design with #1 and #2 dates back to the very first version,\nin 1999.\nNowadays, maybe we could skip these steps by creating a cursor\ndirectly for a generated query that would look like this:\n\n SELECT oid FROM pg_largeobject_metadata lo WHERE NOT EXISTS (\n SELECT 1 FROM schema1.tablename1 WHERE oid_column1 = lo.oid\n UNION ALL\n SELECT 1 FROM schema2.tablename2 WHERE oid_column2 = lo.oid\n UNION ALL\n ...\n );\n\nThat might be much faster than #1 and #2, especially in the case when\nthere's only one SELECT in that subquery and no UNION ALL is even\nnecessary.\n\nFor #4, a more modern approach could be to move that step into a\nserver-side DO block or a procedure, as transaction control has been\navailable in them since version 11. This would avoid one client-server\nround-trip per LO to delete, plus the round trips for the\ncursor fetches. In the mentioned case of millions of objects to\nunlink, that might be significant. In this case, progress report would\nhave to be done with RAISE NOTICE or some such.\n\nIn fact, this leads to another idea: vacuumlo as a client-side app\ncould be obsoleted and replaced by a paragraph in the doc\nwith a skeleton of an implementation in a DO block,\nin which a user could replace the blind search in all OID\ncolumns by a custom subquery targeting specifically their schema.\nAs a code block, it would be directly embeddable in a psql script or\nin a procedure called by pg_cron or any equivalent tool.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 17 Jul 2019 13:31:05 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "On Wed, Jul 17, 2019 at 01:31:05PM +0200, Daniel Verite wrote:\n> The tab width should be 4. Please have a look at\n> https://www.postgresql.org/docs/current/source-format.html\n> It also explains why opportunistic reformatting is futile, anyway:\n\n- char *schema,\n- *table,\n- *field;\n+ char *schema,\n+ *table,\n+ *field;\nThe patch has some noise. For something of this size, I don't think\nthat it is an issue though ;)\n\n> It might be useful to display the progress report in the loop, but\n> it appears that even when there's nothing to remove, vacuumlo is\n> likely to take a long time, because of the method it uses:\n>\n> [stuff]\n> \n> That might be much faster than #1 and #2, especially in the case when\n> there's only one SELECT in that subquery and no UNION ALL is even\n> necessary.\n\nSure. However, do we need to introduce this much complication for a\npatch whose goal is just to provide hints about\nthe progress of the work done by vacuumlo? I have just looked at the\nlatest patch and the thing is actually much simpler than what I\nrecalled.\n\nOne comment I have is whether we should also report in the progress not\nonly the percentage, but also the raw number of deleted entries along with\nthe total number of entries to delete. Timur, what do you think?\n--\nMichael",
"msg_date": "Fri, 6 Sep 2019 16:29:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
},
{
"msg_contents": "\tMichael Paquier wrote:\n\n> Sure. However do we need to introduce this much complication as a\n> goal for this patch though whose goal is just to provide hints about\n> the progress of the work done by vacuumlo? \n\nYeah, I went off on a tangent when realizing that ~500 lines of C\nclient-side code in vacuumlo could be turned into ~50 lines of\nplpgsql in a block.\nThat was not meant as an objection to the patch\n(besides I followed the plpgsql approach and got disappointed with the\nperformance of lo_unlink() in a loop compared to the client-side\nequivalent, so I won't bother -hackers with this idea anymore, until I\nfigure out why it's not faster and if I can do something about it).\n\nOne comment about the patch:\n\n+\tlong\t\tto_delete = 0;\n...\n+\tto_delete = strtol(PQcmdTuples(res), NULL, 10);\n\nI believe the maximum number of large objects is almost 2^32, and as a\ncount above 2^31 may not fit into a signed long, shouldn't we use\nan unsigned long instead? This would also apply to the preexisting\n\"deleted\" variable.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 06 Sep 2019 17:25:57 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] vacuumlo: print the number of large objects going to be\n removed"
}
] |
[
{
"msg_contents": "I checked the \"alter database\", \"alter role\" and \"set\" commands, but none of\nthem can set the parameters for all the existing sessions. Do we have a\nway to do that? It looks like \"assign_hook\" can be used to customize this; is\nit the right way to do that?",
"msg_date": "Wed, 12 Jun 2019 15:58:06 +0800",
"msg_from": "alex lock <alock303@gmail.com>",
"msg_from_op": true,
"msg_subject": "set parameter for all existing session"
},
{
"msg_contents": "Hi\n\nOn Wed, 12 Jun 2019 at 9:58, alex lock <alock303@gmail.com> wrote:\n\n> I check the “alter database, alter role \" and \"set \" command, but none of\n> them can set the parameters to all the existing sessions. do we have a\n> way to do that? looks the \"assign_hook\" can be used to customize this, is\n> it a right way to do that?\n>\n>\nMaybe you missed calling pg_reload_conf();\n\nexample:\n\nalter system set work_mem to '10MB';\nselect pg_reload_conf();\n\nin another session you can:\n\nshow work_mem;\n\nRegards\n\nPavel",
"msg_date": "Wed, 12 Jun 2019 10:24:46 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: set parameter for all existing session"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 4:25 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> st 12. 6. 2019 v 9:58 odesílatel alex lock <alock303@gmail.com> napsal:\n>\n>> I check the “alter database, alter role \" and \"set \" command, but none of\n>> them can set the parameters to all the existing sessions. do we have a\n>> way to do that? looks the \"assign_hook\" can be used to customize this, is\n>> it a right way to do that?\n>>\n>>\n> Maybe you miss to call pg_reload_conf();\n>\n> example:\n>\n> alter system set work_mem to '10MB';\n> select pg_reload_conf();\n>\n\nThanks, it works!\n\n>\n> in other session you can:\n>\n> show work_mem;\n>\n> Regards\n>\n> Pavel\n>",
"msg_date": "Wed, 12 Jun 2019 16:30:19 +0800",
"msg_from": "alex lock <alock303@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: set parameter for all existing session"
}
] |
[
{
"msg_contents": "The current catalog files all do this:\n\n CATALOG(pg_aggregate,2600,AggregateRelationId)\n {\n ...\n } FormData_pg_aggregate;\n\n typedef FormData_pg_aggregate *Form_pg_aggregate;\n\nThe bottom part of this seems redundant. With the attached patch, we\ncan generate that automatically, so this becomes just\n\n CATALOG(pg_aggregate,2600,AggregateRelationId)\n {\n ...\n };\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Jun 2019 13:52:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "catalog files simplification"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 7:52 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> The current catalog files all do this:\n>\n> CATALOG(pg_aggregate,2600,AggregateRelationId)\n> {\n> ...\n> } FormData_pg_aggregate;\n>\n> typedef FormData_pg_aggregate *Form_pg_aggregate;\n>\n> The bottom part of this seems redundant. With the attached patch, we\n> can generate that automatically, so this becomes just\n>\n> CATALOG(pg_aggregate,2600,AggregateRelationId)\n> {\n> ...\n> };\n\nMaybe the macro definition could be split across several lines instead\nof having one really long line?\n\nAre some compilers going to be sad about typedef struct x x; preceding\nany declaration or definition of struct x?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 12 Jun 2019 09:21:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: catalog files simplification"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 12, 2019 at 7:52 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> The current catalog files all do this:\n>> \n>> CATALOG(pg_aggregate,2600,AggregateRelationId)\n>> {\n>> ...\n>> } FormData_pg_aggregate;\n>> \n>> typedef FormData_pg_aggregate *Form_pg_aggregate;\n>> \n>> The bottom part of this seems redundant. With the attached patch, we\n>> can generate that automatically, so this becomes just\n>> \n>> CATALOG(pg_aggregate,2600,AggregateRelationId)\n>> {\n>> ...\n>> };\n\n> Maybe the macro definition could be split across several lines instead\n> of having one really long line?\n\nI think that would complicate Catalog.pm; not clear if it's worth it.\n\n> Are some compilers going to be sad about typedef struct x x; preceding\n> any declaration or definition of struct x?\n\nNope, we have lots of instances of that already, cf \"opaque struct\"\ndeclarations in various headers.\n\nA bigger objection might be that this would leave us with no obvious-\nto-the-untrained-eye declaration point for either the struct name or\nthe two typedef names. That might make tools like ctags sad. Perhaps\nit's not really any worse than today, but it bears investigation.\n\nWe should also check whether pgindent has any issue with this layout.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 09:34:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: catalog files simplification"
},
{
"msg_contents": "I wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Maybe the macro definition could be split across several lines instead\n>> of having one really long line?\n\n> I think that would complicate Catalog.pm; not clear if it's worth it.\n\nOh, cancel that --- in an uncaffeinated moment, I thought you were asking\nabout splitting the *call* sites of the CATALOG() macro. I agree that\nthe *definition* could be laid out better than it is here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 09:54:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: catalog files simplification"
},
{
"msg_contents": "On 2019-06-12 15:34, Tom Lane wrote:\n> A bigger objection might be that this would leave us with no obvious-\n> to-the-untrained-eye declaration point for either the struct name or\n> the two typedef names. That might make tools like ctags sad. Perhaps\n> it's not really any worse than today, but it bears investigation.\n\nAt least with GNU Global, it finds FormData_pg_foo but not Form_pg_foo.\nBut you can find the latter using grep. This patch would hide both of\nthose even from grep, so maybe it isn't a good idea then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Jun 2019 08:53:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: catalog files simplification"
}
] |
[
{
"msg_contents": "The shared library code has some support for non-ELF BSD systems. I\nsuspect that this is long obsolete. Could we remove it? See attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Jun 2019 15:53:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Are there still non-ELF BSD systems?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> The shared library code has some support for non-ELF BSD systems. I\n> suspect that this is long obsolete. Could we remove it? See attached.\n\nI checked around a bit ... all of the *BSD systems in the buildfarm\nreport ELF_SYS='true', so it's safe to say that the code you want to\nremove is untested.\n\nExcavation in the various BSDens' change logs suggests that the last\none to fully drop a.out was OpenBSD, which didn't do so until 5.5\n(released 1 May 2015). That's more recent than I'd have hoped for,\nthough it looks like the holdout architectures were ones we don't\nsupport anyway (e.g., m68k, vax).\n\nIf we're considering this change for v13, it's hard to believe\nthere'd be any objections in practice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 11:06:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Are there still non-ELF BSD systems?"
},
{
"msg_contents": "On 2019-06-12 16:06, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I checked around a bit ... all of the *BSD systems in the buildfarm\n> report ELF_SYS='true', so it's safe to say that the code you want to\n> remove is untested.\n> \n> Excavation in the various BSDens' change logs suggests that the last\n> one to fully drop a.out was OpenBSD, which didn't do so until 5.5\n> (released 1 May 2015). That's more recent than I'd have hoped for,\n> though it looks like the holdout architectures were ones we don't\n> support anyway (e.g., m68k, vax).\n> \n> If we're considering this change for v13, it's hard to believe\n> there'd be any objections in practice.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 2 Jul 2019 00:03:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Are there still non-ELF BSD systems?"
}
] |
[
{
"msg_contents": "Hello Amit,\n\nCan you also review the following fixes?\n2.1. bt_binsrch_insert -> _bt_binsrch_insert (an internal inconsistency)\n2.2. EWOULDBOCK -> EWOULDBLOCK (a typo)\n2.3. FORGET_RELATION_FSYNC & FORGET_DATABASE_FSYNC ->\nSYNC_FORGET_REQUEST (orphaned after 3eb77eba)\n2.4. GetNewObjectIdWithIndex -> GetNewOidWithIndex (an internal\ninconsistency)\n2.5. get_opclass_family_and_input_type ->\nget_opclass_opfamily_and_input_type (an internal inconsistency)\n2.6. HAVE_BUILTIN_CLZ -> HAVE__BUILTIN_CLZ (missing underscore)\n2.7. HAVE_BUILTIN_CTZ -> HAVE__BUILTIN_CTZ (missing underscore)\n2.8. MultiInsertInfoNextFreeSlot -> CopyMultiInsertInfoNextFreeSlot (an\ninternal inconsistency)\n2.9. targetIsArray -> targetIsSubscripting (an internal inconsistency)\n2.10. tss_htup -> remove (orphaned after 2e3da03e)\n\nI can't see any other inconsistencies for v12 for now, but there are some\nthat appeared before.\nIf this work can be done more effectively or should be\npostponed/canceled, please let me know.\n\nBest regards,\nAlexander",
"msg_date": "Wed, 12 Jun 2019 17:34:06 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix inconsistencies for v12 (pass 2)"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 05:34:06PM +0300, Alexander Lakhin wrote:\n> I can't see another inconsistencies for v12 for now, but there are some\n> that appeared before.\n> If this work can be performed more effectively or should be\n> postponed/canceled, please let me know.\n\nNot sure that it is very productive to have one patch with basically \none-liners in each one... Anyway..\n\nAll your suggestions are right. I do have one doubt for the\nsuggestion in execnodes.h:\n@@ -1571,7 +1571,6 @@ typedef struct TidScanState\n int tss_NumTids;\n int tss_TidPtr;\n ItemPointerData *tss_TidList;\n- HeapTupleData tss_htup;\n} TidScanState;\nThe last trace of tss_htup has been removed as of 2e3da03, and I see\nno mention of it in the related thread. Andres, is that intentional\nfor table AMs to keep a trace of a currently-fetched tuple for a TID\nscan or something that can be removed? The field is still\ndocumented, so the patch is incomplete if we finish by removing the\nfield. And my take is that we should keep it.\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 17:10:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12 (pass 2)"
},
{
"msg_contents": "Hello Michael,\n13.06.2019 11:10, Michael Paquier wrote:\n> On Wed, Jun 12, 2019 at 05:34:06PM +0300, Alexander Lakhin wrote:\n>> I can't see another inconsistencies for v12 for now, but there are some\n>> that appeared before.\n>> If this work can be performed more effectively or should be\n>> postponed/canceled, please let me know.\n> Note sure that it is much productive to have one patch with basically \n> one-liners in each one... Anyway..\nAs the proposed fixes are independent, I decided to separate them. I\nwill make a single patch on next iteration.\n> All your suggestions are right. I do have one doubt for the\n> suggestion in execnodes.h:\n> @@ -1571,7 +1571,6 @@ typedef struct TidScanState\n> int tss_NumTids;\n> int tss_TidPtr;\n> ItemPointerData *tss_TidList;\n> - HeapTupleData tss_htup;\n> } TidScanState;\n> The last trace of tss_htup has been removed as of 2e3da03, and I see\n> no mention of it in the related thread. Andres, is that intentional\n> for table AMs to keep a trace of a currently-fetched tuple for a TID\n> scan or something that can be removed? The field is still\n> documented, so the patch is incomplete if we finish by removing the\n> field. And my take is that we should keep it.\nYes, you're right. I've completed the patch for a possible elimination\nof the field.\n\nBest regards,\nAlexander",
"msg_date": "Thu, 13 Jun 2019 11:28:42 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix inconsistencies for v12 (pass 2)"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 11:28:42AM +0300, Alexander Lakhin wrote:\n> Yes, you're right. I've completed the patch for a possible elimination\n> of the field.\n\nFor now I have discarded this one, and committed the rest as the\ninconsistencies stand out. Good catches by the way.\n\nYour patch was actually incorrect in checkpointer.c. 3eb77eb has\nrefactored the fsync queue and has removed FORGET_DATABASE_FSYNC, but\nit has been replaced by SYNC_FILTER_REQUEST as equivalent in the\nshared queue to forget database-level stuff.\n--\nMichael",
"msg_date": "Fri, 14 Jun 2019 09:42:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12 (pass 2)"
},
{
"msg_contents": "Hello,\n13.06.2019 11:10, Michael Paquier wrote:\n> The last trace of tss_htup has been removed as of 2e3da03, and I see\n> no mention of it in the related thread. Andres, is that intentional\n> for table AMs to keep a trace of a currently-fetched tuple for a TID\n> scan or something that can be removed? The field is still\n> documented, so the patch is incomplete if we finish by removing the\n> field. And my take is that we should keep it.\nAndres, I've found another unused structure field \"was_xmin\" in the\nwas_running structure, having the following comment:\n* Outdated: This struct isn't used for its original purpose anymore, but\n* can't be removed / changed in a minor version, because it's stored\n* on-disk.\nThis comment lives here since 955a684e, May 13 2017. Shouldn't the\noutdated structure be removed in v12?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 14 Jun 2019 07:16:15 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix inconsistencies for v12 (pass 2)"
}
] |
[
{
"msg_contents": "The #ifdef guards in sha2.h are using USE_SSL when they in fact are guarding\nthe inclusion of OpenSSL specific code. This has never caused any issues as\nthere only is a single supported TLS backend in core so far, but since we’ve\nspent a significant amount of energy on making the TLS backend non-hardcoded\nit seems we should fix this too. The Makefile around sha2.c/sha2_openssl.c is\nalready testing for openssl rather than ssl (which given src/Makefile.global\nvariables makes perfect sense of course).\n\ncheers ./daniel",
"msg_date": "Thu, 13 Jun 2019 09:32:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Backend specific ifdefs in sha2.h"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 09:32:28AM +0200, Daniel Gustafsson wrote:\n> The #ifdef guards in sha2.h are using USE_SSL when they in fact are guarding\n> the inclusion of OpenSSL specific code. This has never caused any issues as\n> there only is a single supported TLS backend in core so far, but since we’ve\n> spent a significant amount of energy on making the TLS backend non-hardcoded\n> it seems we should fix this too. The Makefile around sha2.c/sha2_openssl.c is\n> already testing for openssl rather than ssl (which given src/Makefile.global\n> variables makes perfect sense of course).\n\nRight, good catch. I would not back-patch that though as currently\nUSE_SSL <=> USE_OPENSSL. Any suggestions or thoughts from others?\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 17:29:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Backend specific ifdefs in sha2.h"
},
{
"msg_contents": "> On 13 Jun 2019, at 10:29, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I would not back-patch that though as currently\n> USE_SSL <=> USE_OPENSSL.\n\nRight, there is no use in backporting of course.\n\ncheers ./daniel\n\n\n",
"msg_date": "Thu, 13 Jun 2019 10:31:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Backend specific ifdefs in sha2.h"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 10:31:23AM +0200, Daniel Gustafsson wrote:\n> Right, there is no use in backporting of course.\n\nAnd applied now, in time for beta2.\n--\nMichael",
"msg_date": "Fri, 14 Jun 2019 09:10:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Backend specific ifdefs in sha2.h"
}
] |
[
{
"msg_contents": "Currently, calling pg_upgrade with an invalid command-line option aborts\npg_upgrade but leaves a pg_upgrade_internal.log file lying around. This\npatch reorder things a bit so that that file is not created until all\nthe options have been parsed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 13 Jun 2019 10:19:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 5:19 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> Currently, calling pg_upgrade with an invalid command-line option aborts\n> pg_upgrade but leaves a pg_upgrade_internal.log file lying around. This\n> patch reorder things a bit so that that file is not created until all\n> the options have been parsed.\n>\n\n- pg_fatal(\"Try \\\"%s --help\\\" for more\ninformation.\\n\",\n- os_info.progname);\n- break;\n+ fprintf(stderr, _(\"Try \\\"%s --help\\\"\nfor more information.\\n\"),\n+ os_info.progname);\n+ exit(1);\n\nWhy do we need to change pg_fatal() to fprintf() & exit()? It seems to\nme that we can still use pg_fatal() here since we write the message to\nstderr.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 13 Jun 2019 21:30:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "On 2019-06-13 14:30, Masahiko Sawada wrote:\n> Why do we need to change pg_fatal() to fprintf() & exit()? It seems to\n> me that we can still use pg_fatal() here since we write the message to\n> stderr.\n\nIt just makes the output more consistent with other tools, e.g.,\n\nold:\n\npg_upgrade: unrecognized option `--foo'\n\nTry \"pg_upgrade --help\" for more information.\nFailure, exiting\n\nnew:\n\npg_upgrade: unrecognized option `--foo'\nTry \"pg_upgrade --help\" for more information.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:03:24 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "> On 13 Jun 2019, at 10:19, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> Currently, calling pg_upgrade with an invalid command-line option aborts\n> pg_upgrade but leaves a pg_upgrade_internal.log file lying around. This\n> patch reorder things a bit so that that file is not created until all\n> the options have been parsed.\n\n+1 on doing this. \n\n+\tif ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, \"a\")) == NULL)\n+\t\tpg_fatal(\"could not write to log file \\\"%s\\\"\\n\", INTERNAL_LOG_FILE);\n\nWhile we’re at it, should we change this to “could not open log file” to make\nthe messaging more consistent across the utilities (pg_basebackup and psql both\nuse “could not open”)?\n\ncheers ./daniel\n\n",
"msg_date": "Fri, 14 Jun 2019 12:34:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 12:34:36PM +0200, Daniel Gustafsson wrote:\n> +\tif ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, \"a\")) == NULL)\n> +\t\tpg_fatal(\"could not write to log file \\\"%s\\\"\\n\", INTERNAL_LOG_FILE);\n> \n> While we’re at it, should we change this to “could not open log file” to make\n> the messaging more consistent across the utilities (pg_basebackup and psql both\n> use “could not open”)?\n\nI would suggest \"could not open file \\\"%s\\\": %s\" instead with a proper\nstrerror().\n--\nMichael",
"msg_date": "Tue, 18 Jun 2019 17:15:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "> On 18 Jun 2019, at 10:15, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jun 14, 2019 at 12:34:36PM +0200, Daniel Gustafsson wrote:\n>> +\tif ((log_opts.internal = fopen_priv(INTERNAL_LOG_FILE, \"a\")) == NULL)\n>> +\t\tpg_fatal(\"could not write to log file \\\"%s\\\"\\n\", INTERNAL_LOG_FILE);\n>> \n>> While we’re at it, should we change this to “could not open log file” to make\n>> the messaging more consistent across the utilities (pg_basebackup and psql both\n>> use “could not open”)?\n> \n> I would suggest \"could not open file \\\"%s\\\": %s\" instead with a proper\n> strerror().\n\nCorrect, that matches how pg_basebackup and psql does it.\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 18 Jun 2019 10:25:44 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "On Tue, Jun 18, 2019 at 10:25:44AM +0200, Daniel Gustafsson wrote:\n> Correct, that matches how pg_basebackup and psql does it.\n\nPerhaps you have a patch at hand? I can see four strings in\npg_upgrade, two in exec.c and two in option.c, which could be\nimproved.\n--\nMichael",
"msg_date": "Wed, 19 Jun 2019 11:24:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "On 2019-06-19 04:24, Michael Paquier wrote:\n> On Tue, Jun 18, 2019 at 10:25:44AM +0200, Daniel Gustafsson wrote:\n>> Correct, that matches how pg_basebackup and psql does it.\n> \n> Perhaps you have a patch at hand? I can see four strings in\n> pg_upgrade, two in exec.c and two in option.c, which could be\n> improved.\n\nCommitted my patch and the fixes for those error messages.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 19 Jun 2019 21:51:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
},
{
"msg_contents": "> On 19 Jun 2019, at 21:51, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2019-06-19 04:24, Michael Paquier wrote:\n>> On Tue, Jun 18, 2019 at 10:25:44AM +0200, Daniel Gustafsson wrote:\n>>> Correct, that matches how pg_basebackup and psql does it.\n>> \n>> Perhaps you have a patch at hand? I can see four strings in\n>> pg_upgrade, two in exec.c and two in option.c, which could be\n>> improved.\n> \n> Committed my patch and the fixes for those error messages.\n\nLooks good, thanks!\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 19 Jun 2019 22:42:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Improve invalid option handling"
}
] |
[
{
"msg_contents": "Hi Hackers\n\nI would like to embark on a journey to try to implement this issue I\nfound on TODO list –\nhttps://www.postgresql.org/message-id/flat/CAM3SWZSpdPB3uErnXWMt3q74y0r%2B84ZsOt2U3HKKes_V7O%2B0Qg%40mail.gmail.com\nIn short: pgss distinguishes \"SELECT * WHERE id IN (1, 2)\" and \"SELECT\n* WHERE id IN (1, 2, 3)\" as two separate queryId's, resulting in\nseparate entries in pgss. While in practice in most cases it should be\nconsidered as the same thing.\n\nThough it was added in TODO by Bruce Momjian some time ago, I\npersonally have been annoyed by this issue, because we use pgss as a\ndata source in our monitoring system okmeter.io – so we've been using\nsome work arounds for this in our system.\n\nThe way AFAIU it is suggested to be handled in the previous thread is\nto not jumble ArrayExpr recursively and just treat it as \"some list of\nzero or more nodes\".\nI have already lurked around related code, but I have stumbled on some\nproblems with the way I see I can implement this.\n\nSo I want to ask for advice and maybe even guidance because I'm new to\nPG internals and not a regular in C coding.\n\n1. 
ArrayExpr\nArrayExpr is used to represent not only \"IN\" clauses, but also for\nexample \"SELECT ARRAY[1, 2, 3]\" and maybe some other cases I didn't\nthink of.\nThat brings the question whether \"IN (...)\" should be handled\nseparately from actual usage of ARRAY.\nOr is it okay for any ARRAY to be jumbled w/o respect to number of\nentries in it?\nWith that, \"SELECT ARRAY[1, 2]\" becomes indistinguishable from \"SELECT\nARRAY[1, 2, 3]\" etc in pgss.\n\nI'm asking this because I'm not sure if it would be okay to handle\nboth cases in the same way.\nFor example \"SELECT ARRAY[1, 2, a] FROM table\" and \"SELECT ARRAY[b]\nFROM table\" might end up in the same pgss entry.\n\nWhile a separate handling for \"IN (...)\" seems to require lots of\nchanges – starting from parser (new parser node type) and further.\nHow should I proceed?\n\n2 Weird arrays - with Consts and Params or const expressions or\ndifferent types etc\nSELECT * FROM test WHERE a IN (1, $1)\nSELECT * FROM test WHERE a IN (1, 3+1)\nSELECT * FROM test WHERE a IN (1, 2.1)\nSELECT * FROM test WHERE a IN (1.1, 2.1) etc.\nHow should those be handled?\nShould those be indistinguishable from \"IN ($1, $2, $3)\" as well?\nOr are such unrealistic usage examples negligible, so that no one cares\nwhat happens with them?\n\n3 Tests in pgss.sql/out and Vars\nI would like someone to point me in a direction of how could I\nimplement a test that will query\n\"SELECT * FROM test WHERE a IN ($1, $2, $3, ...)\" with params, not\nconsts, because I think this is the most common case actually.\nAnd existing tests only check consts in \"IN\" list.\nI don't see a way to implement such a test with the existing test\ninfrastructure.\nThough if that is considered out of scope for this TODO, it\nwould be okay with me.\n\n\nI would appreciate any feedback.\n---\nPavel Trukhanov\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:14:23 +0300",
"msg_from": "Pavel Trukhanov <pavel.trukhanov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "> On Thu, Jun 13, 2019 at 1:38 PM Pavel Trukhanov <pavel.trukhanov@gmail.com> wrote:\n>\n> Hi Hackers\n>\n> I would like to embark on a journey to try to implement this issue I\n> found on TODO list –\n> https://www.postgresql.org/message-id/flat/CAM3SWZSpdPB3uErnXWMt3q74y0r%2B84ZsOt2U3HKKes_V7O%2B0Qg%40mail.gmail.com\n> In short: pgss distinguishes \"SELECT * WHERE id IN (1, 2)\" and \"SELECT\n> * WHERE id IN (1, 2, 3)\" as two separate queryId's, resulting in\n> separate entries in pgss. While in practice in most cases it should be\n> considered as the same thing.\n>\n> Though it was added in TODO by Bruce Momjian some time ago, I\n> personally have been annoyed by this issue, because we use pgss as a\n> data source in our monitoring system okmeter.io – so we've been using\n> some work arounds for this in our system.\n\nThanks! One more time stumbled upon it just now, so I agree it would be nice.\n\n> 1. ArrayExpr\n> ArrayExpr is used to represent not only \"IN\" clauses, but also for\n> example \"SELECT ARRAY[1, 2, 3]\" and maybe some other cases I didn't\n> think of.\n> That brings the question whether \"IN (...)\" should be handled\n> separately from actual usage of ARRAY.\n\nIf I understand correctly, \"IN (...)\" is reprecented by A_Expr with kind\nAEXPR_IN, and only in transformAExprIn converted to ArrayExpr if possible. So\nprobably it doesn't makes sense to introduce another one.\n\n> For example \"SELECT ARRAY[1, 2, a] FROM table\" and \"SELECT ARRAY[b]\n> FROM table\" might end up in the same pgss entry.\n\nWhat are a, b here, parameters?\n\n> 3 Tests in pgss.sql/out and Vars\n> I would like someone to point me in a direction of how could I\n> implement a test that will query\n> \"SELECT * FROM test WHERE a IN ($1, $2, $3, ...)\" with params, not\n> consts\n\nWouldn't a prepared statement work? It will create an ArrayExpr with Params\ninside.\n\n\n",
"msg_date": "Thu, 13 Jun 2019 15:06:46 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "Thanks for your reply\n\n> If I understand correctly, \"IN (...)\" is reprecented by A_Expr with kind\n> AEXPR_IN, and only in transformAExprIn converted to ArrayExpr if possible.\nThat seems to be correct, yes, thank you.\n\n> So probably it doesn't makes sense to introduce another one.\nThough I might've used wrong words to describe my holdback here, what\nI meant is that I'll need to create new node type (in primnodes.h?)\nfor IN-list, that will allow to differentiate it from direct \"ARRAY\"\nusage.\nThis will require changes to parse_expr.c, execExpr.c, etc, which\nseems to be overkill for such issue IMO, hence the question.\nPlease advise.\n\n> > For example \"SELECT ARRAY[1, 2, a] FROM table\" and \"SELECT ARRAY[b]\n> > FROM table\" might end up in the same pgss entry.\n>\n> What are a, b here, parameters?\n\nHere a and b are table columns.\nI couldn't come up with other examples of ARRAY usage, would\nappreciate any suggestions.\n\n\n> > 3 Tests in pgss.sql/out and Vars\n> > I would like someone to point me in a direction of how could I\n> > implement a test that will query\n> > \"SELECT * FROM test WHERE a IN ($1, $2, $3, ...)\" with params, not\n> > consts\n>\n> Wouldn't a prepared statement work? It will create an ArrayExpr with Params\n> inside.\n\nThanks for the tip. It seems to work, at least it looks like it.\n\n\n",
"msg_date": "Thu, 13 Jun 2019 18:13:54 +0300",
"msg_from": "Pavel Trukhanov <pavel.trukhanov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "Pavel Trukhanov <pavel.trukhanov@gmail.com> writes:\n> Though I might've used wrong words to describe my holdback here, what\n> I meant is that I'll need to create new node type (in primnodes.h?)\n> for IN-list, that will allow to differentiate it from direct \"ARRAY\"\n> usage.\n> This will require changes to parse_expr.c, execExpr.c, etc, which\n> seems to be overkill for such issue IMO, hence the question.\n\nI do not think you need new expression infrastructure. IMO what's going\non here is that we're indulging in premature optimization in the parser.\nIt would be better from a structural standpoint if the output of parse\nanalysis were closer to what the user wrote, and then the business of\nseparating Vars from Consts and reducing the Consts to an array were\nhandled in the planner's expression preprocessing phase.\n\nSo maybe what you should be thinking about is a preliminary patch that's\nmostly in the nature of refactoring, to move that processing to where\nit should be.\n\nOf course, life is never quite that simple; there are at least two\nissues you'd have to think about.\n\n* The parsing phase is responsible for determining the semantics of\nthe query, in particular resolving the data types of the IN-list items\nand choosing the comparison operators that will be used. The planner\nis not allowed to rethink that. What I'm not clear about offhand is\nwhether the existing coding in parse analysis might lead to different\nchoices of data types/operators than a more straightforward approach\ndoes. If so, we'd have to decide whether that's okay.\n\n* Postponing this work might make things slower overall, which wouldn't\nmatter too much for short IN-lists, but you can bet that people who\nthrow ten-thousand-entry IN-lists at us will notice. So you'd need to\nkeep an eye on efficiency and make sure you don't end up repeating\nsimilar processing over and over.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 20:46:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "Thanks for the feedback.\n\nI thought once again about separating IN from ARRAY, with refactoring\netc as Tom suggested, and now I don't think it's worth it to do so.\nI've tried to implement that, and besides that it will require to\nchange things in every part of query processing pipeline, it seems\nthat most of the times I will have to repeat (copy/paste) for IN case\nall the code that now works in for ARRAY. At first I though there will\nbe simplifications, that will justify such refactoring - i.e. I\nthought I could at least drop \"multidims\" bool that tells ARRAY[] from\nARRAY[ARRAY[]]. But it turns out it's not the case – one can write\nsomething like \"x IN (ARRAY[1], ARRAY[1,2])\" that will result in\nmultidim IN-list array.\n\nSo I don't think there's actually enough benefit to split those two apart.\n\nNow I want to do this: just add a meta info (basically a bool field)\nto the ArrayExpr struct, so on later stages we could tell if that's an\nArrayExpr of an ARRAY or of an IN list. Plus to add ignoring updating\nJumble for expression subtree within IN-list array.\n\nIf that approach doesn't seem too bad to anyone, I would like to go\nforward and submit a patch – it seems pretty straightforward to\nimplement that.\n\nThoughts?\n\nThank you.\n ---\nPasha Trukhanov\n\n\n",
"msg_date": "Sat, 15 Jun 2019 16:06:24 +0300",
"msg_from": "Pavel Trukhanov <pavel.trukhanov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "On Sat., Jun. 15, 2019, 12:29 p.m. Pavel Trukhanov, <\npavel.trukhanov@gmail.com> wrote:\n\n>\n> So I don't think there's actually enough benefit to split those two apart.\n>\n> Now I want to do this: just add a meta info (basically a bool field)\n> to the ArrayExpr struct, so on later stages we could tell if that's an\n> ArrayExpr of an ARRAY or of an IN list. Plus to add ignoring updating\n> Jumble for expression subtree within IN-list array.\n>\n> If that approach doesn't seem too bad to anyone, I would like to go\n> forward and submit a patch – it seems pretty straightforward to\n> implement that.\n>\n\nSo what would this do for someone who explicitly writes:\n\nWHERE col = ANY ?\n\nand pass an array?\n\n>\n\nOn Sat., Jun. 15, 2019, 12:29 p.m. Pavel Trukhanov, <pavel.trukhanov@gmail.com> wrote:So I don't think there's actually enough benefit to split those two apart.\n\nNow I want to do this: just add a meta info (basically a bool field)\nto the ArrayExpr struct, so on later stages we could tell if that's an\nArrayExpr of an ARRAY or of an IN list. Plus to add ignoring updating\nJumble for expression subtree within IN-list array.\n\nIf that approach doesn't seem too bad to anyone, I would like to go\nforward and submit a patch – it seems pretty straightforward to\nimplement that.So what would this do for someone who explicitly writes:WHERE col = ANY ?and pass an array?",
"msg_date": "Sat, 15 Jun 2019 20:30:06 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "On Sat., Jun. 15, 2019, 8:30 p.m. Greg Stark, <stark@mit.edu> wrote:\n\n>\n>\n> So what would this do for someone who explicitly writes:\n>\n> WHERE col = ANY ?\n>\n> and pass an array?\n>\n\nActually thinking about this for two more seconds the question is what it\nwould do with a query like\n\nWHERE col = ANY '1,2,3'::integer[]\n\nOr\n\nWHERE col = ANY ARRAY[1,2,3]\n\nWhichever the language binding that is failing to do parameterized queries\nis doing to emulate them.\n\n>\n\nOn Sat., Jun. 15, 2019, 8:30 p.m. Greg Stark, <stark@mit.edu> wrote:So what would this do for someone who explicitly writes:WHERE col = ANY ?and pass an array?Actually thinking about this for two more seconds the question is what it would do with a query likeWHERE col = ANY '1,2,3'::integer[]Or WHERE col = ANY ARRAY[1,2,3]Whichever the language binding that is failing to do parameterized queries is doing to emulate them.",
"msg_date": "Sat, 15 Jun 2019 20:34:00 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Actually thinking about this for two more seconds the question is what it\n> would do with a query like\n> WHERE col = ANY '1,2,3'::integer[]\n> Or\n> WHERE col = ANY ARRAY[1,2,3]\n> Whichever the language binding that is failing to do parameterized queries\n> is doing to emulate them.\n\nYeah, one interesting question is whether this is actually modeling\nwhat happens with real-world applications --- are they sending Consts,\nor Params?\n\nI resolutely dislike the idea of marking arrays that came from IN\ndifferently from other ones; that's just a hack and it's going to give\nrise to unexplainable behavioral differences for logically-equivalent\nqueries.\n\nOne idea that comes to me after looking at the code involved is that\nit might be an improvement across-the-board if transformAExprIn were to\nreduce the generated ArrayExpr to an array Const immediately, in cases\nwhere all the inputs are Consts. That is going to happen anyway come\nplan time, so it'd have zero impact on semantics or query performance.\nDoing it earlier would cost nothing, and could even be a net win, by\nreducing per-parse-node overhead in places like the rewriter.\n\nThe advantage for the problem at hand is that a Const that's an array\nof 2 elements is going to look the same as a Const that's any other\nnumber of elements so far as pg_stat_statements is concerned.\n\nThis doesn't help of course in cases where the values aren't all\nConsts. 
Since we eliminated Vars already, the main practical case\nwould be that they're Params, leading us back to the previous\nquestion of whether apps are binding queries with different numbers\nof parameter markers in an IN, and how hard pg_stat_statements should\ntry to fuzz that if they are.\n\nThen, to Greg's point, there's a question of whether transformArrayExpr\nshould do likewise, ie try to produce an array Const immediately.\nI'm a bit less excited about that, but consistency suggests that\nwe should have it act the same as the IN case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Jun 2019 16:10:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "Thanks for your input.\n\nAs for real-world applications – being a founder of a server monitoring\nsaas (okmeter) I have access to stats on hundreds of postgres installations.\n\nIt shows that IN with a variable number of params is ~7 times more used\nthan ANY(array).\n\n\nOn Wed, Jun 26, 2019 at 11:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Greg Stark <stark@mit.edu> writes:\n> > Actually thinking about this for two more seconds the question is what it\n> > would do with a query like\n> > WHERE col = ANY '1,2,3'::integer[]\n> > Or\n> > WHERE col = ANY ARRAY[1,2,3]\n> > Whichever the language binding that is failing to do parameterized\n> queries\n> > is doing to emulate them.\n>\n> Yeah, one interesting question is whether this is actually modeling\n> what happens with real-world applications --- are they sending Consts,\n> or Params?\n>\n> I resolutely dislike the idea of marking arrays that came from IN\n> differently from other ones; that's just a hack and it's going to give\n> rise to unexplainable behavioral differences for logically-equivalent\n> queries.\n>\n> One idea that comes to me after looking at the code involved is that\n> it might be an improvement across-the-board if transformAExprIn were to\n> reduce the generated ArrayExpr to an array Const immediately, in cases\n> where all the inputs are Consts. That is going to happen anyway come\n> plan time, so it'd have zero impact on semantics or query performance.\n> Doing it earlier would cost nothing, and could even be a net win, by\n> reducing per-parse-node overhead in places like the rewriter.\n>\n> The advantage for the problem at hand is that a Const that's an array\n> of 2 elements is going to look the same as a Const that's any other\n> number of elements so far as pg_stat_statements is concerned.\n>\n> This doesn't help of course in cases where the values aren't all\n> Consts. 
Since we eliminated Vars already, the main practical case\n> would be that they're Params, leading us back to the previous\n> question of whether apps are binding queries with different numbers\n> of parameter markers in an IN, and how hard pg_stat_statements should\n> try to fuzz that if they are.\n>\n> Then, to Greg's point, there's a question of whether transformArrayExpr\n> should do likewise, ie try to produce an array Const immediately.\n> I'm a bit less excited about that, but consistency suggests that\n> we should have it act the same as the IN case.\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 2 Oct 2019 21:33:34 -0400",
"msg_from": "Pavel Trukhanov <pavel.trukhanov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "> On Thu, Oct 3, 2019 at 3:33 AM Pavel Trukhanov <pavel.trukhanov@gmail.com> wrote:\n>\n>> On Wed, Jun 26, 2019 at 11:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Greg Stark <stark@mit.edu> writes:\n>> > Actually thinking about this for two more seconds the question is what it\n>> > would do with a query like\n>> > WHERE col = ANY '1,2,3'::integer[]\n>> > Or\n>> > WHERE col = ANY ARRAY[1,2,3]\n>> > Whichever the language binding that is failing to do parameterized queries\n>> > is doing to emulate them.\n>>\n>> Yeah, one interesting question is whether this is actually modeling\n>> what happens with real-world applications --- are they sending Consts,\n>> or Params?\n>>\n>> I resolutely dislike the idea of marking arrays that came from IN\n>> differently from other ones; that's just a hack and it's going to give\n>> rise to unexplainable behavioral differences for logically-equivalent\n>> queries.\n>\n> Thanks for your input.\n>\n> As for real-world applications – being a founder of a server monitoring saas\n> (okmeter) I have access to stats on hundreds of postgres installations.\n>\n> It shows that IN with a variable number of params is ~7 times more used than\n> ANY(array).\n\nHi,\n\nI would like to do some archaeology and inquire about this thread, since\nunfortunately there was no patch presented as far as I see.\n\nIIUC the ideas suggested in this thread are evolving mostly about modifying\nparser:\n\n> On Fri, Jun 14, 2019 at 2:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I do not think you need new expression infrastructure. IMO what's going on\n> here is that we're indulging in premature optimization in the parser. 
It\n> would be better from a structural standpoint if the output of parse analysis\n> were closer to what the user wrote, and then the business of separating Vars\n> from Consts and reducing the Consts to an array were handled in the planner's\n> expression preprocessing phase.\n>\n> So maybe what you should be thinking about is a preliminary patch that's\n> mostly in the nature of refactoring, to move that processing to where it\n> should be.\n>\n> Of course, life is never quite that simple; there are at least two\n> issues you'd have to think about.\n>\n> * The parsing phase is responsible for determining the semantics of\n> the query, in particular resolving the data types of the IN-list items\n> and choosing the comparison operators that will be used. The planner\n> is not allowed to rethink that. What I'm not clear about offhand is\n> whether the existing coding in parse analysis might lead to different\n> choices of data types/operators than a more straightforward approach\n> does. If so, we'd have to decide whether that's okay.\n>\n> * Postponing this work might make things slower overall, which wouldn't\n> matter too much for short IN-lists, but you can bet that people who\n> throw ten-thousand-entry IN-lists at us will notice. So you'd need to\n> keep an eye on efficiency and make sure you don't end up repeating\n> similar processing over and over.\n\nThis puzzles me, since the original issue sounds like a \"representation\"\nproblem, when we want to calculate jumble hash in a way that obviously\nrepeating parameters or constants are hashed into one value. I see the point in\nideas like this:\n\n>> One idea that comes to me after looking at the code involved is that\n>> it might be an improvement across-the-board if transformAExprIn were to\n>> reduce the generated ArrayExpr to an array Const immediately, in cases\n>> where all the inputs are Consts. 
That is going to happen anyway come\n>> plan time, so it'd have zero impact on semantics or query performance.\n>> Doing it earlier would cost nothing, and could even be a net win, by\n>> reducing per-parse-node overhead in places like the rewriter.\n>>\n>> The advantage for the problem at hand is that a Const that's an array\n>> of 2 elements is going to look the same as a Const that's any other\n>> number of elements so far as pg_stat_statements is concerned.\n>>\n>> This doesn't help of course in cases where the values aren't all\n>> Consts. Since we eliminated Vars already, the main practical case\n>> would be that they're Params, leading us back to the previous\n>> question of whether apps are binding queries with different numbers\n>> of parameter markers in an IN, and how hard pg_stat_statements should\n>> try to fuzz that if they are.\n>>\n>> Then, to Greg's point, there's a question of whether transformArrayExpr\n>> should do likewise, ie try to produce an array Const immediately.\n>> I'm a bit less excited about that, but consistency suggests that\n>> we should have it act the same as the IN case.\n\nInterestingly enough, something similar was already mentioned in [1]. But no\none jumped into this, probably due to its relative complexity, lack of personal\ntime resources or not clear way to handle Params (I'm actually not sure about\nthe statistics for Consts vs Params myself and need to check this, but can\neasily imagine both could be an often problem).\n\nAnother idea also was mentioned in [1]:\n\n> I wonder whether we could improve this by arranging things so that both\n> Consts and Params contribute zero to the jumble hash, and a list of these\n> things also contributes zero, regardless of the length of the list.\n\nTaking everything into account, is there anything particularly wrong about\napproach of squashing down lists of constants/parameters in pg_stat_statements\nitself? 
This sounds simpler, and judging from my experiments even preventing\njumbling of ArrayExpr and rte values constants of the same type with a position\nindex above some threshold will already help a lot in many cases that I\nobserve.\n\n[1]: https://www.postgresql.org/message-id/flat/CAM3SWZSpdPB3uErnXWMt3q74y0r%2B84ZsOt2U3HKKes_V7O%2B0Qg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 21 Jul 2020 18:01:52 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
},
{
"msg_contents": "Hey, let me know if there's any way I can help.\n\nI would argue that making even a small improvement here would be beneficial\nto many.\n\nOn Tue, Jul 21, 2020 at 11:59 AM Dmitry Dolgov <9erthalion6@gmail.com>\nwrote:\n\n> > On Thu, Oct 3, 2019 at 3:33 AM Pavel Trukhanov <\n> pavel.trukhanov@gmail.com> wrote:\n> >\n> >> On Wed, Jun 26, 2019 at 11:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >> Greg Stark <stark@mit.edu> writes:\n> >> > Actually thinking about this for two more seconds the question is\n> what it\n> >> > would do with a query like\n> >> > WHERE col = ANY '1,2,3'::integer[]\n> >> > Or\n> >> > WHERE col = ANY ARRAY[1,2,3]\n> >> > Whichever the language binding that is failing to do parameterized\n> queries\n> >> > is doing to emulate them.\n> >>\n> >> Yeah, one interesting question is whether this is actually modeling\n> >> what happens with real-world applications --- are they sending Consts,\n> >> or Params?\n> >>\n> >> I resolutely dislike the idea of marking arrays that came from IN\n> >> differently from other ones; that's just a hack and it's going to give\n> >> rise to unexplainable behavioral differences for logically-equivalent\n> >> queries.\n> >\n> > Thanks for your input.\n> >\n> > As for real-world applications – being a founder of a server monitoring\n> saas\n> > (okmeter) I have access to stats on hundreds of postgres installations.\n> >\n> > It shows that IN with a variable number of params is ~7 times more used\n> than\n> > ANY(array).\n>\n> Hi,\n>\n> I would like to do some archaeology and inquire about this thread, since\n> unfortunately there was no patch presented as far as I see.\n>\n> IIUC the ideas suggested in this thread are evolving mostly about modifying\n> parser:\n>\n> > On Fri, Jun 14, 2019 at 2:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I do not think you need new expression infrastructure. IMO what's going\n> on\n> > here is that we're indulging in premature optimization in the parser. 
It\n> > would be better from a structural standpoint if the output of parse\n> analysis\n> > were closer to what the user wrote, and then the business of separating\n> Vars\n> > from Consts and reducing the Consts to an array were handled in the\n> planner's\n> > expression preprocessing phase.\n> >\n> > So maybe what you should be thinking about is a preliminary patch that's\n> > mostly in the nature of refactoring, to move that processing to where it\n> > should be.\n> >\n> > Of course, life is never quite that simple; there are at least two\n> > issues you'd have to think about.\n> >\n> > * The parsing phase is responsible for determining the semantics of\n> > the query, in particular resolving the data types of the IN-list items\n> > and choosing the comparison operators that will be used. The planner\n> > is not allowed to rethink that. What I'm not clear about offhand is\n> > whether the existing coding in parse analysis might lead to different\n> > choices of data types/operators than a more straightforward approach\n> > does. If so, we'd have to decide whether that's okay.\n> >\n> > * Postponing this work might make things slower overall, which wouldn't\n> > matter too much for short IN-lists, but you can bet that people who\n> > throw ten-thousand-entry IN-lists at us will notice. So you'd need to\n> > keep an eye on efficiency and make sure you don't end up repeating\n> > similar processing over and over.\n>\n> This puzzles me, since the original issue sounds like a \"representation\"\n> problem, when we want to calculate jumble hash in a way that obviously\n> repeating parameters or constants are hashed into one value. I see the\n> point in\n> ideas like this:\n>\n> >> One idea that comes to me after looking at the code involved is that\n> >> it might be an improvement across-the-board if transformAExprIn were to\n> >> reduce the generated ArrayExpr to an array Const immediately, in cases\n> >> where all the inputs are Consts. 
That is going to happen anyway come\n> >> plan time, so it'd have zero impact on semantics or query performance.\n> >> Doing it earlier would cost nothing, and could even be a net win, by\n> >> reducing per-parse-node overhead in places like the rewriter.\n> >>\n> >> The advantage for the problem at hand is that a Const that's an array\n> >> of 2 elements is going to look the same as a Const that's any other\n> >> number of elements so far as pg_stat_statements is concerned.\n> >>\n> >> This doesn't help of course in cases where the values aren't all\n> >> Consts. Since we eliminated Vars already, the main practical case\n> >> would be that they're Params, leading us back to the previous\n> >> question of whether apps are binding queries with different numbers\n> >> of parameter markers in an IN, and how hard pg_stat_statements should\n> >> try to fuzz that if they are.\n> >>\n> >> Then, to Greg's point, there's a question of whether transformArrayExpr\n> >> should do likewise, ie try to produce an array Const immediately.\n> >> I'm a bit less excited about that, but consistency suggests that\n> >> we should have it act the same as the IN case.\n>\n> Interestingly enough, something similar was already mentioned in [1]. 
But\n> no\n> one jumped into this, probably due to its relative complexity, lack of\n> personal\n> time resources or not clear way to handle Params (I'm actually not sure\n> about\n> the statistics for Consts vs Params myself and need to check this, but can\n> easily imagine both could be an often problem).\n>\n> Another idea also was mentioned in [1]:\n>\n> > I wonder whether we could improve this by arranging things so that both\n> > Consts and Params contribute zero to the jumble hash, and a list of these\n> > things also contributes zero, regardless of the length of the list.\n>\n> Taking everything into account, is there anything particularly wrong about\n> approach of squashing down lists of constants/parameters in\n> pg_stat_statements\n> itself? This sounds simpler, and judging from my experiments even\n> preventing\n> jumbling of ArrayExpr and rte values constants of the same type with a\n> position\n> index above some threshold will already help a lot in many cases that I\n> observe.\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/CAM3SWZSpdPB3uErnXWMt3q74y0r%2B84ZsOt2U3HKKes_V7O%2B0Qg%40mail.gmail.com\n>",
"msg_date": "Fri, 7 Aug 2020 13:42:42 -0400",
"msg_from": "Pavel Trukhanov <pavel.trukhanov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve handling of pg_stat_statements handling of bind \"IN\"\n variables"
}
] |
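The squashing idea discussed in the thread above — making an IN list contribute the same query fingerprint regardless of how many constants or parameters it carries — can be sketched outside PostgreSQL. A rough Python model, not pg_stat_statements' actual C jumbling code; the normalization regexes and the `fingerprint` name are invented for illustration:

```python
import hashlib
import re

def fingerprint(query: str) -> str:
    """Toy model of query jumbling: replace literals with '?', then
    squash an IN (?, ?, ..., ?) list down to IN (?) so that list
    length no longer affects the fingerprint."""
    # Replace quoted-string and numeric literals with a placeholder.
    normalized = re.sub(r"('[^']*'|\b\d+\b)", "?", query)
    # Squash placeholder lists of any length down to a single entry.
    normalized = re.sub(r"IN\s*\(\s*\?(?:\s*,\s*\?)*\s*\)", "IN (?)",
                        normalized, flags=re.IGNORECASE)
    return hashlib.sha256(normalized.encode()).hexdigest()

short_list = fingerprint("SELECT * FROM t WHERE col IN (1, 2)")
long_list = fingerprint("SELECT * FROM t WHERE col IN (1, 2, 3, 4, 5)")
other_query = fingerprint("SELECT * FROM t WHERE col = 1")
```

With this normalization the two IN queries fingerprint identically while the third stays distinct, which is the effect the thread is after; the open questions in the thread (Consts vs. Params, where in the pipeline to do it) are about achieving this inside the server.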
[
{
"msg_contents": "Hi,\n\nI've been reading through the documentation regarding jsonpath and\njsonb_path_query etc., and I have found it lacking explanation for\nsome functionality, and I've also had some confusion when using the\nfeature.\n\n? operator\n==========\nThe first mention of '?' is in section 9.15, where it says:\n\n\"Suppose you would like to retrieve all heart rate values higher than\n130. You can achieve this using the following expression:\n'$.track.segments[*].HR ? (@ > 130)'\"\n\nSo what is the ? operator doing here? Sure, there's the regular ?\noperator, which is given as an example further down the page:\n\n'{\"a\":1, \"b\":2}'::jsonb ? 'b'\n\nBut this doesn't appear to have the same purpose.\n\n\nlike_regex\n==========\nThen there's like_regex, which shows an example that uses the keyword\n\"flag\", but that is the only instance of that keyword being mentioned,\nand the flags available to this expression aren't anywhere to be seen.\n\n\nis unknown\n==========\n\"is unknown\" suggests a boolean output, but the example shows an\noutput of \"infinity\". While I understand what it does, this appears\ninconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\npg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\npg_is_in_backup() etc.).\n\n\n$varname\n==========\nThe jsonpath variable, $varname, has an incomplete description: \"A\nnamed variable. Its value must be set in the PASSING clause of an\nSQL/JSON query function. for details.\"\n\n\nBinary operation error\n==========\nI get an error when I run this query:\n\npostgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\npsql: ERROR: right operand of jsonpath operator + is not a single numeric value\n\nWhile I know it's correct to get an error in this scenario as there is\nno element beyond 0, the message I get is confusing. 
I'd expect this\nif it encountered another array in that position, but not for\nexceeding the upper bound of the array.\n\n\nCryptic error\n==========\npostgres=# SELECT jsonb_path_query('[1, \"2\",\n{},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\npsql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\nLINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n ^\nAgain, I expect an error, but the message produced doesn't help me.\nI'll remove the ANY_P if I can find it.\n\n\nCan't use nested arrays with jsonpath\n==========\n\nI encounter an error in this scenario:\n\npostgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\npsql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\nLINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n\nSo these filter operators only work with scalars?\n\n\nThanks\n\nThom\n\n\n",
"msg_date": "Thu, 13 Jun 2019 14:59:51 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "SQL/JSON path issues/questions"
},
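For readers puzzled by the same `?` question: in a jsonpath expression, `? (...)` introduces a filter that keeps only those items of the current sequence for which the predicate holds, with `@` denoting the item under test. A rough Python analogy of `'$.track.segments[*].HR ? (@ > 130)'` — this models the semantics only, it is not how PostgreSQL evaluates jsonpath, and the sample document is invented here:

```python
# Sample document shaped like the docs' GPS-track example (invented here).
doc = {
    "track": {
        "segments": [
            {"HR": 73, "location": [47.763, 13.4034]},
            {"HR": 135, "location": [47.706, 13.2635]},
        ]
    }
}

# $.track.segments[*].HR  -- the sequence of HR values
hr_values = [segment["HR"] for segment in doc["track"]["segments"]]

# ? (@ > 130)  -- keep each item '@' for which the predicate is true
high_hr = [hr for hr in hr_values if hr > 130]
```

The SQL-level `'{"a":1, "b":2}'::jsonb ? 'b'` operator is unrelated: that one tests key existence on a jsonb value, while the jsonpath `?` filters a sequence.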
{
"msg_contents": "Hi, Thom.\n\nAt Thu, 13 Jun 2019 14:59:51 +0100, Thom Brown <thom@linux.com> wrote\nin <CAA-aLv4VVX=b9RK5hkfPXJczqaiTdqO04teW9i0wiQVhdKcqzw@mail.gmail.com>\n> Hi,\n>\n> I've been reading through the documentation regarding jsonpath and\n> jsonb_path_query etc., and I have found it lacking explanation for\n> some functionality, and I've also had some confusion when using the\n> feature.\n>\n> ? operator\n> ==========\n> The first mention of '?' is in section 9.15, where it says:\n>\n> \"Suppose you would like to retrieve all heart rate values higher than\n> 130. You can achieve this using the following expression:\n> '$.track.segments[*].HR ? (@ > 130)'\"\n>\n> So what is the ? operator doing here? Sure, there's the regular ?\n\nIt is described just above as:\n\n| Each filter expression must be enclosed in parentheses and\n| preceded by a question mark.\n\n> operator, which is given as an example further down the page:\n>\n> '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n>\n> But this doesn't appear to have the same purpose.\n\nThe section is describing path expressions, and that '?' is a jsonb\noperator. It's somewhat confusing, but not so much compared with\nits surroundings.\n\n> like_regex\n> ==========\n> Then there's like_regex, which shows an example that uses the keyword\n> \"flag\", but that is the only instance of that keyword being mentioned,\n> and the flags available to this expression aren't anywhere to be seen.\n\nIt is described as POSIX regular expressions. So '9.7.3 POSIX\nRegular Expressions' covers that. But linking to it would be\nhelpful. (attached 0001)\n\n> is unknown\n> ==========\n> \"is unknown\" suggests a boolean output, but the example shows an\n> output of \"infinity\". While I understand what it does, this appears\n> inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> pg_is_in_backup() etc.).\n\nIt's the right behavior. 
Among them, only \"infinity\" gives\n\"unknown\" for the test (@ > 0). -1 gives false, 2 and 7 true.\n\n> $varname\n> ==========\n> The jsonpath variable, $varname, has an incomplete description: \"A\n> named variable. Its value must be set in the PASSING clause of an\n> SQL/JSON query function. for details.\"\n\nYeah, it is apparently chopped off in the middle. In the sgml source, the\nmissing part is \"<!-- TBD: See <xref\nlinkend=\"sqljson-input-clause\"/> -->\", and the PASSING clause is\nnot implemented yet. On the other hand, similar stuff is\ncurrently implemented as the vars parameter in some jsonb\nfunctions. Linking it to there might be helpful (Attached 0002).\n\n\n> Binary operation error\n> ==========\n> I get an error when I run this query:\n>\n> postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n>\n> While I know it's correct to get an error in this scenario as there is\n> no element beyond 0, the message I get is confusing. I'd expect this\n> if it encountered another array in that position, but not for\n> exceeding the upper bound of the array.\n\nSomething like the attached makes it clearer? (Attached 0003)\n\n| ERROR: right operand of jsonpath operator + is not a single numeric value\n| DETAIL: It was an array with 0 elements.\n\n> Cryptic error\n> ==========\n> postgres=# SELECT jsonb_path_query('[1, \"2\",\n> {},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\n> psql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\n> LINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n> ^\n> Again, I expect an error, but the message produced doesn't help me.\n> I'll remove the ANY_P if I can find it.\n\nYeah, I had a similar error:\n\n=# select jsonb_path_query('[-1,2,7, \"infinity\"]', '$[*] ? 
(($hoge) is\nunknown)', '{\"hoge\": (@ > 0)}');\nERROR: syntax error, unexpected IS_P at or near \" \" of jsonpath input\n\nWhen the errors are issued, the caller side is commented as:\n\njsonpath_scan.l:481\n> jsonpath_yyerror(NULL, \"bogus input\"); /* shouldn't happen */\n\nThe error message would be reasonable if it really shouldn't\nhappen, but it quite easily happens. I don't have an idea of how\nto fix it for the present.\n\n> Can't use nested arrays with jsonpath\n> ==========\n>\n> I encounter an error in this scenario:\n>\n> postgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\n> psql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\n> LINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n>\n> So these filter operators only work with scalars?\n\nPerhaps true. It seems that SQL/JSON says so. An array is not\ncomparable with anything. (See 6.13.5 Comparison predicates in\n[1])\n\n[1] http://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n\nregards.",
"msg_date": "Fri, 14 Jun 2019 16:15:36 +0900",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
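The `is unknown` behavior explained above rests on jsonpath's three-valued predicate logic: in lax mode, comparing a string such as \"infinity\" with a number yields unknown rather than an error, and `is unknown` selects exactly those items. A rough Python model of the `'[-1,2,7, "infinity"]'` example from the reply — modeling the semantics only, not PostgreSQL code:

```python
def gt_zero(item):
    """True/False for numbers; None models SQL/JSON 'unknown' when the
    comparison does not apply (e.g. a string vs. a number in lax mode)."""
    if isinstance(item, (int, float)):
        return item > 0
    return None

data = [-1, 2, 7, "infinity"]

# $[*] ? ((@ > 0) is unknown)  -- items where the predicate is unknown
unknown_items = [x for x in data if gt_zero(x) is None]

# $[*] ? (@ > 0)  -- items where the predicate is plainly true
positive_items = [x for x in data if gt_zero(x) is True]
```

This is why the filter returns "infinity" rather than a boolean: the boolean result of the predicate drives item selection, it is not itself the output.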
{
"msg_contents": "Hi!\n\nOn Fri, Jun 14, 2019 at 10:16 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 13 Jun 2019 14:59:51 +0100, Thom Brown <thom@linux.com> wrote\n> in <CAA-aLv4VVX=b9RK5hkfPXJczqaiTdqO04teW9i0wiQVhdKcqzw@mail.gmail.com>\n> > Hi,\n> >\n> > I've been reading through the documentation regarding jsonpath and\n> > jsonb_path_query etc., and I have found it lacking explanation for\n> > some functionality, and I've also had some confusion when using the\n> > feature.\n> >\n> > ? operator\n> > ==========\n> > The first mention of '?' is in section 9.15, where it says:\n> >\n> > \"Suppose you would like to retrieve all heart rate values higher than\n> > 130. You can achieve this using the following expression:\n> > '$.track.segments[*].HR ? (@ > 130)'\"\n> >\n> > So what is the ? operator doing here? Sure, there's the regular ?\n>\n> It is described just above as:\n>\n> | Each filter expression must be enclosed in parentheses and\n> | preceded by a question mark.\n\n+1\n\n> > operator, which is given as an example further down the page:\n> >\n> > '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n> >\n> > But this doesn't appear to have the same purpose.\n>\n> The section is mentioning path expressions and the '?' is a jsonb\n> operator. It's somewhat confusing but not so much comparing with\n> around..\n\n+1\n\n> > like_regex\n> > ==========\n> > Then there's like_regex, which shows an example that uses the keyword\n> > \"flag\", but that is the only instance of that keyword being mentioned,\n> > and the flags available to this expression aren't anywhere to be seen.\n>\n> It is described as POSIX regular expressions. So '9.7.3 POSIX\n> Regular Expressions' is that. But linking it would\n> helpful. (attached 0001)\n\nActually, standard requires supporting the same regex flags as\nXQuery/XPath does [1]. Perhaps, we found that we miss support for 'q'\nflag, while it's trivial. Attached patch fixes that. Documentation\nshould contain description of flags. 
That will be posted as separate\npatch.\n\n> > is unknown\n> > ==========\n> > \"is unknown\" suggests a boolean output, but the example shows an\n> > output of \"infinity\". While I understand what it does, this appears\n> > inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> > pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> > pg_is_in_backup() etc.).\n>\n> It's the right behavior. Among them, only \"infinity\" gives\n> \"unknown' for the test (@ > 0). -1 gives false, 2 and 3 true.\n\n+1\nWe follow here SQL standard for jsonpath language. There is no direct\nanalogy with our SQL-level functions.\n\n>\n> > $varname\n> > ==========\n> > The jsonpath variable, $varname, has an incomplete description: \"A\n> > named variable. Its value must be set in the PASSING clause of an\n> > SQL/JSON query function. for details.\"\n>\n> Yeah, it is apparently chopped amid. In the sgml source, the\n> missing part is \"<!-- TBD: See <xref\n> linkend=\"sqljson-input-clause\"/> -->\", and the PASSING clause is\n> not implemented yet. On the other hand a similar stuff is\n> currently implemented as vas parameter in some jsonb\n> functions. Linking it to there might be helpful (Attached 0002).\n>\n> > Binary operation error\n> > ==========\n> > I get an error when I run this query:\n> >\n> > postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> > psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n> >\n> > While I know it's correct to get an error in this scenario as there is\n> > no element beyond 0, the message I get is confusing. I'd expect this\n> > if it encountered another array in that position, but not for\n> > exceeding the upper bound of the array.\n>\n> Something like attached makes it clerer? (Attached 0003)\n\nThank you. 
Will review these two and commit.\n\n> | ERROR: right operand of jsonpath operator + is not a single numeric value\n> | DETAIL: It was an array with 0 elements.\n>\n> > Cryptic error\n> > ==========\n> > postgres=# SELECT jsonb_path_query('[1, \"2\",\n> > {},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\n> > psql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\n> > LINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n> > ^\n> > Again, I expect an error, but the message produced doesn't help me.\n> > I'll remove the ANY_P if I can find it.\n>\n> Yeah, I had a similar error:\n>\n> =# select jsonb_path_query('[-1,2,7, \"infinity\"]', '$[*] ? (($hoge) is\n> unknown)', '{\"hoge\": (@ > 0)}');\n> ERROR: syntax error, unexpected IS_P at or near \" \" of jsonpath input\n>\n> When the errors are issued, the caller side is commented as:\n>\n> jsonpath_scan.l:481\n> > jsonpath_yyerror(NULL, \"bogus input\"); /* shouldn't happen */\n>\n> The error message is reasonable if it were really shouldn't\n> happen, but it quite easily happen. I don't have an idea of how\n> to fix it for the present..\n\nI'm also not sure. Need further thinking about it.\n\n> > Can't use nested arrays with jsonpath\n> > ==========\n> >\n> > I encounter an error in this scenario:\n> >\n> > postgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\n> > psql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\n> > LINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n> >\n> > So these filter operators only work with scalars?\n>\n> Perhaps true. It seems that SQL/JSON is saying so. Array is not\n> comparable with anything. (See 6.13.5 Comparison predicates in\n> [1])\n\nThat's true. But we may want an extended version of jsonpath having more\nfeatures than the standard defines. 
We can pick proposal [2] to avoid\npossible incompatibility with future standard updates.\n\nLinks.\n\n1. https://www.w3.org/TR/xpath-functions/#func-matches\n2. https://www.postgresql.org/message-id/5CF28EA0.80902%40anastigmatix.net\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 14 Jun 2019 23:25:22 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
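The singleton-operand rule discussed above can be sketched in a few lines of Python. This is a toy model, not PostgreSQL's implementation — the function name and the exact wording of the error detail are illustrative assumptions — but it shows why `2 + $[1]` over `[2]` fails: the subscript selects an empty sequence, and jsonpath arithmetic requires exactly one numeric item on each side.

```python
def jsonpath_binary_op(op, left_seq, right_seq):
    """Toy model of the SQL/JSON rule that arithmetic operands must be
    singleton numeric sequences (PostgreSQL raises an error with
    ERRCODE_SINGLETON_JSON_ITEM_REQUIRED in this situation)."""
    for side, seq in (("left", left_seq), ("right", right_seq)):
        if len(seq) != 1:
            # Note: the items need not come from one array, so the detail
            # message counts selected items rather than "array elements".
            raise ValueError(
                f"{side} operand of jsonpath operator {op} is not a "
                f"single numeric value (it selected {len(seq)} items)")
        if isinstance(seq[0], bool) or not isinstance(seq[0], (int, float)):
            raise ValueError(
                f"{side} operand of jsonpath operator {op} is not a "
                f"single numeric value (the only item is not a number)")
    a, b = left_seq[0], right_seq[0]
    return {"+": a + b, "-": a - b, "*": a * b}[op]

# '2 + $[1]' over '[2]': the subscript selects nothing, so the right
# operand is an empty sequence and the operation errors out.
```

In this model, `jsonpath_binary_op("+", [2], [])` raises with a detail counting zero selected items, which mirrors the "There are %d values" wording proposed in the thread.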
{
"msg_contents": "On Thu, Jun 13, 2019 at 5:00 PM Thom Brown <thom@linux.com> wrote:\n> I've been reading through the documentation regarding jsonpath and\n> jsonb_path_query etc., and I have found it lacking explanation for\n> some functionality, and I've also had some confusion when using the\n> feature.\n\nBTW, I've some general idea about jsonpath documentation structure.\nRight now definition of jsonpath language is spread between sections\n\"JSON Types\" [1] and \"JSON Functions, Operators, and Expressions\" [2].\nThank might be confusing. I think it would be more readable if whole\njsonpath language definition would be given in a single place. I\npropose to move whole definition of jsonpath to section [1] leaving\nsection [2] just with SQL-level functions. Any thoughts?\n\nLinks.\n\n1. https://www.postgresql.org/docs/devel/datatype-json.html#DATATYPE-JSONPATH\n2. https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 14 Jun 2019 23:44:49 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "I'm going to push attached 3 patches if no objections.\n\nRegarding 0003-Separate-two-distinctive-json-errors.patch, I think it\nrequires more thoughts.\n\n RETURN_ERROR(ereport(ERROR,\n (errcode(ERRCODE_SINGLETON_JSON_ITEM_REQUIRED),\n errmsg(\"left operand of jsonpath\noperator %s is not a single numeric value\",\n- jspOperationName(jsp->type)))));\n+ jspOperationName(jsp->type)),\n+ (llen != 1 ?\n+ errdetail(\"It was an array with %d\nelements.\", llen):\n+ errdetail(\"The only element was not a\nnumeric.\")))));\n\nWhen we have more than 1 value, it's no exactly array. Jsonpath can\nextract values from various parts of json document, which never\nconstitute and array. Should we say something like \"There are %d\nvalues\"? Also, probably we should display the type of single element\nif it's not numeric. jsonb_path_match() also throws\nERRCODE_SINGLETON_JSON_ITEM_REQUIRED, should we add similar\nerrdetail() there?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 17 Jun 2019 11:36:12 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On 6/17/19 11:36 AM, Alexander Korotkov wrote:\n> I'm going to push attached 3 patches if no objections.\n>\n> Regarding 0003-Separate-two-distinctive-json-errors.patch, I think it\n> requires more thoughts.\n>\n> RETURN_ERROR(ereport(ERROR,\n> (errcode(ERRCODE_SINGLETON_JSON_ITEM_REQUIRED),\n> errmsg(\"left operand of jsonpath\n> operator %s is not a single numeric value\",\n> - jspOperationName(jsp->type)))));\n> + jspOperationName(jsp->type)),\n> + (llen != 1 ?\n> + errdetail(\"It was an array with %d\n> elements.\", llen):\n> + errdetail(\"The only element was not a\n> numeric.\")))));\n>\n> When we have more than 1 value, it's no exactly array. Jsonpath can\n> extract values from various parts of json document, which never\n> constitute and array. Should we say something like \"There are %d\n> values\"? Also, probably we should display the type of single element\n> if it's not numeric. jsonb_path_match() also throws\n> ERRCODE_SINGLETON_JSON_ITEM_REQUIRED, should we add similar\n> errdetail() there?\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\nHi Alexander,\n\nWhile I have no objections to the proposed fixes, I think we can further \nimprove patch 0003 and the text it refers to.\nIn attempt to clarify jsonpath docs and address the concern that ? is \nhard to trace in the current text, I'd also like to propose patch 0004.\nPlease see both of them attached.\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 17 Jun 2019 13:07:15 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 08:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> Hi, Thom.\n>\n> At Thu, 13 Jun 2019 14:59:51 +0100, Thom Brown <thom@linux.com> wrote\n> in <CAA-aLv4VVX=b9RK5hkfPXJczqaiTdqO04teW9i0wiQVhdKcqzw@mail.gmail.com>\n> > Hi,\n> >\n> > I've been reading through the documentation regarding jsonpath and\n> > jsonb_path_query etc., and I have found it lacking explanation for\n> > some functionality, and I've also had some confusion when using the\n> > feature.\n> >\n> > ? operator\n> > ==========\n> > The first mention of '?' is in section 9.15, where it says:\n> >\n> > \"Suppose you would like to retrieve all heart rate values higher than\n> > 130. You can achieve this using the following expression:\n> > '$.track.segments[*].HR ? (@ > 130)'\"\n> >\n> > So what is the ? operator doing here? Sure, there's the regular ?\n>\n> It is described just above as:\n>\n> | Each filter expression must be enclosed in parentheses and\n> | preceded by a question mark.\n\nCan I suggest that, rather than using \"question mark\", we use the \"?\"\nsymbol, or provide a syntax structure which shows something like:\n\n<path expression> ? <filter expression>\n\nThis not only makes this key information clearer and more prominent,\nbut it also makes the \"?\" symbol searchable in a browser for anyone\nwanting to find out what that symbol is doing.\n\n> > operator, which is given as an example further down the page:\n> >\n> > '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n> >\n> > But this doesn't appear to have the same purpose.\n>\n> The section is mentioning path expressions and the '?' is a jsonb\n> operator. 
It's somewhat confusing but not so much comparing with\n> around..\n>\n> > like_regex\n> > ==========\n> > Then there's like_regex, which shows an example that uses the keyword\n> > \"flag\", but that is the only instance of that keyword being mentioned,\n> > and the flags available to this expression aren't anywhere to be seen.\n>\n> It is described as POSIX regular expressions. So '9.7.3 POSIX\n> Regular Expressions' is that. But linking it would\n> helpful. (attached 0001)\n>\n> > is unknown\n> > ==========\n> > \"is unknown\" suggests a boolean output, but the example shows an\n> > output of \"infinity\". While I understand what it does, this appears\n> > inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> > pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> > pg_is_in_backup() etc.).\n>\n> It's the right behavior. Among them, only \"infinity\" gives\n> \"unknown' for the test (@ > 0). -1 gives false, 2 and 3 true.\n\nI still find it counter-intuitive.\n>\n> > $varname\n> > ==========\n> > The jsonpath variable, $varname, has an incomplete description: \"A\n> > named variable. Its value must be set in the PASSING clause of an\n> > SQL/JSON query function. for details.\"\n>\n> Yeah, it is apparently chopped amid. In the sgml source, the\n> missing part is \"<!-- TBD: See <xref\n> linkend=\"sqljson-input-clause\"/> -->\", and the PASSING clause is\n> not implemented yet. On the other hand a similar stuff is\n> currently implemented as vas parameter in some jsonb\n> functions. Linking it to there might be helpful (Attached 0002).\n>\n>\n> > Binary operation error\n> > ==========\n> > I get an error when I run this query:\n> >\n> > postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> > psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n> >\n> > While I know it's correct to get an error in this scenario as there is\n> > no element beyond 0, the message I get is confusing. 
I'd expect this\n> > if it encountered another array in that position, but not for\n> > exceeding the upper bound of the array.\n>\n> Something like attached makes it clerer? (Attached 0003)\n>\n> | ERROR: right operand of jsonpath operator + is not a single numeric value\n> | DETAIL: It was an array with 0 elements.\n\nMy first thought upon seeing this error message would be, \"I don't see\nan array with 0 elements.\"\n\n>\n> > Cryptic error\n> > ==========\n> > postgres=# SELECT jsonb_path_query('[1, \"2\",\n> > {},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\n> > psql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\n> > LINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n> > ^\n> > Again, I expect an error, but the message produced doesn't help me.\n> > I'll remove the ANY_P if I can find it.\n>\n> Yeah, I had a similar error:\n>\n> =# select jsonb_path_query('[-1,2,7, \"infinity\"]', '$[*] ? (($hoge) is\n> unknown)', '{\"hoge\": (@ > 0)}');\n> ERROR: syntax error, unexpected IS_P at or near \" \" of jsonpath input\n>\n> When the errors are issued, the caller side is commented as:\n>\n> jsonpath_scan.l:481\n> > jsonpath_yyerror(NULL, \"bogus input\"); /* shouldn't happen */\n>\n> The error message is reasonable if it were really shouldn't\n> happen, but it quite easily happen. I don't have an idea of how\n> to fix it for the present..\n>\n> > Can't use nested arrays with jsonpath\n> > ==========\n> >\n> > I encounter an error in this scenario:\n> >\n> > postgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\n> > psql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\n> > LINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n> >\n> > So these filter operators only work with scalars?\n>\n> Perhaps true. It seems that SQL/JSON is saying so. Array is not\n> comparable with anything. 
(See 6.13.5 Comparison predicates in\n> [1])\n>\n> [1] http://standards.iso.org/ittf/PubliclyAvailableStandards/c067367_ISO_IEC_TR_19075-6_2017.zip\n>\n> regards.\n\n\n",
"msg_date": "Mon, 17 Jun 2019 18:39:54 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 8:40 PM Thom Brown <thom@linux.com> wrote:\n> On Fri, 14 Jun 2019 at 08:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > Hi, Thom.\n> >\n> > At Thu, 13 Jun 2019 14:59:51 +0100, Thom Brown <thom@linux.com> wrote\n> > in <CAA-aLv4VVX=b9RK5hkfPXJczqaiTdqO04teW9i0wiQVhdKcqzw@mail.gmail.com>\n> > > Hi,\n> > >\n> > > I've been reading through the documentation regarding jsonpath and\n> > > jsonb_path_query etc., and I have found it lacking explanation for\n> > > some functionality, and I've also had some confusion when using the\n> > > feature.\n> > >\n> > > ? operator\n> > > ==========\n> > > The first mention of '?' is in section 9.15, where it says:\n> > >\n> > > \"Suppose you would like to retrieve all heart rate values higher than\n> > > 130. You can achieve this using the following expression:\n> > > '$.track.segments[*].HR ? (@ > 130)'\"\n> > >\n> > > So what is the ? operator doing here? Sure, there's the regular ?\n> >\n> > It is described just above as:\n> >\n> > | Each filter expression must be enclosed in parentheses and\n> > | preceded by a question mark.\n>\n> Can I suggest that, rather than using \"question mark\", we use the \"?\"\n> symbol, or provide a syntax structure which shows something like:\n>\n> <path expression> ? <filter expression>\n>\n> This not only makes this key information clearer and more prominent,\n> but it also makes the \"?\" symbol searchable in a browser for anyone\n> wanting to find out what that symbol is doing.\n\nSounds like a good point for me.\n\n> > > operator, which is given as an example further down the page:\n> > >\n> > > '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n> > >\n> > > But this doesn't appear to have the same purpose.\n> >\n> > The section is mentioning path expressions and the '?' is a jsonb\n> > operator. 
It's somewhat confusing but not so much comparing with\n> > around..\n>\n> > > like_regex\n> > > ==========\n> > > Then there's like_regex, which shows an example that uses the keyword\n> > > \"flag\", but that is the only instance of that keyword being mentioned,\n> > > and the flags available to this expression aren't anywhere to be seen.\n> >\n> > It is described as POSIX regular expressions. So '9.7.3 POSIX\n> > Regular Expressions' is that. But linking it would\n> > helpful. (attached 0001)\n> >\n> > > is unknown\n> > > ==========\n> > > \"is unknown\" suggests a boolean output, but the example shows an\n> > > output of \"infinity\". While I understand what it does, this appears\n> > > inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> > > pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> > > pg_is_in_backup() etc.).\n> >\n> > It's the right behavior. Among them, only \"infinity\" gives\n> > \"unknown' for the test (@ > 0). -1 gives false, 2 and 3 true.\n>\n> I still find it counter-intuitive.\n\nIt might be so. But it's defined so in the SQL Standard 2016. Following\nthe SQL standard has always been a project priority. We're unlikely to\nsay: \"We don't want to follow the standard, because it doesn't look\nsimilar to our home-brew functions.\"\n\n> > > $varname\n> > > ==========\n> > > The jsonpath variable, $varname, has an incomplete description: \"A\n> > > named variable. Its value must be set in the PASSING clause of an\n> > > SQL/JSON query function. for details.\"\n> >\n> > Yeah, it is apparently chopped amid. In the sgml source, the\n> > missing part is \"<!-- TBD: See <xref\n> > linkend=\"sqljson-input-clause\"/> -->\", and the PASSING clause is\n> > not implemented yet. On the other hand a similar stuff is\n> > currently implemented as vas parameter in some jsonb\n> > functions. 
Linking it to there might be helpful (Attached 0002).\n> >\n> >\n> > > Binary operation error\n> > > ==========\n> > > I get an error when I run this query:\n> > >\n> > > postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> > > psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n> > >\n> > > While I know it's correct to get an error in this scenario as there is\n> > > no element beyond 0, the message I get is confusing. I'd expect this\n> > > if it encountered another array in that position, but not for\n> > > exceeding the upper bound of the array.\n> >\n> > Something like attached makes it clerer? (Attached 0003)\n> >\n> > | ERROR: right operand of jsonpath operator + is not a single numeric value\n> > | DETAIL: It was an array with 0 elements.\n>\n> My first thought upon seeing this error message would be, \"I don't see\n> an array with 0 elements.\"\n\nYes, it looks counter-intuitive to me too. There is really no array\nwith 0 elements. Actually, the jsonpath subexpression selects no items.\nWe should probably adjust the message accordingly.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Mon, 17 Jun 2019 23:13:04 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On 6/17/19 4:13 PM, Alexander Korotkov wrote:\n> On Mon, Jun 17, 2019 at 8:40 PM Thom Brown <thom@linux.com> wrote:\n>>>> \"is unknown\" suggests a boolean output, but the example shows an\n>>>> output of \"infinity\". While I understand what it does, this appears\n>>>> inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n>>>> pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n>>>> pg_is_in_backup() etc.).\n>>>\n>>> It's the right behavior. Among them, only \"infinity\" gives\n>>> \"unknown' for the test (@ > 0). -1 gives false, 2 and 3 true.\n>>\n>> I still find it counter-intuitive.\n> \n> It might be so. But it's defined do in SQL Standard 2016.\n\nIIUC, this comes about simply because the JSON data model for numeric\nvalues does not have any infinity or NaN.\n\nSo the example given in our doc is sort of a trick example that does\ndouble duty: it demonstrates that (@ > 0) is Unknown when @ is a string,\nbecause numbers and strings are incomparable, and it *also* sort of\nslyly reminds the reader that JSON numbers have no infinity, and\ntherefore \"infinity\" is nothing but a run-of-the-mill string.\n\nBut maybe it is just too brow-furrowingly clever to ask one example\nto make both of those points. Maybe it would be clearer to use some\nstring other than \"infinity\" to make the first point:\n\n[-1, 2, 7, \"some string\"] | $[*] ? ((@ > 0) is unknown) | \"some string\"\n\n... and then if the reminder about infinity is worth making, repeat\nthe example:\n\n[-1, 2, 7, \"infinity\"] | $[*] ? ((@ > 0) is unknown) | \"infinity\"\n\nwith a note that it's a trick example as a reminder that JSON numbers\ndon't have infinity or NaN and so it is no different from any other\nstring.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 17 Jun 2019 16:57:19 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
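Chapman's reading of the trick example can be illustrated with a small sketch of SQL/JSON's three-valued comparison. This is a minimal Python model, not PostgreSQL code (the names `UNKNOWN`, `json_gt`, and `filter_is_unknown` are illustrative): comparing a number with a string yields Unknown, so `$[*] ? ((@ > 0) is unknown)` keeps exactly the string elements — and it makes no difference whether that string is "infinity" or "some string", because JSON numbers have no infinity.

```python
UNKNOWN = object()  # the third truth value of SQL/JSON predicates

def json_gt(item, n):
    """Comparison in the JSON data model: numbers compare with numbers;
    a number and a string are incomparable, so the predicate is Unknown.
    JSON has no numeric infinity -- "infinity" is a run-of-the-mill string."""
    if isinstance(item, (int, float)) and not isinstance(item, bool):
        return item > n
    return UNKNOWN

def filter_is_unknown(array, n=0):
    """Model of the filter: $[*] ? ((@ > n) is unknown)"""
    return [item for item in array if json_gt(item, n) is UNKNOWN]

# Over [-1, 2, 7, "infinity"]: -1 gives false, 2 and 7 give true,
# and only the string survives the "is unknown" filter.
```

Swapping `"infinity"` for `"some string"` produces the analogous result, which is the point of splitting the documentation example in two.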
{
"msg_contents": "On Thu, 13 Jun 2019 at 14:59, Thom Brown <thom@linux.com> wrote:\n>\n> Hi,\n>\n> I've been reading through the documentation regarding jsonpath and\n> jsonb_path_query etc., and I have found it lacking explanation for\n> some functionality, and I've also had some confusion when using the\n> feature.\n>\n> ? operator\n> ==========\n> The first mention of '?' is in section 9.15, where it says:\n>\n> \"Suppose you would like to retrieve all heart rate values higher than\n> 130. You can achieve this using the following expression:\n> '$.track.segments[*].HR ? (@ > 130)'\"\n>\n> So what is the ? operator doing here? Sure, there's the regular ?\n> operator, which is given as an example further down the page:\n>\n> '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n>\n> But this doesn't appear to have the same purpose.\n>\n>\n> like_regex\n> ==========\n> Then there's like_regex, which shows an example that uses the keyword\n> \"flag\", but that is the only instance of that keyword being mentioned,\n> and the flags available to this expression aren't anywhere to be seen.\n>\n>\n> is unknown\n> ==========\n> \"is unknown\" suggests a boolean output, but the example shows an\n> output of \"infinity\". While I understand what it does, this appears\n> inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> pg_is_in_backup() etc.).\n>\n>\n> $varname\n> ==========\n> The jsonpath variable, $varname, has an incomplete description: \"A\n> named variable. Its value must be set in the PASSING clause of an\n> SQL/JSON query function. 
for details.\"\n>\n>\n> Binary operation error\n> ==========\n> I get an error when I run this query:\n>\n> postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n>\n> While I know it's correct to get an error in this scenario as there is\n> no element beyond 0, the message I get is confusing. I'd expect this\n> if it encountered another array in that position, but not for\n> exceeding the upper bound of the array.\n>\n>\n> Cryptic error\n> ==========\n> postgres=# SELECT jsonb_path_query('[1, \"2\",\n> {},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\n> psql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\n> LINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n> ^\n> Again, I expect an error, but the message produced doesn't help me.\n> I'll remove the ANY_P if I can find it.\n>\n>\n> Can't use nested arrays with jsonpath\n> ==========\n>\n> I encounter an error in this scenario:\n>\n> postgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\n> psql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\n> LINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n>\n> So these filter operators only work with scalars?\n>\n>\n\nAnother observation about the documentation is that the examples given\nin 9.15. JSON Functions, Operators, and Expressions aren't all\nfunctional. Some example JSON is provided, followed by example\njsonpath queries which could be used against it. These will produce\nresults for the reader wishing to test them out until this example:\n\n'$.track.segments[*].HR ? (@ > 130)'\n\nThis is because there is no HR value greater than 130. May I propose\nsetting this and all similar examples to (@ > 120) instead?\n\nAlso, this example doesn't work:\n\n'$.track ? (@.segments[*] ? 
(@.HR > 130)).segments.size()'\n\nThis gives me:\n\npsql: ERROR: syntax error, unexpected $end at end of jsonpath input\nLINE 13: }','$.track ? (@.segments[*]');\n ^\n\nThanks\n\nThom\n\n\n",
"msg_date": "Wed, 19 Jun 2019 17:06:51 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
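The heart-rate threshold issue above is easy to model outside the database. The sketch below is a hedged stand-in: the `track` data and its HR values are assumptions chosen to match the discussion (the documentation's sample has no HR above 130), and the helper simply mimics what `'$.track.segments[*].HR ? (@ > threshold)'` selects.

```python
# Hypothetical track data; the HR values are assumptions standing in for
# the documentation's example, whose maximum HR (per the discussion) is 130.
track = {"segments": [{"HR": 73}, {"HR": 110}, {"HR": 130}]}

def hr_above(track, threshold):
    """Model of the path expression: $.track.segments[*].HR ? (@ > threshold)"""
    return [seg["HR"] for seg in track["segments"] if seg["HR"] > threshold]

# With a threshold of 130 the filter is empty (130 > 130 is false),
# which is why the documentation example returns nothing; 120 works.
```

This is exactly why changing the examples to `(@ > 120)` makes them runnable against the sample data as-is.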
{
"msg_contents": "On Mon, Jun 17, 2019 at 8:40 PM Thom Brown <thom@linux.com> wrote:\n> On Fri, 14 Jun 2019 at 08:16, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > Hi, Thom.\n> >\n> > At Thu, 13 Jun 2019 14:59:51 +0100, Thom Brown <thom@linux.com> wrote\n> > in <CAA-aLv4VVX=b9RK5hkfPXJczqaiTdqO04teW9i0wiQVhdKcqzw@mail.gmail.com>\n> > > Hi,\n> > >\n> > > I've been reading through the documentation regarding jsonpath and\n> > > jsonb_path_query etc., and I have found it lacking explanation for\n> > > some functionality, and I've also had some confusion when using the\n> > > feature.\n> > >\n> > > ? operator\n> > > ==========\n> > > The first mention of '?' is in section 9.15, where it says:\n> > >\n> > > \"Suppose you would like to retrieve all heart rate values higher than\n> > > 130. You can achieve this using the following expression:\n> > > '$.track.segments[*].HR ? (@ > 130)'\"\n> > >\n> > > So what is the ? operator doing here? Sure, there's the regular ?\n> >\n> > It is described just above as:\n> >\n> > | Each filter expression must be enclosed in parentheses and\n> > | preceded by a question mark.\n>\n> Can I suggest that, rather than using \"question mark\", we use the \"?\"\n> symbol, or provide a syntax structure which shows something like:\n>\n> <path expression> ? <filter expression>\n>\n> This not only makes this key information clearer and more prominent,\n> but it also makes the \"?\" symbol searchable in a browser for anyone\n> wanting to find out what that symbol is doing.\n\nSounds good for me.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 19 Jun 2019 21:59:03 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Wed, Jun 19, 2019 at 7:07 PM Thom Brown <thom@linux.com> wrote:\n> On Thu, 13 Jun 2019 at 14:59, Thom Brown <thom@linux.com> wrote:\n> >\n> > Hi,\n> >\n> > I've been reading through the documentation regarding jsonpath and\n> > jsonb_path_query etc., and I have found it lacking explanation for\n> > some functionality, and I've also had some confusion when using the\n> > feature.\n> >\n> > ? operator\n> > ==========\n> > The first mention of '?' is in section 9.15, where it says:\n> >\n> > \"Suppose you would like to retrieve all heart rate values higher than\n> > 130. You can achieve this using the following expression:\n> > '$.track.segments[*].HR ? (@ > 130)'\"\n> >\n> > So what is the ? operator doing here? Sure, there's the regular ?\n> > operator, which is given as an example further down the page:\n> >\n> > '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n> >\n> > But this doesn't appear to have the same purpose.\n> >\n> >\n> > like_regex\n> > ==========\n> > Then there's like_regex, which shows an example that uses the keyword\n> > \"flag\", but that is the only instance of that keyword being mentioned,\n> > and the flags available to this expression aren't anywhere to be seen.\n> >\n> >\n> > is unknown\n> > ==========\n> > \"is unknown\" suggests a boolean output, but the example shows an\n> > output of \"infinity\". While I understand what it does, this appears\n> > inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> > pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> > pg_is_in_backup() etc.).\n> >\n> >\n> > $varname\n> > ==========\n> > The jsonpath variable, $varname, has an incomplete description: \"A\n> > named variable. Its value must be set in the PASSING clause of an\n> > SQL/JSON query function. 
for details.\"\n> >\n> >\n> > Binary operation error\n> > ==========\n> > I get an error when I run this query:\n> >\n> > postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> > psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n> >\n> > While I know it's correct to get an error in this scenario as there is\n> > no element beyond 0, the message I get is confusing. I'd expect this\n> > if it encountered another array in that position, but not for\n> > exceeding the upper bound of the array.\n> >\n> >\n> > Cryptic error\n> > ==========\n> > postgres=# SELECT jsonb_path_query('[1, \"2\",\n> > {},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\n> > psql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\n> > LINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n> > ^\n> > Again, I expect an error, but the message produced doesn't help me.\n> > I'll remove the ANY_P if I can find it.\n> >\n> >\n> > Can't use nested arrays with jsonpath\n> > ==========\n> >\n> > I encounter an error in this scenario:\n> >\n> > postgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\n> > psql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\n> > LINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n> >\n> > So these filter operators only work with scalars?\n> >\n> >\n>\n> Another observation about the documentation is that the examples given\n> in 9.15. JSON Functions, Operators, and Expressions aren't all\n> functional. Some example JSON is provided, followed by example\n> jsonpath queries which could be used against it. These will produce\n> results for the reader wishing to test them out until this example:\n>\n> '$.track.segments[*].HR ? (@ > 130)'\n>\n> This is because there is no HR value greater than 130. 
May I propose\n> setting this and all similar examples to (@ > 120) instead?\n\nMakes sense to me.\n\n> Also, this example doesn't work:\n>\n> '$.track ? (@.segments[*] ? (@.HR > 130)).segments.size()'\n>\n> This gives me:\n>\n> psql: ERROR: syntax error, unexpected $end at end of jsonpath input\n> LINE 13: }','$.track ? (@.segments[*]');\n> ^\n\nPerhaps it should be the following:\n\n'$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()'\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 19 Jun 2019 22:04:43 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "Hi, Liudmila!\n\n> While I have no objections to the proposed fixes, I think we can further\n> improve patch 0003 and the text it refers to.\n> In attempt to clarify jsonpath docs and address the concern that ? is\n> hard to trace in the current text, I'd also like to propose patch 0004.\n> Please see both of them attached.\n\nThank you for your editing. I'm going to commit them as well.\n\nBut I'm going to commit your changes separately from 0003 I've posted\nbefore. Because 0003 fixes factual error, while you're proposing set\nof grammar/style fixes.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 19 Jun 2019 22:14:50 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Wed, Jun 19, 2019 at 10:14 PM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> > While I have no objections to the proposed fixes, I think we can further\n> > improve patch 0003 and the text it refers to.\n> > In attempt to clarify jsonpath docs and address the concern that ? is\n> > hard to trace in the current text, I'd also like to propose patch 0004.\n> > Please see both of them attached.\n>\n> Thank you for your editing. I'm going to commit them as well.\n>\n> But I'm going to commit your changes separately from 0003 I've posted\n> before. Because 0003 fixes factual error, while you're proposing set\n> of grammar/style fixes.\n\nI made some review of these patches. My notes are following:\n\n <para>\n- See also <xref linkend=\"functions-aggregate\"/> for the aggregate\n- function <function>json_agg</function> which aggregates record\n- values as JSON, and the aggregate function\n- <function>json_object_agg</function> which aggregates pairs of values\n- into a JSON object, and their <type>jsonb</type> equivalents,\n+ See also <xref linkend=\"functions-aggregate\"/> for details on\n+ <function>json_agg</function> function that aggregates record\n+ values as JSON, <function>json_object_agg</function> function\n+ that aggregates pairs of values into a JSON object, and their\n<type>jsonb</type> equivalents,\n <function>jsonb_agg</function> and <function>jsonb_object_agg</function>.\n </para>\n\nThis part is not directly related to jsonpath, and it has been there\nfor a long time. I'd like some native english speaker to review this\nchange before committing this.\n\n <para>\n- Expression inside subscript may consititue an integer,\n- numeric expression or any other <literal>jsonpath</literal> expression\n- returning single numeric value. 
The <literal>last</literal> keyword\n- can be used in the expression denoting the last subscript in an array.\n- That's helpful for handling arrays of unknown length.\n+ The specified <replaceable>index</replaceable> can be an integer,\n+ as well as a numeric or <literal>jsonpath</literal> expression that\n+ returns a single integer value. Zero index corresponds to the first\n+ array element. To access the last element in an array, you can use\n+ the <literal>last</literal> keyword, which is useful for handling\n+ arrays of unknown length.\n </para>\n\nI think this part requires more work. Let's see what cases we have,\nwith examples:\n\n1) Integer: '$.ar[1]'\n2) Numeric: '$.ar[1.5]' (converted to integer)\n3) Some numeric expression: '$.ar[last - 1]'\n4) Arbitrary jsonpath expression: '$.ar[$.ar2.size() + $.num - 2]'\n\nIn principle, it is not necessary to divide 3 and 4, or to divide 1 and 2.\nOr we may not describe the cases at all, but just say it's a jsonpath\nexpression returning a numeric value, which is converted to an integer.\n\nAlso, note that we do not necessarily *access* the last array element with\nthe \"last\" keyword. The \"last\" keyword denotes the index of the last element\nin the expression, but a completely different element might actually be\naccessed.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 21 Jun 2019 20:04:31 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
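[Editorial note: for concreteness, the subscript cases listed in the message above can be tried directly. This is a sketch against PostgreSQL 12 or later; the documents and values are invented for illustration, and case 2 (a fractional subscript) is omitted.]

```sql
-- 1) Integer subscript (subscripts are zero-based)
SELECT jsonb_path_query('{"ar": [10, 20, 30]}', '$.ar[1]');         -- 20
-- 3) Numeric expression using the "last" keyword
SELECT jsonb_path_query('{"ar": [10, 20, 30]}', '$.ar[last - 1]');  -- 20
-- 4) Arbitrary jsonpath expression returning a number
SELECT jsonb_path_query('{"ar": [10, 20, 30], "num": 2}', '$.ar[$.num]');  -- 30
```

As the message notes, "last" is merely the index of the final element (here 2), so "last - 1" addresses the middle element rather than the last one.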
{
"msg_contents": "On 6/21/19 8:04 PM, Alexander Korotkov wrote:\n> On Wed, Jun 19, 2019 at 10:14 PM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n>>> While I have no objections to the proposed fixes, I think we can further\n>>> improve patch 0003 and the text it refers to.\n>>> In attempt to clarify jsonpath docs and address the concern that ? is\n>>> hard to trace in the current text, I'd also like to propose patch 0004.\n>>> Please see both of them attached.\n>> Thank you for your editing. I'm going to commit them as well.\n>>\n>> But I'm going to commit your changes separately from 0003 I've posted\n>> before. Because 0003 fixes factual error, while you're proposing set\n>> of grammar/style fixes.\n> I made some review of these patches. My notes are following:\n>\n> <para>\n> - See also <xref linkend=\"functions-aggregate\"/> for the aggregate\n> - function <function>json_agg</function> which aggregates record\n> - values as JSON, and the aggregate function\n> - <function>json_object_agg</function> which aggregates pairs of values\n> - into a JSON object, and their <type>jsonb</type> equivalents,\n> + See also <xref linkend=\"functions-aggregate\"/> for details on\n> + <function>json_agg</function> function that aggregates record\n> + values as JSON, <function>json_object_agg</function> function\n> + that aggregates pairs of values into a JSON object, and their\n> <type>jsonb</type> equivalents,\n> <function>jsonb_agg</function> and <function>jsonb_object_agg</function>.\n> </para>\n>\n> This part is not directly related to jsonpath, and it has been there\n> for a long time. I'd like some native english speaker to review this\n> change before committing this.\n>\n> <para>\n> - Expression inside subscript may consititue an integer,\n> - numeric expression or any other <literal>jsonpath</literal> expression\n> - returning single numeric value. 
The <literal>last</literal> keyword\n> - can be used in the expression denoting the last subscript in an array.\n> - That's helpful for handling arrays of unknown length.\n> + The specified <replaceable>index</replaceable> can be an integer,\n> + as well as a numeric or <literal>jsonpath</literal> expression that\n> + returns a single integer value. Zero index corresponds to the first\n> + array element. To access the last element in an array, you can use\n> + the <literal>last</literal> keyword, which is useful for handling\n> + arrays of unknown length.\n> </para>\n>\n> I think this part requires more work. Let's see what cases do we have\n> with examples:\n>\n> 1) Integer: '$.ar[1]'\n> 2) Numeric: '$.ar[1.5]' (converted to integer)\n> 3) Some numeric expression: '$.ar[last - 1]'\n> 4) Arbitrary jsonpath expression: '$.ar[$.ar2.size() + $.num - 2]'\n>\n> In principle, it not necessary to divide 3 and 4, or divide 1 and 2.\n> Or we may don't describe cases at all, but just say it's a jsonpath\n> expression returning numeric, which is converted to integer.\n>\n> Also, note that we do not necessary *access* last array element with\n> \"last\" keyword. \"last\" keyword denotes index of last element in\n> expression. But completely different element might be actually\n> accessed.\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\nHi Alexander,\n\nThank you for the catch! Please see the modified version of patch 0004 \nattached.\n\nAs for your comment on patch 0003, since I'm not a native speaker, I can \nonly refer to a recent discussion in pgsql-docs mailing list that seems \nto support my view on a similar issue:\n\nhttps://www.postgresql.org/message-id/9484.1558050957%40sss.pgh.pa.us\n\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 25 Jun 2019 18:38:27 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Wed, 19 Jun 2019 at 20:04, Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Wed, Jun 19, 2019 at 7:07 PM Thom Brown <thom@linux.com> wrote:\n> > On Thu, 13 Jun 2019 at 14:59, Thom Brown <thom@linux.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I've been reading through the documentation regarding jsonpath and\n> > > jsonb_path_query etc., and I have found it lacking explanation for\n> > > some functionality, and I've also had some confusion when using the\n> > > feature.\n> > >\n> > > ? operator\n> > > ==========\n> > > The first mention of '?' is in section 9.15, where it says:\n> > >\n> > > \"Suppose you would like to retrieve all heart rate values higher than\n> > > 130. You can achieve this using the following expression:\n> > > '$.track.segments[*].HR ? (@ > 130)'\"\n> > >\n> > > So what is the ? operator doing here? Sure, there's the regular ?\n> > > operator, which is given as an example further down the page:\n> > >\n> > > '{\"a\":1, \"b\":2}'::jsonb ? 'b'\n> > >\n> > > But this doesn't appear to have the same purpose.\n> > >\n> > >\n> > > like_regex\n> > > ==========\n> > > Then there's like_regex, which shows an example that uses the keyword\n> > > \"flag\", but that is the only instance of that keyword being mentioned,\n> > > and the flags available to this expression aren't anywhere to be seen.\n> > >\n> > >\n> > > is unknown\n> > > ==========\n> > > \"is unknown\" suggests a boolean output, but the example shows an\n> > > output of \"infinity\". While I understand what it does, this appears\n> > > inconsistent with all other \"is...\" functions (e.g. is_valid(lsn),\n> > > pg_is_other_temp_schema(oid), pg_opclass_is_visible(opclass_oid),\n> > > pg_is_in_backup() etc.).\n> > >\n> > >\n> > > $varname\n> > > ==========\n> > > The jsonpath variable, $varname, has an incomplete description: \"A\n> > > named variable. Its value must be set in the PASSING clause of an\n> > > SQL/JSON query function. 
for details.\"\n> > >\n> > >\n> > > Binary operation error\n> > > ==========\n> > > I get an error when I run this query:\n> > >\n> > > postgres=# SELECT jsonb_path_query('[2]', '2 + $[1]');\n> > > psql: ERROR: right operand of jsonpath operator + is not a single numeric value\n> > >\n> > > While I know it's correct to get an error in this scenario as there is\n> > > no element beyond 0, the message I get is confusing. I'd expect this\n> > > if it encountered another array in that position, but not for\n> > > exceeding the upper bound of the array.\n> > >\n> > >\n> > > Cryptic error\n> > > ==========\n> > > postgres=# SELECT jsonb_path_query('[1, \"2\",\n> > > {},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].type()');\n> > > psql: ERROR: syntax error, unexpected ANY_P at or near \"**\" of jsonpath input\n> > > LINE 1: ...},[{\"a\":2}],2.3,null,\"2019-06-05T13:25:43.511Z\"]','$[**].typ...\n> > > ^\n> > > Again, I expect an error, but the message produced doesn't help me.\n> > > I'll remove the ANY_P if I can find it.\n> > >\n> > >\n> > > Can't use nested arrays with jsonpath\n> > > ==========\n> > >\n> > > I encounter an error in this scenario:\n> > >\n> > > postgres=# select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == [1,2])');\n> > > psql: ERROR: syntax error, unexpected '[' at or near \"[\" of jsonpath input\n> > > LINE 1: select jsonb_path_query('[1, 2, 1, [1,2], 3]','$[*] ? (@ == ...\n> > >\n> > > So these filter operators only work with scalars?\n> > >\n> > >\n> >\n> > Another observation about the documentation is that the examples given\n> > in 9.15. JSON Functions, Operators, and Expressions aren't all\n> > functional. Some example JSON is provided, followed by example\n> > jsonpath queries which could be used against it. These will produce\n> > results for the reader wishing to test them out until this example:\n> >\n> > '$.track.segments[*].HR ? (@ > 130)'\n> >\n> > This is because there is no HR value greater than 130. 
May I propose\n> > setting this and all similar examples to (@ > 120) instead?\n>\n> Makes sense to me.\n>\n> > Also, this example doesn't work:\n> >\n> > '$.track ? (@.segments[*] ? (@.HR > 130)).segments.size()'\n> >\n> > This gives me:\n> >\n> > psql: ERROR: syntax error, unexpected $end at end of jsonpath input\n> > LINE 13: }','$.track ? (@.segments[*]');\n> > ^\n>\n> Perhaps it should be following:\n>\n> '$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()'\n\nI'm not clear on why the original example doesn't work here.\n\nThom\n\n\n",
"msg_date": "Thu, 27 Jun 2019 14:56:45 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Thu, Jun 27, 2019 at 4:57 PM Thom Brown <thom@linux.com> wrote:\n> On Wed, 19 Jun 2019 at 20:04, Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Wed, Jun 19, 2019 at 7:07 PM Thom Brown <thom@linux.com> wrote:\n> > > On Thu, 13 Jun 2019 at 14:59, Thom Brown <thom@linux.com> wrote:\n> > > Also, this example doesn't work:\n> > >\n> > > '$.track ? (@.segments[*] ? (@.HR > 130)).segments.size()'\n> > >\n> > > This gives me:\n> > >\n> > > psql: ERROR: syntax error, unexpected $end at end of jsonpath input\n> > > LINE 13: }','$.track ? (@.segments[*]');\n> > > ^\n> >\n> > Perhaps it should be following:\n> >\n> > '$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()'\n>\n> I'm not clear on why the original example doesn't work here.\n\nIt doesn't work because a filter expression should be a predicate, i.e.\nalways return bool. In the original example the filter expression selects\nsome json elements. My original idea was that it had accidentally\ncome from some of our extensions where we've allowed that. But it\nappears to be just a plain wrong example, which never worked. Sorry for\nthat.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 28 Jun 2019 05:50:15 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
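[Editorial note: the predicate requirement explained above can be made concrete with a short sketch. Data is invented; the exists() form is the one suggested earlier in this thread.]

```sql
-- A filter must contain a predicate, i.e. something that yields a boolean:
SELECT jsonb_path_query('[1, 2, 1, 3]', '$[*] ? (@ == 1)');   -- returns 1 twice

-- A path that merely selects elements is not a predicate; wrapping it in
-- exists() turns it into one:
SELECT jsonb_path_query(
  '{"track": {"segments": [{"HR": 135}, {"HR": 110}]}}',
  '$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()'
);  -- returns 2, the number of segments, since the exists() predicate holds
```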
{
"msg_contents": "On Tue, Jun 25, 2019 at 6:38 PM Liudmila Mantrova\n<l.mantrova@postgrespro.ru> wrote:\n> Thank you for the catch! Please see the modified version of patch 0004\n> attached.\n\nI tried to review and revise the part related to filters, but I failed\nbecause I don't understand the notions used in the documentation.\n\nWhat is the difference between a filter expression and a filter condition?\nI can guess that a filter expression contains a question mark,\nparentheses, and a filter condition inside. But this sentence is in\ncontradiction with my guess: \"A filter expression must be enclosed in\nparentheses and preceded by a question mark\". So, the filter expression\nis inside the parentheses. Then what is the filter condition? The same?\n\n> Each filter expression can provide one or more filters\n> that are applied to the result of the path evaluation.\n\n\nSo, in addition to filter condition and filter expression, we introduce\nthe notion of just a filter. What is it? Could we make it without\nintroducing a new notion?\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 28 Jun 2019 06:47:59 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On 2019-Jun-28, Alexander Korotkov wrote:\n\n> On Tue, Jun 25, 2019 at 6:38 PM Liudmila Mantrova\n> <l.mantrova@postgrespro.ru> wrote:\n> > Thank you for the catch! Please see the modified version of patch 0004\n> > attached.\n> \n> I tried to review and revise the part related to filters, but I failed\n> because I don't understand the notions used in the documentation.\n> \n> What is the difference between filter expression and filter condition?\n> I can guess that filter expression contains question mark,\n> parentheses and filter condition inside. But this sentence is in\n> contradiction with my guess: \"A filter expression must be enclosed in\n> parentheses and preceded by a question mark\". So, filter expression\n> is inside the parentheses. Then what is filter condition? The same?\n\nThe SQL standard defines \"JSON filter expressions\" (in 9.39 of the 2016\nedition). It uses neither the term \"filter condition\" nor bare\n\"filter\"; it uses \"JSON path predicate\", which is the part of the JSON\nfilter expression that is preceded by the question mark and enclosed by\nparens.\n\nMaybe we should stick with the standard terminology ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Jun 2019 01:09:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 8:10 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jun-28, Alexander Korotkov wrote:\n>\n> > On Tue, Jun 25, 2019 at 6:38 PM Liudmila Mantrova\n> > <l.mantrova@postgrespro.ru> wrote:\n> > > Thank you for the catch! Please see the modified version of patch 0004\n> > > attached.\n> >\n> > I tried to review and revise the part related to filters, but I failed\n> > because I don't understand the notions used in the documentation.\n> >\n> > What is the difference between filter expression and filter condition?\n> > I can guess that filter expression contains question mark,\n> > parentheses and filter condition inside. But this sentence is in\n> > contradiction with my guess: \"A filter expression must be enclosed in\n> > parentheses and preceded by a question mark\". So, filter expression\n> > is inside the parentheses. Then what is filter condition? The same?\n>\n> The SQL standard defines \"JSON filter expressions\" (in 9.39 of the 2016\n> edition). It does not use either term \"filter condition\" nor bare\n> \"filter\"; it uses \"JSON path predicate\" which is the part of the JSON\n> filter expression that is preceded by the question mark and enclosed by\n> parens.\n\nYes, this is what I used in my talk\nhttp://www.sai.msu.su/~megera/postgres/talks/jsonpath-ibiza-2019.pdf\n\n>\n> Maybe we should stick with the standard terminology ...\n\nSure.\n\nAs for the jsonpath documentation, I think we should remember, that\njsonpath is a part of SQL/JSON, and in the following releases we will\nexpand documentation to include SQL/JSON functions, so I suggest to\nhave one chapter SQL/JSON with following structure:\n1. Introduction\n1.1 SQL/JSON data model\n1.2 SQL/JSON path language\n1.3 <SQL/JSON functions> -- to be added\n2. 
PostgreSQL implementation\n2.1 jsonpath data type -- link from json data types\n2.2 jsonpath functions and operators -- link from functions\n2.3 Indexing\n\nI plan to work on a separate chapter \"JSON handling in PostgreSQL\" for\nPG13, which includes\nJSON(b) data types, functions, indexing and SQL/JSON.\n\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n\n-- \nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 28 Jun 2019 09:00:30 +0300",
"msg_from": "Oleg Bartunov <obartunov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Fri, Jun 28, 2019 at 9:01 AM Oleg Bartunov <obartunov@postgrespro.ru> wrote:\n> On Fri, Jun 28, 2019 at 8:10 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> >\n> > On 2019-Jun-28, Alexander Korotkov wrote:\n> >\n> > > On Tue, Jun 25, 2019 at 6:38 PM Liudmila Mantrova\n> > > <l.mantrova@postgrespro.ru> wrote:\n> > > > Thank you for the catch! Please see the modified version of patch 0004\n> > > > attached.\n> > >\n> > > I tried to review and revise the part related to filters, but I failed\n> > > because I don't understand the notions used in the documentation.\n> > >\n> > > What is the difference between filter expression and filter condition?\n> > > I can guess that filter expression contains question mark,\n> > > parentheses and filter condition inside. But this sentence is in\n> > > contradiction with my guess: \"A filter expression must be enclosed in\n> > > parentheses and preceded by a question mark\". So, filter expression\n> > > is inside the parentheses. Then what is filter condition? The same?\n> >\n> > The SQL standard defines \"JSON filter expressions\" (in 9.39 of the 2016\n> > edition). It does not use either term \"filter condition\" nor bare\n> > \"filter\"; it uses \"JSON path predicate\" which is the part of the JSON\n> > filter expression that is preceded by the question mark and enclosed by\n> > parens.\n>\n> Yes, this is what I used in my talk\n> http://www.sai.msu.su/~megera/postgres/talks/jsonpath-ibiza-2019.pdf\n>\n> >\n> > Maybe we should stick with the standard terminology ...\n>\n> Sure.\n\n+1\n\n> As for the jsonpath documentation, I think we should remember, that\n> jsonpath is a part of SQL/JSON, and in the following releases we will\n> expand documentation to include SQL/JSON functions, so I suggest to\n> have one chapter SQL/JSON with following structure:\n> 1. Introduction\n> 1.1 SQL/JSON data model\n> 1.2 SQL/JSON path language\n> 1.3 <SQL/JSON functions> -- to be added\n> 2. 
PostgreSQL implementation\n> 2.1 jsonpath data type -- link from json data types\n> 2.2 jsonpath functions and operators -- link from functions\n> 2.3 Indexing\n>\n> I plan to work on a separate chapter \"JSON handling in PostgreSQL\" for\n> PG13, which includes\n> JSON(b) data types, functions, indexing and SQL/JSON.\n\nIt would be great if you manage to do this.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 28 Jun 2019 14:30:50 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On 6/28/19 6:47 AM, Alexander Korotkov wrote:\n> On Tue, Jun 25, 2019 at 6:38 PM Liudmila Mantrova\n> <l.mantrova@postgrespro.ru> wrote:\n>> Thank you for the catch! Please see the modified version of patch 0004\n>> attached.\n> I tried to review and revise the part related to filters, but I failed\n> because I don't understand the notions used in the documentation.\n>\n> What is the difference between filter expression and filter condition?\n> I can guess that filter expression contains question mark,\n> parentheses and filter condition inside. But this sentence is in\n> contradiction with my guess: \"A filter expression must be enclosed in\n> parentheses and preceded by a question mark\". So, filter expression\n> is inside the parentheses. Then what is filter condition? The same?\n>\n>> Each filter expression can provide one or more filters\n>> that are applied to the result of the path evaluation.\n>\n> So additionally to filter condition and filter expression we introduce\n> the notion of just filter. What is it? Could we make it without\n> introduction of new notion?\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\nHi,\n\nI have rechecked the standard and I agree that we should use \"filter \nexpression\" whenever possible.\n\"A filter expression must be enclosed in parentheses...\" looks like an \noversight, so I fixed it. As for what's actually enclosed, I believe we \ncan still use the word \"condition\" here as it's easy to understand and \nis already used in our docs, e.g. in description of the WHERE clause \nthat serves a similar purpose.\nThe new version of the patch fixes the terminology, tweaks the examples, \nand provides some grammar and style fixes in the jsonpath-related chapters.\n\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 3 Jul 2019 17:27:51 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "Hi!\n\nOn Wed, Jul 3, 2019 at 5:27 PM Liudmila Mantrova\n<l.mantrova@postgrespro.ru> wrote:\n>\n> I have rechecked the standard and I agree that we should use \"filter\n> expression\" whenever possible.\n> \"A filter expression must be enclosed in parentheses...\" looks like an\n> oversight, so I fixed it. As for what's actually enclosed, I believe we\n> can still use the word \"condition\" here as it's easy to understand and\n> is already used in our docs, e.g. in description of the WHERE clause\n> that serves a similar purpose.\n> The new version of the patch fixes the terminology, tweaks the examples,\n> and provides some grammar and style fixes in the jsonpath-related chapters.\n\n\nIt looks good to me. But this sentence looks a bit too complicated.\n\n\"It can be followed by one or more accessor operators to define the\nJSON element on a lower nesting level by which to filter the result.\"\n\nCould we phrase this as following?\n\n\"In order to filter the result by values lying on lower nesting level,\n@ operator can be followed by one or more accessor operators.\"\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 3 Jul 2019 23:59:01 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On 7/3/19 11:59 PM, Alexander Korotkov wrote:\n> Hi!\n>\n> On Wed, Jul 3, 2019 at 5:27 PM Liudmila Mantrova\n> <l.mantrova@postgrespro.ru> wrote:\n>> I have rechecked the standard and I agree that we should use \"filter\n>> expression\" whenever possible.\n>> \"A filter expression must be enclosed in parentheses...\" looks like an\n>> oversight, so I fixed it. As for what's actually enclosed, I believe we\n>> can still use the word \"condition\" here as it's easy to understand and\n>> is already used in our docs, e.g. in description of the WHERE clause\n>> that serves a similar purpose.\n>> The new version of the patch fixes the terminology, tweaks the examples,\n>> and provides some grammar and style fixes in the jsonpath-related chapters.\n>\n> It looks good to me. But this sentence looks a bit too complicated.\n>\n> \"It can be followed by one or more accessor operators to define the\n> JSON element on a lower nesting level by which to filter the result.\"\n>\n> Could we phrase this as following?\n>\n> \"In order to filter the result by values lying on lower nesting level,\n> @ operator can be followed by one or more accessor operators.\"\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n\nThank you!\n\nI think we can make this sentence even shorter, the fix is attached:\n\n\"To refer to a JSON element stored at a lower nesting level, add one or \nmore accessor operators after <literal>@</literal>.\"\n\n\n-- \nLiudmila Mantrova\nTechnical writer at Postgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 4 Jul 2019 16:38:14 +0300",
"msg_from": "Liudmila Mantrova <l.mantrova@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Thu, Jul 4, 2019 at 4:38 PM Liudmila Mantrova\n<l.mantrova@postgrespro.ru> wrote:\n> Thank you!\n>\n> I think we can make this sentence even shorter, the fix is attached:\n>\n> \"To refer to a JSON element stored at a lower nesting level, add one or\n> more accessor operators after <literal>@</literal>.\"\n\nThanks, looks good to me. Attached revision of patch contains commit\nmessage. I'm going to commit this on no objections.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 8 Jul 2019 00:30:01 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 12:30 AM Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n> On Thu, Jul 4, 2019 at 4:38 PM Liudmila Mantrova\n> <l.mantrova@postgrespro.ru> wrote:\n> > Thank you!\n> >\n> > I think we can make this sentence even shorter, the fix is attached:\n> >\n> > \"To refer to a JSON element stored at a lower nesting level, add one or\n> > more accessor operators after <literal>@</literal>.\"\n>\n> Thanks, looks good to me. Attached revision of patch contains commit\n> message. I'm going to commit this on no objections.\n\nSo, pushed!\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 10 Jul 2019 07:58:26 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Wed, 10 Jul 2019 at 05:58, Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Mon, Jul 8, 2019 at 12:30 AM Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> > On Thu, Jul 4, 2019 at 4:38 PM Liudmila Mantrova\n> > <l.mantrova@postgrespro.ru> wrote:\n> > > Thank you!\n> > >\n> > > I think we can make this sentence even shorter, the fix is attached:\n> > >\n> > > \"To refer to a JSON element stored at a lower nesting level, add one or\n> > > more accessor operators after <literal>@</literal>.\"\n> >\n> > Thanks, looks good to me. Attached revision of patch contains commit\n> > message. I'm going to commit this on no objections.\n>\n> So, pushed!\n\nI've just noticed the >= operator shows up as just > in the \"jsonpath\nFilter Expression Elements\" table, and the <= example only shows <.\n\nThom\n\n\n",
"msg_date": "Thu, 11 Jul 2019 15:09:28 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 5:10 PM Thom Brown <thom@linux.com> wrote:\n> On Wed, 10 Jul 2019 at 05:58, Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> >\n> > On Mon, Jul 8, 2019 at 12:30 AM Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > > On Thu, Jul 4, 2019 at 4:38 PM Liudmila Mantrova\n> > > <l.mantrova@postgrespro.ru> wrote:\n> > > > Thank you!\n> > > >\n> > > > I think we can make this sentence even shorter, the fix is attached:\n> > > >\n> > > > \"To refer to a JSON element stored at a lower nesting level, add one or\n> > > > more accessor operators after <literal>@</literal>.\"\n> > >\n> > > Thanks, looks good to me. Attached revision of patch contains commit\n> > > message. I'm going to commit this on no objections.\n> >\n> > So, pushed!\n>\n> I've just noticed the >= operator shows up as just > in the \"jsonpath\n> Filter Expression Elements\" table, and the <= example only shows <.\n\nThank you for catching this! Fix just pushed.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 11 Jul 2019 18:23:20 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Thu, 11 Jul 2019 at 16:23, Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Thu, Jul 11, 2019 at 5:10 PM Thom Brown <thom@linux.com> wrote:\n> > On Wed, 10 Jul 2019 at 05:58, Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > >\n> > > On Mon, Jul 8, 2019 at 12:30 AM Alexander Korotkov\n> > > <a.korotkov@postgrespro.ru> wrote:\n> > > > On Thu, Jul 4, 2019 at 4:38 PM Liudmila Mantrova\n> > > > <l.mantrova@postgrespro.ru> wrote:\n> > > > > Thank you!\n> > > > >\n> > > > > I think we can make this sentence even shorter, the fix is attached:\n> > > > >\n> > > > > \"To refer to a JSON element stored at a lower nesting level, add one or\n> > > > > more accessor operators after <literal>@</literal>.\"\n> > > >\n> > > > Thanks, looks good to me. Attached revision of patch contains commit\n> > > > message. I'm going to commit this on no objections.\n> > >\n> > > So, pushed!\n> >\n> > I've just noticed the >= operator shows up as just > in the \"jsonpath\n> > Filter Expression Elements\" table, and the <= example only shows <.\n>\n> Thank you for catching this! Fix just pushed.\n\nThanks.\n\nNow I'm looking at the @? and @@ operators, and getting a bit\nconfused. This following query returns true, but I can't determine\nwhy:\n\n# SELECT '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.b == \"hello\"'::jsonpath;\n ?column?\n----------\n t\n(1 row)\n\n\"b\" is not a valid item, so there should be no match. Perhaps it's my\nmisunderstanding of how these operators are supposed to work, but the\ndocumentation is quite terse on the behaviour.\n\nThom\n\n\n",
"msg_date": "Tue, 16 Jul 2019 19:21:56 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Tue, Jul 16, 2019 at 9:22 PM Thom Brown <thom@linux.com> wrote:\n> Now I'm looking at the @? and @@ operators, and getting a bit\n> confused. This following query returns true, but I can't determine\n> why:\n>\n> # SELECT '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.b == \"hello\"'::jsonpath;\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> \"b\" is not a valid item, so there should be no match. Perhaps it's my\n> misunderstanding of how these operators are supposed to work, but the\n> documentation is quite terse on the behaviour.\n\nSo, the result of the jsonpath evaluation is a single value, \"false\".\n\n# SELECT jsonb_path_query_array('{\"a\":[1,2,3,4,5]}'::jsonb, '$.b == \"hello\"');\n jsonb_path_query_array\n------------------------\n [false]\n(1 row)\n\nThe @@ operator checks that the result is \"true\". This is why it returns \"false\".\n\nThe @? operator checks whether the result is non-empty. Here it's a single \"false\"\nvalue, not an empty list. This is why it returns \"true\".\n\nPerhaps we need to clarify this in the docs by providing more explanation.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 16 Jul 2019 21:44:39 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
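[Editorial note: the distinction explained in the message above can be shown side by side. This is a sketch; the first @? result and the first @@ result are the ones discussed in this message.]

```sql
-- @? asks: does the path yield any item at all?
SELECT '{"a":[1,2,3,4,5]}'::jsonb @? '$.b == "hello"';  -- t: yields [false], which is non-empty
SELECT '{"a":[1,2,3,4,5]}'::jsonb @? '$.b';             -- f: the path yields no items

-- @@ asks: does the path yield the single value true?
SELECT '{"a":[1,2,3,4,5]}'::jsonb @@ '$.b == "hello"';  -- f: the result is false, not true
SELECT '{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 3';      -- t: some element exceeds 3
```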
{
"msg_contents": "On Tue, 16 Jul 2019 at 19:44, Alexander Korotkov\n<a.korotkov@postgrespro.ru> wrote:\n>\n> On Tue, Jul 16, 2019 at 9:22 PM Thom Brown <thom@linux.com> wrote:\n> > Now I'm looking at the @? and @@ operators, and getting a bit\n> > confused. This following query returns true, but I can't determine\n> > why:\n> >\n> > # SELECT '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.b == \"hello\"'::jsonpath;\n> > ?column?\n> > ----------\n> > t\n> > (1 row)\n> >\n> > \"b\" is not a valid item, so there should be no match. Perhaps it's my\n> > misunderstanding of how these operators are supposed to work, but the\n> > documentation is quite terse on the behaviour.\n>\n> So, the result of jsonpath evaluation is single value \"false\".\n>\n> # SELECT jsonb_path_query_array('{\"a\":[1,2,3,4,5]}'::jsonb, '$.b == \"hello\"');\n> jsonb_path_query_array\n> ------------------------\n> [false]\n> (1 row)\n>\n> @@ operator checks that result is \"true\". This is why it returns \"false\".\n>\n> @? operator checks if result is not empty. So, it's single \"false\"\n> value, not empty list. This is why it returns \"true\".\n>\n> Perhaps, we need to clarify this in docs providing more explanation.\n\nUnderstood. Thanks.\n\nAlso, is there a reason why jsonb_path_query doesn't have an operator analog?\n\nThom\n\n\n",
"msg_date": "Thu, 18 Jul 2019 15:08:06 +0100",
"msg_from": "Thom Brown <thom@linux.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "On Thu, Jul 18, 2019 at 5:08 PM Thom Brown <thom@linux.com> wrote:\n> On Tue, 16 Jul 2019 at 19:44, Alexander Korotkov\n> <a.korotkov@postgrespro.ru> wrote:\n> >\n> > On Tue, Jul 16, 2019 at 9:22 PM Thom Brown <thom@linux.com> wrote:\n> > > Now I'm looking at the @? and @@ operators, and getting a bit\n> > > confused. This following query returns true, but I can't determine\n> > > why:\n> > >\n> > > # SELECT '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.b == \"hello\"'::jsonpath;\n> > > ?column?\n> > > ----------\n> > > t\n> > > (1 row)\n> > >\n> > > \"b\" is not a valid item, so there should be no match. Perhaps it's my\n> > > misunderstanding of how these operators are supposed to work, but the\n> > > documentation is quite terse on the behaviour.\n> >\n> > So, the result of jsonpath evaluation is single value \"false\".\n> >\n> > # SELECT jsonb_path_query_array('{\"a\":[1,2,3,4,5]}'::jsonb, '$.b == \"hello\"');\n> > jsonb_path_query_array\n> > ------------------------\n> > [false]\n> > (1 row)\n> >\n> > @@ operator checks that result is \"true\". This is why it returns \"false\".\n> >\n> > @? operator checks if result is not empty. So, it's single \"false\"\n> > value, not empty list. This is why it returns \"true\".\n> >\n> > Perhaps, we need to clarify this in docs providing more explanation.\n>\n> Understood. Thanks.\n>\n> Also, is there a reason why jsonb_path_query doesn't have an operator analog?\n\nThe point of existing operator analogues is index support. We\nintroduced operators for searches we can accelerate using GIN indexes.\n\njsonb_path_query() doesn't return bool. So, even if we have an\noperator for that, it wouldn't get index support.\n\nHowever, we can discuss introduction of operator analogues for other\nfunctions as syntax sugar.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 19 Jul 2019 12:02:32 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "I would like to help review this documentation. Can you please point me in\nthe right direction?\nThanks\nSteve\n\nOn Fri, Jul 19, 2019 at 2:02 AM Alexander Korotkov <\na.korotkov@postgrespro.ru> wrote:\n\n> On Thu, Jul 18, 2019 at 5:08 PM Thom Brown <thom@linux.com> wrote:\n> > On Tue, 16 Jul 2019 at 19:44, Alexander Korotkov\n> > <a.korotkov@postgrespro.ru> wrote:\n> > >\n> > > On Tue, Jul 16, 2019 at 9:22 PM Thom Brown <thom@linux.com> wrote:\n> > > > Now I'm looking at the @? and @@ operators, and getting a bit\n> > > > confused. This following query returns true, but I can't determine\n> > > > why:\n> > > >\n> > > > # SELECT '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.b == \"hello\"'::jsonpath;\n> > > > ?column?\n> > > > ----------\n> > > > t\n> > > > (1 row)\n> > > >\n> > > > \"b\" is not a valid item, so there should be no match. Perhaps it's\n> my\n> > > > misunderstanding of how these operators are supposed to work, but the\n> > > > documentation is quite terse on the behaviour.\n> > >\n> > > So, the result of jsonpath evaluation is single value \"false\".\n> > >\n> > > # SELECT jsonb_path_query_array('{\"a\":[1,2,3,4,5]}'::jsonb, '$.b ==\n> \"hello\"');\n> > > jsonb_path_query_array\n> > > ------------------------\n> > > [false]\n> > > (1 row)\n> > >\n> > > @@ operator checks that result is \"true\". This is why it returns\n> \"false\".\n> > >\n> > > @? operator checks if result is not empty. So, it's single \"false\"\n> > > value, not empty list. This is why it returns \"true\".\n> > >\n> > > Perhaps, we need to clarify this in docs providing more explanation.\n> >\n> > Understood. Thanks.\n> >\n> > Also, is there a reason why jsonb_path_query doesn't have an operator\n> analog?\n>\n> The point of existing operator analogues is index support. We\n> introduced operators for searches we can accelerate using GIN indexes.\n>\n> jsonb_path_query() doesn't return bool. 
So, even if we have an\n> operator for that, it wouldn't get index support.\n>\n> However, we can discuss introduction of operator analogues for other\n> functions as syntax sugar.\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n>\n>",
"msg_date": "Fri, 19 Jul 2019 11:53:00 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "Hi Steven,\n\nOn Fri, Jul 19, 2019 at 9:53 PM Steven Pousty <steve.pousty@gmail.com> wrote:\n> I would like to help review this documentation. Can you please point me in the right direction?\n\nThank you for your interest. You're welcome to do review.\n\nPlease take a look at instruction for reviewing a patch [1] and\nworking with git [2]. Also, in order to build a doc you will need to\nsetup a toolset first [3].\n\nLinks\n\n1. https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n2. https://wiki.postgresql.org/wiki/Working_with_git#Testing_a_patch\n3. https://www.postgresql.org/docs/devel/docguide-toolsets.html\n\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sat, 20 Jul 2019 21:48:07 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "Thanks so much, hope to get to it over this weekend.\n\nOn Sat, Jul 20, 2019, 11:48 AM Alexander Korotkov <a.korotkov@postgrespro.ru>\nwrote:\n\n> Hi Steven,\n>\n> On Fri, Jul 19, 2019 at 9:53 PM Steven Pousty <steve.pousty@gmail.com>\n> wrote:\n> > I would like to help review this documentation. Can you please point me\n> in the right direction?\n>\n> Thank you for your interest. You're welcome to do review.\n>\n> Please take a look at instruction for reviewing a patch [1] and\n> working with git [2]. Also, in order to build a doc you will need to\n> setup a toolset first [3].\n>\n> Links\n>\n> 1. https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> 2. https://wiki.postgresql.org/wiki/Working_with_git#Testing_a_patch\n> 3. https://www.postgresql.org/docs/devel/docguide-toolsets.html\n>\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>",
"msg_date": "Sat, 20 Jul 2019 12:43:10 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
},
{
"msg_contents": "Ok I have the toolset.\nWhere do I find the PR for the doc on this work. I only feel qualified to\nreview the doc.\nThanks\nSteve\n\nOn Sat, Jul 20, 2019 at 11:48 AM Alexander Korotkov <\na.korotkov@postgrespro.ru> wrote:\n\n> Hi Steven,\n>\n> On Fri, Jul 19, 2019 at 9:53 PM Steven Pousty <steve.pousty@gmail.com>\n> wrote:\n> > I would like to help review this documentation. Can you please point me\n> in the right direction?\n>\n> Thank you for your interest. You're welcome to do review.\n>\n> Please take a look at instruction for reviewing a patch [1] and\n> working with git [2]. Also, in order to build a doc you will need to\n> setup a toolset first [3].\n>\n> Links\n>\n> 1. https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n> 2. https://wiki.postgresql.org/wiki/Working_with_git#Testing_a_patch\n> 3. https://www.postgresql.org/docs/devel/docguide-toolsets.html\n>\n>\n> ------\n> Alexander Korotkov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>",
"msg_date": "Tue, 23 Jul 2019 17:24:11 -0700",
"msg_from": "Steven Pousty <steve.pousty@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON path issues/questions"
}
]
[
{
"msg_contents": "The release notes say:\n\n <listitem>\n<!--\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2018-12-30 [b5415e3c2] Support parameterized TidPaths.\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n2018-12-30 [0a6ea4001] Add a hash opclass for type \"tid\".\n-->\n\n <para>\n Improve optimization of self-joins (Tom Lane)\n </para>\n </listitem>\n\nI don't think that's an accurate summary of those two items. It's\ntrue that they could make self-joins more efficient, but my reading is\nthat it would only do so if the self-join happened to use the ctid\ncolumn. If you're writing SELECT * FROM foo a, foo b WHERE a.ctid =\nb.ctid, it might very well help; but if you write SELECT * FROM foo a,\nfoo b WHERE a.x = b.x, it won't, not even if there is a unique index\non x. Or so I think.\n\nSo I think that this should probably be changed to say something like\n\"Improve optimization of self-joins on ctid columns\" or \"Improve\noptimization of joins involving columns of type tid.\"\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:14:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "release notes: tids & self-joins"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The release notes say:\n> <listitem>\n> <!--\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2018-12-30 [b5415e3c2] Support parameterized TidPaths.\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> 2018-12-30 [0a6ea4001] Add a hash opclass for type \"tid\".\n> -->\n\n> <para>\n> Improve optimization of self-joins (Tom Lane)\n> </para>\n> </listitem>\n\n> I don't think that's an accurate summary of those two items. It's\n> true that they could make self-joins more efficient, but my reading is\n> that it would only do so if the self-join happened to use the ctid\n> column.\n\nYeah. I think Bruce misread the commit messages, which commented that\njoining on TID is only likely to be useful in a self-join.\n\n> So I think that this should probably be changed to say something like\n> \"Improve optimization of self-joins on ctid columns\" or \"Improve\n> optimization of joins involving columns of type tid.\"\n\nThe latter seems fine to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:22:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: release notes: tids & self-joins"
},
{
"msg_contents": "On Fri, 14 Jun 2019 at 05:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > So I think that this should probably be changed to say something like\n> > \"Improve optimization of self-joins on ctid columns\" or \"Improve\n> > optimization of joins involving columns of type tid.\"\n>\n> The latter seems fine to me.\n\nThe latter seems a bit inaccurate to me given the fact that a column\nwith the type tid could exist elsewhere in the table. Perhaps\n\"columns of type tid\" can be swapped with \"a table's ctid column\".\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 08:53:05 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: release notes: tids & self-joins"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Fri, 14 Jun 2019 at 05:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> So I think that this should probably be changed to say something like\n>>> \"Improve optimization of self-joins on ctid columns\" or \"Improve\n>>> optimization of joins involving columns of type tid.\"\n\n>> The latter seems fine to me.\n\n> The latter seems a bit inaccurate to me given the fact that a column\n> with the type tid could exist elsewhere in the table. Perhaps\n> \"columns of type tid\" can be swapped with \"a table's ctid column\".\n\nIt's true that the parameterized-tidscan patch only helps for joins\nto CTID, but the other patch helps for joins to any tid column.\nSo I still say Robert's wording is fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 13 Jun 2019 17:19:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: release notes: tids & self-joins"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 01:22:16PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > The release notes say:\n> > <listitem>\n> > <!--\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2018-12-30 [b5415e3c2] Support parameterized TidPaths.\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > 2018-12-30 [0a6ea4001] Add a hash opclass for type \"tid\".\n> > -->\n> \n> > <para>\n> > Improve optimization of self-joins (Tom Lane)\n> > </para>\n> > </listitem>\n> \n> > I don't think that's an accurate summary of those two items. It's\n> > true that they could make self-joins more efficient, but my reading is\n> > that it would only do so if the self-join happened to use the ctid\n> > column.\n> \n> Yeah. I think Bruce misread the commit messages, which commented that\n> joining on TID is only likely to be useful in a self-join.\n> \n> > So I think that this should probably be changed to say something like\n> > \"Improve optimization of self-joins on ctid columns\" or \"Improve\n> > optimization of joins involving columns of type tid.\"\n> \n> The latter seems fine to me.\n\nI have updated to use the latter wording.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 22:53:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: release notes: tids & self-joins"
}
]
[
{
"msg_contents": "The documentation for the new REINDEX CONCURRENTLY option says:\n\n\"When this option is used, PostgreSQL will rebuild the index without\ntaking any locks that prevent concurrent inserts, updates, or deletes\non the table; whereas a standard reindex build locks out writes (but\nnot reads) on the table until it's done.\"\n\nThis statement appears to be false, not because it's wrong about\nREINDEX CONCURRENTLY but because it's wrong about regular REINDEX.\n\nS1:\n\nrhaas=# begin;\nBEGIN\nrhaas=# select * from pgbench_branches where filler = 'afafa';\n bid | bbalance | filler\n-----+----------+--------\n(0 rows)\n\nS2:\n\nrhaas=# reindex index pgbench_branches_pkey;\n-- hangs\n\nTyping \"COMMIT;\" or \"ROLLBACK;\" in S1 unblocks the reindex and it\nsucceeds, but otherwise it doesn't, contrary to the claim that a\nregular REINDEX does not block reads. The reason for this seems to be\nthat the REINDEX acquires AccessExclusiveLock on all of the indexes of\nthe table, and a SELECT acquires AccessShareLock on all indexes of the\ntable (even if the particular plan at issue does not use them); e.g.\nin this case the plan is a Seq Scan. REINDEX acquires only ShareLock\non the table itself, but this apparently does nobody wanting to run a\nquery any good.\n\nIs it supposed to work this way? Am I confused?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jun 2019 16:04:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "REINDEX locking"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 1:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Typing \"COMMIT;\" or \"ROLLBACK;\" in S1 unblocks the reindex and it\n> succeeds, but otherwise it doesn't, contrary to the claim that a\n> regular REINDEX does not block reads. The reason for this seems to be\n> that the REINDEX acquires AccessExclusiveLock on all of the indexes of\n> the table, and a SELECT acquires AccessShareLock on all indexes of the\n> table (even if the particular plan at issue does not use them); e.g.\n> in this case the plan is a Seq Scan. REINDEX acquires only ShareLock\n> on the table itself, but this apparently does nobody wanting to run a\n> query any good.\n>\n> Is it supposed to work this way? Am I confused?\n\nI've always thought that this framing was very user-hostile.\nTheoretically, REINDEX doesn't have to block reads (e.g. it won't with\nprepared statements when various conditions are met), but in practice\nthe behavior isn't meaningfully different from blocking reads.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:10:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX locking"
},
{
"msg_contents": "On 2019-Jun-13, Robert Haas wrote:\n\n> Typing \"COMMIT;\" or \"ROLLBACK;\" in S1 unblocks the reindex and it\n> succeeds, but otherwise it doesn't, contrary to the claim that a\n> regular REINDEX does not block reads. The reason for this seems to be\n> that the REINDEX acquires AccessExclusiveLock on all of the indexes of\n> the table, and a SELECT acquires AccessShareLock on all indexes of the\n> table (even if the particular plan at issue does not use them); e.g.\n> in this case the plan is a Seq Scan. REINDEX acquires only ShareLock\n> on the table itself, but this apparently does nobody wanting to run a\n> query any good.\n\nYeah, this has been mentioned before, and it's pretty infuriating, but I\ndon't think we have any solution currently in the cards. I think a\nworkaround is to use prepared queries that don't involve the index,\nsince it's only the planning phase that wants to acquire lock on indexes\nthat execution doesn't use. I don't see this as a practical solution.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 16:10:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: REINDEX locking"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 4:10 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Jun-13, Robert Haas wrote:\n> > Typing \"COMMIT;\" or \"ROLLBACK;\" in S1 unblocks the reindex and it\n> > succeeds, but otherwise it doesn't, contrary to the claim that a\n> > regular REINDEX does not block reads. The reason for this seems to be\n> > that the REINDEX acquires AccessExclusiveLock on all of the indexes of\n> > the table, and a SELECT acquires AccessShareLock on all indexes of the\n> > table (even if the particular plan at issue does not use them); e.g.\n> > in this case the plan is a Seq Scan. REINDEX acquires only ShareLock\n> > on the table itself, but this apparently does nobody wanting to run a\n> > query any good.\n>\n> Yeah, this has been mentioned before, and it's pretty infuriating, but I\n> don't think we have any solution currently in the cards. I think a\n> workaround is to use prepared queries that don't involve the index,\n> since it's only the planning phase that wants to acquire lock on indexes\n> that execution doesn't use. I don't see this as a practical solution.\n\nWow, that's not nice at all. I feel like we should at least document\nthis better.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jun 2019 16:22:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: REINDEX locking"
}
]
[
{
"msg_contents": "Avoid spurious deadlocks when upgrading a tuple lock\n\nWhen two (or more) transactions are waiting for transaction T1 to release a\ntuple-level lock, and transaction T1 upgrades its lock to a higher level, a\nspurious deadlock can be reported among the waiting transactions when T1\nfinishes. The simplest example case seems to be:\n\nT1: select id from job where name = 'a' for key share;\nY: select id from job where name = 'a' for update; -- starts waiting for T1\nZ: select id from job where name = 'a' for key share;\nT1: update job set name = 'b' where id = 1;\nZ: update job set name = 'c' where id = 1; -- starts waiting for T1\nT1: rollback;\n\nAt this point, transaction Y is rolled back on account of a deadlock: Y\nholds the heavyweight tuple lock and is waiting for the Xmax to be released,\nwhile Z holds part of the multixact and tries to acquire the heavyweight\nlock (per protocol) and goes to sleep; once T1 releases its part of the\nmultixact, Z is awakened only to be put back to sleep on the heavyweight\nlock that Y is holding while sleeping. Kaboom.\n\nThis can be avoided by having Z skip the heavyweight lock acquisition. As\nfar as I can see, the biggest downside is that if there are multiple Z\ntransactions, the order in which they resume after T1 finishes is not\nguaranteed.\n\nBackpatch to 9.6. The patch applies cleanly on 9.5, but the new tests don't\nwork there (because isolationtester is not smart enough), so I'm not going\nto risk it.\n\nAuthor: Oleksii Kliukin\nDiscussion: https://postgr.es/m/B9C9D7CD-EB94-4635-91B6-E558ACEC0EC3@hintbits.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/de87a084c0a5ac927017cd0834b33a932651cfc9\n\nModified Files\n--------------\nsrc/backend/access/heap/README.tuplock | 10 ++\nsrc/backend/access/heap/heapam.c | 84 +++++++++---\n.../expected/tuplelock-upgrade-no-deadlock.out | 150 +++++++++++++++++++++\nsrc/test/isolation/isolation_schedule | 1 +\n.../specs/tuplelock-upgrade-no-deadlock.spec | 57 ++++++++\n5 files changed, 281 insertions(+), 21 deletions(-)\n\n",
"msg_date": "Thu, 13 Jun 2019 21:32:14 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Avoid spurious deadlocks when upgrading a tuple lock\n\nI'm now getting\n\nheapam.c: In function 'heap_lock_tuple':\nheapam.c:4041: warning: 'skip_tuple_lock' may be used uninitialized in this function\n\nPlease fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 10:10:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-14, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Avoid spurious deadlocks when upgrading a tuple lock\n> \n> I'm now getting\n> \n> heapam.c: In function 'heap_lock_tuple':\n> heapam.c:4041: warning: 'skip_tuple_lock' may be used uninitialized in this function\n\nHm, I don't get that warning. Does this patch silence it, please?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:11:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-14, Tom Lane wrote:\n>> I'm now getting\n>> heapam.c: In function 'heap_lock_tuple':\n>> heapam.c:4041: warning: 'skip_tuple_lock' may be used uninitialized in this function\n\n> Hm, I don't get that warning. Does this patch silence it, please?\n\nUh, no patch attached? But initializing the variable where it's\ndeclared would certainly silence it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:28:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "I wrote:\n>> Hm, I don't get that warning. Does this patch silence it, please?\n\n> Uh, no patch attached? But initializing the variable where it's\n> declared would certainly silence it.\n\nBTW, after looking around a bit I wonder if this complaint isn't\nexposing an actual logic bug. Shouldn't skip_tuple_lock have\na lifetime similar to first_time?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:32:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-14, Tom Lane wrote:\n\n> I wrote:\n> >> Hm, I don't get that warning. Does this patch silence it, please?\n> \n> > Uh, no patch attached? But initializing the variable where it's\n> > declared would certainly silence it.\n> \n> BTW, after looking around a bit I wonder if this complaint isn't\n> exposing an actual logic bug. Shouldn't skip_tuple_lock have\n> a lifetime similar to first_time?\n\nI think you're right. I should come up with a test case that exercises\nthat case.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:37:36 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-14, Tom Lane wrote:\n\n> I wrote:\n> >> Hm, I don't get that warning. Does this patch silence it, please?\n> \n> > Uh, no patch attached? But initializing the variable where it's\n> > declared would certainly silence it.\n> \n> BTW, after looking around a bit I wonder if this complaint isn't\n> exposing an actual logic bug. Shouldn't skip_tuple_lock have\n> a lifetime similar to first_time?\n\nI think there are worse problems here. I tried the attached isolation\nspec. Note that the only difference in the two permutations is that s0\nfinishes earlier in one than the other; yet the first one works fine and\nthe second one hangs until killed by the 180s timeout. (s3 isn't\nreleased for a reason I'm not sure I understand.)\n\nI don't think I'm going to have time to investigate this deeply over the\nweekend, so I think the safest course of action is to revert this for\nnext week's set.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 14 Jun 2019 23:43:37 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-14, Tom Lane wrote:\n>> BTW, after looking around a bit I wonder if this complaint isn't\n>> exposing an actual logic bug. Shouldn't skip_tuple_lock have\n>> a lifetime similar to first_time?\n\n> I think there are worse problems here. I tried the attached isolation\n> spec. Note that the only difference in the two permutations is that s0\n> finishes earlier in one than the other; yet the first one works fine and\n> the second one hangs until killed by the 180s timeout. (s3 isn't\n> released for a reason I'm not sure I understand.)\n\nUgh.\n\n> I don't think I'm going to have time to investigate this deeply over the\n> weekend, so I think the safest course of action is to revert this for\n> next week's set.\n\n+1. This is an old bug, we don't have to improve it for this release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 15 Jun 2019 12:25:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-14, Alvaro Herrera wrote:\n\n> I think there are worse problems here. I tried the attached isolation\n> spec. Note that the only difference in the two permutations is that s0\n> finishes earlier in one than the other; yet the first one works fine and\n> the second one hangs until killed by the 180s timeout. (s3 isn't\n> released for a reason I'm not sure I understand.)\n\nActually, those behaviors both seem correct to me now that I look\ncloser. So this was a false alarm. In the code before de87a084c0, the\nfirst permutation deadlocks, and the second permutation hangs. The only\nbehavior change is that the first one no longer deadlocks, which is the\ndesired change.\n\nI'm still trying to form a case to exercise the case of skip_tuple_lock\nhaving the wrong lifetime.\n\n\nThe fact that both permutations behave differently, even though the\nonly difference is where s0 commits relative to the s3_share step, is an\nartifact of our unusual tuple locking implementation.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 15 Jun 2019 13:01:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Jun-14, Alvaro Herrera wrote:\n> \n>> I think there are worse problems here. I tried the attached isolation\n>> spec. Note that the only difference in the two permutations is that s0\n>> finishes earlier in one than the other; yet the first one works fine and\n>> the second one hangs until killed by the 180s timeout. (s3 isn't\n>> released for a reason I'm not sure I understand.)\n> \n> Actually, those behaviors both seem correct to me now that I look\n> closer. So this was a false alarm. In the code before de87a084c0, the\n> first permutation deadlocks, and the second permutation hangs. The only\n> behavior change is that the first one no longer deadlocks, which is the\n> desired change.\n> \n> I'm still trying to form a case to exercise the case of skip_tuple_lock\n> having the wrong lifetime.\n\nHm… I think it was an oversight from my part not to give skip_lock_tuple the\nsame lifetime as have_tuple_lock or first_time (also initializing it to\nfalse at the same time). Even if now it might not break anything in an\nobvious way, a backward jump to l3 label will leave skip_lock_tuple\nuninitialized, making it very dangerous for any future code that will rely\non its value.\n\n> The fact that both permutations behave differently, even though the\n> only difference is where s0 commits relative to the s3_share step, is an\n> artifact of our unusual tuple locking implementation.\n\nCheers,\nOleksii\n\n",
"msg_date": "Sun, 16 Jun 2019 00:12:21 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-16, Oleksii Kliukin wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> > On 2019-Jun-14, Alvaro Herrera wrote:\n> > \n> >> I think there are worse problems here. I tried the attached isolation\n> >> spec. Note that the only difference in the two permutations is that s0\n> >> finishes earlier in one than the other; yet the first one works fine and\n> >> the second one hangs until killed by the 180s timeout. (s3 isn't\n> >> released for a reason I'm not sure I understand.)\n> > \n> > Actually, those behaviors both seem correct to me now that I look\n> > closer. So this was a false alarm. In the code before de87a084c0, the\n> > first permutation deadlocks, and the second permutation hangs. The only\n> > behavior change is that the first one no longer deadlocks, which is the\n> > desired change.\n> > \n> > I'm still trying to form a case to exercise the case of skip_tuple_lock\n> > having the wrong lifetime.\n> \n> Hm… I think it was an oversight from my part not to give skip_lock_tuple the\n> same lifetime as have_tuple_lock or first_time (also initializing it to\n> false at the same time). Even if now it might not break anything in an\n> obvious way, a backward jump to l3 label will leave skip_lock_tuple\n> uninitialized, making it very dangerous for any future code that will rely\n> on its value.\n\nBut that's not the danger ... with the current coding, it's initialized\nto false every time through that block; that means the tuple lock will\nnever be skipped if we jump back to l3. So the danger is that the first\niteration sets the variable, then jumps back; second iteration\ninitializes the variable again, so instead of skipping the lock, it\ntakes it, causing a spurious deadlock.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 15 Jun 2019 18:47:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-15, Alvaro Herrera wrote:\n\n> But that's not the danger ... with the current coding, it's initialized\n> to false every time through that block; that means the tuple lock will\n> never be skipped if we jump back to l3. So the danger is that the first\n> iteration sets the variable, then jumps back; second iteration\n> initializes the variable again, so instead of skipping the lock, it\n> takes it, causing a spurious deadlock.\n\nSo, I'm too lazy today to generate a case that fully reproduces the\ndeadlock, because you need to stall 's2' a little bit using the\nwell-known advisory lock trick, but this one hits the code that would\nre-initialize the variable.\n\nI'm going to push the change of lifetime of the variable for now.\n\nsetup\n{\n drop table if exists tlu_job;\n create table tlu_job (id integer primary key, name text);\n\n insert into tlu_job values(1, 'a');\n}\n\n\nteardown\n{\n drop table tlu_job;\n}\n\nsession \"s0\"\nsetup { begin; set deadlock_timeout=1}\nstep \"s0_fornokeyupdate\" { select id from tlu_job where id = 1 for no key update; }\nstep \"s0_update\" { update tlu_job set name = 's0' where id = 1; }\nstep \"s0_commit\" { commit; }\n\nsession \"s1\"\nsetup { begin; set deadlock_timeout=1}\nstep \"s1_for_key_share\" { select id from tlu_job where id = 1 for key share; }\nstep \"s1_for_update\" { select id from tlu_job where id = 1 for update; }\nstep \"s1_rollback\" { rollback; }\n\nsession \"s2\"\nsetup { begin; set deadlock_timeout=1}\nstep \"s2_for_key_share\" { select id from tlu_job where id = 1 for key share; }\nstep \"s2_for_share\" { select id from tlu_job where id = 1 for share; }\nstep \"s2_rollback\" { rollback; }\n\nsession \"s3\"\nsetup { begin; set deadlock_timeout=1}\nstep \"s3_update\" { update tlu_job set name = 'c' where id = 1; }\nstep \"s3_rollback\" { rollback; }\n\npermutation \"s1_for_key_share\" \"s2_for_key_share\" \"s0_fornokeyupdate\" \"s2_for_share\" \"s0_update\" \"s0_commit\" 
\"s1_rollback\" \"s2_rollback\" \"s3_rollback\"\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 16 Jun 2019 15:04:41 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I'm going to push the change of lifetime of the variable for now.\n\nIf you're going to push anything before tomorrow's wrap, please do it\n*now* not later. We're running out of time to get a full sample of\nbuildfarm results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jun 2019 20:02:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-16, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > I'm going to push the change of lifetime of the variable for now.\n> \n> If you're going to push anything before tomorrow's wrap, please do it\n> *now* not later. We're running out of time to get a full sample of\n> buildfarm results.\n\nYeah, I had to bail out earlier today, so the only thing I'm confident\npushing now is a revert.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 16 Jun 2019 20:40:02 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-16, Tom Lane wrote:\n>> If you're going to push anything before tomorrow's wrap, please do it\n>> *now* not later. We're running out of time to get a full sample of\n>> buildfarm results.\n\n> Yeah, I had to bail out earlier today, so the only thing I'm confident\n> pushing now is a revert.\n\nYeah, let's do that. I don't want to risk shipping broken code.\nWe can try again for the next updates.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jun 2019 21:10:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On Sun, Jun 16, 2019 at 09:10:13PM -0400, Tom Lane wrote:\n> Yeah, let's do that. I don't want to risk shipping broken code.\n> We can try again for the next updates.\n\nCould you revert asap please then?\n--\nMichael",
"msg_date": "Mon, 17 Jun 2019 10:32:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-17, Michael Paquier wrote:\n\n> On Sun, Jun 16, 2019 at 09:10:13PM -0400, Tom Lane wrote:\n> > Yeah, let's do that. I don't want to risk shipping broken code.\n> > We can try again for the next updates.\n> \n> Could you revert asap please then?\n\nDone.\n\nI initially thought to keep the test in place, but then realized it\nmight be unstable, so I removed that too.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 16 Jun 2019 22:25:55 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-17, Michael Paquier wrote:\n>> Could you revert asap please then?\n\n> Done.\n\nThanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jun 2019 22:27:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On Sun, Jun 16, 2019 at 10:27:25PM -0400, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2019-Jun-17, Michael Paquier wrote:\n>>> Could you revert asap please then?\n> \n>> Done.\n> \n> Thanks.\n\nThanks, Alvaro.\n--\nMichael",
"msg_date": "Mon, 17 Jun 2019 11:44:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-16, Alvaro Herrera wrote:\n\n> So, I'm too lazy today to generate a case that fully reproduces the\n> deadlock, because you need to stall 's2' a little bit using the\n> well-known advisory lock trick, but this one hits the code that would\n> re-initialize the variable.\n\nHere's such a case. I was unable to reproduce the condition with a\nsmaller sequence of commands. This one does hit the deadlock when used\nwith the previous code, as expected; with the fixed code (ie.\nskip_tuple_lock in the outer scope and same lifetime as \"first_time\")\nthen it works fine, no deadlock.\n\nI'm going to push the fixed commit this afternoon, including this as an\nadditional permutation in the spec file.\n\nsetup\n{\n drop table if exists tlu_job;\n create table tlu_job (id integer primary key, name text);\n\n insert into tlu_job values(1, 'a');\n}\n\nteardown\n{\n drop table tlu_job;\n}\n\nsession \"s0\"\nsetup { begin; }\nstep \"s0_keyshare\" { select id from tlu_job where id = 1 for key share; }\nstep \"s0_share\" { select id from tlu_job where id = 1 for share; }\nstep \"s0_rollback\" { rollback; } \n\nsession \"s1\"\nsetup { begin; }\nstep \"s1_keyshare\" { select id from tlu_job where id = 1 for key share; }\nstep \"s1_savept_e\" { savepoint s1_e; }\nstep \"s1_share\" { select id from tlu_job where id = 1 for share; }\nstep \"s1_savept_f\" { savepoint s1_f; }\nstep \"s1_fornokeyupd\" { select id from tlu_job where id = 1 for no key update; }\nstep \"s1_rollback_f\" { rollback to s1_f; }\nstep \"s1_rollback_e\" { rollback to s1_e; }\nstep \"s1_rollback\" { rollback; }\n\nsession \"s2\"\nsetup { begin; }\nstep \"s2_keyshare\" { select id from tlu_job where id = 1 for key share; }\nstep \"s2_fornokeyupd\" { select id from tlu_job where id = 1 for no key update; }\nstep \"s2_rollback\" { rollback; }\n\nsession \"s3\"\nsetup { begin; }\nstep \"s3_for_update\" { select id from tlu_job where id = 1 for update; }\nstep \"s3_rollback\" { rollback; 
}\n\npermutation \"s1_keyshare\" \"s3_for_update\" \"s2_keyshare\" \"s1_savept_e\" \"s1_share\" \"s1_savept_f\" \"s1_fornokeyupd\" \"s2_fornokeyupd\" \"s0_keyshare\" \"s1_rollback_f\" \"s0_share\" \"s1_rollback_e\" \"s1_rollback\" \"s2_rollback\" \"s0_rollback\" \"s3_rollback\"\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 18 Jun 2019 12:26:32 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Jun-16, Oleksii Kliukin wrote:\n> \n>> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>> \n>>> On 2019-Jun-14, Alvaro Herrera wrote:\n>>> \n>>>> I think there are worse problems here. I tried the attached isolation\n>>>> spec. Note that the only difference in the two permutations is that s0\n>>>> finishes earlier in one than the other; yet the first one works fine and\n>>>> the second one hangs until killed by the 180s timeout. (s3 isn't\n>>>> released for a reason I'm not sure I understand.)\n>>> \n>>> Actually, those behaviors both seem correct to me now that I look\n>>> closer. So this was a false alarm. In the code before de87a084c0, the\n>>> first permutation deadlocks, and the second permutation hangs. The only\n>>> behavior change is that the first one no longer deadlocks, which is the\n>>> desired change.\n>>> \n>>> I'm still trying to form a case to exercise the case of skip_tuple_lock\n>>> having the wrong lifetime.\n>> \n>> Hm… I think it was an oversight from my part not to give skip_lock_tuple the\n>> same lifetime as have_tuple_lock or first_time (also initializing it to\n>> false at the same time). Even if now it might not break anything in an\n>> obvious way, a backward jump to l3 label will leave skip_lock_tuple\n>> uninitialized, making it very dangerous for any future code that will rely\n>> on its value.\n> \n> But that's not the danger ... with the current coding, it's initialized\n> to false every time through that block; that means the tuple lock will\n> never be skipped if we jump back to l3. 
So the danger is that the first\n> iteration sets the variable, then jumps back; second iteration\n> initializes the variable again, so instead of skipping the lock, it\n> takes it, causing a spurious deadlock.\n\nSorry, I was confused, as I was looking only at\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=de87a084c0a5ac927017cd0834b33a932651cfc9\n\nwithout taking your subsequent commit that silences compiler warnings at\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3da73d6839dc47f1f47ca57974bf28e5abd9b572\ninto consideration. With that commit, the danger is indeed in resetting the\nskip mechanism on each jump and potentially causing deadlocks.\n\nCheers,\nOleksii\n\n",
"msg_date": "Tue, 18 Jun 2019 19:13:49 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "On 2019-Jun-18, Oleksii Kliukin wrote:\n\n> Sorry, I was confused, as I was looking only at\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=de87a084c0a5ac927017cd0834b33a932651cfc9\n> \n> without taking your subsequent commit that silences compiler warnings at\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3da73d6839dc47f1f47ca57974bf28e5abd9b572\n> into consideration. With that commit, the danger is indeed in resetting the\n> skip mechanism on each jump and potentially causing deadlocks.\n\nYeah, I understand the confusion.\n\nAnyway, as bugs go, this one seems pretty benign. It would result in a\nunexplained deadlock, very rarely, and only for people who use a very\nstrange locking pattern that includes (row-level) lock upgrades. I\nthink it also requires aborted savepoints too, though I don't rule out\nthe possibility that there might be a way to reproduce this without\nthat.\n\nI pushed the patch again just now, with the new permutation.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 18 Jun 2019 18:25:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Jun-18, Oleksii Kliukin wrote:\n> \n>> Sorry, I was confused, as I was looking only at\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=de87a084c0a5ac927017cd0834b33a932651cfc9\n>> \n>> without taking your subsequent commit that silences compiler warnings at\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3da73d6839dc47f1f47ca57974bf28e5abd9b572\n>> into consideration. With that commit, the danger is indeed in resetting the\n>> skip mechanism on each jump and potentially causing deadlocks.\n> \n> Yeah, I understand the confusion.\n> \n> Anyway, as bugs go, this one seems pretty benign. It would result in a\n> unexplained deadlock, very rarely, and only for people who use a very\n> strange locking pattern that includes (row-level) lock upgrades. I\n> think it also requires aborted savepoints too, though I don't rule out\n> the possibility that there might be a way to reproduce this without\n> that.\n> \n> I pushed the patch again just now, with the new permutation.\n\nThank you very much for working on it and committing the fix!\n\nCheers,\nOleksii\n\n",
"msg_date": "Wed, 19 Jun 2019 15:08:31 +0100",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Avoid spurious deadlocks when upgrading a tuple lock"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit 6753333f switched from a semaphore-based waiting to latch-based\nwaiting for ProcSleep()/ProcWakeup(), but left behind some stray\nreferences to semaphores. PSA.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 13 Jun 2019 16:00:30 -0700",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Obsolete comments about semaphores in proc.c"
},
{
"msg_contents": "> On 14 Jun 2019, at 01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Commit 6753333f switched from a semaphore-based waiting to latch-based\n> waiting for ProcSleep()/ProcWakeup(), but left behind some stray\n> references to semaphores. PSA.\n\nLGTM\n\ncheers ./daniel\n\n\n",
"msg_date": "Mon, 17 Jun 2019 13:28:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Obsolete comments about semaphores in proc.c"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 01:28:42PM +0200, Daniel Gustafsson wrote:\n>> On 14 Jun 2019, at 01:00, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n>> Commit 6753333f switched from a semaphore-based waiting to latch-based\n>> waiting for ProcSleep()/ProcWakeup(), but left behind some stray\n>> references to semaphores. PSA.\n> \n> LGTM\n\nFine seen from here as well. I am not spotting other areas, FWIW.\n--\nMichael",
"msg_date": "Mon, 17 Jun 2019 22:20:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Obsolete comments about semaphores in proc.c"
}
] |
[
{
"msg_contents": "Hi\n\nConsider the following cascading standby setup with PostgreSQL 12:\n\n- there exists a running primary \"A\"\n- standby \"B\" is cloned from primary \"A\" using \"pg_basebackup --write-recovery-conf\"\n- standby \"C\" is cloned from standby \"B\" using \"pg_basebackup --write-recovery-conf\"\n\nSo far, so good, everything works as expected.\n\nNow, for whatever reason, the user wishes standby \"C\" to follow another upstream\nnode (which one is not important here), so the user, in the comfort of their own psql\ncommand line (no more pesky recovery.conf editing!) issues the following:\n\n ALTER SYSTEM SET primary_conninfo = 'host=someothernode';\n\nand restarts the instance, and... it stays connected to the original upstream node.\n\nWhich is unexpected.\n\nExamining the the restarted instance, \"SHOW primary_conninfo\" still displays\nthe original value for \"primary_conninfo\". Mysteriouser and mysteriouser.\n\nWhat has happened here is that with the option \"--write-recovery-conf\", Pg12's\npg_basebackup (correctly IMHO) appends the recovery settings to \"postgresql.auto.conf\".\n\nHowever, on standby \"C\", pg_basebackup has dutifully copied over standby \"B\"'s\nexisting \"postgresql.auto.conf\", which already contains standby \"B\"'s\nreplication configuration, and appended standby \"C\"'s replication configuration\nto that, which (before \"ALTER SYSTEM\" was invoked) looked something like this:\n\n\t# Do not edit this file manually!\n\t# It will be overwritten by the ALTER SYSTEM command.\n\tprimary_conninfo = 'host=node_A'\n\tprimary_conninfo = 'host=node_B'\n\nwhich is expected, and works because the last entry in the file is evaluated, so\non startup, standby \"C\" follows standby \"B\".\n\nHowever, executing \"ALTER SYSTEM SET primary_conninfo = 'host=someothernode'\" left\nstandby \"C\"'s \"postgresql.auto.conf\" file looking like this:\n\n\t# Do not edit this file manually!\n\t# It will be overwritten by the ALTER SYSTEM 
command.\n\tprimary_conninfo = 'host=someothernode'\n\tprimary_conninfo = 'host=node_B'\n\nwhich seems somewhat broken, to say the least.\n\nAs-is, the user will either need to repeatedly issue \"ALTER SYSTEM RESET primary_conninfo\"\nuntil the duplicates are cleared (which means \"ALTER SYSTEM RESET ...\" doesn't work as\nadvertised, and is not an obvious solution anyway), or ignore the \"Do not edit this file manually!\"\nwarning and remove the offending entry/entries (which, if done safely, should involve\nshutting down the instance first).\n\nNote this issue is not specific to pg_basebackup, primary_conninfo (or any other settings\nformerly in recovery.conf), it has just manifested itself as the built-in toolset as of now\nprovides a handy way of getting into this situation without too much effort (and any\nutilities which clone standbys and append the replication configuration to\n\"postgresql.auto.conf\" in lieu of creating \"recovery.conf\" will be prone to running into\nthe same situation).\n\nI had previously always assumed that ALTER SYSTEM would change the *last* occurrence for\nthe parameter in \"postgresql.auto.conf\", in the same way you'd need to be sure to change\nthe last occurrence in the normal configuration files, however this actually not the case -\nas per replace_auto_config_value() ( src/backend/utils/misc/guc.c ):\n\n /* Search the list for an existing match (we assume there's only one) */\n\nthe *first* match is replaced.\n\nAttached patch attempts to rectify this situation by having replace_auto_config_value()\ndeleting any duplicate entries first, before making any changes to the last entry.\n\nArguably it might be sufficient (and simpler) to just scan the list for the last\nentry, but removing preceding duplicates seems cleaner, as it's pointless\n(and a potential source of confusion) keeping entries around which will never be used.\n\nAlso attached is a set of TAP tests to check ALTER SYSTEM works as expected (or\nat least as seems 
correct to me).\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 14 Jun 2019 15:15:48 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> Consider the following cascading standby setup with PostgreSQL 12:\n> \n> - there exists a running primary \"A\"\n> - standby \"B\" is cloned from primary \"A\" using \"pg_basebackup --write-recovery-conf\"\n> - standby \"C\" is cloned from standby \"B\" using \"pg_basebackup --write-recovery-conf\"\n> \n> So far, so good, everything works as expected.\n> \n> Now, for whatever reason, the user wishes standby \"C\" to follow another upstream\n> node (which one is not important here), so the user, in the comfort of their own psql\n> command line (no more pesky recovery.conf editing!) issues the following:\n> \n> ALTER SYSTEM SET primary_conninfo = 'host=someothernode';\n> \n> and restarts the instance, and... it stays connected to the original upstream node.\n> \n> Which is unexpected.\n> \n> Examining the the restarted instance, \"SHOW primary_conninfo\" still displays\n> the original value for \"primary_conninfo\". 
Mysteriouser and mysteriouser.\n> \n> What has happened here is that with the option \"--write-recovery-conf\", Pg12's\n> pg_basebackup (correctly IMHO) appends the recovery settings to \"postgresql.auto.conf\".\n> \n> However, on standby \"C\", pg_basebackup has dutifully copied over standby \"B\"'s\n> existing \"postgresql.auto.conf\", which already contains standby \"B\"'s\n> replication configuration, and appended standby \"C\"'s replication configuration\n> to that, which (before \"ALTER SYSTEM\" was invoked) looked something like this:\n> \n> \t# Do not edit this file manually!\n> \t# It will be overwritten by the ALTER SYSTEM command.\n> \tprimary_conninfo = 'host=node_A'\n> \tprimary_conninfo = 'host=node_B'\n> \n> which is expected, and works because the last entry in the file is evaluated, so\n> on startup, standby \"C\" follows standby \"B\".\n> \n> However, executing \"ALTER SYSTEM SET primary_conninfo = 'host=someothernode'\" left\n> standby \"C\"'s \"postgresql.auto.conf\" file looking like this:\n> \n> \t# Do not edit this file manually!\n> \t# It will be overwritten by the ALTER SYSTEM command.\n> \tprimary_conninfo = 'host=someothernode'\n> \tprimary_conninfo = 'host=node_B'\n> \n> which seems somewhat broken, to say the least.\n\nYes, it's completely broken, which I've complained about at least twice\non this list to no avail.\n\nThanks for putting together an example case pointing out why it's a\nserious issue. 
The right thing to do here it so create an open item for\nPG12 around this.\n\n> As-is, the user will either need to repeatedly issue \"ALTER SYSTEM RESET primary_conninfo\"\n> until the duplicates are cleared (which means \"ALTER SYSTEM RESET ...\" doesn't work as\n> advertised, and is not an obvious solution anyway), or ignore the \"Do not edit this file manually!\"\n> warning and remove the offending entry/entries (which, if done safely, should involve\n> shutting down the instance first).\n> \n> Note this issue is not specific to pg_basebackup, primary_conninfo (or any other settings\n> formerly in recovery.conf), it has just manifested itself as the built-in toolset as of now\n> provides a handy way of getting into this situation without too much effort (and any\n> utilities which clone standbys and append the replication configuration to\n> \"postgresql.auto.conf\" in lieu of creating \"recovery.conf\" will be prone to running into\n> the same situation).\n\nThis is absolutely the fault of the system for putting in multiple\nentries into the auto.conf, which it wasn't ever written to handle.\n\n> I had previously always assumed that ALTER SYSTEM would change the *last* occurrence for\n> the parameter in \"postgresql.auto.conf\", in the same way you'd need to be sure to change\n> the last occurrence in the normal configuration files, however this actually not the case -\n> as per replace_auto_config_value() ( src/backend/utils/misc/guc.c ):\n> \n> /* Search the list for an existing match (we assume there's only one) */\n> \n> the *first* match is replaced.\n> \n> Attached patch attempts to rectify this situation by having replace_auto_config_value()\n> deleting any duplicate entries first, before making any changes to the last entry.\n\nWhile this might be a good belt-and-suspenders kind of change to\ninclude, I don't think pg_basebackup should be causing us to have\nmultiple entries in the file in the first place..\n\n> Arguably it might be sufficient (and 
simpler) to just scan the list for the last\n> entry, but removing preceding duplicates seems cleaner, as it's pointless\n> (and a potential source of confusion) keeping entries around which will never be used.\n\nI don't think we should only change the last entry, that seems like a\nreally bad idea. I agree that we should clean up the file if we come\nacross it being invalid.\n\n> Also attached is a set of TAP tests to check ALTER SYSTEM works as expected (or\n> at least as seems correct to me).\n\nIn my view, at least, we should have a similar test for pg_basebackup to\nmake sure that it doesn't create an invalid .auto.conf file.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 14 Jun 2019 12:08:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 9:38 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> >\n> > Note this issue is not specific to pg_basebackup, primary_conninfo (or any other settings\n> > formerly in recovery.conf), it has just manifested itself as the built-in toolset as of now\n> > provides a handy way of getting into this situation without too much effort (and any\n> > utilities which clone standbys and append the replication configuration to\n> > \"postgresql.auto.conf\" in lieu of creating \"recovery.conf\" will be prone to running into\n> > the same situation).\n>\n> This is absolutely the fault of the system for putting in multiple\n> entries into the auto.conf, which it wasn't ever written to handle.\n>\n\nRight. I think if possible, it should use existing infrastructure to\nwrite to postgresql.auto.conf rather than inventing a new way to\nchange it. Apart from this issue, if we support multiple ways to edit\npostgresql.auto.conf, we might end up with more problems like this in\nthe future where one system is not aware of the way file being edited\nby another system.\n\n> > I had previously always assumed that ALTER SYSTEM would change the *last* occurrence for\n> > the parameter in \"postgresql.auto.conf\", in the same way you'd need to be sure to change\n> > the last occurrence in the normal configuration files, however this actually not the case -\n> > as per replace_auto_config_value() ( src/backend/utils/misc/guc.c ):\n> >\n> > /* Search the list for an existing match (we assume there's only one) */\n> >\n> > the *first* match is replaced.\n> >\n> > Attached patch attempts to rectify this situation by having replace_auto_config_value()\n> > deleting any duplicate entries first, before making any changes to the last entry.\n>\n> While this might be a good belt-and-suspenders kind of change to\n> include,\n>\n\nAnother possibility to do something on these lines is to extend Alter\nSystem Reset 
<config_param> to remove all the duplicate entries. Then\nthe user has a way to remove all duplicate entries if any and set the\nnew value. I think handling duplicate entries in *.auto.conf files is\nan enhancement of the existing system and there could be multiple\nthings we can do there, so we shouldn't try to do that as a bug-fix.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 15 Jun 2019 11:15:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Amit Kapila (amit.kapila16@gmail.com) wrote:\n> On Fri, Jun 14, 2019 at 9:38 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> > >\n> > > Note this issue is not specific to pg_basebackup, primary_conninfo (or any other settings\n> > > formerly in recovery.conf), it has just manifested itself as the built-in toolset as of now\n> > > provides a handy way of getting into this situation without too much effort (and any\n> > > utilities which clone standbys and append the replication configuration to\n> > > \"postgresql.auto.conf\" in lieu of creating \"recovery.conf\" will be prone to running into\n> > > the same situation).\n> >\n> > This is absolutely the fault of the system for putting in multiple\n> > entries into the auto.conf, which it wasn't ever written to handle.\n> \n> Right. I think if possible, it should use existing infrastructure to\n> write to postgresql.auto.conf rather than inventing a new way to\n> change it. 
Apart from this issue, if we support multiple ways to edit\n> postgresql.auto.conf, we might end up with more problems like this in\n> the future where one system is not aware of the way file being edited\n> by another system.\n\nI agree that there should have been some effort put into making the way\nALTER SYSTEM is modified be consistent between the backend and utilities\nlike pg_basebackup (which would also help third party tools understand\nhow a non-backend application should be modifying the file).\n\n> > > I had previously always assumed that ALTER SYSTEM would change the *last* occurrence for\n> > > the parameter in \"postgresql.auto.conf\", in the same way you'd need to be sure to change\n> > > the last occurrence in the normal configuration files, however this actually not the case -\n> > > as per replace_auto_config_value() ( src/backend/utils/misc/guc.c ):\n> > >\n> > > /* Search the list for an existing match (we assume there's only one) */\n> > >\n> > > the *first* match is replaced.\n> > >\n> > > Attached patch attempts to rectify this situation by having replace_auto_config_value()\n> > > deleting any duplicate entries first, before making any changes to the last entry.\n> >\n> > While this might be a good belt-and-suspenders kind of change to\n> > include,\n> \n> Another possibility to do something on these lines is to extend Alter\n> System Reset <config_param> to remove all the duplicate entries. Then\n> the user has a way to remove all duplicate entries if any and set the\n> new value. I think handling duplicate entries in *.auto.conf files is\n> an enhancement of the existing system and there could be multiple\n> things we can do there, so we shouldn't try to do that as a bug-fix.\n\nUnless there's actually a use-case for duplicate entries in\npostgresql.auto.conf, what we should do is clean them up (and possibly\nthrow a WARNING or similar at the user saying \"something modified your\npostgresql.auto.conf in an unexpected way\"). 
I'd suggest we do that on\nevery ALTER SYSTEM call.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 16 Jun 2019 13:16:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Unless there's actually a use-case for duplicate entries in\n> postgresql.auto.conf,\n\nThere isn't --- guc.c will just discard the earlier duplicates.\n\n> what we should do is clean them up (and possibly\n> throw a WARNING or similar at the user saying \"something modified your\n> postgresql.auto.conf in an unexpected way\"). I'd suggest we do that on\n> every ALTER SYSTEM call.\n\n+1 for having ALTER SYSTEM clean out duplicates. Not sure whether\na WARNING would seem too in-your-face.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 Jun 2019 13:21:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Unless there's actually a use-case for duplicate entries in\n> > postgresql.auto.conf,\n> \n> There isn't --- guc.c will just discard the earlier duplicates.\n\nOne might be able to argue for trying to create a stack or some such, to\nallow you to more easily move between values or to see what the value\nwas set to at some point in the past, etc etc. Until we see an actual\nthought out use-case along those lines that requires supporting\nduplicates in some fashion though, with code to make it all work, I\ndon't think we should allow it.\n\n> > what we should do is clean them up (and possibly\n> > throw a WARNING or similar at the user saying \"something modified your\n> > postgresql.auto.conf in an unexpected way\"). I'd suggest we do that on\n> > every ALTER SYSTEM call.\n> \n> +1 for having ALTER SYSTEM clean out duplicates. Not sure whether\n> a WARNING would seem too in-your-face.\n\nI'd hope for a warning from basically every part of the system when it\ndetects, clearly, that a file was changed in a way that it shouldn't\nhave been. If we don't throw a warning, then we're implying that it's\nacceptable, but then cleaning up the duplicates, which seems pretty\nconfusing.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 16 Jun 2019 13:43:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Sun, Jun 16, 2019 at 7:43 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n>\n> > > what we should do is clean them up (and possibly\n> > > throw a WARNING or similar at the user saying \"something modified your\n> > > postgresql.auto.conf in an unexpected way\"). I'd suggest we do that on\n> > > every ALTER SYSTEM call.\n> >\n> > +1 for having ALTER SYSTEM clean out duplicates. Not sure whether\n> > a WARNING would seem too in-your-face.\n>\n> I'd hope for a warning from basically every part of the system when it\n> detects, clearly, that a file was changed in a way that it shouldn't\n> have been. If we don't throw a warning, then we're implying that it's\n> acceptable, but then cleaning up the duplicates, which seems pretty\n> confusing.\n>\n\n+1. Silently \"fixing\" the file by cleaning up duplicates is going to be\neven more confusing to uses who had seen them be there before.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Sun, Jun 16, 2019 at 7:43 PM Stephen Frost <sfrost@snowman.net> wrote:\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > what we should do is clean them up (and possibly\n> > throw a WARNING or similar at the user saying \"something modified your\n> > postgresql.auto.conf in an unexpected way\"). I'd suggest we do that on\n> > every ALTER SYSTEM call.\n> \n> +1 for having ALTER SYSTEM clean out duplicates. Not sure whether\n> a WARNING would seem too in-your-face.\n\nI'd hope for a warning from basically every part of the system when it\ndetects, clearly, that a file was changed in a way that it shouldn't\nhave been. If we don't throw a warning, then we're implying that it's\nacceptable, but then cleaning up the duplicates, which seems pretty\nconfusing.+1. 
Silently \"fixing\" the file by cleaning up duplicates is going to be even more confusing to uses who had seen them be there before. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Sun, 16 Jun 2019 19:58:22 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 17/06/2019 05:58, Magnus Hagander wrote:\n> On Sun, Jun 16, 2019 at 7:43 PM Stephen Frost <sfrost@snowman.net \n> <mailto:sfrost@snowman.net>> wrote:\n>\n>\n> * Tom Lane (tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>) wrote:\n> > Stephen Frost <sfrost@snowman.net <mailto:sfrost@snowman.net>>\n> writes:\n>\n> > > what we should do is clean them up (and possibly\n> > > throw a WARNING or similar at the user saying \"something\n> modified your\n> > > postgresql.auto.conf in an unexpected way\"). I'd suggest we\n> do that on\n> > > every ALTER SYSTEM call.\n> >\n> > +1 for having ALTER SYSTEM clean out duplicates. Not sure whether\n> > a WARNING would seem too in-your-face.\n>\n> I'd hope for a warning from basically every part of the system when it\n> detects, clearly, that a file was changed in a way that it shouldn't\n> have been. If we don't throw a warning, then we're implying that it's\n> acceptable, but then cleaning up the duplicates, which seems pretty\n> confusing.\n>\n>\n> +1. Silently \"fixing\" the file by cleaning up duplicates is going to \n> be even more confusing to uses who had seen them be there before.\n>\n> -- \n> Magnus Hagander\n> Me: https://www.hagander.net/ <http://www.hagander.net/>\n> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nI thinking fixing this silently should be at least a hanging offence.\n\nAt one time I came a cross a language PL/1, that would silently \n'correct' some mistakes, without indicating what it did. I thought this \nwas extremely dangerous, that could lead to some very nasty and \nunexpected bugs!\n\nIt is most important that people be aware of possibly conflicting \nchanges, or that values they saw in postgresql.conf may have been changed.\n\nHmm... this suggests that all the implied defaults should be explicitly \nset! Would that be too greater change to make?\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Mon, 17 Jun 2019 10:56:41 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 12:57 AM Gavin Flower\n<GavinFlower@archidevsys.co.nz> wrote:\n>\n>\n> I thinking fixing this silently should be at least a hanging offence.\n>\n\nMaybe adding a MD5 header to the file to check if it has been altered\noutside guc.c might be enough.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n\n",
"msg_date": "Mon, 17 Jun 2019 16:42:32 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi\n\nOn 6/15/19 1:08 AM, Stephen Frost wrote:\n > Greetings,\n >\n > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n >> Consider the following cascading standby setup with PostgreSQL 12:\n >>\n >> - there exists a running primary \"A\"\n >> - standby \"B\" is cloned from primary \"A\" using \"pg_basebackup --write-recovery-conf\"\n >> - standby \"C\" is cloned from standby \"B\" using \"pg_basebackup --write-recovery-conf\"\n(...)\n >> However, executing \"ALTER SYSTEM SET primary_conninfo = 'host=someothernode'\" left\n >> standby \"C\"'s \"postgresql.auto.conf\" file looking like this:\n >>\n >> \t# Do not edit this file manually!\n >> \t# It will be overwritten by the ALTER SYSTEM command.\n >> \tprimary_conninfo = 'host=someothernode'\n >> \tprimary_conninfo = 'host=node_B'\n >>\n >> which seems somewhat broken, to say the least.\n >\n > Yes, it's completely broken, which I've complained about at least twice\n > on this list to no avail.\n >\n > Thanks for putting together an example case pointing out why it's a\n > serious issue. The right thing to do here it so create an open item for\n > PG12 around this.\n\nDone.\n\n >> Attached patch attempts to rectify this situation by having replace_auto_config_value()\n >> deleting any duplicate entries first, before making any changes to the last entry.\n >\n > While this might be a good belt-and-suspenders kind of change to\n > include, I don't think pg_basebackup should be causing us to have\n > multiple entries in the file in the first place..\n(...)\n >> Also attached is a set of TAP tests to check ALTER SYSTEM works as expected (or\n >> at least as seems correct to me).\n >\n > In my view, at least, we should have a similar test for pg_basebackup to\n > make sure that it doesn't create an invalid .auto.conf file.\n\nIndeed... I'd be happy to create tests... 
but first we need a definition of what\nconstitutes a valid .auto.conf file.\n\nIf that definition includes requiring that a parameter may occur only once, then\nwe need to provide a way for utilities such as pg_basebackup to write the replication\nconfiguration to a configuration file in such a way that it doesn't somehow\nrender that file invalid.\n\nIn Pg11 and earlier, it was just a case of writing (or overwriting) recovery.conf.\n\nIn Pg12, the code in pg_basebackup implies the correct thing to do is append to .auto.conf,\nbut as demonstrated that can cause problems with duplicate entries.\n\nHaving pg_basebackup, or any other utility which clones a standby, parse the file\nitself to remove duplicates seems like a recipe for pain and badly duplicated effort\n(unless there's some way of calling the configuration parsing code while the\nserver is not running).\n\nWe could declare that the .auto.conf file will be reset to the default state when\na standby is cloned, but the implicit behaviour so far has been to copy the file\nas-is (as would happen with any other configuration files in the data directory).\n\nWe could avoid the need for modifying the .auto.conf file by declaring that a\nconfiguration with a specific name in the data directory (let's call it\n\"recovery.conf\" or \"replication.conf\") can be used by any utilities writing\nreplication configuration (though of course in contrast to the old recovery.conf\nit would be included, if exists, as a normal configuration file, though then the\nprecedence would need to be defined, etc..). I'm not sure off the top of my head\nwhether something like that has already been discussed, though it's presumably a\nbit late in the release cycle to make such changes anyway?\n\n >>> This is absolutely the fault of the system for putting in multiple\n >>> entries into the auto.conf, which it wasn't ever written to handle.\n >>\n > * Amit Kapila (amit.kapila16@gmail.com) wrote:\n >> Right. 
I think if possible, it should use existing infrastructure to\n >> write to postgresql.auto.conf rather than inventing a new way to\n >> change it. Apart from this issue, if we support multiple ways to edit\n >> postgresql.auto.conf, we might end up with more problems like this in\n >> the future where one system is not aware of the way file being edited\n >> by another system.\n >\n > I agere that there should have been some effort put into making the way\n > ALTER SYSTEM is modified be consistent between the backend and utilities\n > like pg_basebackup (which would also help third party tools understand\n > how a non-backend application should be modifying the file).\n\nDid you mean to say \"the way postgresql.auto.conf is modified\"?\n\nI suggest explicitly documenting postgresql.auto.conf behaviour (and the circumstances\nwhere it's acceptable to modify it outside of ALTER SYSTEM calls) in the documentation\n(and possibly in the code), so anyone writing utilities which need to\nappend to postgresql.auto.conf knows what the situation is.\n\nSomething along the following lines?\n\n- postgresql.auto.conf is maintained by PostgreSQL and can be rewritten at will by the system\n   at any time\n- there is no guarantee that items in postgresql.auto.conf will be in a particular order\n- it must never be manually edited (though it may be viewed)\n- postgresql.auto.conf may be appended to by utilities which need to write configuration\n   items and which need a guarantee that the items will be read by the server at startup\n   (but only when the server is down of course)\n- any duplicate items will be removed when ALTER SYSTEM is executed to change or reset\n   an item (a WARNING will be emitted about duplicate items removed)\n- comment lines (apart from the warning at the top of the file) will be silently removed\n   (this is currently the case anyway)\n\n\nI will happily work on those changes in the next few days if agreed.\n\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick 
https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 17 Jun 2019 23:50:33 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 6/17/19 2:58 AM, Magnus Hagander wrote:\n> On Sun, Jun 16, 2019 at 7:43 PM Stephen Frost <sfrost@snowman.net <mailto:sfrost@snowman.net>> wrote:\n> \n> \n> * Tom Lane (tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>) wrote:\n> > Stephen Frost <sfrost@snowman.net <mailto:sfrost@snowman.net>> writes:\n> \n> > > what we should do is clean them up (and possibly\n> > > throw a WARNING or similar at the user saying \"something modified your\n> > > postgresql.auto.conf in an unexpected way\"). I'd suggest we do that on\n> > > every ALTER SYSTEM call.\n> >\n> > +1 for having ALTER SYSTEM clean out duplicates. Not sure whether\n> > a WARNING would seem too in-your-face.\n> \n> I'd hope for a warning from basically every part of the system when it\n> detects, clearly, that a file was changed in a way that it shouldn't\n> have been. If we don't throw a warning, then we're implying that it's\n> acceptable, but then cleaning up the duplicates, which seems pretty\n> confusing.\n> \n> > +1. Silently \"fixing\" the file by cleaning up duplicates is going to be even\n > more confusing o uses who had seen them be there before.\n\nSome sort of notification is definitely appropriate here.\n\nHowever, going back to the original scenario (cascaded standby set up using\n\"pg_basebackup --write-recovery-conf\") there would now be a warning emitted\nthe first time anyone executes ALTER SYSTEM (about duplicate \"primary_conninfo\"\nentries) which would not have occured in Pg11 and earlier (and which will\nno doubt cause consternation along the lines \"how did my postgresql.auto.conf\nget modified in an unexpected way? OMG? Bug? Was I hacked?\").\n\n\nRegards\n\nIan Barwick\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 17 Jun 2019 23:51:38 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> However, going back to the original scenario (cascaded standby set up using\n> \"pg_basebackup --write-recovery-conf\") there would now be a warning emitted\n> the first time anyone executes ALTER SYSTEM (about duplicate \"primary_conninfo\"\n> entries) which would not have occured in Pg11 and earlier (and which will\n> no doubt cause consternation along the lines \"how did my postgresql.auto.conf\n> get modified in an unexpected way? OMG? Bug? Was I hacked?\").\n\nNo, I don't think we should end up in a situation where this happens.\n\nI agree that this implies making pg_basebackup more intelligent when\nit's dealing with that file but I simply don't have a lot of sympathy\nabout that, it's not news to anyone who has been paying attention.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Jun 2019 11:05:24 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> On 6/15/19 1:08 AM, Stephen Frost wrote:\n> > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> >> Consider the following cascading standby setup with PostgreSQL 12:\n> >>\n> >> - there exists a running primary \"A\"\n> >> - standby \"B\" is cloned from primary \"A\" using \"pg_basebackup --write-recovery-conf\"\n> >> - standby \"C\" is cloned from standby \"B\" using \"pg_basebackup --write-recovery-conf\"\n> (...)\n> >> However, executing \"ALTER SYSTEM SET primary_conninfo = 'host=someothernode'\" left\n> >> standby \"C\"'s \"postgresql.auto.conf\" file looking like this:\n> >>\n> >> \t# Do not edit this file manually!\n> >> \t# It will be overwritten by the ALTER SYSTEM command.\n> >> \tprimary_conninfo = 'host=someothernode'\n> >> \tprimary_conninfo = 'host=node_B'\n> >>\n> >> which seems somewhat broken, to say the least.\n> >\n> > Yes, it's completely broken, which I've complained about at least twice\n> > on this list to no avail.\n> >\n> > Thanks for putting together an example case pointing out why it's a\n> > serious issue. The right thing to do here it so create an open item for\n> > PG12 around this.\n> \n> Done.\n\nThanks.\n\n> >> Attached patch attempts to rectify this situation by having replace_auto_config_value()\n> >> deleting any duplicate entries first, before making any changes to the last entry.\n> >\n> > While this might be a good belt-and-suspenders kind of change to\n> > include, I don't think pg_basebackup should be causing us to have\n> > multiple entries in the file in the first place..\n> (...)\n> >> Also attached is a set of TAP tests to check ALTER SYSTEM works as expected (or\n> >> at least as seems correct to me).\n> >\n> > In my view, at least, we should have a similar test for pg_basebackup to\n> > make sure that it doesn't create an invalid .auto.conf file.\n> \n> Indeed... I'd be happy to create tests... 
but first we need a definition of what\n> constitutes a valid .auto.conf file.\n> \n> If that definition includes requiring that a parameter may occur only once, then\n> we need to provide a way for utilities such as pg_basebackup to write the replication\n> configuration to a configuration file in such a way that it doesn't somehow\n> render that file invalid.\n\nYes, I think that we do need to require that a parameter only occur once\nand pg_basebackup and friends need to be able to manage that.\n\n> In Pg11 and earlier, it was just a case of writing (or overwriting) recovery.conf.\n\nRight.\n\n> In Pg12, the code in pg_basebackup implies the correct thing to do is append to .auto.conf,\n> but as demonstrated that can cause problems with duplicate entries.\n\nCode can have bugs. :) I'd argue that this is such a bug that needs to\nbe fixed..\n\n> Having pg_basebackup, or any other utility which clones a standby, parse the file\n> itself to remove duplicates seems like a recipe for pain and badly duplicated effort\n> (unless there's some way of calling the configuration parsing code while the\n> server is not running).\n\nI don't really see that there's much hope for it.\n\n> We could declare that the .auto.conf file will be reset to the default state when\n> a standby is cloned, but the implicit behaviour so far has been to copy the file\n> as-is (as would happen with any other configuration files in the data directory).\n> \n> We could avoid the need for modifying the .auto.conf file by declaring that a\n> configuration with a specific name in the data directory (let's call it\n> \"recovery.conf\" or \"replication.conf\") can be used by any utilities writing\n> replication configuration (though of course in contrast to the old recovery.conf\n> it would be included, if exists, as a normal configuration file, though then the\n> precedence would need to be defined, etc..). 
I'm not sure off the top of my head\n> whether something like that has already been discussed, though it's presumably a\n> bit late in the release cycle to make such changes anyway?\n\nThis was discussed a fair bit, including suggestions along exactly those\nlines. There were various arguments for and against, so you might want\nto review the threads where that discussion happened to see what the\nreasoning was for not having such an independent file.\n\n> >>> This is absolutely the fault of the system for putting in multiple\n> >>> entries into the auto.conf, which it wasn't ever written to handle.\n> >>\n> > * Amit Kapila (amit.kapila16@gmail.com) wrote:\n> >> Right. I think if possible, it should use existing infrastructure to\n> >> write to postgresql.auto.conf rather than inventing a new way to\n> >> change it. Apart from this issue, if we support multiple ways to edit\n> >> postgresql.auto.conf, we might end up with more problems like this in\n> >> the future where one system is not aware of the way file being edited\n> >> by another system.\n> >\n> > I agere that there should have been some effort put into making the way\n> > ALTER SYSTEM is modified be consistent between the backend and utilities\n> > like pg_basebackup (which would also help third party tools understand\n> > how a non-backend application should be modifying the file).\n> \n> Did you mean to say \"the way postgresql.auto.conf is modified\"?\n\nAh, yes, more-or-less. 
I think I was going for 'the way ALTER SYSTEM\nmodifies postgresql.auto.conf'.\n\n> I suggest explicitly documenting postgresql.auto.conf behaviour (and the circumstances\n> where it's acceptable to modify it outside of ALTER SYSTEM calls) in the documentation\n> (and possibly in the code), so anyone writing utilities which need to\n> append to postgresql.auto.conf knows what the situation is.\n\nYeah, I would think that, ideally, we'd have some code in the common\nlibrary that other utilities could leverage and which the backend would\nalso use.\n\n> - postgresql.auto.conf is maintained by PostgreSQL and can be rewritten at will by the system\n> at any time\n\nI'd further say something along the lines of 'utilities should not\nmodify a postgresql.auto.conf that's in place under a running PostgreSQL\ncluster'.\n\n> - there is no guarantee that items in postgresql.auto.conf will be in a particular order\n> - it must never be manually edited (though it may be viewed)\n\n'must' is perhaps a bit strong... I would say something like\n\"shouldn't\", as users may *have* to modify it, if PostgreSQL won't\nstart due to some configuration in it.\n\n> - postgresql.auto.conf may be appended to by utilities which need to write configuration\n> items and which and need a guarantee that the items will be read by the server at startup\n> (but only when the server is down of course)\n\nWell, I wouldn't support saying \"append\" since that's what got us into\nthis mess. :)\n\n> - any duplicate items will be removed when ALTER SYSTEM is executed to change or reset\n> an item (a WARNING will be emitted about duplicate items removed)\n> - comment lines (apart from the warning at the top of the file) will be silently removed\n> (this is currently the case anyway)\n\nI'd rather say that 'any duplicate items should be removed, and a\nWARNING emitted when detected', or something along those lines. 
Same\nfor comment lines...\n\n> I will happily work on those changes in the next few days if agreed.\n\nGreat!\n\nThanks,\n\nStephen",
"msg_date": "Mon, 17 Jun 2019 11:41:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 5:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n>\n> * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> > On 6/15/19 1:08 AM, Stephen Frost wrote:\n> > > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n>\n> > >> Attached patch attempts to rectify this situation by having\n> replace_auto_config_value()\n> > >> deleting any duplicate entries first, before making any changes to\n> the last entry.\n> > >\n> > > While this might be a good belt-and-suspenders kind of change to\n> > > include, I don't think pg_basebackup should be causing us to have\n> > > multiple entries in the file in the first place..\n> > (...)\n> > >> Also attached is a set of TAP tests to check ALTER SYSTEM works as\n> expected (or\n> > >> at least as seems correct to me).\n> > >\n> > > In my view, at least, we should have a similar test for pg_basebackup\n> to\n> > > make sure that it doesn't create an invalid .auto.conf file.\n> >\n> > Indeed... I'd be happy to create tests... but first we need a definition\n> of what\n> > constitutes a valid .auto.conf file.\n> >\n> > If that definition includes requiring that a parameter may occur only\n> once, then\n> > we need to provide a way for utilities such as pg_basebackup to write\n> the replication\n> > configuration to a configuration file in such a way that it doesn't\n> somehow\n> > render that file invalid.\n>\n> Yes, I think that we do need to require that a parameter only occur once\n> and pg_basebackup and friends need to be able to manage that.\n>\n\n+1.\n\n\n> > I agere that there should have been some effort put into making the way\n>\n> > ALTER SYSTEM is modified be consistent between the backend and utilities\n> > > like pg_basebackup (which would also help third party tools understand\n> > > how a non-backend application should be modifying the file).\n> >\n> > Did you mean to say \"the way postgresql.auto.conf is modified\"?\n>\n> Ah, yes, more-or-less. 
I think I was going for 'the way ALTER SYSTEM\n> modifies postgresql.auto.conf'.\n>\n> > I suggest explicitly documenting postgresql.auto.conf behaviour (and the\n> circumstances\n> > where it's acceptable to modify it outside of ALTER SYSTEM calls) in the\n> documentation\n> > (and possibly in the code), so anyone writing utilities which need to\n> > append to postgresql.auto.conf knows what the situation is.\n>\n> Yeah, I would think that, ideally, we'd have some code in the common\n> library that other utilities could leverage and which the backend would\n> also use.\n>\n> > - postgresql.auto.conf is maintained by PostgreSQL and can be rewritten\n> at will by the system\n> > at any time\n>\n> I'd further say something along the lines of 'utilities should not\n> modify a postgresql.auto.conf that's in place under a running PostgreSQL\n> cluster'.\n>\n\nDo we need to differ between \"external\" and \"internal\" utilities here?\n\n\n\n> > - there is no guarantee that items in postgresql.auto.conf will be in a\n> particular order\n> > - it must never be manually edited (though it may be viewed)\n>\n> 'must' is perhaps a bit strong... I would say something like\n> \"shouldn't\", as users may *have* to modify it, if PostgreSQL won't\n> start due to some configuration in it.\n>\n\n\n+1.\n\n\n> - postgresql.auto.conf may be appended to by utilities which need to\n> write configuration\n> > items and which and need a guarantee that the items will be read by\n> the server at startup\n> > (but only when the server is down of course)\n>\n> Well, I wouldn't support saying \"append\" since that's what got us into\n> this mess. 
:)\n>\n> > - any duplicate items will be removed when ALTER SYSTEM is executed to\n> change or reset\n> > an item (a WARNING will be emitted about duplicate items removed)\n> > - comment lines (apart from the warning at the top of the file) will be\n> silently removed\n> > (this is currently the case anyway)\n>\n> I'd rather say that 'any duplicate items should be removed, and a\n> WARNING emitted when detected', or something along those lines. Same\n> for comment lines...\n>\n\nI think it's perfectly fine to silently drop comments (other than the one\nat the very top which says don't touch this file).\n\n//Magnus",
"msg_date": "Tue, 18 Jun 2019 11:07:23 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Magnus Hagander (magnus@hagander.net) wrote:\n> On Mon, Jun 17, 2019 at 5:41 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > I'd further say something along the lines of 'utilities should not\n> > modify a postgresql.auto.conf that's in place under a running PostgreSQL\n> > cluster'.\n> \n> Do we need to differ between \"external\" and \"internal\" utilities here?\n\nI don't think so..? Is there something there that you're thinking would\nbe different between them?\n\n> > I'd rather say that 'any duplicate items should be removed, and a\n> > WARNING emitted when detected', or something along those lines. Same\n> > for comment lines...\n> \n> I think it's perfectly fine to silently drop comments (other than the one\n> at the very top which says don't touch this file).\n\nI'm not sure why that's different? I don't really think that I agree\nwith you on this one- anything showing up in that file that we're ending\nup removing must have gotten there because someone or something didn't\nrealize the rules around managing the file, and that's a problem...\n\nThanks,\n\nStephen",
"msg_date": "Tue, 18 Jun 2019 09:37:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Tue, Jun 18, 2019 at 3:37 PM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Magnus Hagander (magnus@hagander.net) wrote:\n> > On Mon, Jun 17, 2019 at 5:41 PM Stephen Frost <sfrost@snowman.net>\n> wrote:\n> > > I'd further say something along the lines of 'utilities should not\n> > > modify a postgresql.auto.conf that's in place under a running\n> PostgreSQL\n> > > cluster'.\n> >\n> > Do we need to differ between \"external\" and \"internal\" utilities here?\n>\n> I don't think so..? Is there something there that you're thinking would\n> be different between them?\n>\n\nProbably not. In general thinking that we could \"allow\" internal tools to\ndo things externals shouldn't do, for example using internal APIs. But it's\nprobably a bad idea to go down that road.\n\n\n> > I'd rather say that 'any duplicate items should be removed, and a\n> > > WARNING emitted when detected', or something along those lines. Same\n> > > for comment lines...\n> >\n> > I think it's perfectly fine to silently drop comments (other than the one\n> > at the very top which says don't touch this file).\n>\n> I'm not sure why that's different? 
I don't really think that I agree\n> with you on this one- anything showing up in that file that we're ending\n> up removing must have gotten there because someone or something didn't\n> realize the rules around managing the file, and that's a problem...\n>\n\nI'm not strongly against it, I just consider it unnecessary :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 18 Jun 2019 16:32:54 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 8:20 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n> On 6/15/19 1:08 AM, Stephen Frost wrote:\n> > * Amit Kapila (amit.kapila16@gmail.com) wrote:\n> >> Right. I think if possible, it should use existing infrastructure to\n> >> write to postgresql.auto.conf rather than inventing a new way to\n> >> change it. Apart from this issue, if we support multiple ways to edit\n> >> postgresql.auto.conf, we might end up with more problems like this in\n> >> the future where one system is not aware of the way file being edited\n> >> by another system.\n> >\n> > I agere that there should have been some effort put into making the way\n> > ALTER SYSTEM is modified be consistent between the backend and utilities\n> > like pg_basebackup (which would also help third party tools understand\n> > how a non-backend application should be modifying the file).\n>\n> Did you mean to say \"the way postgresql.auto.conf is modified\"?\n>\n\nYes, that is what we are discussing here. I think what we can do here\nis to extract the functionality to set the parameter in .auto.conf\nfrom AlterSystemSetConfigFile and expose it via a function that takes\n(option_name, value) as a parameter. Then we can expose it via some\nSQL function like set_auto_config (similar to what we have now for\nset_config/set_config_by_name). I think if we have something like\nthat then pg_basebackup or any other utility can use it in a\nconsistent way.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jun 2019 09:16:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 6/19/19 12:46 PM, Amit Kapila wrote:\n> On Mon, Jun 17, 2019 at 8:20 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n>> On 6/15/19 1:08 AM, Stephen Frost wrote:\n>> > * Amit Kapila (amit.kapila16@gmail.com) wrote:\n>> >> Right. I think if possible, it should use existing infrastructure to\n>> >> write to postgresql.auto.conf rather than inventing a new way to\n>> >> change it. Apart from this issue, if we support multiple ways to edit\n>> >> postgresql.auto.conf, we might end up with more problems like this in\n>> >> the future where one system is not aware of the way file being edited\n>> >> by another system.\n>> >\n>> > I agere that there should have been some effort put into making the way\n>> > ALTER SYSTEM is modified be consistent between the backend and utilities\n>> > like pg_basebackup (which would also help third party tools understand\n>> > how a non-backend application should be modifying the file).\n>>\n>> Did you mean to say \"the way postgresql.auto.conf is modified\"?\n>>\n> \n> Yes, that is what we are discussing here. I think what we can do here\n> is to extract the functionality to set the parameter in .auto.conf\n> from AlterSystemSetConfigFile and expose it via a function that takes\n> (option_name, value) as a parameter.\n\nYup, I was just considering what's involved there, will reply to another\nmail in the thread on that.\n\n> Then we can expose it via some\n> SQL function like set_auto_config (similar to what we have now for\n> set_config/set_config_by_name). I think if we have something like\n> that then pg_basebackup or any other utility can use it in a\n> consistent way.\n\nUmm, but the point is here, the server will *not* be running at this point,\nso calling an SQL function is out of the question. And if the server\nis running, then you just execute \"ALTER SYSTEM\".\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 19 Jun 2019 13:39:05 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "n 6/18/19 12:41 AM, Stephen Frost wrote:\n > Greetings,\n >\n > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote\n(...)\n\n >> I suggest explicitly documenting postgresql.auto.conf behaviour (and the circumstances\n >> where it's acceptable to modify it outside of ALTER SYSTEM calls) in the documentation\n >> (and possibly in the code), so anyone writing utilities which need to\n >> append to postgresql.auto.conf knows what the situation is.\n >\n > Yeah, I would think that, ideally, we'd have some code in the common\n > library that other utilities could leverage and which the backend would\n > also use.\n\nSo maybe something along the lines of creating a stripped-down variant of\nAlterSystemSetConfigFile() (from \"src/backend/utils/misc/guc.c\") which can be\ncalled from front-end code to safely modify .auto.conf while the server is *not*\nrunning.\n\nI'm not terribly familiar with the GUC code, but that would presumably mean making\nparts of the GUC parsing/handling code linkable externally (ParseConfigFp() etc.)\nas you'd need to parse the file before rewriting it. Something like (minimal\npseudo-code):\n\n void\n alter_system_set(char *name, char *value)\n {\n /*\n * check here that the server is *not* running\n */\n ...\n ParseConfigFp(infile, AutoConfFileName, 0, LOG, &head, &tail)\n ...\n\n /*\n * some robust portable way of ensuring another process can't\n * modify the file(s) until we're done\n */\n lock_file(AutoConfFileName);\n\n replace_auto_config_value(&head, &tail, name, value);\n\n write_auto_conf_file(AutoConfTmpFileName, head)\n\n durable_rename(AutoConfTmpFileName, AutoConfFileName, ERROR);\n\n FreeConfigVariables(head);\n unlock_file(AutoConfFileName);\n }\n\nI'm not sure how feasible it is to validate the provided parameter\nwithout the server running, but if not possible, that's not any worse than the\nstatus quo, i.e. 
the utility has to be trusted to write the correct parameters\nto the file anyway.\n\nThe question is though - is this a change which is practical to make at this point\nin the release cycle for Pg12?\n\n\nRegards\n\nIan Barwick\n\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 19 Jun 2019 13:57:08 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
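The rewrite-and-rename dance in Ian's pseudo-code above — parse the file, replace the value, write a temp file, durable rename — can be sketched outside the backend roughly as follows. This is an illustrative Python model, not PostgreSQL's C implementation: the function name is invented, and it omits the `lock_file()`/`ParseConfigFp()` pieces he mentions.

```python
import os
import tempfile

def rewrite_auto_conf(path, name, value):
    """Illustrative only: rewrite postgresql.auto.conf while the server is
    down, dropping any existing entries for `name`, appending the new
    setting, and installing the result with an atomic rename (the rough
    equivalent of write_auto_conf_file() followed by durable_rename())."""
    try:
        with open(path) as f:
            lines = [ln.rstrip("\n") for ln in f]
    except FileNotFoundError:
        lines = []
    # Drop every line that sets this parameter, so no duplicates survive.
    lines = [ln for ln in lines if ln.partition("=")[0].strip() != name]
    lines.append(f"{name} = '{value}'")
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".",
                               prefix="postgresql.auto.conf.tmp")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(lines) + "\n")
        f.flush()
        os.fsync(f.fileno())   # make the temp file durable before the rename
    os.replace(tmp, path)      # atomic on POSIX, as durable_rename() relies on
```

A real helper would also fsync the containing directory and take the lock Ian sketches, so two utilities cannot interleave their rewrites.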
{
"msg_contents": "On Wed, Jun 19, 2019 at 10:09 AM Ian Barwick\n<ian.barwick@2ndquadrant.com> wrote:\n>\n> On 6/19/19 12:46 PM, Amit Kapila wrote:\n> > On Mon, Jun 17, 2019 at 8:20 PM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n> >> On 6/15/19 1:08 AM, Stephen Frost wrote:\n> >> > * Amit Kapila (amit.kapila16@gmail.com) wrote:\n> >> >> Right. I think if possible, it should use existing infrastructure to\n> >> >> write to postgresql.auto.conf rather than inventing a new way to\n> >> >> change it. Apart from this issue, if we support multiple ways to edit\n> >> >> postgresql.auto.conf, we might end up with more problems like this in\n> >> >> the future where one system is not aware of the way file being edited\n> >> >> by another system.\n> >> >\n> >> > I agere that there should have been some effort put into making the way\n> >> > ALTER SYSTEM is modified be consistent between the backend and utilities\n> >> > like pg_basebackup (which would also help third party tools understand\n> >> > how a non-backend application should be modifying the file).\n> >>\n> >> Did you mean to say \"the way postgresql.auto.conf is modified\"?\n> >>\n> >\n> > Yes, that is what we are discussing here. I think what we can do here\n> > is to extract the functionality to set the parameter in .auto.conf\n> > from AlterSystemSetConfigFile and expose it via a function that takes\n> > (option_name, value) as a parameter.\n>\n> Yup, I was just considering what's involved there, will reply to another\n> mail in the thread on that.\n>\n> > Then we can expose it via some\n> > SQL function like set_auto_config (similar to what we have now for\n> > set_config/set_config_by_name). I think if we have something like\n> > that then pg_basebackup or any other utility can use it in a\n> > consistent way.\n>\n> Umm, but the point is here, the server will *not* be running at this point,\n> so calling an SQL function is out of the question. 
And if the server\n> is running, then you just execute \"ALTER SYSTEM\".\n>\n\nSure, SQL function will be a by-product of this. Can't we expose some\nfunction that can be used by base backup?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jun 2019 10:27:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Wed, Jun 19, 2019 at 10:27 AM Ian Barwick\n<ian.barwick@2ndquadrant.com> wrote:\n>\n> n 6/18/19 12:41 AM, Stephen Frost wrote:\n> > Greetings,\n> >\n> > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote\n> (...)\n>\n> >> I suggest explicitly documenting postgresql.auto.conf behaviour (and the circumstances\n> >> where it's acceptable to modify it outside of ALTER SYSTEM calls) in the documentation\n> >> (and possibly in the code), so anyone writing utilities which need to\n> >> append to postgresql.auto.conf knows what the situation is.\n> >\n> > Yeah, I would think that, ideally, we'd have some code in the common\n> > library that other utilities could leverage and which the backend would\n> > also use.\n>\n> So maybe something along the lines of creating a stripped-down variant of\n> AlterSystemSetConfigFile() (from \"src/backend/utils/misc/guc.c\") which can be\n> called from front-end code to safely modify .auto.conf while the server is *not*\n> running.\n>\n> I'm not terribly familiar with the GUC code, but that would presumably mean making\n> parts of the GUC parsing/handling code linkable externally (ParseConfigFp() etc.)\n>\n\nYeah, this looks a bit tricky as we can't use ereport in the frontend\ncode and that is used at multiple places in that code path.\n\n> as you'd need to parse the file before rewriting it. 
Something like (minimal\n> pseudo-code):\n>\n> void\n> alter_system_set(char *name, char *value)\n> {\n> /*\n> * check here that the server is *not* running\n> */\n> ...\n> ParseConfigFp(infile, AutoConfFileName, 0, LOG, &head, &tail)\n> ...\n>\n> /*\n> * some robust portable way of ensuring another process can't\n> * modify the file(s) until we're done\n> */\n> lock_file(AutoConfFileName);\n>\n> replace_auto_config_value(&head, &tail, name, value);\n>\n> write_auto_conf_file(AutoConfTmpFileName, head)\n>\n> durable_rename(AutoConfTmpFileName, AutoConfFileName, ERROR);\n>\n> FreeConfigVariables(head);\n> unlock_file(AutoConfFileName);\n> }\n>\n> I'm not sure how feasible it is to validate the provided parameter\n> without the server running, but if not possible, that's not any worse than the\n> status quo, i.e. the utility has to be trusted to write the correct parameters\n> to the file anyway.\n>\n\nRight. I think even if someone has given wrong values, it will get\ndetected on next reload.\n\n> The question is though - is this a change which is practical to make at this point\n> in the release cycle for Pg12?\n>\n\nIt depends on the solution/patch we come up with to solve this issue.\nWhat is the alternative? If we allow Alter System to remove the\nduplicate entries and call the current situation good, then we are\nin-a-way allowing the room for similar or more problems in the future.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Jun 2019 16:14:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Jun 17, 2019 at 10:50 AM Ian Barwick\n<ian.barwick@2ndquadrant.com> wrote:\n> In Pg12, the code in pg_basebackup implies the correct thing to do is append to .auto.conf,\n> but as demonstrated that can cause problems with duplicate entries.\n\nYeah.\n\nTo me, forcing every tools author to use postgresql.conf parsing tools\nrather than just appending to the file is a needless burden on tool\nauthors. I'd vote for just having ALTER SYSTEM silently drop all but\nthe last of duplicated entries.\n\nIt sounds like I might be in the minority, but I feel like the\nreactions which suggest that this is somehow heresy are highly\noverdone. Given that the very first time somebody wanted to do\nsomething like this in core, they picked this approach, I think we can\nassume that it is a natural approach which other people will also\nattempt. There doesn't seem to be any good reason for it not to Just\nWork.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 21 Jun 2019 10:45:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Jun 17, 2019 at 10:50 AM Ian Barwick\n> <ian.barwick@2ndquadrant.com> wrote:\n> > In Pg12, the code in pg_basebackup implies the correct thing to do is append to .auto.conf,\n> > but as demonstrated that can cause problems with duplicate entries.\n> \n> Yeah.\n> \n> To me, forcing every tools author to use postgresql.conf parsing tools\n> rather than just appending to the file is a needless burden on tool\n> authors. I'd vote for just having ALTER SYSTEM silently drop all but\n> the last of duplicated entries.\n> \n> It sounds like I might be in the minority, but I feel like the\n> reactions which suggest that this is somehow heresy are highly\n> overdone. Given that the very first time somebody wanted to do\n> something like this in core, they picked this approach, I think we can\n> assume that it is a natural approach which other people will also\n> attempt. There doesn't seem to be any good reason for it not to Just\n> Work.\n\nThat's not quite accurate, given that it isn't how the ALTER SYSTEM call\nitself works, and clearly isn't how the authors of that feature expected\nthings to work or they would have actually made it work. 
They didn't,\nand it doesn't actually work.\n\nThe notion that pg_basebackup was correct in this, when it wasn't tested\nat all, evidently, even after the concern was raised, and ALTER SYSTEM\nwas wrong, even though it worked just fine before some later patch\nstarted making changes to the file, based on the idea that it's the\n\"natural approach\" doesn't make sense to me.\n\nIf the change to pg_basebackup had included a change to ALTER SYSTEM to\nmake it work the *same* way that pg_basebackup now does, or at least to\nwork with the changes that pg_basebackup were making, then maybe\neverything would have been fine.\n\nThat is to say, if your recommendation is to change everything that\nmodifies postgresql.auto.conf to *always* append (and maybe even include\na comment about when, and who, made the change..), and to make\neverything work correctly with that, then that seems like it might be a\nreasonable approach (though dealing with RESETs might be a little ugly..\nhaven't fully thought about that).\n\nI still don't feel that having ALTER SYSTEM just remove duplicates is a\ngood idea and I do think it'll lead to confusion.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 21 Jun 2019 11:24:52 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> To me, forcing every tools author to use postgresql.conf parsing tools\n> rather than just appending to the file is a needless burden on tool\n> authors. I'd vote for just having ALTER SYSTEM silently drop all but\n> the last of duplicated entries.\n\nI haven't been paying too close attention to this thread, but isn't\nthat exactly what it does now and always has? guc.c, at least, certainly\nis going to interpret duplicate entries that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jun 2019 12:30:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > To me, forcing every tools author to use postgresql.conf parsing tools\n> > rather than just appending to the file is a needless burden on tool\n> > authors. I'd vote for just having ALTER SYSTEM silently drop all but\n> > the last of duplicated entries.\n> \n> I haven't been paying too close attention to this thread, but isn't\n> that exactly what it does now and always has? guc.c, at least, certainly\n> is going to interpret duplicate entries that way.\n\nThe issue isn't with reading them and interpreting them, it's what\nhappens when you run ALTER SYSTEM and it goes and modifies the file.\nPresently, it basically operates on the first entry it finds when\nperforming a SET or a RESET.\n\nWhich also means that you can issue SET's to your heart's content, and\nif there's a duplicate for that GUC, you'll never actually change what\nis interpreted.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 21 Jun 2019 12:40:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
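The mismatch Stephen describes — readers take the last duplicate while ALTER SYSTEM rewrites the first — can be shown with a toy model. This is plain Python with hypothetical helper names, not the real code in guc.c:

```python
def effective_settings(lines):
    """Reader side (what the GUC machinery effectively does):
    for duplicate entries, the last one wins."""
    seen = {}
    for ln in lines:
        ln = ln.strip()
        if not ln or ln.startswith("#"):
            continue
        name, _, value = ln.partition("=")
        seen[name.strip()] = value.strip().strip("'")
    return seen

def alter_system_first_entry(lines, name, value):
    """Writer side before the fix: only the first matching entry
    is rewritten; later duplicates are left in place."""
    out, done = [], False
    for ln in lines:
        if not done and ln.partition("=")[0].strip() == name:
            out.append(f"{name} = '{value}'")
            done = True
        else:
            out.append(ln)
    if not done:
        out.append(f"{name} = '{value}'")
    return out

# A tool appended a duplicate below the entry ALTER SYSTEM once wrote:
conf = ["work_mem = '4MB'", "work_mem = '64MB'"]
conf = alter_system_first_entry(conf, "work_mem", "128MB")
# The first line now says 128MB, but the server still reads the stale 64MB:
print(effective_settings(conf)["work_mem"])  # prints 64MB
```

So any number of SETs against the first entry never changes what is actually interpreted, which is exactly the reported failure mode.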
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> I haven't been paying too close attention to this thread, but isn't\n>> that exactly what it does now and always has? guc.c, at least, certainly\n>> is going to interpret duplicate entries that way.\n\n> The issue isn't with reading them and interpreting them, it's what\n> happens when you run ALTER SYSTEM and it goes and modifies the file.\n> Presently, it basically operates on the first entry it finds when\n> performing a SET or a RESET.\n\nAh, got it. So it seems like the correct behavior might be for\nALTER SYSTEM to\n(a) run through the whole file and remove any conflicting lines;\n(b) append new setting at the end.\n\nIf you had some fancy setup with comments associated with entries,\nyou might not be pleased with that. But I can't muster a lot of\nsympathy for tools putting comments in postgresql.auto.conf anyway;\nit's not intended to be a human-readable file.\n\nIf anybody does complain, my first reaction would be to make ALTER\nSYSTEM strip all comment lines too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jun 2019 12:55:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
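Tom's proposed behaviour — (a) remove any conflicting lines, (b) append the new setting at the end — reduces to something like the following sketch (illustrative Python with an invented function name, not the actual guc.c change):

```python
def alter_system_set(lines, name, value):
    """(a) strip every non-comment line that sets `name`, removing any
    duplicates in passing; (b) append the new setting at the end, so
    last-wins reading and the rewritten file always agree."""
    kept = [ln for ln in lines
            if ln.strip().startswith("#")
            or ln.partition("=")[0].strip() != name]
    kept.append(f"{name} = '{value}'")
    return kept

conf = ["# Do not edit this file manually!",
        "work_mem = '4MB'",
        "primary_conninfo = 'host=node1'",
        "work_mem = '64MB'"]   # duplicate appended by some tool
print(alter_system_set(conf, "work_mem", "128MB"))
```

With this shape, a file that accumulated duplicates from appending tools is healed on the next ALTER SYSTEM, and the new value is always the one the server reads.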
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> I haven't been paying too close attention to this thread, but isn't\n> >> that exactly what it does now and always has? guc.c, at least, certainly\n> >> is going to interpret duplicate entries that way.\n> \n> > The issue isn't with reading them and interpreting them, it's what\n> > happens when you run ALTER SYSTEM and it goes and modifies the file.\n> > Presently, it basically operates on the first entry it finds when\n> > performing a SET or a RESET.\n> \n> Ah, got it. So it seems like the correct behavior might be for\n> ALTER SYSTEM to\n> (a) run through the whole file and remove any conflicting lines;\n> (b) append new setting at the end.\n\nSure- and every other tool that modifies that file should know that\n*that* is how you do it, and therefore, if everyone is doing it right,\nyou don't ever end up with duplicates in the file. If you do, someone's\ndoing it wrong, and we should issue a warning.\n\nThat's more-or-less the conclusion on the other thread, as I understood\nit.\n\n> If you had some fancy setup with comments associated with entries,\n> you might not be pleased with that. But I can't muster a lot of\n> sympathy for tools putting comments in postgresql.auto.conf anyway;\n> it's not intended to be a human-readable file.\n\nIf we were to *keep* the duplicates, then I could see value in including\ninformation about prior configuration entries (I mean, that's what a lot\nof external tools do with our postgresql.conf file- put it into git or\nsome other configuration management tool...). If we aren't keeping the\ndups, then I agree that there doesn't seem much point.\n\n> If anybody does complain, my first reaction would be to make ALTER\n> SYSTEM strip all comment lines too.\n\nUh, I believe it already does?\n\nThanks,\n\nStephen",
"msg_date": "Fri, 21 Jun 2019 13:01:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Jun 21, 2019 at 8:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 17, 2019 at 10:50 AM Ian Barwick\n> <ian.barwick@2ndquadrant.com> wrote:\n> > In Pg12, the code in pg_basebackup implies the correct thing to do is append to .auto.conf,\n> > but as demonstrated that can cause problems with duplicate entries.\n>\n> Yeah.\n>\n> To me, forcing every tools author to use postgresql.conf parsing tools\n> rather than just appending to the file is a needless burden on tool\n> authors.\n>\n\nOTOH, if we give license to all the tools that they can append to the\n.auto.conf file whenever they want, then, I think the contents of the\nfile can be unpredictable. Basically, as of now, we allow only one\nbackend to write to the file, but giving a free pass to everyone can\ncreate a problem. This won't be a problem for pg_basebackup, but can\nbe for other tools.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 Jun 2019 02:37:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Jun 21, 2019 at 10:31 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>\n> > If anybody does complain, my first reaction would be to make ALTER\n> > SYSTEM strip all comment lines too.\n>\n> Uh, I believe it already does?\n>\n\nYeah, I also think so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 Jun 2019 02:39:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Sat, Jun 22, 2019 at 17:07 Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Jun 21, 2019 at 8:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Jun 17, 2019 at 10:50 AM Ian Barwick\n> > <ian.barwick@2ndquadrant.com> wrote:\n> > > In Pg12, the code in pg_basebackup implies the correct thing to do is\n> append to .auto.conf,\n> > > but as demonstrated that can cause problems with duplicate entries.\n> >\n> > Yeah.\n> >\n> > To me, forcing every tools author to use postgresql.conf parsing tools\n> > rather than just appending to the file is a needless burden on tool\n> > authors.\n> >\n>\n> OTOH, if we give license to all the tools that they can append to the\n> .auto.conf file whenever they want, then, I think the contents of the\n> file can be unpredictable. Basically, as of now, we allow only one\n> backend to write to the file, but giving a free pass to everyone can\n> create a problem. This won't be a problem for pg_basebackup, but can\n> be for other tools.\n\n\nI don’t think anyone was suggesting that tools be allowed to modify the\nfile while the server is running- if a change needs to be made while the\nserver is running, then it should be done through a call to ALTER SYSTEM.\n\nThere’s no shortage of tools that, particularly with the merger in of\nrecovery.conf, want to modify and manipulate the file when the server is\ndown though.\n\nAll that said, whatever code it is that we write for pg_basebackup to do\nthis properly should go into our client side library, so other tools can\nleverage that and avoid having to write it themselves.\n\nThanks!\n\nStephen",
"msg_date": "Sat, 22 Jun 2019 17:13:21 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Sun, Jun 23, 2019 at 2:43 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> On Sat, Jun 22, 2019 at 17:07 Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Jun 21, 2019 at 8:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> >\n>> > On Mon, Jun 17, 2019 at 10:50 AM Ian Barwick\n>> > <ian.barwick@2ndquadrant.com> wrote:\n>> > > In Pg12, the code in pg_basebackup implies the correct thing to do is append to .auto.conf,\n>> > > but as demonstrated that can cause problems with duplicate entries.\n>> >\n>> > Yeah.\n>> >\n>> > To me, forcing every tools author to use postgresql.conf parsing tools\n>> > rather than just appending to the file is a needless burden on tool\n>> > authors.\n>> >\n>>\n>> OTOH, if we give license to all the tools that they can append to the\n>> .auto.conf file whenever they want, then, I think the contents of the\n>> file can be unpredictable. Basically, as of now, we allow only one\n>> backend to write to the file, but giving a free pass to everyone can\n>> create a problem. This won't be a problem for pg_basebackup, but can\n>> be for other tools.\n>\n>\n> I don’t think anyone was suggesting that tools be allowed to modify the file while the server is running- if a change needs to be made while the server is running, then it should be done through a call to ALTER SYSTEM.\n>\n> There’s no shortage of tools that, particularly with the merger in of recovery.conf, want to modify and manipulate the file when the server is down though.\n>\n> All that said, whatever code it is that we write for pg_basebackup to do this properly should go into our client side library, so other tools can leverage that and avoid having to write it themselves.\n>\n\nFair enough. In that case, don't we need some mechanism to ensure\nthat if the API fails, then the old contents are retained? 
Alter\nsystem ensures that by writing first the contents to a temporary file,\nbut I am not sure if whatever is done by pg_basebackup has that\nguarantee.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 Jun 2019 03:13:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Sat, Jun 22, 2019 at 17:43 Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sun, Jun 23, 2019 at 2:43 AM Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > Greetings,\n> >\n> > On Sat, Jun 22, 2019 at 17:07 Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Jun 21, 2019 at 8:15 PM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >> >\n> >> > On Mon, Jun 17, 2019 at 10:50 AM Ian Barwick\n> >> > <ian.barwick@2ndquadrant.com> wrote:\n> >> > > In Pg12, the code in pg_basebackup implies the correct thing to do\n> is append to .auto.conf,\n> >> > > but as demonstrated that can cause problems with duplicate entries.\n> >> >\n> >> > Yeah.\n> >> >\n> >> > To me, forcing every tools author to use postgresql.conf parsing tools\n> >> > rather than just appending to the file is a needless burden on tool\n> >> > authors.\n> >> >\n> >>\n> >> OTOH, if we give license to all the tools that they can append to the\n> >> .auto.conf file whenever they want, then, I think the contents of the\n> >> file can be unpredictable. Basically, as of now, we allow only one\n> >> backend to write to the file, but giving a free pass to everyone can\n> >> create a problem. This won't be a problem for pg_basebackup, but can\n> >> be for other tools.\n> >\n> >\n> > I don’t think anyone was suggesting that tools be allowed to modify the\n> file while the server is running- if a change needs to be made while the\n> server is running, then it should be done through a call to ALTER SYSTEM.\n> >\n> > There’s no shortage of tools that, particularly with the merger in of\n> recovery.conf, want to modify and manipulate the file when the server is\n> down though.\n> >\n> > All that said, whatever code it is that we write for pg_basebackup to do\n> this properly should go into our client side library, so other tools can\n> leverage that and avoid having to write it themselves.\n> >\n>\n> Fair enough. 
In that case, don't we need some mechanism to ensure\n> that if the API fails, then the old contents are retained? Alter\n> system ensures that by writing first the contents to a temporary file,\n> but I am not sure if whatever is done by pg_basebackup has that\n> guarantee.\n\n\nI’m not sure that’s really the same. Certainly, pg_basebackup needs to\ndeal with a partial write, or failure of any kind, in a clean way that\nindicates the backup isn’t good. The important bit is that the resulting\nfile be one that ALTER SYSTEM and potentially other tools will be able to\nwork with.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 22 Jun 2019 18:02:45 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Jun 21, 2019 at 11:24 AM Stephen Frost <sfrost@snowman.net> wrote:\n> That's not quite accurate, given that it isn't how the ALTER SYSTEM call\n> itself works, and clearly isn't how the authors of that feature expected\n> things to work or they would have actually made it work. They didn't,\n> and it doesn't actually work.\n>\n> The notion that pg_basebackup was correct in this, when it wasn't tested\n> at all, evidently, even after the concern was raised, and ALTER SYSTEM\n> was wrong, even though it worked just fine before some later patch\n> started making changes to the file, based on the idea that it's the\n> \"natural approach\" doesn't make sense to me.\n>\n> If the change to pg_basebackup had included a change to ALTER SYSTEM to\n> make it work the *same* way that pg_basebackup now does, or at least to\n> work with the changes that pg_basebackup were making, then maybe\n> everything would have been fine.\n\nThis argument boils down to: two people patches don't play nicely\ntogether, and we should assume that the first patch had it right and\nthe second patch had it wrong, because the first patch was first.\n\nI don't think it works like that. I think we should decide which patch\nhad it right by looking at what the nicest behavior actually is, not\nby which one came first. In my mind having ALTER SYSTEM drop\nduplicate that other tools may have introduced is a clear winner with\nbasically no downside. You are arguing that it will produce confusion,\nbut I don't really understand who is going to be confused or why they\nare going to be confused. We can document whatever we do, and it\nshould be fine. 
Humans aren't generally supposed to be examining this\nfile anyway, so they shouldn't get confused very often.\n\nIn my view, the original ALTER SYSTEM patch just has a bug -- it\ndoesn't modify the right copy of the setting when multiple copies are\npresent -- and we should just fix the bug.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jun 2019 13:22:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Jun 21, 2019 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ah, got it. So it seems like the correct behavior might be for\n> ALTER SYSTEM to\n> (a) run through the whole file and remove any conflicting lines;\n> (b) append new setting at the end.\n\nThat is exactly the behavior for which I am arguing. Stephen also\nwants a warning, but I disagree, because the warning is totally\nnon-actionable. It tells you that some tool, at some point in the\npast, did something bad. You can't do anything about that, and you\nwouldn't need to except for the arbitrary decision to label duplicate\nlines as bad in the first place.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jun 2019 13:25:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Sat, Jun 22, 2019 at 5:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> All that said, whatever code it is that we write for pg_basebackup to do this properly should go into our client side library, so other tools can leverage that and avoid having to write it themselves.\n\nThat is probably only going to help people who are writing in C (or\nmaybe some close family member) and a lot of tools for managing\nPostgreSQL will be written in scripting languages. It is unlikely\nthat those people are going to get all of the rules for parsing a file\nfull of GUC settings exactly right, because translating flex into\nPython is probably not anybody's idea of a fun time. So you'll end up\nwith a bunch of rewrite-postgresql.auto.conf tools written in\ndifferent languages at varying degrees of quality many of which will\nmisfire in corner cases where the GUC names contain funny characters\nor the whitespace is off or there's unusual quoting involved.\n\nIf you just decreed that it was OK to append to the file, you could\navoid all that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jun 2019 13:29:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jun 21, 2019 at 11:24 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > That's not quite accurate, given that it isn't how the ALTER SYSTEM call\n> > itself works, and clearly isn't how the authors of that feature expected\n> > things to work or they would have actually made it work. They didn't,\n> > and it doesn't actually work.\n> >\n> > The notion that pg_basebackup was correct in this, when it wasn't tested\n> > at all, evidently, even after the concern was raised, and ALTER SYSTEM\n> > was wrong, even though it worked just fine before some later patch\n> > started making changes to the file, based on the idea that it's the\n> > \"natural approach\" doesn't make sense to me.\n> >\n> > If the change to pg_basebackup had included a change to ALTER SYSTEM to\n> > make it work the *same* way that pg_basebackup now does, or at least to\n> > work with the changes that pg_basebackup were making, then maybe\n> > everything would have been fine.\n> \n> This argument boils down to: two people patches don't play nicely\n> together, and we should assume that the first patch had it right and\n> the second patch had it wrong, because the first patch was first.\n\nNo, the point I was making is that one wasn't \"natural\" compared to the\nother as we have two patches which clearly chose differently. Had they\npicked the same, as I said above, maybe everything would have been fine.\n\n> I don't think it works like that. I think we should decide which patch\n> had it right by looking at what the nicest behavior actually is, not\n> by which one came first. In my mind having ALTER SYSTEM drop\n> duplicate that other tools may have introduced is a clear winner with\n> basically no downside. You are arguing that it will produce confusion,\n> but I don't really understand who is going to be confused or why they\n> are going to be confused. We can document whatever we do, and it\n> should be fine. 
Humans aren't generally supposed to be examining this\n> file anyway, so they shouldn't get confused very often.\n\nI'm not the only one who feels that it would be confusing for ALTER\nSYSTEM to drop duplicates while every other tool creates them and\ndoesn't do anything to prevent them from being there. As for who-\nanyone who deals with PostgreSQL on a regular basis will end up running\ninto the \"oh, huh, after pg_basebackup ran, I ended up with duplicates\nin postgresql.auto.conf, I wonder if that's ok?\" followed by \"oh, errr, I\nused to have duplicates but now they're gone?!?! how'd that happen?\",\nunless, perhaps, they are reading this thread, in which case they'll\ncertainly know and expect it. You can probably guess which camp is\nlarger.\n\nWhen telling other tool authors how to manipulate PGDATA files, I really\ndislike the \"do as I say, not as I do\" approach that you're advocating\nfor here. Let's come up with a specific, clear, and ideally simple way\nfor everything to modify postgresql.auto.conf and let's have everything\nuse it.\n\n> In my view, the original ALTER SYSTEM patch just has a bug -- it\n> doesn't modify the right copy of the setting when multiple copies are\n> present -- and we should just fix the bug.\n\nRemoving duplicates wouldn't be necessary for ALTER SYSTEM to just\nmodify the 'correct' version.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 24 Jun 2019 14:52:02 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 21, 2019 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ah, got it. So it seems like the correct behavior might be for\n>> ALTER SYSTEM to\n>> (a) run through the whole file and remove any conflicting lines;\n>> (b) append new setting at the end.\n\n> That is exactly the behavior for which I am arguing. Stephen also\n> wants a warning, but I disagree, because the warning is totally\n> non-actionable. It tells you that some tool, at some point in the\n> past, did something bad. You can't do anything about that, and you\n> wouldn't need to except for the arbitrary decision to label duplicate\n> lines as bad in the first place.\n\nAgreed; there's no particular reason to consider the situation as wrong.\nguc.c has always had the policy that dups are fine and the last one wins.\nThe very design of ALTER SYSTEM owes its workability to that policy, so\nwe can hardly say that A.S. should have a different policy internally.\n\nThe problem here is simply that ALTER SYSTEM is failing to consider the\npossibility that there are dups in postgresql.auto.conf, and that seems\nlike little more than an oversight to be fixed.\n\nThere's more than one way we could implement a fix, perhaps, but I don't\nreally see a reason to work harder than is sketched above.\n\n(BTW, has anyone checked whether ALTER SYSTEM RESET is prepared to remove\nmultiple lines for the same var?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jun 2019 14:53:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Jun 21, 2019 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Ah, got it. So it seems like the correct behavior might be for\n> > ALTER SYSTEM to\n> > (a) run through the whole file and remove any conflicting lines;\n> > (b) append new setting at the end.\n> \n> That is exactly the behavior for which I am arguing. Stephen also\n> wants a warning, but I disagree, because the warning is totally\n> non-actionable. It tells you that some tool, at some point in the\n> past, did something bad. You can't do anything about that, and you\n> wouldn't need to except for the arbitrary decision to label duplicate\n> lines as bad in the first place.\n\nStephen and Magnus want a warning, because it's an indication that a\ntool author, or *something* modified the file in an unexpected way, and\nthat we are having to do some kind of cleanup on the file because of it.\n\nIf it was a tool author, who it certainly may very well be as they're\nwriting in support for the v12 changes, they'd almost certainly go and\nfix their code to avoid doing that, lest users complain, which would be\nexactly the behavior we want.\n\nIf it was the user themselves, which is also *entirely* likely, then\nhopefully they'd realize that they really shouldn't be modifying that\nfile.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 24 Jun 2019 14:56:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Sat, Jun 22, 2019 at 5:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > All that said, whatever code it is that we write for pg_basebackup to do this properly should go into our client side library, so other tools can leverage that and avoid having to write it themselves.\n> \n> That is probably only going to help people who are writing in C (or\n> maybe some close family member) and a lot of tools for managing\n> PostgreSQL will be written in scripting languages. It is unlikely\n> that those people are going to get all of the rules for parsing a file\n> full of GUC settings exactly right, because translating flex into\n> Python is probably not anybody's idea of a fun time. So you'll end up\n> with a bunch of rewrite-postgresql.auto.conf tools written in\n> different languages at varying degrees of quality many of which will\n> misfire in corner cases where the GUC names contain funny characters\n> or the whitespace is off or there's unusual quoting involved.\n\nCalling into C functions from Python certainly isn't new, nor is it\ndifficult to do from Perl, or various other languages, someone just\nneeds to write the bindings. I'm not sure where the idea came from that\nsomeone would translate flex into Python, that's certainly not what I\nwas suggesting at any point in this discussion.\n\n> If you just decreed that it was OK to append to the file, you could\n> avoid all that.\n\nAs I said elsewhere on this thread, I have absolutely no problem with\nthat as the documented approach to working with this file- but if that's\nwhat we're going to have be the documented approach, then everything\nshould be using that approach...\n\nThanks,\n\nStephen",
"msg_date": "Mon, 24 Jun 2019 15:01:13 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 2019-Jun-24, Robert Haas wrote:\n\n> On Sat, Jun 22, 2019 at 5:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > All that said, whatever code it is that we write for pg_basebackup to do this properly should go into our client side library, so other tools can leverage that and avoid having to write it themselves.\n> \n> That is probably only going to help people who are writing in C (or\n> maybe some close family member) and a lot of tools for managing\n> PostgreSQL will be written in scripting languages.\n\nBut we already have ALTER SYSTEM, so why do we need to write it again?\nYou just need to check whether the system is running: if it is, connect\nand do \"ALTER SYSTEM\". If it isn't, do `echo ALTER SYSTEM | postgres\n--single`. Maybe we can embed smarts to do that in, say, pg_ctl; then\neverybody has access to it.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 24 Jun 2019 15:06:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Stephen and Magnus want a warning, because it's an indication that a\n> tool author, or *something* modified the file in an unexpected way, and\n> that we are having to do some kind of cleanup on the file because of it.\n\nBut you're presuming something that not everybody agrees with, which\nis that this situation should be considered unexpected.\n\nIn particular, in order to consider it unexpected, you have to suppose\nthat the content rules for postgresql.auto.conf are different from those\nfor postgresql.conf (wherein we clearly allow last-one-wins). Can you\npoint to any user-facing documentation that says that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jun 2019 15:12:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Jun 21, 2019 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Ah, got it. So it seems like the correct behavior might be for\n> >> ALTER SYSTEM to\n> >> (a) run through the whole file and remove any conflicting lines;\n> >> (b) append new setting at the end.\n> \n> > That is exactly the behavior for which I am arguing. Stephen also\n> > wants a warning, but I disagree, because the warning is totally\n> > non-actionable. It tells you that some tool, at some point in the\n> > past, did something bad. You can't do anything about that, and you\n> > wouldn't need to except for the arbitrary decision to label duplicate\n> > lines as bad in the first place.\n> \n> Agreed; there's no particular reason to consider the situation as wrong.\n> guc.c has always had the policy that dups are fine and the last one wins.\n> The very design of ALTER SYSTEM owes its workability to that policy, so\n> we can hardly say that A.S. should have a different policy internally.\n> \n> The problem here is simply that ALTER SYSTEM is failing to consider the\n> possibility that there are dups in postgresql.auto.conf, and that seems\n> like little more than an oversight to be fixed.\n> \n> There's more than one way we could implement a fix, perhaps, but I don't\n> really see a reason to work harder than is sketched above.\n\nWhy bother removing the duplicate lines?\n\nIf ALTER SYSTEM should remove them, why shouldn't other tools?\n\n> (BTW, has anyone checked whether ALTER SYSTEM RESET is prepared to remove\n> multiple lines for the same var?)\n\nNo, it doesn't handle that today either, as discussed earlier in this\nthread.\n\nIf we want to get to should/must kind of language, then we could say\nthat tools should remove duplicated values, and must append to the end,\nbut I'm not sure that really changes things from what I'm proposing\nanyway.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 24 Jun 2019 15:12:58 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> On 2019-Jun-24, Robert Haas wrote:\n> \n> > On Sat, Jun 22, 2019 at 5:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > All that said, whatever code it is that we write for pg_basebackup to do this properly should go into our client side library, so other tools can leverage that and avoid having to write it themselves.\n> > \n> > That is probably only going to help people who are writing in C (or\n> > maybe some close family member) and a lot of tools for managing\n> > PostgreSQL will be written in scripting languages.\n> \n> But we already have ALTER SYSTEM, so why do we need to write it again?\n> You just need to check whether the system is running: if it is, connect\n> and do \"ALTER SYSTEM\". If it isn't, do `echo ALTER SYSTEM | postgres\n> --single`. Maybe we can embed smarts to do that in, say, pg_ctl; then\n> everybody has access to it.\n\nWhile I'm not against adding some kind of support like that if we feel\nlike we really need it, I tend to think that just having it in\nlibpgcommon would be enough for most tool authors to use..\n\nThanks,\n\nStephen",
"msg_date": "Mon, 24 Jun 2019 15:14:29 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > Stephen and Magnus want a warning, because it's an indication that a\n> > tool author, or *something* modified the file in an unexpected way, and\n> > that we are having to do some kind of cleanup on the file because of it.\n> \n> But you're presuming something that not everybody agrees with, which\n> is that this situation should be considered unexpected.\n\nAnd, at least at present, not everyone seems to be agreeing that having\nduplicates should be considered expected, either. Using only ALTER\nSYSTEM, you'd never end up with duplicates either.\n\n> In particular, in order to consider it unexpected, you have to suppose\n> that the content rules for postgresql.auto.conf are different from those\n> for postgresql.conf (wherein we clearly allow last-one-wins). Can you\n> point to any user-facing documentation that says that?\n\nThe backend and frontend tools don't modify postgresql.conf, and we\ndon't document how to modify postgresql.auto.conf at *all*, even though\nwe clearly now expect tool authors to go modifying it so that they can\nprovide the same capabilities that pg_basebackup does and which they\nused to through recovery.conf, so I don't really see that as being\ncomparable.\n\nThe only thing we used to have to go on was what ALTER SYSTEM did, and\nthen pg_basebackup went and did something different, and enough so that\nthey ended up conflicting with each other, leading to this discussion.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 24 Jun 2019 15:20:14 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 6/25/19 4:06 AM, Alvaro Herrera wrote:\n > On 2019-Jun-24, Robert Haas wrote:\n >\n >> On Sat, Jun 22, 2019 at 5:13 PM Stephen Frost <sfrost@snowman.net> wrote:\n >>> All that said, whatever code it is that we write for pg_basebackup to\n >>> do this properly should go into our client side library, so other tools\n >>> can leverage that and avoid having to write it themselves.\n >>\n >> That is probably only going to help people who are writing in C (or\n >> maybe some close family member) and a lot of tools for managing\n >> PostgreSQL will be written in scripting languages.\n >\n > But we already have ALTER SYSTEM, so why do we need to write it again?\n > You just need to check whether the system is running: if it is, connect\n > and do \"ALTER SYSTEM\". If it isn't, do `echo ALTER SYSTEM | postgres\n > --single`. Maybe we can embed smarts to do that in, say, pg_ctl; then\n > everybody has access to it.\n\nUnfortunately, to quote the emitted log message, \"standby mode is not\nsupported by single-user servers\", which as-is renders this approach useless for\nsetting up replication configuration on a standby server (unless I'm missing\nsomething).\n\nI've looked in to what might be involved into creating a client-side function\nfor modifying .auto.conf while the system is not running, and basically\nit seems to involve maintaining a stripped down version of ParseConfigFp()\nwhich doesn't recurse (because we don't allow \"include\" directives in\n.auto.conf, right? Right? 
I'll send in a different patch for that later...)\nand somehow exposing write_auto_conf_file().\n\nAnd for all those scripts which can't call the putative frontend C function,\nwe could provide a utility called \"pg_alter_system\" or similar which accepts\na name and a value and (provided the system is not running) \"safely\"\nwrites it to .auto.conf (though of course it won't be able to validate the\nprovided parameter(s)).\n\nAlternatively (waves hand vaguely in air) there might be some way of\ncreating a single user startup mode for the express purpose of leveraging\nthe backend code to modify .auto.conf.\n\nBur that seems like a lot of effort and complexity to replace what, in Pg11\nand earlier, was just a case of writing to recovery.conf.\n\nWhich brings me to another thought which AFAIK hasn't been discussed -\nwhat use-cases are there for modifying .auto.conf when the system isn't\nrunning?\n\nThe only one I can think of is the case at hand, i.e. configuring replication\nafter cloning a standby in a manner which *guarantees* that the\nreplication configuration will be read at startup, which was the case\nwith recovery.conf in Pg11 and earlier.\n\nFor anything else, it seems reasonable to me to expect any customised\nsettings to be written (e.g. by a provisioning system) to the normal\nconfiguration file(s).\n\nHaving pg_basebackup write the replication configuration to a normal file\nis icky because there's no guarantee the configuration will be written\nlast, or even included at all, which is a regression against earlier\nversions as there you could clone a standby and (assuming there are no\nissues with any cloned configuration files) have the standby start up\nreliably.\n\n\nRegards\n\nIan Barwick\n\n--\n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 25 Jun 2019 10:57:58 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": " > In particular, in order to consider it unexpected, you have to suppose\n >> that the content rules for postgresql.auto.conf are different from those\n >> for postgresql.conf (wherein we clearly allow last-one-wins). Can you\n >> point to any user-facing documentation that says that?\n >\n > The backend and frontend tools don't modify postgresql.conf, and we\n > don't document how to modify postgresql.auto.conf at *all*, even though\n > we clearly now expect tool authors to go modifying it so that they can\n > provide the same capabilities that pg_basebackup does and which they\n > used to through recovery.conf, so I don't really see that as being\n > comparable.\n >\n > The only thing we used to have to go on was what ALTER SYSTEM did, and\n > then pg_basebackup went and did something different, and enough so that\n > they ended up conflicting with each other, leading to this discussion.\n\nOr looking at it from another perspective - previously there was no\nparticular use-case for appending to .auto.conf, until it (implicitly)\nbecame the only way of doing what recovery.conf used to do, and happened to\nexpose the issue at hand.\n\nLeaving aside pg_basebackup and the whole issue of writing replication\nconfiguration, .auto.conf remains a text file which could potentially\ninclude duplicate entries, no matter how much we stipulate it shouldn't.\nAs-is, ALTER SYSTEM fails to deal with this case, which in my opinion\nis a bug and a potential footgun which needs fixing.\n\n(Though we'll still need to define/provide a way of writing configuration\nwhile the server is not running, which will be guaranteed to be read in last\nwhen it starts up).\n\n\nRegards\n\nIan Barwick\n\n--\n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n",
"msg_date": "Tue, 25 Jun 2019 11:01:17 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hello\n\n> But we already have ALTER SYSTEM, so why do we need to write it again?\n> You just need to check whether the system is running: if it is, connect\n> and do \"ALTER SYSTEM\". If it isn't, do `echo ALTER SYSTEM | postgres\n> --single`.\n\nIs this approach still possible for pg_basebackup --format=tar ? For \"pg_basebackup -D - --format=tar\" ?\n\nregards, Sergei\n\n\n",
"msg_date": "Tue, 25 Jun 2019 11:45:39 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 7:31 AM Ian Barwick <ian.barwick@2ndquadrant.com> wrote:\n>\n> > In particular, in order to consider it unexpected, you have to suppose\n> >> that the content rules for postgresql.auto.conf are different from those\n> >> for postgresql.conf (wherein we clearly allow last-one-wins). Can you\n> >> point to any user-facing documentation that says that?\n> >\n> > The backend and frontend tools don't modify postgresql.conf, and we\n> > don't document how to modify postgresql.auto.conf at *all*, even though\n> > we clearly now expect tool authors to go modifying it so that they can\n> > provide the same capabilities that pg_basebackup does and which they\n> > used to through recovery.conf, so I don't really see that as being\n> > comparable.\n> >\n> > The only thing we used to have to go on was what ALTER SYSTEM did, and\n> > then pg_basebackup went and did something different, and enough so that\n> > they ended up conflicting with each other, leading to this discussion.\n>\n> Or looking at it from another perspective - previously there was no\n> particular use-case for appending to .auto.conf, until it (implicitly)\n> became the only way of doing what recovery.conf used to do, and happened to\n> expose the issue at hand.\n>\n> Leaving aside pg_basebackup and the whole issue of writing replication\n> configuration, .auto.conf remains a text file which could potentially\n> include duplicate entries, no matter how much we stipulate it shouldn't.\n> As-is, ALTER SYSTEM fails to deal with this case, which in my opinion\n> is a bug and a potential footgun which needs fixing.\n>\n\nI think there is an agreement that we should change it to remove\nduplicates and add the new entry at the end. 
However, we have not\nreached an agreement on whether we should throw WARNING after removing\nduplicates.\n\nI think it is arguable that it was a bug in the first place in Alter\nSystem as there is no way the duplicate lines can be there in\npostgresql.auto.conf file before this feature or if someone ignores\nthe Warning on top of that file. Having said that, I am in favor of\nthis change for the HEAD, but not sure if we should backpatch the same\nas well by considering it as a bug-fix.\n\n> (Though we'll still need to define/provide a way of writing configuration\n> while the server is not running, which will be guaranteed to be read in last\n> when it starts up).\n>\n\nCan you once verify if the current way of writing to\npostgresql.auto.conf is safe in pg_basebackup? It should ensure that\nif there are any failures, partial write problems while writing, then\nthe old file remains intact. It is not clear to me if that is the\ncase with the current code of pg_basebackup, however the same is\nensured in Alter System code. Because, if we haven't ensured it then\nit is a problem for which we definitely need some fix.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Jun 2019 16:33:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Tue, Jun 25, 2019 at 12:42 AM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > On Fri, Jun 21, 2019 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Ah, got it. So it seems like the correct behavior might be for\n> > >> ALTER SYSTEM to\n> > >> (a) run through the whole file and remove any conflicting lines;\n> > >> (b) append new setting at the end.\n> >\n> > > That is exactly the behavior for which I am arguing. Stephen also\n> > > wants a warning, but I disagree, because the warning is totally\n> > > non-actionable. It tells you that some tool, at some point in the\n> > > past, did something bad. You can't do anything about that, and you\n> > > wouldn't need to except for the arbitrary decision to label duplicate\n> > > lines as bad in the first place.\n> >\n> > Agreed; there's no particular reason to consider the situation as wrong.\n> > guc.c has always had the policy that dups are fine and the last one wins.\n> > The very design of ALTER SYSTEM owes its workability to that policy, so\n> > we can hardly say that A.S. should have a different policy internally.\n> >\n\nBoth are similar but not sure if they are the same because in A.S we\nare planning to remove the duplicate entries from file whereas I think\nin other places that rule is used to just ignore the duplicates and\nallow the last one to win. Now, I think there is merit in giving\nWARNING in this case as we are intentionally removing something which\nuser has added it. 
However, it is not clear what user is going to do\nwith that WARNING unless we have a system where we detect such a\nsituation, give WARNING and then allow the user to proceed in this\ncase with some option like FORCE.\n\n> > The problem here is simply that ALTER SYSTEM is failing to consider the\n> > possibility that there are dups in postgresql.auto.conf, and that seems\n> > like little more than an oversight to be fixed.\n> >\n> > There's more than one way we could implement a fix, perhaps, but I don't\n> > really see a reason to work harder than is sketched above.\n>\n> Why bother removing the duplicate lines?\n>\n> If ALTER SYSTEM should remove them, why shouldn't other tools?\n>\n> > (BTW, has anyone checked whether ALTER SYSTEM RESET is prepared to remove\n> > multiple lines for the same var?)\n>\n> No, it doesn't handle that today either, as discussed earlier in this\n> thread.\n>\n\nRight, it doesn't handle that today, but I think we can deal it along\nwith Alter System Set ...\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Jun 2019 17:00:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nThis thread discusses an issue that's tracked as an open item for pg12,\nbut it's been quiet for the last ~1 month. I think it's probably time to\ndecide what to do with it. The thread is a bit long, so let me sum what\nthe issue is and what options we have.\n\nThe problem is that ALTER SYSTEM does not handle duplicate entries in\npostgresql.auto.conf file correctly, because it simply modifies the\nfirst item, but the value is then overridden by the duplicate items.\nThis contradicts the idea that duplicate GUCs are allowed, and that we\nshould use the last item.\n\nThis bug seems to exist since ALTER SYSTEM was introduced, so it's not\na clear PG12 item. But it was made more prominent by the removal of\nrecovery.conf in PG12, because pg_basebackup now appends stuff to\npostgresql.auto.conf and may easily create duplicate items.\n\n\nThere seems to be a consensus that this this not a pg_basebackup issue\n(i.e. duplicate values don't make the file invalid), and it should be\nhandled in ALTER SYSTEM.\n\nThe proposal seems to be to run through the .auto.conf file, remove any\nduplicates, and append the new entry at the end. That seems reasonable.\n\nThere was a discussion whether to print warnings about the duplicates. I\npersonally see not much point in doing that - if we consider duplicates\nto be expected, and if ALTER SYSTEM has the license to rework the config\nfile any way it wants, why warn about it?\n\nThe main issue however is that no code was written yet.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 3 Aug 2019 00:22:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> There seems to be a consensus that this this not a pg_basebackup issue\n> (i.e. duplicate values don't make the file invalid), and it should be\n> handled in ALTER SYSTEM.\n\nYeah. I doubt pg_basebackup is the only actor that can create such\nsituations.\n\n> The proposal seems to be to run through the .auto.conf file, remove any\n> duplicates, and append the new entry at the end. That seems reasonable.\n\n+1\n\n> There was a discussion whether to print warnings about the duplicates. I\n> personally see not much point in doing that - if we consider duplicates\n> to be expected, and if ALTER SYSTEM has the license to rework the config\n> file any way it wants, why warn about it?\n\nPersonally I agree that warnings are unnecessary.\n\n> The main issue however is that no code was written yet.\n\nSeems like it ought to be relatively simple ... but I didn't look.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 18:27:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > There seems to be a consensus that this this not a pg_basebackup issue\n> > (i.e. duplicate values don't make the file invalid), and it should be\n> > handled in ALTER SYSTEM.\n>\n> Yeah. I doubt pg_basebackup is the only actor that can create such\n> situations.\n>\n> > The proposal seems to be to run through the .auto.conf file, remove any\n> > duplicates, and append the new entry at the end. That seems reasonable.\n>\n> +1\n\n\nI disagree that this should only be addressed in alter system, as I’ve said\nbefore and as others have agreed with. Having one set of code that can be\nused to update parameters in the auto.conf and then have that be used by\npg_basebackup, alter system, and external tools, is the right approach.\n\nThe idea that alter system should be the only thing that doesn’t just\nappend changes to the file is just going to lead to confusion and bugs down\nthe road.\n\nAs I said before, an alternative could be to make alter system simply\nalways append and declare that to be the way to update parameters in the\nauto.conf.\n\n> There was a discussion whether to print warnings about the duplicates. I\n> > personally see not much point in doing that - if we consider duplicates\n> > to be expected, and if ALTER SYSTEM has the license to rework the config\n> > file any way it wants, why warn about it?\n>\n> Personally I agree that warnings are unnecessary.\n\n\nAnd at least Magnus and I disagree with that, as I recall from this\nthread. Let’s have a clean and clear way to modify the auto.conf and have\neverything that touches the file update it in a consistent way.\n\nThanks,\n\nStephen\n\nGreetings,On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> There seems to be a consensus that this this not a pg_basebackup issue\n> (i.e. 
duplicate values don't make the file invalid), and it should be\n> handled in ALTER SYSTEM.\n\nYeah. I doubt pg_basebackup is the only actor that can create such\nsituations.\n\n> The proposal seems to be to run through the .auto.conf file, remove any\n> duplicates, and append the new entry at the end. That seems reasonable.\n\n+1I disagree that this should only be addressed in alter system, as I’ve said before and as others have agreed with. Having one set of code that can be used to update parameters in the auto.conf and then have that be used by pg_basebackup, alter system, and external tools, is the right approach.The idea that alter system should be the only thing that doesn’t just append changes to the file is just going to lead to confusion and bugs down the road.As I said before, an alternative could be to make alter system simply always append and declare that to be the way to update parameters in the auto.conf.\n> There was a discussion whether to print warnings about the duplicates. I\n> personally see not much point in doing that - if we consider duplicates\n> to be expected, and if ALTER SYSTEM has the license to rework the config\n> file any way it wants, why warn about it?\n\nPersonally I agree that warnings are unnecessary.And at least Magnus and I disagree with that, as I recall from this thread. Let’s have a clean and clear way to modify the auto.conf and have everything that touches the file update it in a consistent way.Thanks,Stephen",
"msg_date": "Fri, 2 Aug 2019 18:38:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/3/19 7:27 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> There seems to be a consensus that this this not a pg_basebackup issue\n>> (i.e. duplicate values don't make the file invalid), and it should be\n>> handled in ALTER SYSTEM.\n> \n> Yeah. I doubt pg_basebackup is the only actor that can create such\n> situations.\n> \n>> The proposal seems to be to run through the .auto.conf file, remove any\n>> duplicates, and append the new entry at the end. That seems reasonable.\n> \n> +1\n> \n>> There was a discussion whether to print warnings about the duplicates. I\n>> personally see not much point in doing that - if we consider duplicates\n>> to be expected, and if ALTER SYSTEM has the license to rework the config\n>> file any way it wants, why warn about it?\n> \n> Personally I agree that warnings are unnecessary.\n\nHaving played around with the pg.auto.conf stuff for a while, my feeling is\nthat ALTER SYSTEM does indeed have a license to rewrite it (which is what\ncurrently happens anyway, with comments and include directives [1] being silently\nremoved) so it seems reasonable to remove duplicate entries and ensure\nthe correct one is processed.\n\n[1] suprisingly any include directives present are honoured, which seems crazy\nto me, see: https://www.postgresql.org/message-id/flat/8c8bcbca-3bd9-dc6e-8986-04a5abdef142%402ndquadrant.com\n\n>> The main issue however is that no code was written yet.\n> \n> Seems like it ought to be relatively simple ... but I didn't look.\n\nThe patch I originally sent does exactly this.\n\nThe thread then drifted off into a discussion about providing ways for\napplications to properly write to pg.auto.conf while PostgreSQL is not\nrunning; I have a patch for that which I can submit later (though it\nis a thing of considerable ugliness).\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 3 Aug 2019 07:39:27 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> The proposal seems to be to run through the .auto.conf file, remove any\n>>> duplicates, and append the new entry at the end. That seems reasonable.\n\n>> +1\n\n> I disagree that this should only be addressed in alter system, as I’ve said\n> before and as others have agreed with. Having one set of code that can be\n> used to update parameters in the auto.conf and then have that be used by\n> pg_basebackup, alter system, and external tools, is the right approach.\n\nI don't find that to be necessary or even desirable. Many (most?) of the\nsituations where this would be important wouldn't have access to a running\nbackend, and maybe not to any PG code at all --- what if your tool isn't\nwritten in C?\n\nI think it's perfectly fine to say that external tools need only append\nto the file, which will require no special tooling. But then we need\nALTER SYSTEM to be willing to clean out duplicates, if only so you don't\nrun out of disk space after awhile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 18:47:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 18:38:46 -0400, Stephen Frost wrote:\n> On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > > There seems to be a consensus that this this not a pg_basebackup issue\n> > > (i.e. duplicate values don't make the file invalid), and it should be\n> > > handled in ALTER SYSTEM.\n> >\n> > Yeah. I doubt pg_basebackup is the only actor that can create such\n> > situations.\n> >\n> > > The proposal seems to be to run through the .auto.conf file, remove any\n> > > duplicates, and append the new entry at the end. That seems reasonable.\n> >\n> > +1\n\n> I disagree that this should only be addressed in alter system, as I’ve said\n> before and as others have agreed with. Having one set of code that can be\n> used to update parameters in the auto.conf and then have that be used by\n> pg_basebackup, alter system, and external tools, is the right approach.\n> \n> The idea that alter system should be the only thing that doesn’t just\n> append changes to the file is just going to lead to confusion and bugs down\n> the road.\n\nTo me that seems like an alternative that needs a good chunk more work\nthan just having ALTER SYSTEM fix things up, and isn't actually likely\nto prevent such scenarios from occurring in practice. Providing a\ndecent API to change conflict files from various places, presumably\nincluding a commandline utility to do so, would be a nice feature, but\nit seems vastly out of scope for v12. My vote is to fix this via ALTER\nSYSTEM in v12, and then for whoever is interested enough to provide\nbetter tools down the road.\n\n\n> As I said before, an alternative could be to make alter system simply\n> always append and declare that to be the way to update parameters in the\n> auto.conf.\n\nWhy would that be a good idea? We'd just take longer and longer to parse\nit. 
There's people that change database settings on a regular and\nautomated basis using ALTER SYSTEM.\n\n\n> > There was a discussion whether to print warnings about the duplicates. I\n> > > personally see not much point in doing that - if we consider duplicates\n> > > to be expected, and if ALTER SYSTEM has the license to rework the config\n> > > file any way it wants, why warn about it?\n> >\n> > Personally I agree that warnings are unnecessary.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 15:49:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 18:47:07 -0400, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I disagree that this should only be addressed in alter system, as I’ve said\n> > before and as others have agreed with. Having one set of code that can be\n> > used to update parameters in the auto.conf and then have that be used by\n> > pg_basebackup, alter system, and external tools, is the right approach.\n> \n> I don't find that to be necessary or even desirable. Many (most?) of the\n> situations where this would be important wouldn't have access to a running\n> backend, and maybe not to any PG code at all --- what if your tool isn't\n> written in C?\n\nI think a commandline tool to perform the equivalent of ALTER SYSTEM on\na shutdown cluster would be a great tool. It's easy enough to add\nsomething with broken syntax, and further down the road such a tool\ncould not only ensure the syntax is correct, but also validate\nindividual settings as much as possible (obviously there's some hairy\nissues here).\n\nQuite possibly the most realistic way to implement something like that\nwould be a postgres commandline switch, which'd start up far enough to\nperform GUC checks and execute AlterSystem(), and then shut down\nagain. We already have -C, I think such an option could reasonably be\nimplemented alongside it.\n\nObviously this is widely out of scope for v12.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 15:56:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 06:38:46PM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> > There seems to be a consensus that this this not a pg_basebackup issue\n>> > (i.e. duplicate values don't make the file invalid), and it should be\n>> > handled in ALTER SYSTEM.\n>>\n>> Yeah. I doubt pg_basebackup is the only actor that can create such\n>> situations.\n>>\n>> > The proposal seems to be to run through the .auto.conf file, remove any\n>> > duplicates, and append the new entry at the end. That seems reasonable.\n>>\n>> +1\n>\n>\n>I disagree that this should only be addressed in alter system, as I’ve said\n>before and as others have agreed with. Having one set of code that can be\n>used to update parameters in the auto.conf and then have that be used by\n>pg_basebackup, alter system, and external tools, is the right approach.\n>\n>The idea that alter system should be the only thing that doesn’t just\n>append changes to the file is just going to lead to confusion and bugs down\n>the road.\n>\n\nI don't remember any suggestions ALTER SYSTEM should be the only thing\nthat can rewrite the config file, but maybe it's buried somewhere in the\nthread history. The current proposal certainly does not prohibit any\nexternal tool from doing so, it just says we should expect duplicates.\n\n>As I said before, an alternative could be to make alter system simply\n>always append and declare that to be the way to update parameters in the\n>auto.conf.\n>\n\nThat just seems strange, TBH.\n\n>> There was a discussion whether to print warnings about the duplicates. 
I\n>> > personally see not much point in doing that - if we consider duplicates\n>> > to be expected, and if ALTER SYSTEM has the license to rework the config\n>> > file any way it wants, why warn about it?\n>>\n>> Personally I agree that warnings are unnecessary.\n>\n>\n>And at least Magnus and I disagree with that, as I recall from this\n>thread. Let’s have a clean and clear way to modify the auto.conf and have\n>everything that touches the file update it in a consistent way.\n>\n\nWell, I personally don't feel very strongly about it. I think the\nwarnings will be a nuisance bothering people with expected stuff, but I'm\nnot willing to fight against it.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 3 Aug 2019 01:00:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-08-02 18:47:07 -0400, Tom Lane wrote:\n>> I don't find that to be necessary or even desirable. Many (most?) of the\n>> situations where this would be important wouldn't have access to a running\n>> backend, and maybe not to any PG code at all --- what if your tool isn't\n>> written in C?\n\n> I think a commandline tool to perform the equivalent of ALTER SYSTEM on\n> a shutdown cluster would be a great tool.\n\nPerhaps, but ...\n\n> Obviously this is widely out of scope for v12.\n\n... this. It's entirely insane to think we're going to produce any such\nthing for v12 (much less back-patch it into prior versions). In the short\nterm I don't think there's any workable alternative except to decree that\n\"just append to the end\" is a supported way to alter pg.auto.conf.\n\nBut, as you said, it's also not sane for ALTER SYSTEM to behave that way,\nbecause it won't cope for long with repetitive modifications. I think\nwe can get away with the \"just append\" recommendation for most external\ndrivers because they won't be doing that. If they are, they'll need to\nbe smarter, and maybe some command-line tool would make their lives\nsimpler down the line. But we aren't providing that in this cycle.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 19:05:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I think a commandline tool to perform the equivalent of ALTER SYSTEM on\n>> a shutdown cluster would be a great tool.\n\n> Perhaps, but ...\n\n>> Obviously this is widely out of scope for v12.\n\n> ... this.\n\nAlthough, there's always\n\necho \"alter system set work_mem = 4242;\" | postgres --single\n\nMaybe we could recommend that to tools that need to do\npotentially-repetitive modifications?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Aug 2019 19:09:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/3/19 8:09 AM, Tom Lane wrote:\n> I wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I think a commandline tool to perform the equivalent of ALTER SYSTEM on\n>>> a shutdown cluster would be a great tool.\n> \n>> Perhaps, but ...\n> \n>>> Obviously this is widely out of scope for v12.\n> \n>> ... this.\n> \n> Although, there's always\n> \n> echo \"alter system set work_mem = 4242;\" | postgres --single\n> \n> Maybe we could recommend that to tools that need to do\n> potentially-repetitive modifications?\n\nThe slight problem with that, particularly with the use-case\nI am concerned with (writing replication configuration), is:\n\n [2019-08-03 08:14:21 JST] FATAL: 0A000: standby mode is not supported by single-user servers\n\n(I may be missing something obvious of course)\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 3 Aug 2019 08:18:00 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/3/19 7:56 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-08-02 18:47:07 -0400, Tom Lane wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>>> I disagree that this should only be addressed in alter system, as I’ve said\n>>> before and as others have agreed with. Having one set of code that can be\n>>> used to update parameters in the auto.conf and then have that be used by\n>>> pg_basebackup, alter system, and external tools, is the right approach.\n>>\n>> I don't find that to be necessary or even desirable. Many (most?) of the\n>> situations where this would be important wouldn't have access to a running\n>> backend, and maybe not to any PG code at all --- what if your tool isn't\n>> written in C?\n> \n> I think a commandline tool to perform the equivalent of ALTER SYSTEM on\n> a shutdown cluster would be a great tool. It's easy enough to add\n> something with broken syntax, and further down the road such a tool\n> could not only ensure the syntax is correct, but also validate\n> individual settings as much as possible (obviously there's some hairy\n> issues here).\n\nWhat I came up with shoehorned a stripped-down version of the backend\nconfig parser into fe_utils and provides a function to modify pg.auto.conf\nin much the same way ALTER SYSTEM does, but with only the basic syntax\nchecking provided by the parser of course. And for completeness a\nclient utility which can be called by scripts etc.\n\nI can clean it up and submit it later for reference (got distracted by other things\nrecently) though I don't think it's a particularly good solution due to the\nlack of actual checks for the provided GUCSs (and the implementation\nis ugly anyway); something like what Andres suggests below would be far better.\n\n> Quite possibly the most realistic way to implement something like that\n> would be a postgres commandline switch, which'd start up far enough to\n> perform GUC checks and execute AlterSystem(), and then shut down\n> again. 
We already have -C, I think such an option could reasonably be\n> implemented alongside it.\n> \n> Obviously this is widely out of scope for v12.\n\n\nRegards\n\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 3 Aug 2019 08:22:29 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-03 08:22:29 +0900, Ian Barwick wrote:\n> What I came up with shoehorned a stripped-down version of the backend\n> config parser into fe_utils and provides a function to modify pg.auto.conf\n> in much the same way ALTER SYSTEM does, but with only the basic syntax\n> checking provided by the parser of course. And for completeness a\n> client utility which can be called by scripts etc.\n\n> I can clean it up and submit it later for reference (got distracted by other things\n> recently) though I don't think it's a particularly good solution due to the\n> lack of actual checks for the provided GUCSs (and the implementation\n> is ugly anyway); something like what Andres suggests below would be far better.\n\nI think my main problem with that is that it duplicates a nontrivial\namount of code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 16:24:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/3/19 8:24 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-08-03 08:22:29 +0900, Ian Barwick wrote:\n>> What I came up with shoehorned a stripped-down version of the backend\n>> config parser into fe_utils and provides a function to modify pg.auto.conf\n>> in much the same way ALTER SYSTEM does, but with only the basic syntax\n>> checking provided by the parser of course. And for completeness a\n>> client utility which can be called by scripts etc.\n> \n>> I can clean it up and submit it later for reference (got distracted by other things\n>> recently) though I don't think it's a particularly good solution due to the\n>> lack of actual checks for the provided GUCSs (and the implementation\n>> is ugly anyway); something like what Andres suggests below would be far better.\n> \n> I think my main problem with that is that it duplicates a nontrivial\n> amount of code.\n\nThat is indeed part of the ugliness of the implementation.\n\n\nRegards\n\nIan Barwick\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 3 Aug 2019 08:36:13 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Aug 2, 2019 at 19:36 Ian Barwick <ian.barwick@2ndquadrant.com>\nwrote:\n\n> On 8/3/19 8:24 AM, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2019-08-03 08:22:29 +0900, Ian Barwick wrote:\n> >> What I came up with shoehorned a stripped-down version of the backend\n> >> config parser into fe_utils and provides a function to modify\n> pg.auto.conf\n> >> in much the same way ALTER SYSTEM does, but with only the basic syntax\n> >> checking provided by the parser of course. And for completeness a\n> >> client utility which can be called by scripts etc.\n> >\n> >> I can clean it up and submit it later for reference (got distracted by\n> other things\n> >> recently) though I don't think it's a particularly good solution due to\n> the\n> >> lack of actual checks for the provided GUCSs (and the implementation\n> >> is ugly anyway); something like what Andres suggests below would be far\n> better.\n> >\n> > I think my main problem with that is that it duplicates a nontrivial\n> > amount of code.\n>\n> That is indeed part of the ugliness of the implementation.\n\n\nI agree that duplicate code isn’t good- the goal would be to eliminate the\nduplication by having it be common code instead of duplicated. We have\nother code that’s common to the frontend and backend and I don’t doubt that\nwe will have more going forward...\n\nThanks,\n\nStephen\n\n>\n\nGreetings,On Fri, Aug 2, 2019 at 19:36 Ian Barwick <ian.barwick@2ndquadrant.com> wrote:On 8/3/19 8:24 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2019-08-03 08:22:29 +0900, Ian Barwick wrote:\n>> What I came up with shoehorned a stripped-down version of the backend\n>> config parser into fe_utils and provides a function to modify pg.auto.conf\n>> in much the same way ALTER SYSTEM does, but with only the basic syntax\n>> checking provided by the parser of course. 
And for completeness a\n>> client utility which can be called by scripts etc.\n> \n>> I can clean it up and submit it later for reference (got distracted by other things\n>> recently) though I don't think it's a particularly good solution due to the\n>> lack of actual checks for the provided GUCSs (and the implementation\n>> is ugly anyway); something like what Andres suggests below would be far better.\n> \n> I think my main problem with that is that it duplicates a nontrivial\n> amount of code.\n\nThat is indeed part of the ugliness of the implementation.I agree that duplicate code isn’t good- the goal would be to eliminate the duplication by having it be common code instead of duplicated. We have other code that’s common to the frontend and backend and I don’t doubt that we will have more going forward...Thanks,Stephen",
"msg_date": "Fri, 2 Aug 2019 20:13:49 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Aug 2, 2019 at 18:47 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> The proposal seems to be to run through the .auto.conf file, remove any\n> >>> duplicates, and append the new entry at the end. That seems reasonable.\n>\n> >> +1\n>\n> > I disagree that this should only be addressed in alter system, as I’ve\n> said\n> > before and as others have agreed with. Having one set of code that can\n> be\n> > used to update parameters in the auto.conf and then have that be used by\n> > pg_basebackup, alter system, and external tools, is the right approach.\n>\n> I don't find that to be necessary or even desirable. Many (most?) of the\n> situations where this would be important wouldn't have access to a running\n> backend, and maybe not to any PG code at all --- what if your tool isn't\n> written in C?\n\n\nWhat if you want to access PG and your tool isn’t written in C? You use a\nmodule, extension, package, whatever, that provides the glue between what\nyour language wants and what the C library provides. There’s psycopg2 for\npython, DBD::Pg for Perl, et al, and they use libpq. There’s languages that\nlike to write their own too, like the JDBC driver, the Golang driver, but\nthat doesn’t mean we shouldn’t provide libpq or that non-C tools can’t\nleverage libpq. This argument is just not sensible.\n\nI agree entirely that we want to be able to modify auto.conf without having\nPG running (and without using single mode, bleh, that’s horrid..). I think\nwe can accept that there we can’t necessarily *validate* that every option\nis acceptable but that’s not the same as being able to simply parse the\nfile and modify a value.\n\nI think it's perfectly fine to say that external tools need only append\n> to the file, which will require no special tooling. 
But then we need\n> ALTER SYSTEM to be willing to clean out duplicates, if only so you don't\n> run out of disk space after awhile.\n\n\nUh, if you don’t ever run ALTER SYSTEM then you could also “run out of disk\nspace” due to external tools modifying by just adding to the file.\n\nPersonally, I don’t buy the “run out of disk space” argument but if we are\ngoing to go there then we should apply it appropriately.\n\nHaving the history of changes to auto.conf would actually be quite useful,\nimv, and worth a bit of disk space (heck, it’s not exactly uncommon for\npeople to keep their config files in git repos..). I’d suggest we also\ninclude the date/time of when the modification was made.\n\nThanks,\n\nStephen\n\nGreetings,On Fri, Aug 2, 2019 at 18:47 Tom Lane <tgl@sss.pgh.pa.us> wrote:Stephen Frost <sfrost@snowman.net> writes:\n> On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> The proposal seems to be to run through the .auto.conf file, remove any\n>>> duplicates, and append the new entry at the end. That seems reasonable.\n\n>> +1\n\n> I disagree that this should only be addressed in alter system, as I’ve said\n> before and as others have agreed with. Having one set of code that can be\n> used to update parameters in the auto.conf and then have that be used by\n> pg_basebackup, alter system, and external tools, is the right approach.\n\nI don't find that to be necessary or even desirable. Many (most?) of the\nsituations where this would be important wouldn't have access to a running\nbackend, and maybe not to any PG code at all --- what if your tool isn't\nwritten in C?What if you want to access PG and your tool isn’t written in C? You use a module, extension, package, whatever, that provides the glue between what your language wants and what the C library provides. There’s psycopg2 for python, DBD::Pg for Perl, et al, and they use libpq. 
There’s languages that like to write their own too, like the JDBC driver, the Golang driver, but that doesn’t mean we shouldn’t provide libpq or that non-C tools can’t leverage libpq. This argument is just not sensible.I agree entirely that we want to be able to modify auto.conf without having PG running (and without using single mode, bleh, that’s horrid..). I think we can accept that there we can’t necessarily *validate* that every option is acceptable but that’s not the same as being able to simply parse the file and modify a value.\nI think it's perfectly fine to say that external tools need only append\nto the file, which will require no special tooling. But then we need\nALTER SYSTEM to be willing to clean out duplicates, if only so you don't\nrun out of disk space after awhile.Uh, if you don’t ever run ALTER SYSTEM then you could also “run out of disk space” due to external tools modifying by just adding to the file.Personally, I don’t buy the “run out of disk space” argument but if we are going to go there then we should apply it appropriately.Having the history of changes to auto.conf would actually be quite useful, imv, and worth a bit of disk space (heck, it’s not exactly uncommon for people to keep their config files in git repos..). I’d suggest we also include the date/time of when the modification was made.Thanks,Stephen",
"msg_date": "Fri, 2 Aug 2019 20:27:25 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
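Stephen's append-only scheme, including his suggested modification timestamp, might be sketched like this. The function names are invented for illustration; the only rule borrowed from PostgreSQL itself is the single-quote-doubling convention postgresql.conf uses for string values:

```python
from datetime import datetime, timezone

def auto_conf_entry(name, value, when=None):
    """Render one postgresql.auto.conf entry preceded by a timestamp
    comment (Stephen's date/time suggestion). Embedded single quotes
    are doubled, mirroring the postgresql.conf quoting convention."""
    when = when or datetime.now(timezone.utc)
    quoted = "'" + value.replace("'", "''") + "'"
    return f"# modified {when.isoformat()}\n{name} = {quoted}\n"

def append_setting(path, name, value):
    """Append-only update: the tool never rewrites existing lines, so the
    last occurrence of a setting is the effective one when the server
    reads the file."""
    with open(path, "a") as f:
        f.write(auto_conf_entry(name, value))
```

The appeal of this shape is that a shell one-liner can do the same job; the open question in the thread is who eventually removes the superseded lines.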
{
"msg_contents": "Hi,\n\nOn 2019-08-02 20:27:25 -0400, Stephen Frost wrote:\n> On Fri, Aug 2, 2019 at 18:47 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>> The proposal seems to be to run through the .auto.conf file, remove any\n> > >>> duplicates, and append the new entry at the end. That seems reasonable.\n> >\n> > >> +1\n> >\n> > > I disagree that this should only be addressed in alter system, as I’ve\n> > said\n> > > before and as others have agreed with. Having one set of code that can\n> > be\n> > > used to update parameters in the auto.conf and then have that be used by\n> > > pg_basebackup, alter system, and external tools, is the right approach.\n> >\n> > I don't find that to be necessary or even desirable. Many (most?) of the\n> > situations where this would be important wouldn't have access to a running\n> > backend, and maybe not to any PG code at all --- what if your tool isn't\n> > written in C?\n>\n>\n> What if you want to access PG and your tool isn’t written in C? You use a\n> module, extension, package, whatever, that provides the glue between what\n> your language wants and what the C library provides. There’s psycopg2 for\n> python, DBD::Pg for Perl, et al, and they use libpq. There’s languages that\n> like to write their own too, like the JDBC driver, the Golang driver, but\n> that doesn’t mean we shouldn’t provide libpq or that non-C tools can’t\n> leverage libpq. This argument is just not sensible.\n\nOh, comeon. Are you seriously suggesting that a few commands to add a a\nnew config setting to postgresql.auto.conf will cause a lot of people to\nwrite wrappers around $new_config_library in their language of choice,\nbecause they did the same for libpq? 
And that we should design such a\nlibrary, for v12?\n\n\n> I think it's perfectly fine to say that external tools need only append\n> > to the file, which will require no special tooling. But then we need\n> > ALTER SYSTEM to be willing to clean out duplicates, if only so you don't\n> > run out of disk space after awhile.\n\n> Uh, if you don’t ever run ALTER SYSTEM then you could also “run out of disk\n> space” due to external tools modifying by just adding to the file.\n\nThat was commented upon in the emails you're replying to? It seems\nhardly likely that you'd get enough config entries to make that\nproblematic while postgres is not running. While running it's a\ndifferent story.\n\n\n> Personally, I don’t buy the “run out of disk space” argument but if we are\n> going to go there then we should apply it appropriately.\n>\n> Having the history of changes to auto.conf would actually be quite useful,\n> imv, and worth a bit of disk space (heck, it’s not exactly uncommon for\n> people to keep their config files in git repos..). I’d suggest we also\n> include the date/time of when the modification was made.\n\nThat just seems like an entirely different project. It seems blindingly\nobvious that we can't keep the entire history in the file that we're\ngoing to be parsing on a regular basis. Having some form of config\nhistory tracking might be interesting, but I think it's utterly and\ncompletely independent from what we need to fix for v12.\n\nIt seems pretty clear that there's more people disagreeing with your\nposition than agreeing with you. Because of this conflict there's not\nbeen progress on this for weeks. I think it's beyond time that we just\ndo the minimal thing for v12, and then continue from there in v13.\n\n- Andres\n\n\n",
"msg_date": "Fri, 2 Aug 2019 17:45:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Fri, Aug 2, 2019 at 20:46 Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-08-02 20:27:25 -0400, Stephen Frost wrote:\n> > On Fri, Aug 2, 2019 at 18:47 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Stephen Frost <sfrost@snowman.net> writes:\n> > > > On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >>> The proposal seems to be to run through the .auto.conf file,\n> remove any\n> > > >>> duplicates, and append the new entry at the end. That seems\n> reasonable.\n> > >\n> > > >> +1\n> > >\n> > > > I disagree that this should only be addressed in alter system, as\n> I’ve\n> > > said\n> > > > before and as others have agreed with. Having one set of code that\n> can\n> > > be\n> > > > used to update parameters in the auto.conf and then have that be\n> used by\n> > > > pg_basebackup, alter system, and external tools, is the right\n> approach.\n> > >\n> > > I don't find that to be necessary or even desirable. Many (most?) of\n> the\n> > > situations where this would be important wouldn't have access to a\n> running\n> > > backend, and maybe not to any PG code at all --- what if your tool\n> isn't\n> > > written in C?\n> >\n> >\n> > What if you want to access PG and your tool isn’t written in C? You use\n> a\n> > module, extension, package, whatever, that provides the glue between what\n> > your language wants and what the C library provides. There’s psycopg2\n> for\n> > python, DBD::Pg for Perl, et al, and they use libpq. There’s languages\n> that\n> > like to write their own too, like the JDBC driver, the Golang driver, but\n> > that doesn’t mean we shouldn’t provide libpq or that non-C tools can’t\n> > leverage libpq. This argument is just not sensible.\n>\n> Oh, comeon. 
Are you seriously suggesting that a few commands to add a a\n> new config setting to postgresql.auto.conf will cause a lot of people to\n> write wrappers around $new_config_library in their language of choice,\n> because they did the same for libpq? And that we should design such a\n> library, for v12?\n\n\nNo, I’m saying that we already *have* a library and we can add a few\nfunctions to it and if people want to leverage those functions then they\ncan write glue code to do so, just like was done with libpq. The argument\nthat “we shouldn’t put code into the common library because only tools\nwritten in C can use the common library” is what I was specifically taking\nexception with and your response doesn’t change my opinion of that argument\none bit.\n\n> I think it's perfectly fine to say that external tools need only append\n> > > to the file, which will require no special tooling. But then we need\n> > > ALTER SYSTEM to be willing to clean out duplicates, if only so you\n> don't\n> > > run out of disk space after awhile.\n>\n> > Uh, if you don’t ever run ALTER SYSTEM then you could also “run out of\n> disk\n> > space” due to external tools modifying by just adding to the file.\n>\n> That was commented upon in the emails you're replying to? It seems\n> hardly likely that you'd get enough config entries to make that\n> problematic while postgres is not running. 
While running it's a\n> different story.\n\n\nApparently I don’t have the experiences that you do as I’ve not seen a lot\nof systems which are constantly rewriting the conf file to the point where\nkeeping the versions would be likely to add up to anything interesting.\n\nDesigning the system around “well, we don’t think you’ll modify the file\nvery much from an external tool, so we just won’t worry about it, but if\nyou use alter system then we will clean things up” certainly doesn’t strike\nme as terribly principled.\n\n> Personally, I don’t buy the “run out of disk space” argument but if we are\n> > going to go there then we should apply it appropriately.\n> >\n> > Having the history of changes to auto.conf would actually be quite\n> useful,\n> > imv, and worth a bit of disk space (heck, it’s not exactly uncommon for\n> > people to keep their config files in git repos..). I’d suggest we also\n> > include the date/time of when the modification was made.\n>\n> That just seems like an entirely different project. It seems blindlingly\n> obvious that we can't keep the entire history in the file that we're\n> going to be parsing on a regular basis. Having some form of config\n> history tracking might be interesting, but I think it's utterly and\n> completely independent from what we need to fix for v12.\n\n\nWe don’t parse the file on anything like a “regular” basis.\n\nThanks,\n\nStephen\n\nGreetings,On Fri, Aug 2, 2019 at 20:46 Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-08-02 20:27:25 -0400, Stephen Frost wrote:\n> On Fri, Aug 2, 2019 at 18:47 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Stephen Frost <sfrost@snowman.net> writes:\n> > > On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>> The proposal seems to be to run through the .auto.conf file, remove any\n> > >>> duplicates, and append the new entry at the end. 
That seems reasonable.\n> >\n> > >> +1\n> >\n> > > I disagree that this should only be addressed in alter system, as I’ve\n> > said\n> > > before and as others have agreed with. Having one set of code that can\n> > be\n> > > used to update parameters in the auto.conf and then have that be used by\n> > > pg_basebackup, alter system, and external tools, is the right approach.\n> >\n> > I don't find that to be necessary or even desirable. Many (most?) of the\n> > situations where this would be important wouldn't have access to a running\n> > backend, and maybe not to any PG code at all --- what if your tool isn't\n> > written in C?\n>\n>\n> What if you want to access PG and your tool isn’t written in C? You use a\n> module, extension, package, whatever, that provides the glue between what\n> your language wants and what the C library provides. There’s psycopg2 for\n> python, DBD::Pg for Perl, et al, and they use libpq. There’s languages that\n> like to write their own too, like the JDBC driver, the Golang driver, but\n> that doesn’t mean we shouldn’t provide libpq or that non-C tools can’t\n> leverage libpq. This argument is just not sensible.\n\nOh, comeon. Are you seriously suggesting that a few commands to add a a\nnew config setting to postgresql.auto.conf will cause a lot of people to\nwrite wrappers around $new_config_library in their language of choice,\nbecause they did the same for libpq? And that we should design such a\nlibrary, for v12?No, I’m saying that we already *have* a library and we can add a few functions to it and if people want to leverage those functions then they can write glue code to do so, just like was done with libpq. The argument that “we shouldn’t put code into the common library because only tools written in C can use the common library” is what I was specifically taking exception with and your response doesn’t change my opinion of that argument one bit. 
\n> I think it's perfectly fine to say that external tools need only append\n> > to the file, which will require no special tooling. But then we need\n> > ALTER SYSTEM to be willing to clean out duplicates, if only so you don't\n> > run out of disk space after awhile.\n\n> Uh, if you don’t ever run ALTER SYSTEM then you could also “run out of disk\n> space” due to external tools modifying by just adding to the file.\n\nThat was commented upon in the emails you're replying to? It seems\nhardly likely that you'd get enough config entries to make that\nproblematic while postgres is not running. While running it's a\ndifferent story.Apparently I don’t have the experiences that you do as I’ve not seen a lot of systems which are constantly rewriting the conf file to the point where keeping the versions would be likely to add up to anything interesting.Designing the system around “well, we don’t think you’ll modify the file very much from an external tool, so we just won’t worry about it, but if you use alter system then we will clean things up” certainly doesn’t strike me as terribly principled.\n> Personally, I don’t buy the “run out of disk space” argument but if we are\n> going to go there then we should apply it appropriately.\n>\n> Having the history of changes to auto.conf would actually be quite useful,\n> imv, and worth a bit of disk space (heck, it’s not exactly uncommon for\n> people to keep their config files in git repos..). I’d suggest we also\n> include the date/time of when the modification was made.\n\nThat just seems like an entirely different project. It seems blindlingly\nobvious that we can't keep the entire history in the file that we're\ngoing to be parsing on a regular basis. Having some form of config\nhistory tracking might be interesting, but I think it's utterly and\ncompletely independent from what we need to fix for v12.We don’t parse the file on anything like a “regular” basis.Thanks,Stephen",
"msg_date": "Fri, 2 Aug 2019 20:57:20 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-02 20:57:20 -0400, Stephen Frost wrote:\n> No, I’m saying that we already *have* a library and we can add a few\n> functions to it and if people want to leverage those functions then they\n> can write glue code to do so, just like was done with libpq. The argument\n> that “we shouldn’t put code into the common library because only tools\n> written in C can use the common library” is what I was specifically taking\n> exception with and your response doesn’t change my opinion of that argument\n> one bit.\n\nWait, which library is this? And which code is suitable for being put in\na library right now?\n\nWe're WAY WAY past feature freeze. This isn't the time to rewrite guc.c,\nguc-file.l to be suitable for running outside of a backend environment.\n\n\n\n> Apparently I don’t have the experiences that you do as I’ve not seen a lot\n> of systems which are constantly rewriting the conf file to the point where\n> keeping the versions would be likely to add up to anything interesting.\n\nShrug. I've e.g. seen people continuously (every few minutes or so)\nchange autovacuum settings depending on load and observed response\ntimes. Which isn't even a crazy thing to do.\n\n\n> Designing the system around “well, we don’t think you’ll modify the file\n> very much from an external tool, so we just won’t worry about it, but if\n> you use alter system then we will clean things up” certainly doesn’t strike\n> me as terribly principled.\n\nWell. You shouldn't change postgresql.conf.auto while the server is\nrunning, for fairly obvious reasons. Therefore external tools not using\nALTER SYSTEM only make sense when the server is not running. And I don't\nthink it's a crazy to assume that PG servers where you'd regularly\nchange the config are running most of the time.\n\nAnd again, we're talking about v12 here. 
I don't think anybody is\narguing that we shouldn't provide library/commandline tools to make\nchanges to postgresql.auto.conf conveniently possible without\nduplicating lines. BUT not for v12, especially not because, as the person\narguing for this, you've not provided a patch providing such a library.\n\n\n> > Personally, I don’t buy the “run out of disk space” argument but if we are\n> > > going to go there then we should apply it appropriately.\n> > >\n> > > Having the history of changes to auto.conf would actually be quite\n> > useful,\n> > > imv, and worth a bit of disk space (heck, it’s not exactly uncommon for\n> > > people to keep their config files in git repos..). I’d suggest we also\n> > > include the date/time of when the modification was made.\n> >\n> > That just seems like an entirely different project. It seems blindlingly\n> > obvious that we can't keep the entire history in the file that we're\n> > going to be parsing on a regular basis. Having some form of config\n> > history tracking might be interesting, but I think it's utterly and\n> > completely independent from what we need to fix for v12.\n\n> We don’t parse the file on anything like a “regular” basis.\n\nWell, every time somebody does pg_reload_conf(), which for systems that\ndo frequent ALTER SYSTEMs, is kinda frequent too...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 18:08:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Fri, Aug 02, 2019 at 06:08:02PM -0700, Andres Freund wrote:\n>Hi,\n>\n>On 2019-08-02 20:57:20 -0400, Stephen Frost wrote:\n>> No, I’m saying that we already *have* a library and we can add a few\n>> functions to it and if people want to leverage those functions then they\n>> can write glue code to do so, just like was done with libpq. The argument\n>> that “we shouldn’t put code into the common library because only tools\n>> written in C can use the common library” is what I was specifically taking\n>> exception with and your response doesn’t change my opinion of that argument\n>> one bit.\n>\n>Wait, which library is this? And which code is suitable for being put in\n>a library right now?\n>\n>We're WAY WAY past feature freeze. This isn't the time to rewrite guc.c,\n>guc-file.l to be suitable for running outside of a backend environment.\n>\n\nRight. And even if we had the code, it's not quite backpatchable (which\nwe probably should do, considering this is a general ALTER SYSTEM issue,\nso not pg12-only).\n\nNot to mention there's no clear consensus this is actually desirable.\nI'd argue forcing external tools (written in arbitrary language) to use\nthis library (written in C), just to modify a \"stupid\" text file is a\nbit overkill. IMO duplicates don't make the file invalid, we should\nhandle that correctly/gracefully, so I don't see why external tools\ncould not simply append to the file. We can deduplicate the file when\nstarting the server, on ALTER SYSTEM, or some other time.\n\nIf we really want to give external tools a sensible (and optional) API\nto access the file, a simple command-line tool seems much better. Say we\nhave something like\n\n pg_config_file -f PATH --set KEY VALUE\n pg_config_file -f PATH --get KEY\n\nto set / query value of an option. I still don't see why we should force\npeople to use that (instead of appending to the file), though. 
Not to\nmention it's way out of pg12 scope.\n\n>\n>\n>> Apparently I don’t have the experiences that you do as I’ve not seen a lot\n>> of systems which are constantly rewriting the conf file to the point where\n>> keeping the versions would be likely to add up to anything interesting.\n>\n>Shrug. I've e.g. seen people continuously (every few minutes or so)\n>change autovacuum settings depending on load and observed response\n>times. Which isn't even a crazy thing to do.\n>\n\nI agree a history of the config values is useful in some cases, but I\nvery much doubt stashing them in the config file is sensible. It gives\nyou pretty much no metadata (like timestamp of the change), certainly\nnot in an easy-to-query way. I've seen people storing that info in a\nmonitoring system (so a timeseries for each autovacuum setting), or we\nmight add a hook to ALTER SYSTEM so that we could feed it somewhere.\n\nBut I see little evidence stashing the changes in a file indefinitely is\na good idea, especially when there's no way to clear old data etc. It\nseems more like a rather artificial use case invented to support the\nidea of keeping the duplicates.\n\n>\n>> Designing the system around “well, we don’t think you’ll modify the file\n>> very much from an external tool, so we just won’t worry about it, but if\n>> you use alter system then we will clean things up” certainly doesn’t strike\n>> me as terribly principled.\n>\n>Well. You shouldn't change postgresql.conf.auto while the server is\n>running, for fairly obvious reasons. Therefore external tools not using\n>ALTER SYSTEM only make sense when the server is not running. And I don't\n>think it's a crazy to assume that PG servers where you'd regularly\n>change the config are running most of the time.\n>\n\nRight.\n\n>And again, we're talking about v12 here. 
I don't think anybody is\n>arguing that we shouldn't provide library/commandline tools to make make\n>changes to postgresql.auto.conf conveniently possible without\n>duplicating lines. BUT not for v12, especially not because as the person\n>arguing for this, you've not provided a patch providing such a library.\n>\n\n+1 million here\n\n>\n>> > Personally, I don’t buy the “run out of disk space” argument but if we are\n>> > > going to go there then we should apply it appropriately.\n>> > >\n>> > > Having the history of changes to auto.conf would actually be quite\n>> > useful,\n>> > > imv, and worth a bit of disk space (heck, it’s not exactly uncommon for\n>> > > people to keep their config files in git repos..). I’d suggest we also\n>> > > include the date/time of when the modification was made.\n>> >\n>> > That just seems like an entirely different project. It seems blindlingly\n>> > obvious that we can't keep the entire history in the file that we're\n>> > going to be parsing on a regular basis. Having some form of config\n>> > history tracking might be interesting, but I think it's utterly and\n>> > completely independent from what we need to fix for v12.\n>\n>> We don’t parse the file on anything like a “regular” basis.\n>\n>Well, everytime somebody does pg_reload_conf(), which for systems that\n>do frequent ALTER SYSTEMs, is kinda frequent too...\n>\n\nRight.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 3 Aug 2019 14:41:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
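A minimal sketch of the hypothetical `pg_config_file --set/--get` tool Tomas describes, assuming last-occurrence-wins semantics when reading (matching how the server applies duplicates) and sweep-then-append when writing. Nothing here is actual PostgreSQL code; the tool name, flags, and helpers are taken from or invented around his example:

```python
import argparse

def get_value(lines, key):
    """Last occurrence wins, matching how the server applies duplicates."""
    value = None
    for line in lines:
        name, sep, rest = line.partition("=")
        if sep and name.strip() == key:
            value = rest.strip().strip("'")
    return value

def set_value(lines, key, value):
    """Drop every old occurrence of key, then append the new value."""
    kept = [l for l in lines if l.partition("=")[0].strip() != key]
    kept.append(f"{key} = '{value}'")
    return kept

def main(argv=None):
    p = argparse.ArgumentParser(prog="pg_config_file")
    p.add_argument("-f", dest="path", required=True)
    g = p.add_mutually_exclusive_group(required=True)
    g.add_argument("--get", metavar="KEY")
    g.add_argument("--set", nargs=2, metavar=("KEY", "VALUE"))
    args = p.parse_args(argv)
    with open(args.path) as f:
        lines = f.read().splitlines()
    if args.get:
        print(get_value(lines, args.get) or "")
    else:
        with open(args.path, "w") as f:
            f.write("\n".join(set_value(lines, *args.set)) + "\n")

if __name__ == "__main__":
    main()
```

As Tomas notes, such a tool would be optional: plain appending must keep working for tools that cannot or do not want to use it.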
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Aug 02, 2019 at 06:08:02PM -0700, Andres Freund wrote:\n>> We're WAY WAY past feature freeze. This isn't the time to rewrite guc.c,\n>> guc-file.l to be suitable for running outside of a backend environment.\n\n> Right. And even if we had the code, it's not quite backpatchable (which\n> we probably should do, considering this is a general ALTER SYSTEM issue,\n> so not pg12-only).\n\n> Not to mention there's no clear consensus this is actually desirable.\n> I'd argue forcing external tools (written in arbitrary language) to use\n> this library (written in C), just to modify a \"stupid\" text file is a\n> bit overkill. IMO duplicates don't make the file invalid, we should\n> handle that correctly/gracefully, so I don't see why external tools\n> could not simply append to the file. We can deduplicate the file when\n> starting the server, on ALTER SYSTEM, or some other time.\n\nYup. I'd also point out that even if we had a command-line tool of this\nsort, there would be scenarios where it's not practical or not convenient\nto use. We need not go further than \"my tool needs to work with existing\nPG releases\" to think of good examples.\n\nI think we should just accept the facts on the ground, which are that\nsome tools modify pg.auto.conf by appending to it, and say that that's\nsupported as long as the file doesn't get unreasonably long.\n\nI'm not at all on board with inventing a requirement for pg.auto.conf\nto track its modification history. I don't buy that that's a\nwidespread need in the first place; if I did buy it, that file\nitself is not where to keep the history; and in any case, it'd be\na new feature and it's way too late for v12.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Aug 2019 12:59:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> On 8/3/19 7:27 AM, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> The main issue however is that no code was written yet.\n\n>> Seems like it ought to be relatively simple ... but I didn't look.\n\n> The patch I originally sent does exactly this.\n\nAh, you did send a patch, but that tries to maintain the existing behavior\nof replacing the last occurrence in-place. I think it's simpler and more\nsensible to just make a sweep to delete all matches, and then append the\nnew setting (if any) at the end, as attached.\n\nA more aggressive patch would try to de-duplicate the entire list, not\njust the current target entry ... but I'm not really excited about doing\nthat in a back-patchable bug fix.\n\nI looked at the TAP test you proposed and couldn't quite convince myself\nthat it was worth the trouble. A new test within an existing suite\nwould likely be fine, but a whole new src/test/ subdirectory just for\npg.auto.conf seems a bit much. (Note that the buildfarm and possibly\nthe MSVC scripts would have to be taught about each such subdirectory.)\nAt the same time, we lack any better place to put such a test :-(.\nMaybe it's time for a \"miscellaneous TAP tests\" subdirectory?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 03 Aug 2019 15:13:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
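The rewrite rule Tom describes above — sweep out every line that sets the target parameter, then append the new setting (if any) at the end — can be sketched roughly as follows. This is a hypothetical Python helper for illustration only; the actual fix is C code inside the server, which this does not reproduce:

```python
import re

def set_auto_conf(lines, name, value=None):
    """Delete every line setting `name`, then append the new
    setting at the end; value=None means just remove it (reset)."""
    setter = re.compile(r'^\s*' + re.escape(name) + r'\s*=')
    kept = [line for line in lines if not setter.match(line)]
    if value is not None:
        kept.append(f"{name} = '{value}'")
    return kept

# Duplicates (e.g. one appended by a backup tool) are all swept out
# before the new value is appended:
conf = ["shared_buffers = '128MB'",
        "primary_conninfo = 'host=old'",
        "primary_conninfo = 'host=new'"]
conf = set_auto_conf(conf, "primary_conninfo", "host=final")
```

Because every match is removed before the append, the file can never accumulate more than one live entry per parameter, no matter how many duplicates a tool appended beforehand.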
{
"msg_contents": "On 8/4/19 4:13 AM, Tom Lane wrote:\n> Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n>> On 8/3/19 7:27 AM, Tom Lane wrote:\n>>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>>> The main issue however is that no code was written yet.\n> \n>>> Seems like it ought to be relatively simple ... but I didn't look.\n> \n>> The patch I originally sent does exactly this.\n> \n> Ah, you did send a patch, but that tries to maintain the existing behavior\n> of replacing the last occurrence in-place. I think it's simpler and more\n> sensible to just make a sweep to delete all matches, and then append the\n> new setting (if any) at the end, as attached.\n\nYes, that is less convoluted.\n\n> A more aggressive patch would try to de-duplicate the entire list, not\n> just the current target entry ... but I'm not really excited about doing\n> that in a back-patchable bug fix.\n\nI thought about doing that but it's more of a nice-to-have and not essential\nto fix the issue, as any other duplicate entries will get removed the next\ntime ALTER SYSTEM is run on the entry in question. Maybe as part of a future\nimprovement.\n\n> I looked at the TAP test you proposed and couldn't quite convince myself\n> that it was worth the trouble. A new test within an existing suite\n> would likely be fine, but a whole new src/test/ subdirectory just for\n> pg.auto.conf seems a bit much. (Note that the buildfarm and possibly\n> the MSVC scripts would have to be taught about each such subdirectory.)\n\nDidn't know that, but couldn't find anywhere obvious to put the test.\n\n> At the same time, we lack any better place to put such a test :-(.\n> Maybe it's time for a \"miscellaneous TAP tests\" subdirectory?\n\nSounds reasonable.\n\n\nRegards\n\nIan Barwick\n\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Mon, 5 Aug 2019 15:42:30 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/4/19 1:59 AM, Tom Lane wrote:> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n >> On Fri, Aug 02, 2019 at 06:08:02PM -0700, Andres Freund wrote:\n >>> We're WAY WAY past feature freeze. This isn't the time to rewrite guc.c,\n >>> guc-file.l to be suitable for running outside of a backend environment.\n >\n >> Right. And even if we had the code, it's not quite backpatchable (which\n >> we probably should do, considering this is a general ALTER SYSTEM issue,\n >> so not pg12-only).\n >\n >> Not to mention there's no clear consensus this is actually desirable.\n >> I'd argue forcing external tools (written in arbitrary language) to use\n >> this library (written in C), just to modify a \"stupid\" text file is a\n >> bit overkill. IMO duplicates don't make the file invalid, we should\n >> handle that correctly/gracefully, so I don't see why external tools\n >> could not simply append to the file. We can deduplicate the file when\n >> starting the server, on ALTER SYSTEM, or some other time.\n >\n > Yup. I'd also point out that even if we had a command-line tool of this\n > sort, there would be scenarios where it's not practical or not convenient\n > to use. We need not go further than \"my tool needs to work with existing\n > PG releases\" to think of good examples.\n\nI suspect this hasn't been an issue before, simply because until the removal\nof recovery.conf AFAIK there hasn't been a general use-case where you'd need\nto modify pg.auto.conf while the server is not running. The use-case which now\nexists (i.e. for writing replication configuration) is one where the tool will\nneed to be version-aware anyway (like pg_basebackup is), so I don't see that as\na particular deal-breaker.\n\nBut...\n\n > I think we should just accept the facts on the ground, which are that\n > some tools modify pg.auto.conf by appending to it\n\n+1. 
It's just a text file...\n\n > and say that that's supported as long as the file doesn't get unreasonably long.\n\nAlbeit with the caveat that the server should not be running.\n\nNot sure how you define \"unreasonably long\" though.\n\n > I'm not at all on board with inventing a requirement for pg.auto.conf\n > to track its modification history. I don't buy that that's a\n > widespread need in the first place; if I did buy it, that file\n > itself is not where to keep the history; and in any case, it'd be\n > a new feature and it's way too late for v12.\n\nYeah, that's way outside of the scope of this issue.\n\n\nRegards\n\nIan Barwick\n\n--\n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n",
"msg_date": "Mon, 5 Aug 2019 15:52:07 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Fri, Aug 02, 2019 at 06:38:46PM -0400, Stephen Frost wrote:\n> >On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> >>> There seems to be a consensus that this this not a pg_basebackup issue\n> >>> (i.e. duplicate values don't make the file invalid), and it should be\n> >>> handled in ALTER SYSTEM.\n> >>\n> >>Yeah. I doubt pg_basebackup is the only actor that can create such\n> >>situations.\n> >>\n> >>> The proposal seems to be to run through the .auto.conf file, remove any\n> >>> duplicates, and append the new entry at the end. That seems reasonable.\n> >>\n> >>+1\n> >\n> >I disagree that this should only be addressed in alter system, as I’ve said\n> >before and as others have agreed with. Having one set of code that can be\n> >used to update parameters in the auto.conf and then have that be used by\n> >pg_basebackup, alter system, and external tools, is the right approach.\n> >\n> >The idea that alter system should be the only thing that doesn’t just\n> >append changes to the file is just going to lead to confusion and bugs down\n> >the road.\n> \n> I don't remember any suggestions ALTER SYSTEM should be the only thing\n> that can rewrite the config file, but maybe it's buried somewhere in the\n> thread history. 
The current proposal certainly does not prohibit any\n> external tool from doing so, it just says we should expect duplicates.\n\nThere's an ongoing assumption that's been made that only ALTER SYSTEM\ncould make these changes because nothing else has the full GUC system\nand a running PG instance to validate everything.\n\nThe suggestion that an external tool could do it goes against that.\n\nIf we can't, for whatever reason, work our way towards having code that\nexternal tools could leverage to manage .auto.conf, then if we could at\nleast document what the expectations are and what tools can/can't do\nwith the file, that would put us in a better position than where we are\nnow.\n\nI strongly believe that whatever the rules and expectations are that we\ncome up with, both ALTER SYSTEM and the in-core and external tools\nshould follow them.\n\nIf we say that tools should expect duplicates in the file, then\nALTER SYSTEM should as well, which was the whole issue in the first\nplace- ALTER SYSTEM didn't expect duplicates, but the external tools and\nthe GUC system did.\n\nIf we say that it's acceptable for something to remove duplicate GUC\nentries from the file, keeping the last one, then external tools should\nfeel comfortable doing that too and we should make it clear what\n\"duplicate\" means here and how to identify one.\n\nIf we say it's optional for a tool to remove duplicates, then we should\npoint out the risk of \"running out of disk space\" for tool authors to\nconsider. I don't agree with the idea that tool authors should be asked\nto depend on someone running ALTER SYSTEM to address that risk. If\nthere's a strong feeling that tool authors should be able to depend on\nPG to perform that cleanup for them, then we should use a mechanism to\ndo so which doesn't involve an entirely optional feature.\n\nFor reference, all of the above, while not as cleanly as it could have\nbeen, was addressed with the recovery.conf/recovery.done system. 
Tool\nauthors had a good sense that they could replace that file, and that PG\nwould clean it up at exactly the right moment, and there wasn't this\nugly interaction with ALTER SYSTEM to have to worry about. That none of\nthis was really even discussed or addressed previously even after being\npointed out is really disappointing.\n\nJust to be clear, I brought up this exact concern back in *November*:\n\nhttps://www.postgresql.org/message-id/20181127153405.GX3415%40tamriel.snowman.net\n\nAnd now because this was pushed forward and the concerns that I raised\nignored, we're having to deal with this towards the end of the release\ncycle instead of during normal development. The things we're talking\nabout now and which I'm getting push-back on because of the release\ncycle situation were specifically suggestions I made in the above email\nin November where I also brought up concern that ALTER SYSTEM would be\nconfused by the duplicates- giving external tools guidance on how to\nmodify .auto.conf, or providing them a tool (or library), or both.\n\nNone of this should be coming as a surprise to anyone who was following\nand I feel we should be upset that this was left to such a late point in\nthe release cycle to address these issues.\n\n> >>There was a discussion whether to print warnings about the duplicates. 
I think the\n> warnings will be a nuisance bothering people with expeced stuff, but I'm\n> not willing to fight against it.\n\nI'd be happier with one set of code at least being the recommended\napproach to modifying the file and only one set of code in our codebase\nthat's messing with .auto.conf, so that, hopefully, it's done\nconsistently and properly and in a way that everything agrees on and\nexpects, but if we can't get there due to concerns about where we are in\nthe release cycle, et al, then let's at least document what is\n*supposed* to happen and have our code do so.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 10:21:39 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Here's a radical suggestion: replace postgresql.auto.conf with a directory\ncontaining multiple files. Each file is named after a configuration\nparameter, and its content is the value of the parameter.\n\nSo to remove a special configuration parameter, delete its file. To set it,\nwrite the file, replacing an existing file if it exists.\n\nFor this to work unambiguously we would have to specify an exact,\ncase-sensitive, form of every parameter name that must be used within the\nauto conf directory. I would suggest using the form listed in the\ndocumentation (i.e., lower case, to my knowledge).\n\nIn order to prevent confusing and surprising behaviour, the system should\ncomplain vociferously if it finds a configuration parameter file that is\nnot named after a defined parameter, rather than just ignoring it.\n\nOn Mon, 5 Aug 2019 at 10:21, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> > On Fri, Aug 02, 2019 at 06:38:46PM -0400, Stephen Frost wrote:\n> > >On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> > >>> There seems to be a consensus that this this not a pg_basebackup\n> issue\n> > >>> (i.e. duplicate values don't make the file invalid), and it should be\n> > >>> handled in ALTER SYSTEM.\n> > >>\n> > >>Yeah. I doubt pg_basebackup is the only actor that can create such\n> > >>situations.\n> > >>\n> > >>> The proposal seems to be to run through the .auto.conf file, remove\n> any\n> > >>> duplicates, and append the new entry at the end. That seems\n> reasonable.\n> > >>\n> > >>+1\n> > >\n> > >I disagree that this should only be addressed in alter system, as I’ve\n> said\n> > >before and as others have agreed with. 
Having one set of code that can\n> be\n> > >used to update parameters in the auto.conf and then have that be used by\n> > >pg_basebackup, alter system, and external tools, is the right approach.\n> > >\n> > >The idea that alter system should be the only thing that doesn’t just\n> > >append changes to the file is just going to lead to confusion and bugs\n> down\n> > >the road.\n> >\n> > I don't remember any suggestions ALTER SYSTEM should be the only thing\n> > that can rewrite the config file, but maybe it's buried somewhere in the\n> > thread history. The current proposal certainly does not prohibit any\n> > external tool from doing so, it just says we should expect duplicates.\n>\n> There's an ongoing assumption that's been made that only ALTER SYSTEM\n> could make these changes because nothing else has the full GUC system\n> and a running PG instance to validate everything.\n>\n> The suggestion that an external tool could do it goes against that.\n>\n> If we can't, for whatever reason, work our way towards having code that\n> external tools could leverage to manage .auto.conf, then if we could at\n> least document what the expectations are and what tools can/can't do\n> with the file, that would put us in a better position than where we are\n> now.\n>\n> I strongly believe that whatever the rules and expectations are that we\n> come up with, both ALTER SYSTEM and the in-core and external tools\n> should follow them.\n>\n> If we say to that tools should expect duplicates in the file, then\n> ALTER SYSTEM should as well, which was the whole issue in the first\n> place- ALTER SYSTEM didn't expect duplicates, but the external tools and\n> the GUC system did.\n>\n> If we say that it's acceptable for something to remove duplicate GUC\n> entries from the file, keeping the last one, then external tools should\n> feel comfortable doing that too and we should make it clear what\n> \"duplicate\" means here and how to identify one.\n>\n> If we say it's optional for a tool 
to remove duplicates, then we should\n> point out the risk of \"running out of disk space\" for tool authors to\n> consider. I don't agree with the idea that tool authors should be asked\n> to depend on someone running ALTER SYSTEM to address that risk. If\n> there's a strong feeling that tool authors should be able to depend on\n> PG to perform that cleanup for them, then we should use a mechanism to\n> do so which doesn't involve an entirely optional feature.\n>\n> For reference, all of the above, while not as cleanly as it could have\n> been, was addressed with the recovery.conf/recovery.done system. Tool\n> authors had a good sense that they could replace that file, and that PG\n> would clean it up at exactly the right moment, and there wasn't this\n> ugly interaction with ALTER SYSTEM to have to worry about. That none of\n> this was really even discussed or addressed previously even after being\n> pointed out is really disappointing.\n>\n> Just to be clear, I brought up this exact concern back in *November*:\n>\n>\n> https://www.postgresql.org/message-id/20181127153405.GX3415%40tamriel.snowman.net\n>\n> And now because this was pushed forward and the concerns that I raised\n> ignored, we're having to deal with this towards the end of the release\n> cycle instead of during normal development. 
The things we're talking\n> about now and which I'm getting push-back on because of the release\n> cycle situation were specifically suggestions I made in the above email\n> in November where I also brought up concern that ALTER SYSTEM would be\n> confused by the duplicates- giving external tools guideance on how to\n> modify .auto.conf, or providing them a tool (or library), or both.\n>\n> None of this should be coming as a surprise to anyone who was following\n> and I feel we should be upset that this was left to such a late point in\n> the release cycle to address these issues.\n>\n> > >>There was a discussion whether to print warnings about the duplicates.\n> I\n> > >>> personally see not much point in doing that - if we consider\n> duplicates\n> > >>> to be expected, and if ALTER SYSTEM has the license to rework the\n> config\n> > >>> file any way it wants, why warn about it?\n> > >>\n> > >>Personally I agree that warnings are unnecessary.\n> > >\n> > >And at least Magnus and I disagree with that, as I recall from this\n> > >thread. Let’s have a clean and clear way to modify the auto.conf and\n> have\n> > >everything that touches the file update it in a consistent way.\n> >\n> > Well, I personally don't feel very strongly about it. I think the\n> > warnings will be a nuisance bothering people with expeced stuff, but I'm\n> > not willing to fight against it.\n>\n> I'd be happier with one set of code at least being the recommended\n> approach to modifying the file and only one set of code in our codebase\n> that's messing with .auto.conf, so that, hopefully, it's done\n> consistently and properly and in a way that everything agrees on and\n> expects, but if we can't get there due to concerns about where we are in\n> the release cycle, et al, then let's at least document what is\n> *supposed* to happen and have our code do so.\n>\n> Thanks,\n>\n> Stephen\n>",
"msg_date": "Mon, 5 Aug 2019 10:33:35 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
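Isaac's one-file-per-parameter layout reduces "set", "remove", and "read" to ordinary filesystem operations, which is what makes duplicates impossible by construction. A minimal sketch (hypothetical helpers, not anything PostgreSQL provides; it assumes the lower-case naming rule proposed above):

```python
from pathlib import Path

def write_param(conf_dir, name, value):
    # Setting a parameter replaces its file outright, so a given
    # parameter can never be set twice.
    (Path(conf_dir) / name.lower()).write_text(str(value))

def drop_param(conf_dir, name):
    # Removing a parameter = deleting its file; a missing file is a no-op.
    (Path(conf_dir) / name.lower()).unlink(missing_ok=True)

def read_params(conf_dir):
    # A real server would also validate each file name against known
    # GUCs and complain loudly about unknown ones, per the proposal.
    return {p.name: p.read_text() for p in Path(conf_dir).iterdir()}
```

An external tool run while the server is down needs nothing beyond file writes and deletes, and there is no shared file for ALTER SYSTEM and tools to disagree about.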
{
"msg_contents": "On Mon, Aug 05, 2019 at 10:21:39AM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> On Fri, Aug 02, 2019 at 06:38:46PM -0400, Stephen Frost wrote:\n>> >On Fri, Aug 2, 2019 at 18:27 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> >>> There seems to be a consensus that this this not a pg_basebackup issue\n>> >>> (i.e. duplicate values don't make the file invalid), and it should be\n>> >>> handled in ALTER SYSTEM.\n>> >>\n>> >>Yeah. I doubt pg_basebackup is the only actor that can create such\n>> >>situations.\n>> >>\n>> >>> The proposal seems to be to run through the .auto.conf file, remove any\n>> >>> duplicates, and append the new entry at the end. That seems reasonable.\n>> >>\n>> >>+1\n>> >\n>> >I disagree that this should only be addressed in alter system, as I’ve said\n>> >before and as others have agreed with. Having one set of code that can be\n>> >used to update parameters in the auto.conf and then have that be used by\n>> >pg_basebackup, alter system, and external tools, is the right approach.\n>> >\n>> >The idea that alter system should be the only thing that doesn’t just\n>> >append changes to the file is just going to lead to confusion and bugs down\n>> >the road.\n>>\n>> I don't remember any suggestions ALTER SYSTEM should be the only thing\n>> that can rewrite the config file, but maybe it's buried somewhere in the\n>> thread history. 
The current proposal certainly does not prohibit any\n>> external tool from doing so, it just says we should expect duplicates.\n>\n>There's an ongoing assumption that's been made that only ALTER SYSTEM\n>could make these changes because nothing else has the full GUC system\n>and a running PG instance to validate everything.\n>\n>The suggestion that an external tool could do it goes against that.\n>\n>If we can't, for whatever reason, work our way towards having code that\n>external tools could leverage to manage .auto.conf, then if we could at\n>least document what the expectations are and what tools can/can't do\n>with the file, that would put us in a better position than where we are\n>now.\n>\n\nIMO documenting the basic rules, and then doing some cleanup/validation\nat instance start is the only practical solution, really.\n\nYou can't really validate \"everything\" without a running instance,\nbecause that's the only place where you have GUCs defined by extensions.\nI don't see how that could work for external tools, expected to run\nexactly when the instance is not running.\n\nI can't think of a use case where simply appending to the file would not\nbe perfectly sufficient. You can't really do much when the instance is\nnot running.\n\n\n>I strongly believe that whatever the rules and expectations are that we\n>come up with, both ALTER SYSTEM and the in-core and external tools\n>should follow them.\n>\n\nI'm not against giving external tools such capability, in whatever way\nwe think is appropriate (library, command-line binary, ...).\n\nI'm against (a) making that a requirement for the external tools,\ninstead of just allowing them to append to the file, and (b) trying to\ndo that in PG12. 
We're at beta3, we don't even have any patch, and it\ndoesn't quite work for past releases (although it's not that pressing\nthere, thanks to still having recovery.conf).\n\n>If we say that tools should expect duplicates in the file, then\n>ALTER SYSTEM should as well, which was the whole issue in the first\n>place- ALTER SYSTEM didn't expect duplicates, but the external tools and\n>the GUC system did.\n>\n\nSure.\n\n>If we say that it's acceptable for something to remove duplicate GUC\n>entries from the file, keeping the last one, then external tools should\n>feel comfortable doing that too and we should make it clear what\n>\"duplicate\" means here and how to identify one.\n>\n\nSure. I don't see why the external tools would bother with doing that,\nbut I agree there's no reason not to document what duplicates mean.\n\n>If we say it's optional for a tool to remove duplicates, then we should\n>point out the risk of \"running out of disk space\" for tool authors to\n>consider. I don't agree with the idea that tool authors should be asked\n>to depend on someone running ALTER SYSTEM to address that risk. If\n>there's a strong feeling that tool authors should be able to depend on\n>PG to perform that cleanup for them, then we should use a mechanism to\n>do so which doesn't involve an entirely optional feature.\n>\n\nConsidering the external tools are only allowed to modify the file while\nthe instance is not running, and that most instances are running all the\ntime, I very much doubt this is a risk we need to worry about.\n\nAnd I don't see why we'd have to run ALTER SYSTEM - I proposed to do the\ncleanup at instance start too.\n\n>For reference, all of the above, while not as cleanly as it could have\n>been, was addressed with the recovery.conf/recovery.done system. 
Tool\n>authors had a good sense that they could replace that file, and that PG\n>would clean it up at exactly the right moment, and there wasn't this\n>ugly interaction with ALTER SYSTEM to have to worry about. That none of\n>this was really even discussed or addressed previously even after being\n>pointed out is really disappointing.\n>\n>Just to be clear, I brought up this exact concern back in *November*:\n>\n>https://www.postgresql.org/message-id/20181127153405.GX3415%40tamriel.snowman.net\n>\n>And now because this was pushed forward and the concerns that I raised\n>ignored, we're having to deal with this towards the end of the release\n>cycle instead of during normal development. The things we're talking\n>about now and which I'm getting push-back on because of the release\n>cycle situation were specifically suggestions I made in the above email\n>in November where I also brought up concern that ALTER SYSTEM would be\n>confused by the duplicates- giving external tools guideance on how to\n>modify .auto.conf, or providing them a tool (or library), or both.\n>\n>None of this should be coming as a surprise to anyone who was following\n>and I feel we should be upset that this was left to such a late point in\n>the release cycle to address these issues.\n>\n\nI have not been following that discussion, but I acknowledge you've\nraised those points before. At this point I'm really interested in this\nas a RMT member, and from that position I don't quite care what happened\nin November - my concern is what to do now, so that we can get 12 out.\n\n\n>> >>There was a discussion whether to print warnings about the duplicates. 
I\n>> >>> personally see not much point in doing that - if we consider duplicates\n>> >>> to be expected, and if ALTER SYSTEM has the license to rework the config\n>> >>> file any way it wants, why warn about it?\n>> >>\n>> >>Personally I agree that warnings are unnecessary.\n>> >\n>> >And at least Magnus and I disagree with that, as I recall from this\n>> >thread. Let’s have a clean and clear way to modify the auto.conf and have\n>> >everything that touches the file update it in a consistent way.\n>>\n>> Well, I personally don't feel very strongly about it. I think the\n>> warnings will be a nuisance bothering people with expected stuff, but I'm\n>> not willing to fight against it.\n>\n>I'd be happier with one set of code at least being the recommended\n>approach to modifying the file and only one set of code in our codebase\n>that's messing with .auto.conf, so that, hopefully, it's done\n>consistently and properly and in a way that everything agrees on and\n>expects, but if we can't get there due to concerns about where we are in\n>the release cycle, et al, then let's at least document what is\n>*supposed* to happen and have our code do so.\n>\n\nI think fixing ALTER SYSTEM to handle duplicities, and documenting the\nbasic rules about modifying .auto.conf is the way to go.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 5 Aug 2019 16:55:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Mon, Aug 05, 2019 at 10:21:39AM -0400, Stephen Frost wrote:\n>> I'd be happier with one set of code at least being the recommended\n>> approach to modifying the file and only one set of code in our codebase\n>> that's messing with .auto.conf, so that, hopefully, it's done\n>> consistently and properly and in a way that everything agrees on and\n>> expects, but if we can't get there due to concerns about where we are in\n>> the release cycle, et al, then let's at least document what is\n>> *supposed* to happen and have our code do so.\n\n> I think fixing ALTER SYSTEM to handle duplicities, and documenting the\n> basic rules about modifying .auto.conf is the way to go.\n\nI agree. So the problem at the moment is we lack a documentation\npatch. Who wants to write it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 11:00:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> Here's a radical suggestion: replace postgresql.auto.conf with a directory\n> containing multiple files. Each file is named after a configuration\n> parameter, and its content is the value of the parameter.\n\nHmm ... that seems like a lot of work and code churn --- in particular,\nguaranteed breakage of code that works today --- to solve a problem\nwe haven't got.\n\nThe problem we do have is lack of documentation, which this wouldn't\nin itself remedy.\n\n> In order to prevent confusing and surprising behaviour, the system should\n> complain vociferously if it finds a configuration parameter file that is\n> not named after a defined parameter, rather than just ignoring it.\n\nAs has been pointed out repeatedly, the set of known parameters just\nisn't that stable. Different builds can recognize different sets of\nGUCs, even without taking extensions into account.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 11:05:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Re: Tomas Vondra 2019-08-03 <20190803124111.2aaddumd7url5wmq@development>\n> If we really want to give external tools a sensible (and optional) API\n> to access the file, a simple command-line tool seems much better. Say we\n> have something like\n> \n> pg_config_file -f PATH --set KEY VALUE\n> pg_config_file -f PATH --get KEY\n\nFwiw, Debian has pg_conftool (based on the perl lib around\nPgCommon.pm):\n\nNAME\n pg_conftool - read and edit PostgreSQL cluster configuration files\n\nSYNOPSIS\n pg_conftool [options] [version cluster] [configfile] command\n\nDESCRIPTION\n pg_conftool allows to show and set parameters in PostgreSQL configuration files.\n\n If version cluster is omitted, it defaults to the default cluster (see user_clusters(5) and postgresqlrc(5)). If configfile is\n omitted, it defaults to postgresql.conf. configfile can also be a path, in which case version cluster is ignored.\n\nOPTIONS\n -b, --boolean\n Format boolean value as on or off (not for \"show all\").\n\n -s, --short\n Show only the value (instead of key = value pair).\n\n -v, --verbose\n Verbose output.\n\n --help\n Print help.\n\nCOMMANDS\n show parameter|all\n Show a parameter, or all present in this config file.\n\n set parameter value\n Set or update a parameter.\n\n remove parameter\n Remove (comment out) a parameter from a config file.\n\n edit\n Open the config file in an editor. Unless $EDITOR is set, vi is used.\n\nSEE ALSO\n user_clusters(5), postgresqlrc(5)\n\nAUTHOR\n Christoph Berg <myon@debian.org>\n\nDebian 2019-07-15 PG_CONFTOOL(1)\n\n\n",
"msg_date": "Mon, 5 Aug 2019 17:34:06 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 03:52:07PM +0900, Ian Barwick wrote:\n> On 8/4/19 1:59 AM, Tom Lane wrote:> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> >> On Fri, Aug 02, 2019 at 06:08:02PM -0700, Andres Freund wrote:\n> >>> We're WAY WAY past feature freeze. This isn't the time to rewrite guc.c,\n> >>> guc-file.l to be suitable for running outside of a backend environment.\n> >\n> >> Right. And even if we had the code, it's not quite backpatchable (which\n> >> we probably should do, considering this is a general ALTER SYSTEM issue,\n> >> so not pg12-only).\n> >\n> >> Not to mention there's no clear consensus this is actually desirable.\n> >> I'd argue forcing external tools (written in arbitrary language) to use\n> >> this library (written in C), just to modify a \"stupid\" text file is a\n> >> bit overkill. IMO duplicates don't make the file invalid, we should\n> >> handle that correctly/gracefully, so I don't see why external tools\n> >> could not simply append to the file. We can deduplicate the file when\n> >> starting the server, on ALTER SYSTEM, or some other time.\n> >\n> > Yup. I'd also point out that even if we had a command-line tool of this\n> > sort, there would be scenarios where it's not practical or not convenient\n> > to use. We need not go further than \"my tool needs to work with existing\n> > PG releases\" to think of good examples.\n> \n> I suspect this hasn't been an issue before, simply because until the removal\n> of recovery.conf AFAIK there hasn't been a general use-case where you'd need\n> to modify pg.auto.conf while the server is not running. The use-case which now\n> exists (i.e. for writing replication configuration) is one where the tool will\n> need to be version-aware anyway (like pg_basebackup is), so I don't see that as\n> a particular deal-breaker.\n> \n> But...\n> \n> > I think we should just accept the facts on the ground, which are that\n> > some tools modify pg.auto.conf by appending to it\n> \n> +1. 
It's just a text file...\n\nSo are crontab and /etc/passwd, but there are gizmos that help make it\ndifficult for people to write complete gobbledygook to those. Does it\nmake sense to discuss tooling of that type?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 5 Aug 2019 18:22:03 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> On Mon, Aug 05, 2019 at 03:52:07PM +0900, Ian Barwick wrote:\n>> On 8/4/19 1:59 AM, Tom Lane wrote:\n>>> I think we should just accept the facts on the ground, which are that\n>>> some tools modify pg.auto.conf by appending to it\n\n>> +1. It's just a text file...\n\n> So are crontab and /etc/passwd, but there are gizmos that help make it\n> difficult for people to write complete gobbledygook to those. Does it\n> make sense to discuss tooling of that type?\n\nPerhaps as a future improvement, but it's not happening for v12,\nat least not unless you accept \"use ALTER SYSTEM in a standalone\nbackend\" as a usable answer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 12:25:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 12:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Perhaps as a future improvement, but it's not happening for v12,\n> at least not unless you accept \"use ALTER SYSTEM in a standalone\n> backend\" as a usable answer.\n\nI'm not sure that would even work for the cases at issue ... because\nwe're talking about setting up recovery parameters, and wouldn't the\nserver run recovery before it got around to do anything with ALTER\nSYSTEM?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 12:31:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Aug 5, 2019 at 12:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps as a future improvement, but it's not happening for v12,\n>> at least not unless you accept \"use ALTER SYSTEM in a standalone\n>> backend\" as a usable answer.\n\n> I'm not sure that would even work for the cases at issue ... because\n> we're talking about setting up recovery parameters, and wouldn't the\n> server run recovery before it got around to do anything with ALTER\n> SYSTEM?\n\nYeah, good point. There are a lot of other cases where you really\ndon't want system startup to happen, too. On the other hand,\npeople have also opined that they want full error checking on\nthe inserted values, and that seems out of reach with less than\na full running system (mumble extensions mumble).\n\nIn the end, I think I don't buy Stephen's argument that there should\nbe a one-size-fits-all answer. It seems entirely reasonable that\nwe'll have more than one way to do it, because the constraints are\ndifferent depending on what use-case you think about.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 12:43:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 12:43:24 -0400, Tom Lane wrote:\n> Yeah, good point. There are a lot of other cases where you really\n> don't want system startup to happen, too.\n\nAgreed.\n\n\n> On the other hand, people have also opined that they want full error\n> checking on the inserted values, and that seems out of reach with less\n> than a full running system (mumble extensions mumble).\n\nI think the error checking ought to be about as complete as the one we\ndo during a normal postmaster startup. Afaict that requires loading\nshared_preload_library extensions, but does *not* require booting up far\nenough to run GUC checks in a context with database access.\n\nThe one possible \"extension\" to that that I can see is that arguably we\nmight want to error out if DefineCustom*Variable() doesn't think the\nvalue is valid for a shared_preload_library, rather than just WARNING\n(i.e. refuse to start). We can't really do that for other libraries, but\nfor shared_preload_libraries it might make sense. Although I suspect\nthe better approach would be to just generally convert that to an error,\nrather than having some startup specific logic.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 09:53:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 10:21 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Just to be clear, I brought up this exact concern back in *November*:\n>\n> https://www.postgresql.org/message-id/20181127153405.GX3415%40tamriel.snowman.net\n>\n> And now because this was pushed forward and the concerns that I raised\n> ignored, we're having to deal with this towards the end of the release\n> cycle instead of during normal development.\n\nI disagree. My analysis is that you're blocking a straightforward bug\nfix because we're not prepared to redesign the world to match your\nexpectations. The actual point of controversy at the moment, as I\nunderstand it, is this: if the backend, while rewriting\npostgresql.auto.conf, discovers that it contains duplicates, should we\n(a) give a WARNING or (b) not?\n\nThe argument for not doing that is pretty simple: if we give a WARNING\nwhen this happens, then every tool that appends to\npostgresql.auto.conf has to be responsible for making sure to remove\nduplicates along the way. To do that reliably, it needs a\nclient-accessible version of all the GUC parsing stuff. You refer to\nthis above as an \"assumption,\" but it seems to me that a more accurate\nword would be \"fact.\" Now, I don't think anyone would disagree with\nthe idea that it is possible to do it in an only-approximately-correct\nway pretty easily: just match the first word of the line against the\nGUC you're proposing to set, and drop the line if it matches. If you\nwant it to be exactly correct, though, you either need to run the\noriginal code, or your own custom code that behaves in exactly the\nsame way. And since the original code runs only in the server, it\nfollows directly that if you are not running inside the server, you\ncannot be running the original code. 
How you can label any of that as\nan \"assumption\" is beyond me.\n\nNow, I agree that IF we were prepared to provide a standalone\nconfig-editing tool that removes duplicates, THEN it would not be\ncrazy to emit a WARNING if we find a duplicate, because we could\nreasonably tell people to just use that tool. However, such a tool is\nnot trivial to construct, as evidenced by the fact that, on this very\nthread, Ian tried and Andres thought the result contained too much\ncode duplication. Moreover, we are past feature freeze, which is the\nwrong time to add altogether new things to the release, even if we had\ncode that everybody liked. Furthermore, even if we had such a tool and\neven if it had already been committed, I would still not be in favor\nof the WARNING, because duplicate settings in postgresql.auto.conf are\nharmless unless you include a truly insane number of them, and there\nis no reason for anybody to ever do that. In a way, I sorta hope\nsomebody does do that, because if I get a problem report from a user\nthat they put 10 million copies of their recovery settings in\npostgresql.auto.conf and the server now starts up very slowly, I am\ngoing to have a good private laugh, and then suggest that they maybe\nnot do that.\n\nIn general, I am sympathetic to the argument that we ought to do tight\nintegrity-checking on inputs: that's one of the things for which\nPostgreSQL is known, and it's a strength of the project. In this case,\nthough, the cost-benefit trade-off seems very poor to me: it just\nmakes life complicated without really buying us anything. The whole\nreason postgresql.conf is a text file in the first place instead of\nbeing stored in the catalogs is because you might not be able to start\nthe server if it's not set right, and if you can't edit it without\nbeing able to start the server, then you're stuck. 
Indeed, one of the\nkey arguments in getting ALTER SYSTEM accepted in the first place was\nthat, if you put dumb settings into postgresql.auto.conf and borked\nyour system so it wouldn't start, you could always use a text editor\nto undo it. Given that history, any argument that postgresql.auto.conf\nis somehow different and should be subjected to tighter integrity\nconstraints does not resonate with me. Its mission is to be a\nmachine-editable postgresql.conf, not to be some other kind of file\nthat plays by a different set of rules.\n\nI really don't understand why you're fighting so hard about this. We\nhave a long history of being skeptical about WARNING messages. If, on\nthe one hand, they are super-important, they might still get ignored\nbecause it could be an automated context where nobody will see it; and\nif, on the other hand, they are not that important, then emitting them\nis just clutter in the first place. The particular WARNING under\ndiscussion here is one that would likely only fire long after the\nfact, when it's far too late to do anything about it, and when, in all\nprobability, no real harm has resulted anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 13:06:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Aug 5, 2019 at 12:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Perhaps as a future improvement, but it's not happening for v12,\n> >> at least not unless you accept \"use ALTER SYSTEM in a standalone\n> >> backend\" as a usable answer.\n> \n> > I'm not sure that would even work for the cases at issue ... because\n> > we're talking about setting up recovery parameters, and wouldn't the\n> > server run recovery before it got around to do anything with ALTER\n> > SYSTEM?\n> \n> Yeah, good point. There are a lot of other cases where you really\n> don't want system startup to happen, too. On the other hand,\n> people have also opined that they want full error checking on\n> the inserted values, and that seems out of reach with less than\n> a full running system (mumble extensions mumble).\n\nThere have been ideas brought up about some way to provide \"full\nvalidation\" but I, at least, don't recall seeing anyone actually say\nthat they *want* that- just different people suggesting that it could be\ndone.\n\nI agree that full validation is a pipe dream for this kind of system and\nisn't what I was intending to suggest at any point.\n\n> In the end, I think I don't buy Stephen's argument that there should\n> be a one-size-fits-all answer. 
It seems entirely reasonable that\n> we'll have more than one way to do it, because the constraints are\n> different depending on what use-case you think about.\n\nThis doesn't seem to me, at least, to be an accurate representation of\nmy thoughts on this- there could be 15 different ways to modify the\nfile, but let's have a common set of code to provide those ways instead\nof different code between the backend ALTER SYSTEM and the frontend\npg_basebackup (and if we put it in the common library that we already\nhave for sharing code between the backend and the frontend, and which we\nmake available for external tools, then those external tools could use\nthose methods in the same way that we do).\n\nI'm happy to be told I'm wrong, but as far as I know, there's nothing in\nappending to the file or removing duplicates that actually requires\nvalidation of the values which are included in order to apply those\noperations correctly.\n\nI'm sure I'll be told again about how we can't do this for 12, and I do\nappreciate that, but it's because we ignored these issues during\ndevelopment that we're here and that's really just not something that's\nacceptable in my view- we shouldn't be pushing features which have known\nissues that we then have to fight about how to fix it at the last minute\nand with the constraint that we can't make any big changes.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 13:24:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Aug 5, 2019 at 10:21 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Just to be clear, I brought up this exact concern back in *November*:\n> >\n> > https://www.postgresql.org/message-id/20181127153405.GX3415%40tamriel.snowman.net\n> >\n> > And now because this was pushed forward and the concerns that I raised\n> > ignored, we're having to deal with this towards the end of the release\n> > cycle instead of during normal development.\n> \n> I disagree. \n\nIt's unclear what you're disagreeing with here as the below response\ndoesn't seem to discuss the question about if these issues were brought\nup and pointed out previously, nor about if I was the one who raised\nthem, nor about if we're towards the end of the release cycle.\n\n> My analysis is that you're blocking a straightforward bug\n> fix because we're not prepared to redesign the world to match your\n> expectations. The actual point of controversy at the moment, as I\n> understand it, is this: if the backend, while rewriting\n> postgresql.auto.conf, discovers that it contains duplicates, should we\n> (a) give a WARNING or (b) not?\n\nNo, that isn't the point of the controversy nor does it relate, at all,\nto what you quoted above, so I don't think there's much value in\nresponding to the discussion about WARNING or not that you put together\nbelow.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 13:29:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2019-08-05 12:43:24 -0400, Tom Lane wrote:\n> > On the other hand, people have also opined that they want full error\n> > checking on the inserted values, and that seems out of reach with less\n> > than a full running system (mumble extensions mumble).\n> \n> I think the error checking ought to be about as complete as the one we\n> do during a normal postmaster startup. Afaict that requires loading\n> shared_preload_library extensions, but does *not* require booting up far\n> enough to run GUC checks in a context with database access.\n\nI'm not following this thread of the discussion.\n\nYou're not suggesting that pg_basebackup perform this error checking\nafter it modifies the file, are you..?\n\nWhere are you thinking this error checking would be happening?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 13:34:39 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 13:34:39 -0400, Stephen Frost wrote:\n> * Andres Freund (andres@anarazel.de) wrote:\n> > On 2019-08-05 12:43:24 -0400, Tom Lane wrote:\n> > > On the other hand, people have also opined that they want full error\n> > > checking on the inserted values, and that seems out of reach with less\n> > > than a full running system (mumble extensions mumble).\n> > \n> > I think the error checking ought to be about as complete as the one we\n> > do during a normal postmaster startup. Afaict that requires loading\n> > shared_preload_library extensions, but does *not* require booting up far\n> > enough to run GUC checks in a context with database access.\n> \n> I'm not following this thread of the discussion.\n\nIt's about the future, not v12.\n\n\n> Where are you thinking this error checking would be happening?\n\nA hypothetical post v12 tool that can set config values with as much\nchecking as feasible. The IMO most realistic tool to do so is postmaster\nitself, similar to its already existing -C. Boot it up until\nshared_preload_libraries have been processed, run check hook(s) for the\nnew value(s), change postgresql.auto.conf, shutdown.\n\n\n> You're not suggesting that pg_basebackup perform this error checking\n> after it modifies the file, are you..?\n\nNot at the moment, at least.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Aug 2019 11:11:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 1:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> No, that isn't the point of the controversy nor does it relate, at all,\n> to what you quoted above, so I don't think there's much value in\n> responding to the discussion about WARNING or not that you put together\n> below.\n\nWell, if that's not what we're arguing about, then what the heck are\nwe arguing about?\n\nAll we need to do to resolve this issue is have Tom commit his patch.\nThe set of people who are objecting to that is either {} or {Stephen\nFrost}. Even if it's the latter, we should just proceed, because\nthere are clearly enough votes in favor of the patch to proceed,\nincluding 2 from RMT members, and if it's the former, we should\nDEFINITELY proceed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:23:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Aug 5, 2019 at 14:11 Andres Freund <andres@anarazel.de> wrote:\n\n> On 2019-08-05 13:34:39 -0400, Stephen Frost wrote:\n> > * Andres Freund (andres@anarazel.de) wrote:\n> > > On 2019-08-05 12:43:24 -0400, Tom Lane wrote:\n> > > > On the other hand, people have also opined that they want full error\n> > > > checking on the inserted values, and that seems out of reach with\n> less\n> > > > than a full running system (mumble extensions mumble).\n> > >\n> > > I think the error checking ought to be about as complete as the one we\n> > > do during a normal postmaster startup. Afaict that requires loading\n> > > shared_preload_library extensions, but does *not* require booting up\n> far\n> > > enough to run GUC checks in a context with database access.\n> >\n> > I'm not following this thread of the discussion.\n>\n> It's about the future, not v12.\n\n\nI’m happy to chat about post-v12, certainly. As I said up thread, I get\nthat we are in this unfortunate situation for v12 and we can do what needs\ndoing here (where I agree with what Tom said, “a doc patch”- and with fixes\nfor ALTER SYSTEM to be in line with that doc patch, along with\npg_basebackup, should any changes be needed, of course).\n\n> Where are you thinking this error checking would be happening?\n>\n> A hypothethical post v12 tool that can set config values with as much\n> checking as feasible. The IMO most realistic tool to do so is postmaster\n> itself, similar to it's already existing -C. 
Boot it up until\n> shared_preload_libraries have been processed, run check hook(s) for the\n> new value(s), change postgresql.auto.conf, shutdown.\n\n\nThat’s a nice idea but I don’t think it’s really necessary and I’m not sure\nhow useful this level of error checking would end up being as part of\npg_basebackup.\n\nI can certainly see value in a tool that could be run to verify a\npostgresql.conf+auto.conf is valid to the extent that we are able to do so,\nsince that could, ideally, be run by the init script system prior to a\nrestart to let the user know that their restart is likely to fail. Having\nthat be some option to the postmaster could work, as long as it is assured\nto not do anything that would upset a running PG instance (like, say, try\nto allocate shared buffers).\n\n> You're not suggesting that pg_basebackup perform this error checking\n> after it modifies the file, are you..?\n>\n> Not at the moment, at least.\n\n\nSince pg_basebackup runs, typically, on a server other than the one that PG\nis running on, it certainly would have to have a way to at least disable\nanything that caused it to try and load libraries on the destination side,\nor do anything else that required something external in order to validate-\nbut frankly I don’t think it should ever be loading libraries that it has\nno business with, not even if it means that the error checking of the\npostgresql.conf would be wonderful.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 14:24:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> All we need to do to resolve this issue is have Tom commit his patch.\n\nI think Stephen is not being unreasonable to suggest that we need some\ndocumentation about what external tools may safely do to pg.auto.conf.\nSo somebody's got to write that. But I agree that we could go ahead\nwith the code patch.\n\n(At this point I won't risk doing so before the wrap, though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 14:29:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Aug 5, 2019 at 14:29 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > All we need to do to resolve this issue is have Tom commit his patch.\n>\n> I think Stephen is not being unreasonable to suggest that we need some\n> documentation about what external tools may safely do to pg.auto.conf.\n\nI dare say that if we had some documentation around what to expect and how\nto deal with it, for our own tools as well as external ones, then maybe we\nwouldn’t be in this situation in the first place. Clearly ALTER SYSTEM and\nthe pg_basebackup modifications had different understandings and expectations.\n\n> So somebody's got to write that. But I agree that we could go ahead\n> with the code patch.\n\nI haven’t looked at the code patch at all, just to be clear. That said, if\nyou’re comfortable with it and it’s in line with what we document as being\nhow you handle pg.auto.conf (for ourselves as well as external tools..),\nthen that’s fine with me.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 14:35:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > All we need to do to resolve this issue is have Tom commit his patch.\n>\n> I think Stephen is not being unreasonable to suggest that we need some\n> documentation about what external tools may safely do to pg.auto.conf.\n> So somebody's got to write that.\n\nI mean, really? We're going to document that if you want to add a\nsetting to the file, you can just append it, but that if you find\nyourself desirous of appending so many settings that the entire disk\nwill fill up, you should maybe reconsider? Perhaps I'm being mean\nhere, but that seems like it's straight out of the\nblinding-flashes-of-the-obvious department.\n\nIf we were going to adopt Stephen's proposed rule that you must remove\nduplicates or be punished later with an annoying WARNING, I would\nagree that it ought to be documented, because I believe many people\nwould find that a POLA violation. And to be clear, I'm not objecting\nto a sentence someplace that says that duplicates in\npostgresql.auto.conf will be ignored and the last value will be used,\nsame as for any other PostgreSQL configuration file. However, I think\nit's highly likely people would have assumed that anyway.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:38:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Aug 5, 2019 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think Stephen is not being unreasonable to suggest that we need some\n>> documentation about what external tools may safely do to pg.auto.conf.\n>> So somebody's got to write that.\n\n> I mean, really? We're going to document that if you want to add a\n> setting to the file, you can just append it, but that if you find\n> yourself desirous of appending so many settings that the entire disk\n> will fill up, you should maybe reconsider? Perhaps I'm being mean\n> here, but that seems like it's straight out of the\n> blinding-flashes-of-the-obvious department.\n\nI don't think we need to go on about it at great length, but it seems\nto me that it'd be reasonable to point out that (a) you'd be well\nadvised not to touch the file while the postmaster is up, and (b)\nlast setting wins. Those things are equally true of postgresql.conf\nof course, but I don't recall whether they're already documented.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 14:42:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
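The "last setting wins" behaviour Tom refers to can be sketched with a toy reader for postgresql.conf-style files. This is a hypothetical illustration only, not PostgreSQL's actual GUC parser, which additionally handles quoting rules, include directives, and error reporting:

```python
def parse_conf(text):
    """Toy last-one-wins reader for postgresql.conf-style files.

    Hypothetical sketch: later entries for the same (case-insensitive)
    parameter name simply overwrite earlier ones, which is the rule
    discussed in this thread.
    """
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line or "=" not in line:
            continue
        name, _, value = line.partition("=")
        # last one wins: a later duplicate overwrites the earlier value
        settings[name.strip().lower()] = value.strip().strip("'")
    return settings

conf = """
# postgresql.auto.conf
DEFAULT_TABLESPACE = 'space_1'
default_tablespace = 'pg_default'
"""
print(parse_conf(conf))  # {'default_tablespace': 'pg_default'}
```

With this model, duplicates in the file are harmless to readers: only the final occurrence of each parameter takes effect, exactly as when one include file overrides another.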
{
"msg_contents": "On Mon, Aug 5, 2019 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't think we need to go on about it at great length, but it seems\n> to me that it'd be reasonable to point out that (a) you'd be well\n> advised not to touch the file while the postmaster is up, and (b)\n> last setting wins. Those things are equally true of postgresql.conf\n> of course, but I don't recall whether they're already documented.\n\nOK, fair enough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:43:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 2:35 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I dare say that if we had some documentation around what to expect and how to deal with it, for our own tools as well as external ones, then maybe we wouldn’t be in this situation in the first place. Clearly ALTER SYSTEM and the pg_basebackup modifications had different understands and expectations.\n\nBut that's not the problem. The problem is that ALTER SYSTEM modifies\nthe first match instead of the last one, when it's a well-established\nprinciple that when reading from a PostgreSQL configuration file, the\nsystem adopts the value from the last match, not the first one. I\nadmit that if somebody had thought to document what ALTER SYSTEM was\ndoing, that person would probably have also realized that they had\nmade a mistake in the code, and then they would have fixed the bug,\nand that would be great.\n\nBut we have exactly zero need to invent a new set of principles\nexplaining how to deal with postgresql.auto.conf. We just need to\nmake the ALTER SYSTEM code conform to the same general rule that has\nbeen well-established for many years. The only reason why we're still\ncarrying on about this 95 messages later is because you're trying to\nmake an argument that postgresql.auto.conf is a different kind of\nthing from postgresql.conf and therefore can have its own novel set of\nrules which consequently need to be debated. IMHO, it's not, it\nshouldn't, and they don't.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 14:47:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
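The mismatch Robert describes (ALTER SYSTEM rewriting the first match while readers honor the last one) can be shown with a small sketch. This is hypothetical Python modelling the reported behaviour, not the real guc.c code:

```python
def read_value(lines, name):
    """Readers use the LAST matching entry (last one wins)."""
    value = None
    for line in lines:
        key, _, v = line.partition("=")
        if key.strip().lower() == name.lower():
            value = v.strip().strip("'")
    return value

def buggy_alter_system(lines, name, value):
    """Model of the bug under discussion: replace only the FIRST match.

    Hypothetical sketch of the pre-fix behaviour, not actual PostgreSQL code.
    """
    out, replaced = [], False
    for line in lines:
        key = line.partition("=")[0].strip().lower()
        if not replaced and key == name.lower():
            out.append(f"{name} = '{value}'")
            replaced = True
        else:
            out.append(line)
    if not replaced:
        out.append(f"{name} = '{value}'")
    return out

# A file where an external tool has appended a duplicate:
conf = ["work_mem = '4MB'", "work_mem = '64MB'"]
conf = buggy_alter_system(conf, "work_mem", "16MB")
# Only the first entry was rewritten, so readers still see the stale duplicate:
print(read_value(conf, "work_mem"))  # 64MB, not the 16MB just "set"
```

The fix discussed in the thread makes the writer conform to the reader's rule, e.g. by removing all matches and appending the new value last.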
{
"msg_contents": "Greetings,\n\nOn Mon, Aug 5, 2019 at 14:38 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Aug 5, 2019 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > All we need to do to resolve this issue is have Tom commit his patch.\n> >\n> > I think Stephen is not being unreasonable to suggest that we need some\n> > documentation about what external tools may safely do to pg.auto.conf.\n> > So somebody's got to write that.\n>\n> I mean, really? We're going to document that if you want to add a\n> setting to the file, you can just append it, but that if you find\n> yourself desirous of appending so many settings that the entire disk\n> will fill up, you should maybe reconsider? Perhaps I'm being mean\n> here, but that seems like it's straight out of the\n> blinding-flashes-of-the-obvious department.\n>\n> If we were going to adopt Stephen's proposed rule that you must remove\n> duplicates or be punished later with an annoying WARNING, I would\n> agree that it ought to be documented, because I believe many people\n> would find that a POLA violation. And to be clear, I'm not objecting\n> to a sentence someplace that says that duplicates in\n> postgresql.auto.conf will be ignored and the last value will be used,\n> same as for any other PostgreSQL configuration file. However, I think\n> it's highly likely people would have assumed that anyway.\n\n\nThe authors and committer for ALTER SYSTEM didn’t. 
It’s not uncommon for\nus to realize when different people and/or parts of the system make\ndifferent assumptions about something and end up causing bugs; we try\nto document the “right way” and what expectations one can have.\n\nAlso, to be clear, if we document it then I don’t feel we need a WARNING to\nbe issued- because then it’s expected and handled.\n\nYes, there was a lot of discussion about how it’d be nice to go further\nthan documentation and actually provide a facility for tools to use to\nmodify the file, so we could have the same code being used by pg_basebackup\nand ALTER SYSTEM, but the argument was made that we can’t make that happen\nfor v12. I’m not sure I agree with that because a lot of the responses\nhave been blowing up the idea of what amounts to a simple parser/editor of\nPG config files (which, as Christoph pointed out, has already been done\nexternally and I doubt it’s actually all that much code) to a full-blown\nwe must validate everything and load every extension’s .so file to make\nsure the value is ok, but even so, I’ve backed away from that position and\nagreed that a documentation fix would be enough for v12 and hopefully\nsomeone will revisit it in the future and improve on it- at least with the\ndocumentation, there would be a higher chance that they’d get it right and\nnot end up making different assumptions than those made by other hackers.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 14:48:22 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\nOn Mon, Aug 5, 2019 at 14:43 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Aug 5, 2019 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think Stephen is not being unreasonable to suggest that we need some\n> >> documentation about what external tools may safely do to pg.auto.conf.\n> >> So somebody's got to write that.\n>\n> > I mean, really? We're going to document that if you want to add a\n> > setting to the file, you can just append it, but that if you find\n> > yourself desirous of appending so many settings that the entire disk\n> > will fill up, you should maybe reconsider? Perhaps I'm being mean\n> > here, but that seems like it's straight out of the\n> > blinding-flashes-of-the-obvious department.\n>\n> I don't think we need to go on about it at great length, but it seems\n> to me that it'd be reasonable to point out that (a) you'd be well\n> advised not to touch the file while the postmaster is up, and (b)\n> last setting wins. Those things are equally true of postgresql.conf\n> of course, but I don't recall whether they're already documented.\n\n\nFolks certainly modify postgresql.conf while the postmaster is running\npretty routinely, and we expect them to which is why we have a reload\noption, so I don’t think we can say that the auto.conf and postgresql.conf\nare to be handled in the same way.\n\nLast setting wins, duplicates should be ignored and may be removed,\ncomments should be ignored and may be removed, and appending to the file is\nacceptable for modifying a value. 
I’m not sure how much we really document\nthe structure of the file itself offhand- back when users were editing it\nwe could probably be a bit more fast and loose with it, but now that we\nhave different parts of the system modifying it along with external tools\ndoing so, we should probably write it down a bit more clearly/precisely.\n\nI suspect the authors of pg_conftool would appreciate that too, so they\ncould make sure that they aren’t doing anything unexpected or incorrect.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 14:56:36 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> But that's not the problem. The problem is that ALTER SYSTEM modifies\n> the first match instead of the last one, when it's a well-established\n> principle that when reading from a PostgreSQL configuration file, the\n> system adopts the value from the last match, not the first one. I\n> admit that if somebody had thought to document what ALTER SYSTEM was\n> doing, that person would probably have also realized that they had\n> made a mistake in the code, and then they would have fixed the bug,\n> and that would be great.\n\nWell, actually, the existing code is perfectly clear about this:\n\n /* Search the list for an existing match (we assume there's only one) */\n\nThat assumption is fine *if* you grant that only ALTER SYSTEM itself\nis authorized to write that file. I think the real argument here\ncenters around who else is authorized to write the file, and when\nand how.\n\nIn view of the point you made upthread that we explicitly made\npg.auto.conf a plain text file so that one could recover from\nmistakes by hand-editing it, I think it's pretty silly to adopt\na position that external mods are disallowed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 15:02:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Aug 5, 2019 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't think we need to go on about it at great length, but it seems\n>> to me that it'd be reasonable to point out that (a) you'd be well\n>> advised not to touch the file while the postmaster is up, and (b)\n>> last setting wins. Those things are equally true of postgresql.conf\n>> of course, but I don't recall whether they're already documented.\n\n> OK, fair enough.\n\nConcretely, how about the attached?\n\n(Digging around in config.sgml, I found that last-one-wins is stated,\nbut only in the context of one include file overriding another.\nThat's not *directly* a statement about what happens within a single\nfile, and it's in a different subsection anyway, so repeating the\ninfo in 19.1.2 doesn't seem unreasonable.)\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 05 Aug 2019 15:07:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 5, 2019 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Concretely, how about the attached?\n\nWorks for me, for whatever that's worth.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 15:39:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Mon, Aug 5, 2019 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I don't think we need to go on about it at great length, but it seems\n> >> to me that it'd be reasonable to point out that (a) you'd be well\n> >> advised not to touch the file while the postmaster is up, and (b)\n> >> last setting wins. Those things are equally true of postgresql.conf\n> >> of course, but I don't recall whether they're already documented.\n> \n> > OK, fair enough.\n> \n> Concretely, how about the attached?\n\n\n> (Digging around in config.sgml, I found that last-one-wins is stated,\n> but only in the context of one include file overriding another.\n> That's not *directly* a statement about what happens within a single\n> file, and it's in a different subsection anyway, so repeating the\n> info in 19.1.2 doesn't seem unreasonable.)\n\nAgreed.\n\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index cdc30fa..f5986b2 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -153,6 +153,8 @@ shared_buffers = 128MB\n> identifiers or numbers must be single-quoted. 
To embed a single\n> quote in a parameter value, write either two quotes (preferred)\n> or backslash-quote.\n> + If the file contains multiple entries for the same parameter,\n> + all but the last one are ignored.\n> </para>\n\nLooking at this patch quickly but also in isolation, so I could be wrong\nhere, but it seems like the above might be a good place to mention\n\"duplicate entries and comments may be removed.\"\n\n> <para>\n> @@ -185,18 +187,27 @@ shared_buffers = 128MB\n> In addition to <filename>postgresql.conf</filename>,\n> a <productname>PostgreSQL</productname> data directory contains a file\n> <filename>postgresql.auto.conf</filename><indexterm><primary>postgresql.auto.conf</primary></indexterm>,\n> - which has the same format as <filename>postgresql.conf</filename> but should\n> - never be edited manually. This file holds settings provided through\n> - the <xref linkend=\"sql-altersystem\"/> command. This file is automatically\n> - read whenever <filename>postgresql.conf</filename> is, and its settings take\n> - effect in the same way. Settings in <filename>postgresql.auto.conf</filename>\n> - override those in <filename>postgresql.conf</filename>.\n> + which has the same format as <filename>postgresql.conf</filename> but\n> + is intended to be edited automatically not manually. This file holds\n> + settings provided through the <xref linkend=\"sql-altersystem\"/> command.\n> + This file is read whenever <filename>postgresql.conf</filename> is,\n> + and its settings take effect in the same way. Settings\n> + in <filename>postgresql.auto.conf</filename> override those\n> + in <filename>postgresql.conf</filename>.\n> + </para>\n\nThe above hunk looks fine.\n\n> + <para>\n> + External tools might also\n> + modify <filename>postgresql.auto.conf</filename>, typically by appending\n> + new settings to the end. 
It is not recommended to do this while the\n> + server is running, since a concurrent <command>ALTER SYSTEM</command>\n> + command could overwrite such changes.\n> </para>\n\nAlternatively, or maybe also here, we could say \"note that appending to\nthe file as a mechanism for setting a new value by an external tool is\nacceptable even though it will cause duplicates- PostgreSQL will always\nuse the last value set and other tools should as well. Duplicates and\ncomments may be removed when rewriting the file, and parameters may be\nlower-cased.\" (istr that last bit being true too but I haven't checked\nlately).\n\n> <para>\n> The system view\n> <link linkend=\"view-pg-file-settings\"><structname>pg_file_settings</structname></link>\n> - can be helpful for pre-testing changes to the configuration file, or for\n> + can be helpful for pre-testing changes to the configuration files, or for\n> diagnosing problems if a <systemitem>SIGHUP</systemitem> signal did not have the\n> desired effects.\n> </para>\n\nThis hunk looks fine.\n\nReviewing https://www.postgresql.org/docs/current/config-setting.html\nagain, it looks reasonably comprehensive regarding the format of the\nfile- perhaps we should link to there from the \"external tools might\nalso modify\" para..? \"Tool authors should review <link> to understand\nthe structure of postgresql.auto.conf\".\n\nThanks!\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 20:52:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/6/19 9:52 AM, Stephen Frost wrote:> Greetings,\n >\n > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n >> Robert Haas <robertmhaas@gmail.com> writes:\n >>> On Mon, Aug 5, 2019 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n >>>> I don't think we need to go on about it at great length, but it seems\n >>>> to me that it'd be reasonable to point out that (a) you'd be well\n >>>> advised not to touch the file while the postmaster is up, and (b)\n >>>> last setting wins. Those things are equally true of postgresql.conf\n >>>> of course, but I don't recall whether they're already documented.\n >>\n >>> OK, fair enough.\n >>\n >> Concretely, how about the attached?\n >\n >\n >> (Digging around in config.sgml, I found that last-one-wins is stated,\n >> but only in the context of one include file overriding another.\n >> That's not *directly* a statement about what happens within a single\n >> file, and it's in a different subsection anyway, so repeating the\n >> info in 19.1.2 doesn't seem unreasonable.)\n >\n > Agreed.\n\n+1.\n\n >> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n >> index cdc30fa..f5986b2 100644\n >> --- a/doc/src/sgml/config.sgml\n >> +++ b/doc/src/sgml/config.sgml\n >> @@ -153,6 +153,8 @@ shared_buffers = 128MB\n >> identifiers or numbers must be single-quoted. 
To embed a single\n >> quote in a parameter value, write either two quotes (preferred)\n >> or backslash-quote.\n >> + If the file contains multiple entries for the same parameter,\n >> + all but the last one are ignored.\n >> </para>\n >\n > Looking at this patch quickly but also in isolation, so I could be wrong\n > here, but it seems like the above might be a good place to mention\n > \"duplicate entries and comments may be removed.\"\n\nThat section applies to all configuration files, the removal is specific\nto pg.auto.conf so better mentioned further down.\n\n >> <para>\n >> @@ -185,18 +187,27 @@ shared_buffers = 128MB\n >> In addition to <filename>postgresql.conf</filename>,\n >> a <productname>PostgreSQL</productname> data directory contains a file\n >> <filename>postgresql.auto.conf</filename><indexterm><primary>postgresql.auto.conf</primary></indexterm>,\n >> - which has the same format as <filename>postgresql.conf</filename> but should\n >> - never be edited manually. This file holds settings provided through\n >> - the <xref linkend=\"sql-altersystem\"/> command. This file is automatically\n >> - read whenever <filename>postgresql.conf</filename> is, and its settings take\n >> - effect in the same way. Settings in <filename>postgresql.auto.conf</filename>\n >> - override those in <filename>postgresql.conf</filename>.\n >> + which has the same format as <filename>postgresql.conf</filename> but\n >> + is intended to be edited automatically not manually. This file holds\n >> + settings provided through the <xref linkend=\"sql-altersystem\"/> command.\n >> + This file is read whenever <filename>postgresql.conf</filename> is,\n >> + and its settings take effect in the same way. 
Settings\n >> + in <filename>postgresql.auto.conf</filename> override those\n >> + in <filename>postgresql.conf</filename>.\n >> + </para>\n >\n > The above hunk looks fine.\n >\n >> + <para>\n >> + External tools might also\n >> + modify <filename>postgresql.auto.conf</filename>, typically by appending\n >> + new settings to the end. It is not recommended to do this while the\n >> + server is running, since a concurrent <command>ALTER SYSTEM</command>\n >> + command could overwrite such changes.\n >> </para>\n >\n > Alternatively, or maybe also here, we could say \"note that appending to\n > the file as a mechanism for setting a new value by an external tool is\n > acceptable even though it will cause duplicates- PostgreSQL will always\n > use the last value set and other tools should as well. Duplicates and\n > comments may be removed when rewriting the file\n\nFWIW, as the file is rewritten each time, *all* comments are removed\nanyway (the first two comment lines in the file with the warning\nare added when the new version of the file is written).\n\n > and parameters may be\n > lower-cased.\" (istr that last bit being true too but I haven't checked\n > lately).\n\nHo-hum, they won't be lower-cased; instead, currently they just won't be\noverwritten if they're present in pg.auto.conf, which is a slight\neccentricity, but not actually a problem with the current code as the new value\nwill be written last. 
E.g.:\n\n $ cat postgresql.auto.conf\n # Do not edit this file manually!\n # It will be overwritten by the ALTER SYSTEM command.\n DEFAULT_TABLESPACE = 'space_1'\n\n postgres=# ALTER SYSTEM SET default_tablespace ='pg_default';\n ALTER SYSTEM\n\n $ cat postgresql.auto.conf\n # Do not edit this file manually!\n # It will be overwritten by the ALTER SYSTEM command.\n DEFAULT_TABLESPACE = 'space_1'\n default_tablespace = 'pg_default'\n\nI don't think that's worth worrying about now.\n\nMy suggestion for the paragraph in question:\n\n <para>\n External tools which need to write configuration settings (e.g. for replication)\n where it's essential to ensure these are read last (to override versions\n of these settings present in other configuration files), may append settings to\n <filename>postgresql.auto.conf</filename>. It is not recommended to do this while\n the server is running, since a concurrent <command>ALTER SYSTEM</command>\n command could overwrite such changes. Note that a subsequent\n <command>ALTER SYSTEM</command> will cause <filename>postgresql.auto.conf</filename>\n to be rewritten, removing any duplicate versions of the setting altered, and also\n any comment lines present.\n </para>\n\n >\n >> <para>\n >> The system view\n >> <link linkend=\"view-pg-file-settings\"><structname>pg_file_settings</structname></link>\n >> - can be helpful for pre-testing changes to the configuration file, or for\n >> + can be helpful for pre-testing changes to the configuration files, or for\n >> diagnosing problems if a <systemitem>SIGHUP</systemitem> signal did not have the\n >> desired effects.\n >> </para>\n >\n > This hunk looks fine.\n >\n > Reviewing https://www.postgresql.org/docs/current/config-setting.html\n > again, it looks reasonably comprehensive regarding the format of the\n > file- perhaps we should link to there from the \"external tools might\n > also modify\" para..? 
\"Tool authors should review <link> to understand\n > the structure of postgresql.auto.conf\".\n\nThis is all on the same page anyway.\n\n\nRegards\n\nIan Barwick\n\n--\n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n",
"msg_date": "Tue, 6 Aug 2019 10:55:16 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n> On 8/6/19 9:52 AM, Stephen Frost wrote:> Greetings,\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> On Mon, Aug 5, 2019 at 2:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>>> I don't think we need to go on about it at great length, but it seems\n> >>>> to me that it'd be reasonable to point out that (a) you'd be well\n> >>>> advised not to touch the file while the postmaster is up, and (b)\n> >>>> last setting wins. Those things are equally true of postgresql.conf\n> >>>> of course, but I don't recall whether they're already documented.\n> >>\n> >>> OK, fair enough.\n> >>\n> >> Concretely, how about the attached?\n> >\n> >\n> >> (Digging around in config.sgml, I found that last-one-wins is stated,\n> >> but only in the context of one include file overriding another.\n> >> That's not *directly* a statement about what happens within a single\n> >> file, and it's in a different subsection anyway, so repeating the\n> >> info in 19.1.2 doesn't seem unreasonable.)\n> >\n> > Agreed.\n> \n> +1.\n> \n> >> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> >> index cdc30fa..f5986b2 100644\n> >> --- a/doc/src/sgml/config.sgml\n> >> +++ b/doc/src/sgml/config.sgml\n> >> @@ -153,6 +153,8 @@ shared_buffers = 128MB\n> >> identifiers or numbers must be single-quoted. 
To embed a single\n> >> quote in a parameter value, write either two quotes (preferred)\n> >> or backslash-quote.\n> >> + If the file contains multiple entries for the same parameter,\n> >> + all but the last one are ignored.\n> >> </para>\n> >\n> > Looking at this patch quickly but also in isolation, so I could be wrong\n> > here, but it seems like the above might be a good place to mention\n> > \"duplicate entries and comments may be removed.\"\n> \n> That section applies to all configuration files, the removal is specific\n> to pg.auto.conf so better mentioned further down.\n\nOk, fair enough.\n\n> >> <para>\n> >> @@ -185,18 +187,27 @@ shared_buffers = 128MB\n> >> In addition to <filename>postgresql.conf</filename>,\n> >> a <productname>PostgreSQL</productname> data directory contains a file\n> >> <filename>postgresql.auto.conf</filename><indexterm><primary>postgresql.auto.conf</primary></indexterm>,\n> >> - which has the same format as <filename>postgresql.conf</filename> but should\n> >> - never be edited manually. This file holds settings provided through\n> >> - the <xref linkend=\"sql-altersystem\"/> command. This file is automatically\n> >> - read whenever <filename>postgresql.conf</filename> is, and its settings take\n> >> - effect in the same way. Settings in <filename>postgresql.auto.conf</filename>\n> >> - override those in <filename>postgresql.conf</filename>.\n> >> + which has the same format as <filename>postgresql.conf</filename> but\n> >> + is intended to be edited automatically not manually. This file holds\n> >> + settings provided through the <xref linkend=\"sql-altersystem\"/> command.\n> >> + This file is read whenever <filename>postgresql.conf</filename> is,\n> >> + and its settings take effect in the same way. 
Settings\n> >> + in <filename>postgresql.auto.conf</filename> override those\n> >> + in <filename>postgresql.conf</filename>.\n> >> + </para>\n> >\n> > The above hunk looks fine.\n> >\n> >> + <para>\n> >> + External tools might also\n> >> + modify <filename>postgresql.auto.conf</filename>, typically by appending\n> >> + new settings to the end. It is not recommended to do this while the\n> >> + server is running, since a concurrent <command>ALTER SYSTEM</command>\n> >> + command could overwrite such changes.\n> >> </para>\n> >\n> > Alternatively, or maybe also here, we could say \"note that appending to\n> > the file as a mechanism for setting a new value by an external tool is\n> > acceptable even though it will cause duplicates- PostgreSQL will always\n> > use the last value set and other tools should as well. Duplicates and\n> > comments may be removed when rewriting the file\n> \n> FWIW, as the file is rewritten each time, *all* comments are removed\n> anyway (the first two comment lines in the file with the warning\n> are added when the new version of the file is written).\n\nWhoah- the file is *not* rewritten each time. It's only rewritten each\ntime by *ALTER SYSTEM*, but that is not the only thing that's modifying\nthe file. That mistaken assumption is part of what got us into this\nmess...\n\n> > and parameters may be\n> > lower-cased.\" (istr that last bit being true too but I haven't checked\n> > lately).\n> \n> Ho-hum, they won't be lower-cased, instead currently they just won't be\n> overwritten if they're present in pg.auto.conf, which is a slight\n> eccentricity, but not actually a problem with the current code as the new value\n> will be written last. 
E.g.:\n> \n> $ cat postgresql.auto.conf\n> # Do not edit this file manually!\n> # It will be overwritten by the ALTER SYSTEM command.\n> DEFAULT_TABLESPACE = 'space_1'\n> \n> postgres=# ALTER SYSTEM SET default_tablespace ='pg_default';\n> ALTER SYSTEM\n> \n> $ cat postgresql.auto.conf\n> # Do not edit this file manually!\n> # It will be overwritten by the ALTER SYSTEM command.\n> DEFAULT_TABLESPACE = 'space_1'\n> default_tablespace = 'pg_default'\n> \n> I don't think that's worth worrying about now.\n\nErm, those are duplicates though and we're saying that ALTER SYSTEM\nremoves those... Seems like we should be normalizing the file to be\nconsistent in this regard too.\n\n> My suggestion for the paragaph in question:\n> \n> <para>\n> External tools which need to write configuration settings (e.g. for replication)\n> where it's essential to ensure these are read last (to override versions\n> of these settings present in other configuration files), may append settings to\n> <filename>postgresql.auto.conf</filename>. It is not recommended to do this while\n> the server is running, since a concurrent <command>ALTER SYSTEM</command>\n> command could overwrite such changes. Note that a subsequent\n> <command>ALTER SYSTEM</command> will cause <filename>postgresql.auto.conf</filename>,\n> to be rewritten, removing any duplicate versions of the setting altered, and also\n> any comment lines present.\n> </para>\n\nI dislike the special-casing of ALTER SYSTEM here, where we're basically\nsaying that only ALTER SYSTEM is allowed to do this cleanup and that if\nsuch cleanup is wanted then ALTER SYSTEM must be run.\n\nWhat I was trying to get at is a definition of what transformations are\nallowed and to make it clear that anything using/modifying the file\nneeds to be prepared for and work with those transformations. I don't\nthink we want people assuming that if they don't run ALTER SYSTEM then\nthey can depend on duplicates being preserved and such.. 
and, yes, I\nknow that's a stretch, but if we ever want anything other than ALTER\nSYSTEM to be able to make such changes (and I feel pretty confident that\nwe will...) then we shouldn't document things specifically about when\nthat command runs.\n\n> >> <para>\n> >> The system view\n> >> <link linkend=\"view-pg-file-settings\"><structname>pg_file_settings</structname></link>\n> >> - can be helpful for pre-testing changes to the configuration file, or for\n> >> + can be helpful for pre-testing changes to the configuration files, or for\n> >> diagnosing problems if a <systemitem>SIGHUP</systemitem> signal did not have the\n> >> desired effects.\n> >> </para>\n> >\n> > This hunk looks fine.\n> >\n> > Reviewing https://www.postgresql.org/docs/current/config-setting.html\n> > again, it looks reasonably comprehensive regarding the format of the\n> > file- perhaps we should link to there from the \"external tools might\n> > also modify\" para..? \"Tool authors should review <link> to understand\n> > the structure of postgresql.auto.conf\".\n> \n> This is all on the same page anyway.\n\nAh, ok, fair enough.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 5 Aug 2019 22:16:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Mon, Aug 05, 2019 at 10:16:16PM -0400, Stephen Frost wrote:\n> * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n>> On 8/6/19 9:52 AM, Stephen Frost wrote:> Greetings,\n>>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>>> Concretely, how about the attached?\n>>>>\n>>>> (Digging around in config.sgml, I found that last-one-wins is stated,\n>>>> but only in the context of one include file overriding another.\n>>>> That's not *directly* a statement about what happens within a single\n>>>> file, and it's in a different subsection anyway, so repeating the\n>>>> info in 19.1.2 doesn't seem unreasonable.)\n>>>\n>>> Agreed.\n>> \n>> +1.\n\nI have read the latest patch from Tom and I have a suggestion about\nthis part:\n+ and its settings take effect in the same way. Settings\n+ in <filename>postgresql.auto.conf</filename> override those\n+ in <filename>postgresql.conf</filename>.\n\nIt seems to me that we should mention included files here, as any\nsettings in postgresql.auto.conf override these as well.\n--\nMichael",
"msg_date": "Tue, 6 Aug 2019 14:05:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/6/19 11:16 AM, Stephen Frost wrote:\n > Greetings,\n >\n > * Ian Barwick (ian.barwick@2ndquadrant.com) wrote:\n >> On 8/6/19 9:52 AM, Stephen Frost wrote:> Greetings,\n >>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n >>>>\n >>>> + <para>\n >>>> + External tools might also\n >>>> + modify <filename>postgresql.auto.conf</filename>, typically by appending\n >>>> + new settings to the end. It is not recommended to do this while the\n >>>> + server is running, since a concurrent <command>ALTER SYSTEM</command>\n >>>> + command could overwrite such changes.\n >>>> </para>\n >>>\n >>> Alternatively, or maybe also here, we could say \"note that appending to\n >>> the file as a mechanism for setting a new value by an external tool is\n >>> acceptable even though it will cause duplicates- PostgreSQL will always\n >>> use the last value set and other tools should as well. Duplicates and\n >>> comments may be removed when rewriting the file\n >>\n >> FWIW, as the file is rewritten each time, *all* comments are removed\n >> anyway (the first two comment lines in the file with the warning\n >> are added when the new version of the file is written().\n >\n > Whoah- the file is *not* rewritten each time. It's only rewritten each\n > time by *ALTER SYSTEM*, but that it not the only thing that's modifying\n > the file. That mistaken assumption is part of what got us into this\n > mess...\n\nAh, got it, I thought you were talking about the ALTER SYSTEM behaviour.\n\n >>> and parameters may be\n >>> lower-cased.\" (istr that last bit being true too but I haven't checked\n >>> lately).\n >>\n >> Ho-hum, they won't be lower-cased, instead currently they just won't be\n >> overwritten if they're present in pg.auto.conf, which is a slight\n >> eccentricity, but not actually a problem with the current code as the new value\n >> will be written last. 
E.g.:\n >>\n >> $ cat postgresql.auto.conf\n >> # Do not edit this file manually!\n >> # It will be overwritten by the ALTER SYSTEM command.\n >> DEFAULT_TABLESPACE = 'space_1'\n >>\n >> postgres=# ALTER SYSTEM SET default_tablespace ='pg_default';\n >> ALTER SYSTEM\n >>\n >> $ cat postgresql.auto.conf\n >> # Do not edit this file manually!\n >> # It will be overwritten by the ALTER SYSTEM command.\n >> DEFAULT_TABLESPACE = 'space_1'\n >> default_tablespace = 'pg_default'\n >>\n >> I don't think that's worth worrying about now.\n >\n > Erm, those are duplicates though and we're saying that ALTER SYSTEM\n > removes those... Seems like we should be normalizing the file to be\n > consistent in this regard too.\n\nTrue. (Switches brain on)... Ah yes, with the patch previously provided\nby Tom, it seems to be just a case of replacing \"strcmp\" with \"guc_name_compare\"\nto match the existing string; the name will be rewritten with the value provided\nto ALTER SYSTEM, which will be normalized to lower case anyway.\n\nTweaked version attached.\n\n >> My suggestion for the paragaph in question:\n >>\n >> <para>\n >> External tools which need to write configuration settings (e.g. for replication)\n >> where it's essential to ensure these are read last (to override versions\n >> of these settings present in other configuration files), may append settings to\n >> <filename>postgresql.auto.conf</filename>. It is not recommended to do this while\n >> the server is running, since a concurrent <command>ALTER SYSTEM</command>\n >> command could overwrite such changes. 
Note that a subsequent\n >> <command>ALTER SYSTEM</command> will cause <filename>postgresql.auto.conf</filename>\n >> to be rewritten, removing any duplicate versions of the setting altered, and also\n >> any comment lines present.\n >> </para>\n >\n > I dislike the special-casing of ALTER SYSTEM here, where we're basically\n > saying that only ALTER SYSTEM is allowed to do this cleanup and that if\n > such cleanup is wanted then ALTER SYSTEM must be run.\n\nThis is just saying what ALTER SYSTEM will do, which IMHO we should describe\nsomewhere. Initially when I started working with pg.auto.conf I had\nmy application append a comment line to show where the entries came from,\nbut not having any idea how pg.auto.conf was modified at that point, I was\nwondering why the comment subsequently disappeared. Perusing the source code has\nexplained that for me, but it would be mighty useful to document that.\n\n > What I was trying to get at is a definition of what transformations are\n > allowed and to make it clear that anything using/modifying the file\n > needs to be prepared for and work with those transformations. I don't\n > think we want people assuming that if they don't run ALTER SYSTEM then\n > they can depend on duplicates being preserved and such..\n\nOK, then we should be saying something like:\n- pg.auto.conf may be rewritten at any point and duplicates/comments removed\n- the rewrite will occur whenever ALTER SYSTEM is run, removing duplicates\n of the parameter being modified and any comments\n- external utilities may also rewrite it; they may, if they wish, remove\n duplicates and comments\n\n > and, yes, I\n > know that's a stretch, but if we ever want anything other than ALTER\n > SYSTEM to be able to make such changes (and I feel pretty confident that\n > we will...) then we shouldn't document things specifically about when\n > that command runs.\n\nBut at this point ALTER SYSTEM is the only thing which is known to modify
If in a future release something else is\nadded, the documentation can be updated as appropriate.\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 6 Aug 2019 14:53:10 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> On 8/6/19 11:16 AM, Stephen Frost wrote:\n>>> Erm, those are duplicates though and we're saying that ALTER SYSTEM\n>>> removes those... Seems like we should be normalizing the file to be\n>>> consistent in this regard too.\n\n> True. (Switches brain on)... Ah yes, with the patch previously provided\n> by Tom, it seems to be just a case of replacing \"strcmp\" with \"guc_name_compare\"\n> to match the existing string; the name will be rewritten with the value provided\n> to ALTER SYSTEM, which will be normalized to lower case anyway.\n\nGood catch.\n\n>>> I dislike the special-casing of ALTER SYSTEM here, where we're basically\n>>> saying that only ALTER SYSTEM is allowed to do this cleanup and that if\n>>> such cleanup is wanted then ALTER SYSTEM must be run.\n\n> This is just saying what ALTER SYSTEM will do, which IMHO we should describe\n> somewhere. Initially when I stated working with pg.auto.conf I had\n> my application append a comment line to show where the entries came from,\n> but not having any idea how pg.auto.conf was modified at that point, I was\n> wondering why the comment subsequently disappeared. Perusing the source code has\n> explained that for me, but would be mighty useful to document that.\n\nI feel fairly resistant to making the config.sgml explanation much longer\nthan what I wrote. That chapter is material that every Postgres DBA has\nto absorb, so we should *not* be burdening it with stuff that few people\nneed to know.\n\nPerhaps we could put some of these details into the Notes section of the\nALTER SYSTEM ref page. But I wonder how much of this is needed at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Aug 2019 15:57:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Ian Barwick <ian.barwick@2ndquadrant.com> writes:\n> >>> I dislike the special-casing of ALTER SYSTEM here, where we're basically\n> >>> saying that only ALTER SYSTEM is allowed to do this cleanup and that if\n> >>> such cleanup is wanted then ALTER SYSTEM must be run.\n> \n> > This is just saying what ALTER SYSTEM will do, which IMHO we should describe\n> > somewhere. Initially when I stated working with pg.auto.conf I had\n> > my application append a comment line to show where the entries came from,\n> > but not having any idea how pg.auto.conf was modified at that point, I was\n> > wondering why the comment subsequently disappeared. Perusing the source code has\n> > explained that for me, but would be mighty useful to document that.\n> \n> I feel fairly resistant to making the config.sgml explanation much longer\n> than what I wrote. That chapter is material that every Postgres DBA has\n> to absorb, so we should *not* be burdening it with stuff that few people\n> need to know.\n\nSure, I agree with that.\n\n> Perhaps we could put some of these details into the Notes section of the\n> ALTER SYSTEM ref page. But I wonder how much of this is needed at all.\n\nI'd be alright with that too, but I'd be just as fine with even a README\nor something that we feel other hackers and external tool developers\nwould be likely to find. I agree that all of this isn't something that\nyour run-of-the-mill DBA needs to know, but they are things that I'm\nsure external tool authors will care about (including myself, David S,\nprobably the other backup/restore tool maintainers, and at least the\nauthor of pg_conftool, presumably).\n\nOf course, for my 2c anyway, the \"low level backup API\" is in the same\nrealm as this stuff (though it's missing important things like \"what\nmagic exit code do you return from archive command to make PG give up\ninstead of retry\"...) 
and we've got a whole ton of text in our docs\nabout that.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 6 Aug 2019 17:43:55 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Perhaps we could put some of these details into the Notes section of the\n>> ALTER SYSTEM ref page. But I wonder how much of this is needed at all.\n\n> I'd be alright with that too, but I'd be just as fine with even a README\n> or something that we feel other hackers and external tool developers\n> would be likely to find. I agree that all of this isn't something that\n> your run-of-the-mill DBA needs to know, but they are things that I'm\n> sure external tool authors will care about (including myself, David S,\n> probably the other backup/restore tool maintainers, and at least the\n> author of pg_conftool, presumably).\n\nIn hopes of moving this along, I've pushed Ian's last code change,\nas there seems to be no real argument about that anymore.\n\nAs for the doc changes, how about the attached revision of what\nI wrote previously? It gives some passing mention to what ALTER\nSYSTEM will do, without belaboring it or going into things that\nare really implementation details.\n\nAs an example of the sort of implementation detail that I *don't*\nwant to document, I invite you to experiment with the difference\nbetween\n\tALTER SYSTEM SET TimeZone = 'America/New_York';\n\tALTER SYSTEM SET \"TimeZone\" = 'America/New_York';\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 14 Aug 2019 15:15:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Perhaps we could put some of these details into the Notes section of the\n> >> ALTER SYSTEM ref page. But I wonder how much of this is needed at all.\n> \n> > I'd be alright with that too, but I'd be just as fine with even a README\n> > or something that we feel other hackers and external tool developers\n> > would be likely to find. I agree that all of this isn't something that\n> > your run-of-the-mill DBA needs to know, but they are things that I'm\n> > sure external tool authors will care about (including myself, David S,\n> > probably the other backup/restore tool maintainers, and at least the\n> > author of pg_conftool, presumably).\n> \n> In hopes of moving this along, I've pushed Ian's last code change,\n> as there seems to be no real argument about that anymore.\n> \n> As for the doc changes, how about the attached revision of what\n> I wrote previously? It gives some passing mention to what ALTER\n> SYSTEM will do, without belaboring it or going into things that\n> are really implementation details.\n\nIt's certainly better than what we have now.\n\n> As an example of the sort of implementation detail that I *don't*\n> want to document, I invite you to experiment with the difference\n> between\n> \tALTER SYSTEM SET TimeZone = 'America/New_York';\n> \tALTER SYSTEM SET \"TimeZone\" = 'America/New_York';\n\nImplementation details and file formats / acceptable transformations\nare naturally different things- a given implementation may sort things\none way or another but if there's no requirement that the file be sorted\nthen that's just fine and can be an implementation detail possibly based\naround how duplicates are dealt with.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 14 Aug 2019 17:15:20 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> In hopes of moving this along, I've pushed Ian's last code change,\n>> as there seems to be no real argument about that anymore.\n>> \n>> As for the doc changes, how about the attached revision of what\n>> I wrote previously? It gives some passing mention to what ALTER\n>> SYSTEM will do, without belaboring it or going into things that\n>> are really implementation details.\n\n> It's certainly better than what we have now.\n\nHearing no other comments, I've pushed that and marked this issue closed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Aug 2019 11:22:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On 8/16/19 12:22 AM, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n>> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>>> In hopes of moving this along, I've pushed Ian's last code change,\n>>> as there seems to be no real argument about that anymore.\n>>>\n>>> As for the doc changes, how about the attached revision of what\n>>> I wrote previously? It gives some passing mention to what ALTER\n>>> SYSTEM will do, without belaboring it or going into things that\n>>> are really implementation details.\n> \n>> It's certainly better than what we have now.\n> \n> Hearing no other comments, I've pushed that and marked this issue closed.\n\nThanks!\n\n\nRegards\n\nIan Barwick\n\n\n-- \n Ian Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 21 Aug 2019 10:25:40 +0900",
"msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> + <para>\n> + External tools may also\n> + modify <filename>postgresql.auto.conf</filename>. It is not\n> + recommended to do this while the server is running, since a\n> + concurrent <command>ALTER SYSTEM</command> command could overwrite\n> + such changes. Such tools might simply append new settings to the end,\n> + or they might choose to remove duplicate settings and/or comments\n> + (as <command>ALTER SYSTEM</command> will).\n> </para>\n\nWhile I don't know that we necessairly have to change this langauge, I\ndid want to point out for folks who look at these things and consider\nthe challenges of this change that simply appending, when it comes to\nthings like backup tools and such, is just not going to work, since\nyou'll run into things like this:\n\nFATAL: multiple recovery targets specified\nDETAIL: At most one of recovery_target, recovery_target_lsn, recovery_target_name, recovery_target_time, recovery_target_xid may be set.\n\nThat's from simply doing a backup, restore with one recovery target,\nthen back that up and restore with a different recovery target.\n\nFurther there's the issue that if you specify a recovery target for the\nfirst restore and then *don't* have one for the second restore, then\nyou'll still end up trying to restore to the first point... So,\nbasically, appending just isn't actually practical for what is probably\nthe most common use-case these days for an external tool to go modify\npostgresql.auto.conf.\n\nAnd so, every backup tool author that lets a user specify a target\nduring the restore to generate the postgresql.auto.conf with (formerly\nrecovery.conf) is going to have to write enough of a parser for PG\nconfig files to be able to find and comment or remove any recovery\ntarget options from postgresql.auto.conf.\n\nThat'd be the kind of thing that I was really hoping we could provide a\ncommon library for.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 21 Aug 2019 12:25:22 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "On Wed, Aug 21, 2019 at 12:25:22PM -0400, Stephen Frost wrote:\n> That'd be the kind of thing that I was really hoping we could provide a\n> common library for.\n\nIndeed. There could be many use cases for that. Most of the parsing\nlogic is in guc-file.l. There is little dependency to elog() and\nthere is some handling for backend-side fd and their cleanup, but that\nlooks doable to me without too many ifdef FRONTEND.\n--\nMichael",
"msg_date": "Thu, 22 Aug 2019 12:13:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "Buch (buchen sollst du suchen), Buchhaltung is great. Thanks for the\nwriting.\n\nStephen Frost <sfrost@snowman.net> schrieb am Mo., 5. Aug. 2019, 21:02:\n\n> Greetings,\n>\n> On Mon, Aug 5, 2019 at 14:43 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>> > On Mon, Aug 5, 2019 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >> I think Stephen is not being unreasonable to suggest that we need some\n>> >> documentation about what external tools may safely do to pg.auto.conf.\n>> >> So somebody's got to write that.\n>>\n>> > I mean, really? We're going to document that if you want to add a\n>> > setting to the file, you can just append it, but that if you find\n>> > yourself desirous of appending so many settings that the entire disk\n>> > will fill up, you should maybe reconsider? Perhaps I'm being mean\n>> > here, but that seems like it's straight out of the\n>> > blinding-flashes-of-the-obvious department.\n>>\n>> I don't think we need to go on about it at great length, but it seems\n>> to me that it'd be reasonable to point out that (a) you'd be well\n>> advised not to touch the file while the postmaster is up, and (b)\n>> last setting wins. Those things are equally true of postgresql.conf\n>> of course, but I don't recall whether they're already documented.\n>\n>\n> Folks certainly modify postgresql.conf while the postmaster is running\n> pretty routinely, and we expect them to which is why we have a reload\n> option, so I don’t think we can say that the auto.conf and postgresql.conf\n> are to be handled in the same way.\n>\n> Last setting wins, duplicates should be ignored and may be removed,\n> comments should be ignored and may be removed, and appending to the file is\n> acceptable for modifying a value. 
I’m not sure how much we really document\n> the structure of the file itself offhand- back when users were editing it\n> we could probably be a bit more fast and loose with it, but now that we\n> have different parts of the system modifying it along with external tools\n> doing so, we should probably write it down a bit more clearly/precisely.\n>\n> I suspect the authors of pg_conftool would appreciate that too, so they\n> could make sure that they aren’t doing anything unexpected or incorrect.\n>\n> Thanks,\n>\n> Stephen\n>\n>>\n",
"msg_date": "Mon, 29 Nov 2021 17:39:10 +0100",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
},
{
"msg_contents": "To give you another thanks: IT is compatible with discapacity. Great\n\nSascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 29. Nov. 2021, 17:39:\n\n> Buch (buchen sollst du suchen), Buchhaltung is great. Thanks for the\n> writing.\n>\n> Stephen Frost <sfrost@snowman.net> schrieb am Mo., 5. Aug. 2019, 21:02:\n>\n>> Greetings,\n>>\n>> On Mon, Aug 5, 2019 at 14:43 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> > On Mon, Aug 5, 2019 at 2:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> >> I think Stephen is not being unreasonable to suggest that we need some\n>>> >> documentation about what external tools may safely do to pg.auto.conf.\n>>> >> So somebody's got to write that.\n>>>\n>>> > I mean, really? We're going to document that if you want to add a\n>>> > setting to the file, you can just append it, but that if you find\n>>> > yourself desirous of appending so many settings that the entire disk\n>>> > will fill up, you should maybe reconsider? Perhaps I'm being mean\n>>> > here, but that seems like it's straight out of the\n>>> > blinding-flashes-of-the-obvious department.\n>>>\n>>> I don't think we need to go on about it at great length, but it seems\n>>> to me that it'd be reasonable to point out that (a) you'd be well\n>>> advised not to touch the file while the postmaster is up, and (b)\n>>> last setting wins. Those things are equally true of postgresql.conf\n>>> of course, but I don't recall whether they're already documented.\n>>\n>>\n>> Folks certainly modify postgresql.conf while the postmaster is running\n>> pretty routinely, and we expect them to which is why we have a reload\n>> option, so I don’t think we can say that the auto.conf and postgresql.conf\n>> are to be handled in the same way.\n>>\n>> Last setting wins, duplicates should be ignored and may be removed,\n>> comments should be ignored and may be removed, and appending to the file is\n>> acceptable for modifying a value. 
I’m not sure how much we really document\n>> the structure of the file itself offhand- back when users were editing it\n>> we could probably be a bit more fast and loose with it, but now that we\n>> have different parts of the system modifying it along with external tools\n>> doing so, we should probably write it down a bit more clearly/precisely.\n>>\n>> I suspect the authors of pg_conftool would appreciate that too, so they\n>> could make sure that they aren’t doing anything unexpected or incorrect.\n>>\n>> Thanks,\n>>\n>> Stephen\n>>\n>>>",
"msg_date": "Mon, 29 Nov 2021 17:41:16 +0100",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Stop ALTER SYSTEM from making bad assumptions"
}
] |
[
{
"msg_contents": "Hi all,\r\n\r\n\r\nWhen the number of potential synchronous standbys is smaller than num_sync, such as 'FIRST 3 (1,2)', 'ANY 4 (1,2,3)' in the synchronous_standby_names, the processes will wait for synchronous replication forever. \r\n \r\n\r\n \r\nObviously, it's not expected. I think return false and a error message may be better. And attached is a patch that implements the simple check. \r\n\r\n\r\n\r\n\r\n \r\n\r\n \r\n\r\nWhat do you think about this?\r\n\r\n\r\n\r\n\r\n--\r\nZhang Wenjie",
"msg_date": "Fri, 14 Jun 2019 19:00:09 +0800",
"msg_from": "\"=?gb18030?B?1cXOxL3c?=\" <757634191@qq.com>",
"msg_from_op": true,
"msg_subject": "Check the number of potential synchronous standbys"
},
{
"msg_contents": "\"=?gb18030?B?1cXOxL3c?=\" <757634191@qq.com> writes:\n> When the number of potential synchronous standbys is smaller than num_sync, such as 'FIRST 3 (1,2)', 'ANY 4 (1,2,3)' in the synchronous_standby_names, the processes will wait for synchronous replication forever. \n> Obviously, it's not expected. I think return false and a error message may be better. And attached is a patch that implements the simple check. \n\nWell, it's not *that* simple; this patch rejects cases like \"ANY 2(*)\"\nwhich need to be accepted. That causes the src/test/recovery tests\nto fail (you should have tried check-world).\n\nI also observe that there's a test case in 007_sync_rep.pl which is\nactually exercising the case you want to reject:\n\n# Check that sync_state of each standby is determined correctly\n# when num_sync exceeds the number of names of potential sync standbys\n# specified in synchronous_standby_names.\ntest_sync_state(\n\t$node_master, qq(standby1|0|async\nstandby2|4|sync\nstandby3|3|sync\nstandby4|1|sync),\n\t'num_sync exceeds the num of potential sync standbys',\n\t'6(standby4,standby0,standby3,standby2)');\n\nSo it can't be said that nobody thought about this at all.\n\nNow, I'm not convinced that this represents a useful use-case as-is.\nHowever, because we can't know how many standbys may match \"*\",\nit's clear that the code has to do something other than just\nabort when the situation happens. Conceivably we could fail at\nruntime (not GUC parse time) if the number of required standbys\nexceeds the number available, rather than waiting indefinitely.\nHowever, if standbys can come online dynamically, a wait involving\n\"*\" might be satisfiable after awhile even if it isn't immediately.\n\nOn the whole, given the fuzziness around \"*\", I'm not sure that\nit's easy to make this much better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Aug 2019 16:53:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check the number of potential synchronous standbys"
}
] |
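The failure mode discussed in the thread above — a `num_sync` larger than the number of explicitly listed standbys can never be satisfied, yet `*` makes the candidate count unknowable at parse time — can be illustrated with a small standalone sketch. This is not PostgreSQL's actual GUC parser (which lives in `syncrep_gram.y`); it is a hypothetical Python check over a simplified subset of the `synchronous_standby_names` syntax:

```python
import re

def check_sync_names(value):
    """Parse a simplified synchronous_standby_names value of the form
    'FIRST n (a, b, ...)' or 'ANY n (a, b, ...)' and report whether the
    requested num_sync could ever be met by the listed names.
    Returns (num_sync, names, satisfiable)."""
    m = re.match(r'\s*(FIRST|ANY)\s+(\d+)\s*\(([^)]*)\)\s*$', value, re.IGNORECASE)
    if not m:
        raise ValueError('unsupported syntax: %r' % value)
    num_sync = int(m.group(2))
    names = [n.strip() for n in m.group(3).split(',') if n.strip()]
    # '*' matches any standby name, so the number of potential candidates
    # is unknowable at parse time -- such a value must never be rejected.
    if '*' in names:
        return num_sync, names, True
    return num_sync, names, num_sync <= len(names)
```

For example, `check_sync_names('ANY 4 (s1,s2,s3)')` reports `satisfiable=False` (the shape of configuration Tom's reply notes that `007_sync_rep.pl` deliberately exercises), while `'ANY 2 (*)'` must always be accepted — which is exactly why a simple parse-time rejection, as in the original patch, cannot work.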
[
{
"msg_contents": "Index AM's can cache stuff in RelationData->rd_amcache. In the zedstore \ntable AM we've been hacking on, I'd like to also use rd_amcache to cache \nsome information, but that's not currently possible, because rd_amcache \ncan only be used by index AMs, not table AMs.\n\nAttached patch allows rd_amcache to also be used by table AMs.\n\nWhile working on this, I noticed that the memory management of relcache \nentries is quite complicated. Most stuff that's part of a relcache entry \nis allocated in CacheMemoryContext. But some fields have a dedicated \nmemory context to hold them, like rd_rulescxt for rules and rd_pdcxt for \npartition information. And indexes have rd_indexcxt to hold all kinds of \nindex support info.\n\nIn the patch, I documented that rd_amcache must be allocated in \nCacheMemoryContext, or in rd_indexcxt if it's an index. It works, but \nit's a bit weird. It would nice to have one memory context in every \nrelcache entry, to hold all the stuff related to it, including \nrd_amcache. In other words, it would be nice if we had \"rd_indexcxt\" for \ntables, too, not just indexes. That would allow tracking memory usage \nmore accurately, if you're debugging an out of memory situation for example.\n\nHowever, the special contexts like rd_rulescxt and rd_pdcxt would still \nbe needed, because of the way RelationClearRelation preserves them, when \nrebuilding the relcache entry for an open relation. So I'm not sure how \nmuch it would really simplify things. Also, there's some overhead for \nhaving extra memory contexts, and some people already complain that the \nrelcache uses too much memory.\n\nAlternatively, we could document that rd_amcache should always be \nallocated in CacheMemoryContext, even for indexes. That would make the \nrule for pg_amcache straightforward. There's no particular reason why \nrd_amcache has to be allocated in rd_indexcxt, except for how it's \naccounted for in memory context dumps.\n\n- Heikki",
"msg_date": "Fri, 14 Jun 2019 18:20:29 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Allow table AM's to cache stuff in relcache"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> Index AM's can cache stuff in RelationData->rd_amcache. In the zedstore \n> table AM we've been hacking on, I'd like to also use rd_amcache to cache \n> some information, but that's not currently possible, because rd_amcache \n> can only be used by index AMs, not table AMs.\n> Attached patch allows rd_amcache to also be used by table AMs.\n\nSeems reasonable.\n\n> In the patch, I documented that rd_amcache must be allocated in \n> CacheMemoryContext, or in rd_indexcxt if it's an index. It works, but \n> it's a bit weird.\n\nGiven the way the patch is implemented, it doesn't really matter which\ncontext it's in, does it? The retail pfree is inessential but also\nharmless, if rd_amcache is in rd_indexcxt. So we could take out the\n\"must\". I think it's slightly preferable to use rd_indexcxt if available,\nto reduce the amount of loose junk in CacheMemoryContext.\n\n> It would nice to have one memory context in every \n> relcache entry, to hold all the stuff related to it, including \n> rd_amcache. In other words, it would be nice if we had \"rd_indexcxt\" for \n> tables, too, not just indexes. That would allow tracking memory usage \n> more accurately, if you're debugging an out of memory situation for example.\n\nWe had some discussion related to that in the \"hyrax\nvs. RelationBuildPartitionDesc\" thread. I'm not quite sure where\nwe'll settle on that, but some redesign seems inevitable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 11:40:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow table AM's to cache stuff in relcache"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > Index AM's can cache stuff in RelationData->rd_amcache. In the zedstore\n> > table AM we've been hacking on, I'd like to also use rd_amcache to cache\n> > some information, but that's not currently possible, because rd_amcache\n> > can only be used by index AMs, not table AMs.\n> > Attached patch allows rd_amcache to also be used by table AMs.\n>\n> Seems reasonable.\n\n+1.\n\n> > In the patch, I documented that rd_amcache must be allocated in\n> > CacheMemoryContext, or in rd_indexcxt if it's an index. It works, but\n> > it's a bit weird.\n>\n> Given the way the patch is implemented, it doesn't really matter which\n> context it's in, does it? The retail pfree is inessential but also\n> harmless, if rd_amcache is in rd_indexcxt. So we could take out the\n> \"must\". I think it's slightly preferable to use rd_indexcxt if available,\n> to reduce the amount of loose junk in CacheMemoryContext.\n\nI agree that for indexes the context used won't make much difference.\nBut IMHO avoiding some bloat in CacheMemoryContext is a good enough\nreason to document using rd_indexcxt when available.\n\n> > It would nice to have one memory context in every\n> > relcache entry, to hold all the stuff related to it, including\n> > rd_amcache. In other words, it would be nice if we had \"rd_indexcxt\" for\n> > tables, too, not just indexes. That would allow tracking memory usage\n> > more accurately, if you're debugging an out of memory situation for example.\n>\n> We had some discussion related to that in the \"hyrax\n> vs. RelationBuildPartitionDesc\" thread. I'm not quite sure where\n> we'll settle on that, but some redesign seems inevitable.\n\nThere wasn't any progress on this since last month, and this patch\nwon't make the situation any worse. 
I'll mark this patch as ready for\ncommitter, as it may save some time for people working on custom table\nAM.\n\n\n",
"msg_date": "Fri, 12 Jul 2019 15:07:54 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow table AM's to cache stuff in relcache"
},
{
"msg_contents": "On 12/07/2019 16:07, Julien Rouhaud wrote:\n> On Fri, Jun 14, 2019 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>>> In the patch, I documented that rd_amcache must be allocated in\n>>> CacheMemoryContext, or in rd_indexcxt if it's an index. It works, but\n>>> it's a bit weird.\n>>\n>> Given the way the patch is implemented, it doesn't really matter which\n>> context it's in, does it? The retail pfree is inessential but also\n>> harmless, if rd_amcache is in rd_indexcxt. So we could take out the\n>> \"must\". I think it's slightly preferable to use rd_indexcxt if available,\n>> to reduce the amount of loose junk in CacheMemoryContext.\n> \n> I agree that for indexes the context used won't make much difference.\n> But IMHO avoiding some bloat in CacheMemoryContext is a good enough\n> reason to document using rd_indexcxt when available.\n\nRight, it doesn't really matter whether an index AM uses \nCacheMemoryContext or rd_indexctx, the code works either way. I think \nit's better to give clear advice though, one way or another. Otherwise, \ndifferent index AM's can end up doing it differently for no particular \nreason, which seems confusing.\n\n>>> It would nice to have one memory context in every\n>>> relcache entry, to hold all the stuff related to it, including\n>>> rd_amcache. In other words, it would be nice if we had \"rd_indexcxt\" for\n>>> tables, too, not just indexes. That would allow tracking memory usage\n>>> more accurately, if you're debugging an out of memory situation for example.\n>>\n>> We had some discussion related to that in the \"hyrax\n>> vs. RelationBuildPartitionDesc\" thread. I'm not quite sure where\n>> we'll settle on that, but some redesign seems inevitable.\n> \n> There wasn't any progress on this since last month, and this patch\n> won't make the situation any worse. 
I'll mark this patch as ready for\n> committer, as it may save some time for people working on custom table\n> AM.\n\nPushed, thanks for the review! As Tom noted, some redesign here seems \ninevitable, but this patch shouldn't get in the way of that, so no need \nto hold this back for the redesign.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 30 Jul 2019 22:19:34 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Allow table AM's to cache stuff in relcache"
}
] |
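The `rd_amcache` contract the thread above extends to table AMs — one opaque per-relation slot that the access method fills lazily and that the relcache machinery frees wholesale on invalidation — can be sketched outside PostgreSQL like this. `RelCacheEntry`, `get_am_metadata`, and `invalidate` are hypothetical stand-ins for illustration, not real backend APIs:

```python
class RelCacheEntry:
    """Toy analogue of RelationData: carries one opaque amcache slot
    that is meaningful only to the relation's access method."""
    def __init__(self, relname):
        self.relname = relname
        self.rd_amcache = None  # opaque to the cache machinery itself

def get_am_metadata(rel, build_fn):
    # AM-side accessor: build the cached data once, reuse it until
    # the relcache entry is invalidated.
    if rel.rd_amcache is None:
        rel.rd_amcache = build_fn(rel)
    return rel.rd_amcache

def invalidate(rel):
    # Relcache-side: discard whatever the AM cached; the AM simply
    # rebuilds it on the next access.
    rel.rd_amcache = None
```

The point of the design (and of the memory-context discussion in the thread) is that the relcache never interprets the slot's contents — it only needs to know how to throw them away, which is why where the data is allocated matters more than what it contains.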
[
{
"msg_contents": "https://www.postgresql.org/docs/12/sql-createstatistics.html contains\nthis example command:\n\nCREATE STATISTICS s2 (mcv) ON (a, b) FROM t2;\n\nBut that produces:\n\npsql: ERROR: only simple column references are allowed in CREATE STATISTICS\n\nI think the parentheses around (a, b) just need to be removed.\n\nP.S. I think the fact that we print \"psql: \" before the ERROR here is\nuseless clutter. We didn't do that in v11 and prior and I think we\nshould kill it with fire.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jun 2019 15:23:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "CREATE STATISTICS documentation bug"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> P.S. I think the fact that we print \"psql: \" before the ERROR here is\n> useless clutter. We didn't do that in v11 and prior and I think we\n> should kill it with fire.\n\nAgreed, particularly seeing that the error is not originating with\npsql; it's just passing it on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 15:26:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
},
{
"msg_contents": "On 2019-Jun-14, Tom Lane wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > P.S. I think the fact that we print \"psql: \" before the ERROR here is\n> > useless clutter. We didn't do that in v11 and prior and I think we\n> > should kill it with fire.\n> \n> Agreed, particularly seeing that the error is not originating with\n> psql; it's just passing it on.\n\n+1\n\nProposal: each program declares at startup whether it wants the program\nname prefix or not.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 16:25:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Jun-14, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> P.S. I think the fact that we print \"psql: \" before the ERROR here is\n>>> useless clutter. We didn't do that in v11 and prior and I think we\n>>> should kill it with fire.\n\n>> Agreed, particularly seeing that the error is not originating with\n>> psql; it's just passing it on.\n\n> +1\n\n> Proposal: each program declares at startup whether it wants the program\n> name prefix or not.\n\nWell, to clarify: I think it's reasonable to include \"psql: \" if the\nmessage is originating in psql. So I don't think your idea quite\ndoes what we want.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 16:48:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
},
{
"msg_contents": "On 2019-Jun-14, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> > Proposal: each program declares at startup whether it wants the program\n> > name prefix or not.\n> \n> Well, to clarify: I think it's reasonable to include \"psql: \" if the\n> message is originating in psql. So I don't think your idea quite\n> does what we want.\n\nHmm, it doesn't.\n\nMaybe the error reporting API needs a bit of a refinement to suppress\nthe prefix for specific error callsites, then?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 14 Jun 2019 17:04:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 03:23:29PM -0400, Robert Haas wrote:\n>https://www.postgresql.org/docs/12/sql-createstatistics.html contains\n>this example command:\n>\n>CREATE STATISTICS s2 (mcv) ON (a, b) FROM t2;\n>\n>But that produces:\n>\n>psql: ERROR: only simple column references are allowed in CREATE STATISTICS\n>\n>I think the parentheses around (a, b) just need to be removed.\n>\n>P.S. I think the fact that we print \"psql: \" before the ERROR here is\n>useless clutter. We didn't do that in v11 and prior and I think we\n>should kill it with fire.\n>\n\nI've pushed a fix for the docs issue.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 16 Jun 2019 01:22:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 7:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I've pushed a fix for the docs issue.\n\nThanks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 17 Jun 2019 08:01:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
},
{
"msg_contents": "On 2019-06-14 23:04, Alvaro Herrera wrote:\n> On 2019-Jun-14, Tom Lane wrote:\n> \n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> \n>>> Proposal: each program declares at startup whether it wants the program\n>>> name prefix or not.\n>>\n>> Well, to clarify: I think it's reasonable to include \"psql: \" if the\n>> message is originating in psql. So I don't think your idea quite\n>> does what we want.\n> \n> Hmm, it doesn't.\n> \n> Maybe the error reporting API needs a bit of a refinement to suppress\n> the prefix for specific error callsites, then?\n\nThis was an oversight and has been fixed. (It was masked if you had a\n.psqlrc, which is why I never saw it.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 6 Jul 2019 22:59:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: CREATE STATISTICS documentation bug"
}
] |
[
{
"msg_contents": "I've committed first-draft release notes for next week's\nreleases at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0995cefa74510ee0e38d1bf095b2eef2c1ea37c4\n\nPlease send any review comments by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 14 Jun 2019 16:58:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Draft back-branch release notes are up for review"
},
{
"msg_contents": "On Fri, Jun 14, 2019 at 04:58:47PM -0400, Tom Lane wrote:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0995cefa74510ee0e38d1bf095b2eef2c1ea37c4\n\n> +<!--\n> +Author: Peter Geoghegan <pg@bowt.ie>\n> +Branch: master [9b42e7137] 2019-05-13 10:27:59 -0700\n> +Branch: REL_11_STABLE [bf78f50ba] 2019-05-13 10:27:57 -0700\n> +-->\n> + <para>\n> + Avoid corruption of a btree index in the unlikely case that a failure\n> + occurs during key truncation during a page split (Peter Geoghegan)\n> + </para>\n\nTo me, this text implies a cautious DBA should amcheck every index. Reading\nthe thread[1], I no longer think that. It's enough to monitor that VACUUM\ndoesn't start failing persistently on any index. I suggest replacing this\nrelease note text with something like the following:\n\n Avoid writing erroneous btree index data that does not change query results\n but causes VACUUM to abort with \"failed to re-find parent key\". Affected\n indexes are rare; REINDEX fixes them.\n\n(I removed \"key truncation during a page split\" as being too technical for\nrelease notes.)\n\n[1] https://postgr.es/m/flat/CAH2-WzkcWT_-NH7EeL=Az4efg0KCV+wArygW8zKB=+HoP=VWMw@mail.gmail.com \n\n\n",
"msg_date": "Sat, 15 Jun 2019 20:39:11 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up for review"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 1:39 PM Noah Misch <noah@leadboat.com> wrote:\n> To me, this text implies a cautious DBA should amcheck every index. Reading\n> the thread[1], I no longer think that. It's enough to monitor that VACUUM\n> doesn't start failing persistently on any index. I suggest replacing this\n> release note text with something like the following:\n>\n> Avoid writing erroneous btree index data that does not change query results\n> but causes VACUUM to abort with \"failed to re-find parent key\". Affected\n> indexes are rare; REINDEX fixes them.\n>\n> (I removed \"key truncation during a page split\" as being too technical for\n> release notes.)\n\nI agree that this isn't terribly significant in general. Your proposed\nwording seems better than what we have now, but a reference to INCLUDE\nindexes also seems like a good idea. They are the only type of index\nthat could possibly have the issue with page deletion/VACUUM becoming\nconfused. Even then, the risk seems minor, because there has to be an\nOOM at precisely the wrong point.\n\nIf there was any kind of _bt_split() OOM in 11.3 that involved a\nnon-INCLUDE B-Tree index, then the OOM could only occur when we\nallocate a temp page buffer. I verified that this causes no\nsignificant issue for VACUUM. It is best avoided, since we still\n\"leak\" the new page/buffer, although that is almost harmless.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 15 Jun 2019 14:11:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up for review"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 2:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Jun 15, 2019 at 1:39 PM Noah Misch <noah@leadboat.com> wrote:\n> > To me, this text implies a cautious DBA should amcheck every index. Reading\n> > the thread[1], I no longer think that. It's enough to monitor that VACUUM\n> > doesn't start failing persistently on any index. I suggest replacing this\n> > release note text with something like the following:\n\nFWIW, amcheck won't help here. It can only access pages through its\nbreadth-first search, and so will not land on any \"leaked\" page (i.e.\npage that has no link to the tree). Ideally, amcheck would notice that\nit hasn't visited certain blocks, and then inspect the blocks/pages in\na separate pass, but that doesn't happen right now.\n\nAs you know, VACUUM can find leaked blocks/pages because nbtree VACUUM\nhas an optimization that allows it to access them in sequential order.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 15 Jun 2019 14:35:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up for review"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 02:11:41PM -0700, Peter Geoghegan wrote:\n> On Sat, Jun 15, 2019 at 1:39 PM Noah Misch <noah@leadboat.com> wrote:\n> > To me, this text implies a cautious DBA should amcheck every index. Reading\n> > the thread[1], I no longer think that. It's enough to monitor that VACUUM\n> > doesn't start failing persistently on any index. I suggest replacing this\n> > release note text with something like the following:\n> >\n> > Avoid writing erroneous btree index data that does not change query results\n> > but causes VACUUM to abort with \"failed to re-find parent key\". Affected\n> > indexes are rare; REINDEX fixes them.\n> >\n> > (I removed \"key truncation during a page split\" as being too technical for\n> > release notes.)\n> \n> I agree that this isn't terribly significant in general. Your proposed\n> wording seems better than what we have now, but a reference to INCLUDE\n> indexes also seems like a good idea. They are the only type of index\n> that could possibly have the issue with page deletion/VACUUM becoming\n> confused.\n\nIf true, that's important to mention, yes.\n\n\n",
"msg_date": "Sat, 15 Jun 2019 14:42:50 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up for review"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Jun 15, 2019 at 02:11:41PM -0700, Peter Geoghegan wrote:\n>> I agree that this isn't terribly significant in general. Your proposed\n>> wording seems better than what we have now, but a reference to INCLUDE\n>> indexes also seems like a good idea. They are the only type of index\n>> that could possibly have the issue with page deletion/VACUUM becoming\n>> confused.\n\n> If true, that's important to mention, yes.\n\nThanks for the input, guys. What do you think of\n\n Avoid writing an invalid empty btree index page in the unlikely case\n that a failure occurs while processing INCLUDEd columns during a page\n split (Peter Geoghegan)\n\n The invalid page would not affect normal index operations, but it\n might cause failures in subsequent VACUUMs. If that has happened to\n one of your indexes, recover by reindexing the index.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 15 Jun 2019 18:05:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Draft back-branch release notes are up for review"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 3:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thanks for the input, guys. What do you think of\n>\n> Avoid writing an invalid empty btree index page in the unlikely case\n> that a failure occurs while processing INCLUDEd columns during a page\n> split (Peter Geoghegan)\n>\n> The invalid page would not affect normal index operations, but it\n> might cause failures in subsequent VACUUMs. If that has happened to\n> one of your indexes, recover by reindexing the index.\n\nThat seems perfect.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 15 Jun 2019 15:12:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up for review"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 06:05:00PM -0400, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > On Sat, Jun 15, 2019 at 02:11:41PM -0700, Peter Geoghegan wrote:\n> >> I agree that this isn't terribly significant in general. Your proposed\n> >> wording seems better than what we have now, but a reference to INCLUDE\n> >> indexes also seems like a good idea. They are the only type of index\n> >> that could possibly have the issue with page deletion/VACUUM becoming\n> >> confused.\n> \n> > If true, that's important to mention, yes.\n> \n> Thanks for the input, guys. What do you think of\n> \n> Avoid writing an invalid empty btree index page in the unlikely case\n> that a failure occurs while processing INCLUDEd columns during a page\n> split (Peter Geoghegan)\n> \n> The invalid page would not affect normal index operations, but it\n> might cause failures in subsequent VACUUMs. If that has happened to\n> one of your indexes, recover by reindexing the index.\n\nLooks good.\n\n\n",
"msg_date": "Sat, 15 Jun 2019 15:14:28 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Draft back-branch release notes are up for review"
}
] |