[
{
"msg_contents": "Hi all,\n\nHere my first patch for postgres. Starting by an easy thing, I correct \nthe duplicated \"the the\" strings from comments on some files.\n\n- src/backend/executor/execExpr.c\n- src/include/c.h\n- src/include/jit/llvmjit_emit.h\n- src/include/nodes/execnodes.h\n- src/include/replication/logical.h\n\nAny feedback are welcome!\n\nThanks a lot,",
"msg_date": "Mon, 13 May 2019 16:52:15 -0300",
"msg_from": "Stephen Amell <mrstephenamell@gmail.com>",
"msg_from_op": true,
"msg_subject": "Quitting the thes"
},
{
"msg_contents": "On Mon, May 13, 2019 at 04:52:15PM -0300, Stephen Amell wrote:\n> Here my first patch for postgres. Starting by an easy thing, I correct the\n> duplicated \"the the\" strings from comments on some files.\n\nWelcome!\n\n> - src/backend/executor/execExpr.c\n> - src/include/c.h\n> - src/include/jit/llvmjit_emit.h\n> - src/include/nodes/execnodes.h\n> - src/include/replication/logical.h\n> \n> Any feedback are welcome!\n\nThanks, committed. I have noticed an extra one in reorderbuffer.c.\n\nIf you get interested in more areas of the code, there is plently of\nsmall work items which can be reviewed or worked on.\n\nWe have on the wiki a manual about how to submit a patch:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nPatches go through a dedicated commit fest app, and it has many small\nfishes which can be worked on:\nhttps://commitfest.postgresql.org/23/\nYou could get more familiar with some areas of the code this way.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 14 May 2019 09:49:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
},
{
"msg_contents": "Hi all\n\n(CC'ing 'Last-Translator' addresses)\n\nI noticed a few duplicated words in the .po files. Some of these are\nobviously false positives.\n\nThis command reports them with color highlighting:\n$ git grep '\\<\\([a-z_ ]\\+\\)\\> \\<\\1\\>' -- *.po\n\nsrc/backend/po/de.po:msgstr \"Wenn der Planer schätzt, dass zu wenige Tabellenseiten gelesen werden werden um diesen Wert zu erreichen, dann wird kein paralleler Scan in Erwägung gezogen werden.\"\nsrc/backend/po/de.po:msgstr \"Wenn der Planer schätzt, dass zu wenige Indexseiten gelesen werden werden um diesen Wert zu erreichen, dann wird kein paralleler Scan in Erwägung gezogen werden.\"\nsrc/backend/po/de.po:msgstr \"Setzt den vom Planer geschätzten Anteil der Cursor-Zeilen, die ausgelesen werden werden.\"\nsrc/backend/po/fr.po:msgstr \"l'extension « %s » n'a pas de script d'installation ou de chemin de de mise à jour pour la version « %s »\"\nsrc/backend/po/fr.po:msgstr \"les fonctions in_range btree doivent doivent retourner un booléen\"\nsrc/backend/po/fr.po:msgstr \"la relation référencée « %s » n'est ni une table ni une table distante\"\nsrc/backend/po/fr.po:msgstr \"la relation héritée « %s » n'est ni une table ni une table distante\"\nsrc/backend/po/fr.po:\"l'option DISABLE_PAGE_SKIPPING de la commande VACUUM ne pas pas être utilisée\\n\"\nsrc/backend/po/fr.po:\"Vous pourriez avoir avoir besoin d'ajouter des conversions de type explicites.\"\nsrc/backend/po/fr.po:msgstr \"symbole « %c » invalide invalide lors du décodage de la séquence en base64\"\nsrc/backend/po/fr.po:msgstr \"La chaîne des des guillements doubles non fermés.\"\nsrc/backend/po/id.po:msgstr \"sambungan terputus saat COPY ke stdout stdout\"\nsrc/backend/po/id.po:msgstr \"Gunakan 'quoted CSV field' untuk menyatakan baris baris baru\"\nsrc/backend/po/id.po:msgstr \"file konfigurasi « %s » terdapat errors ; tidak tidak terpengaruh applied\"\nsrc/backend/po/it.po:msgstr \"le protezioni di wraparound dei membri MultiXact sono disabilitate perché il il MultiXact più vecchio che abbia ricevuto un checkpoint %u non esiste sul disco\"\nsrc/backend/po/it.po:msgstr \"La rilevazione di un errore in una somma di controllo di solito fa generare a PostgreSQL un errore che fa abortire la transazione corrente. Impostare ignore_checksum_failure a \\\"true\\\" fa sì che il sistema ignori l'errore (che viene riportato come un avviso), consentendo al processo di continuare. Questo comportamento potrebbe causare crash o altri problemi gravi. 
Ha effetto solo se se somme di controllo sono abilitate.\"\nsrc/backend/po/sv.po:msgid \"tid (%u, %u) is not valid for relation for relation \\\"%s\\\"\"\nsrc/backend/po/tr.po:msgstr \"\\\"%2$s\\\" kullanarak kullanarak PID %1$d olan başka bir istemci süreci çalışmakta mıdır?\"\nsrc/backend/po/tr.po:msgstr \"\\\"%2$s\\\" kullanarak kullanarak PID %1$d olan başka bir sunucu çalışmakta mıdır?\"\nsrc/bin/initdb/po/cs.po:msgstr \"Spusťte znovu %s s přepínačem -E.\\n\"\nsrc/bin/initdb/po/cs.po:\"Pusťte znovu %s s jiným nastavením locale.\\n\"\nsrc/bin/initdb/po/tr.po:\"kaldırın, ya boşaltın ya da ya da %s 'i \\n\"\nsrc/bin/pg_basebackup/po/de.po:\"Optionen, die die Ausgabe kontrollieren:\\n\"\nsrc/bin/pg_basebackup/po/tr.po:msgstr \"%s: --no-slot slot adıyla birlikte kullanılamaz\\n\"\nsrc/bin/pg_dump/po/de.po:\"Optionen die die Wiederherstellung kontrollieren:\\n\"\nsrc/bin/pg_resetwal/po/it.po:msgstr \" (zero in uno dei dei valori vuol dire nessun cambiamento)\\n\"\nsrc/bin/psql/po/de.po:msgstr \"ändert den Eigentümer der der Rolle gehörenden Datenbankobjekte\"\nsrc/bin/scripts/po/pl.po:msgstr \" -t, --timeout=SEKUNDY sekundy oczekiwania podczas podczas próby połączenia, 0 wyłącza (domyślnie: %s)\\n\"\nsrc/interfaces/ecpg/ecpglib/po/pl.po:msgstr \"zmienna nie ma typu typu numeric, linia %d\"\nsrc/interfaces/ecpg/ecpglib/po/pl.po:msgstr \"zmienna nie ma typu typu character, linia %d\"\nsrc/interfaces/libpq/po/fr.po:msgstr \"mot de passe récupéré dans le fichier fichier « %s »\\n\"\nsrc/interfaces/libpq/po/it.po:msgstr \"non è stato possibile possibile ottenere il certificato: %s\\n\"\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 14:40:39 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
},
{
"msg_contents": "On 2019-May-14, Michael Paquier wrote:\n\n> Thanks, committed. I have noticed an extra one in reorderbuffer.c.\n\nSome grepping found a bit more; patch attached.\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 12 Jun 2019 14:45:27 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
},
{
"msg_contents": "Le mer. 12 juin 2019 à 20:45, Alvaro Herrera <alvherre@2ndquadrant.com> a\nécrit :\n\n> On 2019-May-14, Michael Paquier wrote:\n>\n> > Thanks, committed. I have noticed an extra one in reorderbuffer.c.\n>\n> Some grepping found a bit more; patch attached.\n>\n>\nThanks a lot, this is very good. I've got some fixes to do :)\n\n\n-- \nGuillaume.\n\nLe mer. 12 juin 2019 à 20:45, Alvaro Herrera <alvherre@2ndquadrant.com> a écrit :On 2019-May-14, Michael Paquier wrote:\n\n> Thanks, committed. I have noticed an extra one in reorderbuffer.c.\n\nSome grepping found a bit more; patch attached.\nThanks a lot, this is very good. I've got some fixes to do :)-- Guillaume.",
"msg_date": "Wed, 12 Jun 2019 21:42:07 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 02:45:27PM -0400, Alvaro Herrera wrote:\n> Some grepping found a bit more; patch attached.\n\nIndeed. There were much more. I just got to look with stuff like\nthat:\nfind . -name \"*.c\" | xargs egrep \"(\\b[a-zA-Z]+) \\1\\b\"\n\nBut I did not find any more spots. Indentation is incorrect in\ntest_integerset.c.\n--\nMichael",
"msg_date": "Thu, 13 Jun 2019 15:44:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
},
{
"msg_contents": "On 2019-Jun-13, Michael Paquier wrote:\n\n> On Wed, Jun 12, 2019 at 02:45:27PM -0400, Alvaro Herrera wrote:\n> > Some grepping found a bit more; patch attached.\n> \n> Indeed. There were much more. I just got to look with stuff like\n> that:\n> find . -name \"*.c\" | xargs egrep \"(\\b[a-zA-Z]+) \\1\\b\"\n\nThis is what I used:\ngit grep '\\<\\([a-z_ ]\\+\\)\\> \\<\\1\\>'\n\n> But I did not find any more spots. Indentation is incorrect in\n> test_integerset.c.\n\nIndeed ... fixed.\n\nThanks for looking,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 10:10:43 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
},
{
"msg_contents": "On 2019-06-12 20:40, Alvaro Herrera wrote:\n> I noticed a few duplicated words in the .po files. Some of these are\n> obviously false positives.\n\nThe \"de\" ones were all correct as is, but some of the ones in other\nlanguages do indeed look like typos.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 16 Jun 2019 22:51:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Quitting the thes"
}
]
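A note on the detection technique in the thread above: both one-liners (Alvaro's git grep with a backreference and Michael's find/egrep over *.c files) only catch duplicates that sit on the same line, yet a wrapped comment can split "the the" across a line break. Below is a minimal sketch of a cross-line variant, written in Python for illustration; the invocation style and word pattern are assumptions, not taken from the thread.

#!/usr/bin/env python3
# Minimal sketch: report duplicated words, including pairs split across
# a line break, which the single-line greps above cannot see.
import re
import sys

# A word, then whitespace (possibly containing a newline), then the same
# word again. With IGNORECASE the backreference also matches "The the".
DUP = re.compile(r"\b([a-z]+)\s+\1\b", re.IGNORECASE)

def scan(path):
    with open(path, encoding="utf-8", errors="replace") as f:
        text = f.read()
    for match in DUP.finditer(text):
        lineno = text.count("\n", 0, match.start()) + 1
        print(f"{path}:{lineno}: {match.group(0)!r}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)

As in the thread, some hits are legitimate (English "had had", the German "werden werden" flagged above), so the output still needs manual review.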
[
{
"msg_contents": "I'm not sure doc/bug.template still serves a purpose. There is bug\nreporting advice in the documentation, and there is a bug reporting\nform. This file just seems outdated. Should we remove it?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 13 May 2019 21:53:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove doc/bug.template?"
},
{
"msg_contents": "On Mon, May 13, 2019 at 3:54 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I'm not sure doc/bug.template still serves a purpose. There is bug\n> reporting advice in the documentation, and there is a bug reporting\n> form. This file just seems outdated. Should we remove it?\n\nIn my opinion, yes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 May 2019 15:59:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove doc/bug.template?"
},
{
"msg_contents": "On Mon, May 13, 2019 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, May 13, 2019 at 3:54 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n> > I'm not sure doc/bug.template still serves a purpose. There is bug\n> > reporting advice in the documentation, and there is a bug reporting\n> > form. This file just seems outdated. Should we remove it?\n>\n> In my opinion, yes.\n>\n\n+1.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, May 13, 2019 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, May 13, 2019 at 3:54 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> I'm not sure doc/bug.template still serves a purpose. There is bug\n> reporting advice in the documentation, and there is a bug reporting\n> form. This file just seems outdated. Should we remove it?\n\nIn my opinion, yes.+1.-- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 13 May 2019 22:10:11 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: remove doc/bug.template?"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Mon, May 13, 2019 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Mon, May 13, 2019 at 3:54 PM Peter Eisentraut\n>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>> I'm not sure doc/bug.template still serves a purpose. There is bug\n>>> reporting advice in the documentation, and there is a bug reporting\n>>> form. This file just seems outdated. Should we remove it?\n\n>> In my opinion, yes.\n\n> +1.\n\nNo objection, but make sure you fix src/tools/version_stamp.pl.\n(Looks like there's a reference in .gitattributes, too)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 May 2019 16:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove doc/bug.template?"
},
{
"msg_contents": "On Mon, May 13, 2019 at 04:34:34PM -0400, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Mon, May 13, 2019 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> On Mon, May 13, 2019 at 3:54 PM Peter Eisentraut\n> >> <peter.eisentraut@2ndquadrant.com> wrote:\n> >>> I'm not sure doc/bug.template still serves a purpose. There is bug\n> >>> reporting advice in the documentation, and there is a bug reporting\n> >>> form. This file just seems outdated. Should we remove it?\n> \n> >> In my opinion, yes.\n> \n> > +1.\n> \n> No objection, but make sure you fix src/tools/version_stamp.pl.\n> (Looks like there's a reference in .gitattributes, too)\n\nYes, please remove.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 13 May 2019 22:56:54 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: remove doc/bug.template?"
},
{
"msg_contents": "On 2019-05-14 04:56, Bruce Momjian wrote:\n> On Mon, May 13, 2019 at 04:34:34PM -0400, Tom Lane wrote:\n>> Magnus Hagander <magnus@hagander.net> writes:\n>>> On Mon, May 13, 2019 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>>> On Mon, May 13, 2019 at 3:54 PM Peter Eisentraut\n>>>> <peter.eisentraut@2ndquadrant.com> wrote:\n>>>>> I'm not sure doc/bug.template still serves a purpose. There is bug\n>>>>> reporting advice in the documentation, and there is a bug reporting\n>>>>> form. This file just seems outdated. Should we remove it?\n>>\n>>>> In my opinion, yes.\n>>\n>>> +1.\n>>\n>> No objection, but make sure you fix src/tools/version_stamp.pl.\n>> (Looks like there's a reference in .gitattributes, too)\n> \n> Yes, please remove.\n\ndone\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 May 2019 08:58:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove doc/bug.template?"
}
]
[
{
"msg_contents": "Hi,\n\nThe Release Management Team is pleased to announce that\nthe release date for PostgreSQL 11 Beta 1 is set to be 2019-05-23\n(wrapping [1] the release 2019-05-20).\n\nWe’re excited to make the first beta for this latest major\nrelease of PostgreSQL available for testing, and we welcome\nall feedback.\n\nPlease let us know if you have any questions.\n\nRegards,\n\nAndres, on behalf of the PG 12 RMT\n\n[1] https://wiki.postgresql.org/wiki/Release_process\n\n\n",
"msg_date": "Mon, 13 May 2019 14:58:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 Beta 1 Release: 2019-05-23"
},
{
"msg_contents": "\n> The Release Management Team is pleased to announce that\n> the release date for PostgreSQL 11 Beta 1 is set to be 2019-05-23\n> (wrapping [1] the release 2019-05-20).\n\nWill 12 Beta 1 come out the same day as well? ;)\n\n\n",
"msg_date": "Mon, 13 May 2019 22:59:50 +0000",
"msg_from": "\"Nasby, Jim\" <nasbyj@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 Release: 2019-05-23"
}
]
[
{
"msg_contents": "Hello hackers,\n\nSome users don't like the fact that ldapbindpasswd can leak into logs\n(and now system views?). Also, some users don't like the fact that it\nis in cleartext rather than some encryption scheme (though I don't\nknow what, since we'd presumably also need the key). I propose a new\noption $SUBJECT so that users can at least add a level of indirection\nand put the password in a file. A motivated user could point it at an\nencrypted loopback device so that they need a passphrase at mount\ntime, or a named pipe that performs arbitrary magic. Some of these\ntopics were discussed last time someone had this idea[1].\n\nUsing a separate file for the bind password is fairly common in other\nsoftware: see the ldapsearch's -y switch, and I think it probably\nmakes sense at the very least as a convenience, without getting into\nhand-wringing discussions about whether any security is truly added.\n\nDraft patch attached.\n\nHi Stephen!\n\nI also know that a motivated user could also use GSSAPI instead of\nLDAP. Do you think we should update the manual to say so, perhaps in\na \"tip\" box on the LDAP auth page?\n\n[1] https://www.postgresql.org/message-id/flat/20140617175511.2589.45249%40wrigleys.postgresql.org\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 14 May 2019 13:49:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "ldapbindpasswdfile"
},
{
"msg_contents": "On Tue, May 14, 2019 at 1:49 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... or a named pipe that performs arbitrary magic.\n\n(Erm, that bit might not make much sense...)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2019 14:42:18 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ldapbindpasswdfile"
},
{
"msg_contents": "> On 14 May 2019, at 03:49, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> I propose a new option $SUBJECT so that users can at least add a level of\n> indirection and put the password in a file.\n\n\n+1, seems like a reasonable option to give.\n\n> Draft patch attached.\n\nI might be a bit thick, but this is somewhat hard to parse IMO:\n\n+ File containing the password for user to bind to the directory with to\n+ perform the search when doing search+bind authentication\n\nTo add a little bit more security around this, does it make sense to check (on\nunix filesystems) that the file isn’t world readable/editable?\n\n+ fd = OpenTransientFile(path, O_RDONLY);\n+ if (fd < 0)\n+ return -1;\n\ncheers ./daniel\n\n",
"msg_date": "Tue, 14 May 2019 22:24:15 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: ldapbindpasswdfile"
},
{
"msg_contents": "On Tue, May 14, 2019 at 1:24 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 14 May 2019, at 03:49, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> > I propose a new option $SUBJECT so that users can at least add a level of\n> > indirection and put the password in a file.\n>\n> +1, seems like a reasonable option to give.\n\nThanks for the review!\n\n> > Draft patch attached.\n>\n> I might be a bit thick, but this is somewhat hard to parse IMO:\n>\n> + File containing the password for user to bind to the directory with to\n> + perform the search when doing search+bind authentication\n>\n> To add a little bit more security around this, does it make sense to check (on\n> unix filesystems) that the file isn’t world readable/editable?\n>\n> + fd = OpenTransientFile(path, O_RDONLY);\n> + if (fd < 0)\n> + return -1;\n\nGood point.\n\nHowever, I realised that this patch is nearly but not quite redundant.\nYou can already write @somefile given a file somefile that contains\nldapbindpasswd=secret. It'd be a bit nicer if you could also write\nldapbindpasswd=@somefile to include just the value, and not have to\ninclude the option name in the file. Then you could use the same\npassword file that you use for the ldapsearch command line tool, and\nin general that seems nicer. That syntax might have backwards\ncompatibility problems though. You could probably resolve any\nproblems by requiring quote marks around @ signs that are not acting\nas include directives, or something like that. If we do that, I'd\nalso like to be able to write ldapbindpasswd=$SOME_ENV_VAR.\n\nAnyway, I hereby withdraw the earlier patch; it seems silly to do\nper-option ad hoc read-from-file variants. Perhaps we can do\nsomething much better and more general, or perhaps what we have is\nenough already.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 13 Jun 2019 21:40:09 -0700",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ldapbindpasswdfile"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> I also know that a motivated user could also use GSSAPI instead of\n> LDAP. Do you think we should update the manual to say so, perhaps in\n> a \"tip\" box on the LDAP auth page?\n\nHrm, not sure how I missed this before, but, yes, I'm all for adding a\n'tip' box on the LDAP auth page which recommends use of GSSAPI when\navailable (such as when operating in an Active Directory\nenvironment...). Note that, technically, you can run LDAP without using\nActive Directory and without running any kind of KDC, so we can't just\nblanket say \"use GSSAPI\" because there exists use-cases where that isn't\nan option.\n\nNot that I've ever actually *encountered* such an environment, but\npeople have assured me that they do, in fact, exist, and that there are\nusers of PG LDAP auth with such a setup who would be upset to see\nsupport for it removed.\n\nAnyhow, yes, a 'tip' would be great to add.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 21 Jun 2019 09:21:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: ldapbindpasswdfile"
}
]
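A sketch of the permission check Daniel suggested above: on Unix filesystems, reject a bind-password file that is group- or world-accessible, roughly the policy libpq applies to ~/.pgpass. Python is used here for illustration, and the file layout (password on the first line) is an assumption, not something the withdrawn patch specified.

import os
import stat
import sys

def read_bind_password(path):
    # Accept only a regular file with no group/world access bits set.
    st = os.stat(path)
    if not stat.S_ISREG(st.st_mode):
        raise ValueError(f"{path}: not a regular file")
    if st.st_mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(f"{path}: group or world access; use chmod 0600")
    with open(path) as f:
        # Assumption: the password is the first line, newline stripped.
        return f.readline().rstrip("\n")

if __name__ == "__main__":
    print(read_bind_password(sys.argv[1]))

This mirrors the ldapsearch -y switch that the thread cites as precedent for reading the bind password from a separate file.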
[
{
"msg_contents": "Hi,\n\nSince I keep forgetting the syntax and options, here is $SUBJECT.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 14 May 2019 17:50:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tab completion for CREATE TYPE"
},
{
"msg_contents": "Hello.\n\nAt Tue, 14 May 2019 17:50:58 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in <CA+hUKGLk=0yLDjfviONJLzcHEzygj=x6VbGH43LnXbBUvQb52g@mail.gmail.com>\n> Hi,\n> \n> Since I keep forgetting the syntax and options, here is $SUBJECT.\n\nI played with this a bit and found that \"... (attr=[tab]\" (no\nspace between \"r\" and \"=\") complets with '='. Isn't it annoying?\n\nOnly \"UPDATE hoge SET a=[tab]\" behaves the same way among\nexisting completions.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 14 May 2019 15:18:07 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TYPE"
},
{
"msg_contents": "On Tue, May 14, 2019 at 6:18 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> I played with this a bit and found that \"... (attr=[tab]\" (no\n> space between \"r\" and \"=\") complets with '='. Isn't it annoying?\n>\n> Only \"UPDATE hoge SET a=[tab]\" behaves the same way among\n> existing completions.\n\nHmm. True. Here's one way to fix that.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 14 May 2019 18:58:14 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE TYPE"
},
{
"msg_contents": "On Tue, May 14, 2019 at 06:58:14PM +1200, Thomas Munro wrote:\n> On Tue, May 14, 2019 at 6:18 PM Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > I played with this a bit and found that \"... (attr=[tab]\" (no\n> > space between \"r\" and \"=\") complets with '='. Isn't it annoying?\n> >\n> > Only \"UPDATE hoge SET a=[tab]\" behaves the same way among\n> > existing completions.\n> \n> Hmm. True. Here's one way to fix that.\n\nHmm... just got here.\n\nWhat happens around here?\n\n> \n> -- \n> Thomas Munro\n> https://enterprisedb.com\n\n\n\n\n\n",
"msg_date": "Tue, 14 May 2019 11:31:17 +0300",
"msg_from": "Edgy Hacker <edgy.hacker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TYPE"
},
{
"msg_contents": "On Tue, May 14, 2019 at 8:32 PM Edgy Hacker <edgy.hacker@gmail.com> wrote:\n> Hmm... just got here.\n\nWelcome.\n\n> What happens around here?\n\nPlease see https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F .\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2019 21:01:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE TYPE"
},
{
"msg_contents": "On Tue, May 14, 2019 at 09:01:27PM +1200, Thomas Munro wrote:\n> On Tue, May 14, 2019 at 8:32 PM Edgy Hacker <edgy.hacker@gmail.com> wrote:\n> > Hmm... just got here.\n> \n> Welcome.\n\nThanks.\n\n> \n> > What happens around here?\n> \n> Please see https://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F .\n\nNot exactly a prospective developer but if it ever comes it...\n\n> \n> -- \n> Thomas Munro\n> https://enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 May 2019 12:25:21 +0300",
"msg_from": "Edgy Hacker <edgy.hacker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TYPE"
},
{
"msg_contents": "At Tue, 14 May 2019 18:58:14 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in <CA+hUKG+ojKTKw=aG6QU=VmPMc8Sq7nM4Ah7fk1e+g1YngCVNmg@mail.gmail.com>\n> On Tue, May 14, 2019 at 6:18 PM Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > I played with this a bit and found that \"... (attr=[tab]\" (no\n> > space between \"r\" and \"=\") complets with '='. Isn't it annoying?\n> >\n> > Only \"UPDATE hoge SET a=[tab]\" behaves the same way among\n> > existing completions.\n> \n> Hmm. True. Here's one way to fix that.\n\nThanks. That's what was in my mind.\n\nSome definition item names are induced from some current states\n(e.g. \"CREATE TYPE name AS RANGE (\" => \"SUBTYPE = \") but I think\nit's too much.\n\nCOLLATE is not suggested with possible collations but I think\nsuggesting it is not so useful.\n\nPASSEDBYVALUE is suggested with '=', which is different from\ndocumented syntax but I don't think that's not such a problem for\nthose who spell this command out.\n\n# By the way, collatable and preferred are boolean which behaves\n# the same way with passedbyvalue. Is there any intention in the\n# difference in the documentation?\n\nThe completion lists contain all possible words correctly (I\nthink \"analyse\" is an implicit synonym.).\n\nAs the result, I find it perfect.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 14 May 2019 20:13:24 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for CREATE TYPE"
},
{
"msg_contents": "On Tue, May 14, 2019 at 11:13 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> At Tue, 14 May 2019 18:58:14 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in <CA+hUKG+ojKTKw=aG6QU=VmPMc8Sq7nM4Ah7fk1e+g1YngCVNmg@mail.gmail.com>\n> > On Tue, May 14, 2019 at 6:18 PM Kyotaro HORIGUCHI\n> > <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > > I played with this a bit and found that \"... (attr=[tab]\" (no\n> > > space between \"r\" and \"=\") complets with '='. Isn't it annoying?\n> > >\n> > > Only \"UPDATE hoge SET a=[tab]\" behaves the same way among\n> > > existing completions.\n> >\n> > Hmm. True. Here's one way to fix that.\n>\n> Thanks. That's what was in my mind.\n\nI pushed a fix for that separately. I remembered that we had decided\nto use MatchAnyExcept(\"...\") instead of \"!...\", so I did it that way.\n\n> Some definition item names are induced from some current states\n> (e.g. \"CREATE TYPE name AS RANGE (\" => \"SUBTYPE = \") but I think\n> it's too much.\n>\n> COLLATE is not suggested with possible collations but I think\n> suggesting it is not so useful.\n\nYes, there is room to make it smarter.\n\n> PASSEDBYVALUE is suggested with '=', which is different from\n> documented syntax but I don't think that's not such a problem for\n> those who spell this command out.\n>\n> # By the way, collatable and preferred are boolean which behaves\n> # the same way with passedbyvalue. Is there any intention in the\n> # difference in the documentation?\n\nGood question.\n\n> The completion lists contain all possible words correctly (I\n> think \"analyse\" is an implicit synonym.).\n\nI am not a fan of doing anything at all to support alternative\nspellings for keywords etc, even though I personally use British\nspelling in most contexts outside PostgreSQL source code. We don't\nsupport MATERIALISED, CATALOGUE, BACKWARDS/FORWARDS (with an S), etc,\nso I don't know why we have this one single word ANALYSE from a\ndifferent spelling system than the one used by SQL.\n\n> As the result, I find it perfect.\n\nPushed. Thanks for the review!\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Jul 2019 16:54:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for CREATE TYPE"
}
]
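The bug Kyotaro reported above, "(attr=[tab]" completing with another '=', is what happens when a completer keys on the previous word without checking whether the '=' is already present. Below is a toy illustration of the general idea using Python's readline module; the attribute names are invented, and psql's real logic lives in tab-complete.c (using COMPLETE_WITH and, per the fix mentioned above, MatchAnyExcept), so this is an analogy, not the committed code.

import readline

ATTRS = ["input", "output", "receive", "send", "passedbyvalue"]

def complete(text, state):
    # Look at the text before the cursor; '=' is a readline delimiter,
    # so after "attr=" the text being completed is what follows the '='.
    buf = readline.get_line_buffer()[:readline.get_endidx()]
    last_token = buf.split()[-1] if buf.split() else ""
    if "=" in last_token:
        # Already past an '=': offer values, never another '='.
        candidates = [v for v in ("true", "false") if v.startswith(text)]
    else:
        candidates = [a + " = " for a in ATTRS if a.startswith(text)]
    return candidates[state] if state < len(candidates) else None

readline.set_completer(complete)
readline.parse_and_bind("tab: complete")
input("CREATE TYPE t (")  # try: pass<TAB>, then a value after the '='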
[
{
"msg_contents": "Hi,\n This is working in Oracle but not in postgresql 'CREATE INDEX\nclient.test_1_idx\n ON dbo.test_1 (name);' . Can we implement this by another way?\n\nThanks\nNavneet\n\nHi, This is working in Oracle but not in postgresql 'CREATE INDEX client.test_1_idx ON dbo.test_1 (name);' . Can we implement this by another way?Thanks Navneet",
"msg_date": "Tue, 14 May 2019 03:41:37 -0400",
"msg_from": "navneet nikku <navneetnikks@gmail.com>",
"msg_from_op": true,
"msg_subject": "can we create index/constraints in different schema"
},
{
"msg_contents": "On Tue, May 14, 2019 at 03:41:37AM -0400, navneet nikku wrote:\n> This is working in Oracle but not in postgresql 'CREATE INDEX\n> client.test_1_idx\n> ON dbo.test_1 (name);' . Can we implement this by another way?\n\nNo, it is not possible to define a schema with CREATE INDEX, and an\nindex gets located in the same schema as its depending table.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 17:33:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: can we create index/constraints in different schema"
},
{
"msg_contents": "Ok, thanks for the clarification.\nRegards\nNavneet\n\nOn Tue, May 14, 2019 at 4:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 14, 2019 at 03:41:37AM -0400, navneet nikku wrote:\n> > This is working in Oracle but not in postgresql 'CREATE INDEX\n> > client.test_1_idx\n> > ON dbo.test_1 (name);' . Can we implement this by another way?\n>\n> No, it is not possible to define a schema with CREATE INDEX, and an\n> index gets located in the same schema as its depending table.\n> --\n> Michael\n>\n\nOk, thanks for the clarification. Regards NavneetOn Tue, May 14, 2019 at 4:33 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, May 14, 2019 at 03:41:37AM -0400, navneet nikku wrote:\n> This is working in Oracle but not in postgresql 'CREATE INDEX\n> client.test_1_idx\n> ON dbo.test_1 (name);' . Can we implement this by another way?\n\nNo, it is not possible to define a schema with CREATE INDEX, and an\nindex gets located in the same schema as its depending table.\n--\nMichael",
"msg_date": "Tue, 14 May 2019 04:34:53 -0400",
"msg_from": "navneet nikku <navneetnikks@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we create index/constraints in different schema"
}
]
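Michael's answer above is easy to verify: an unqualified index name is resolved into the table's schema, and a schema-qualified index name is rejected by the grammar. A minimal sketch using psycopg2, where the connection string and object names are illustrative:

import psycopg2

conn = psycopg2.connect("dbname=postgres")  # illustrative DSN
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE SCHEMA IF NOT EXISTS dbo")
cur.execute("CREATE TABLE IF NOT EXISTS dbo.test_1 (name text)")

# The unqualified index name lands in the table's schema.
cur.execute("CREATE INDEX test_1_idx ON dbo.test_1 (name)")
cur.execute("SELECT schemaname FROM pg_indexes WHERE indexname = 'test_1_idx'")
print(cur.fetchone())  # ('dbo',)

# A schema-qualified index name fails, as in the thread.
try:
    cur.execute("CREATE INDEX client.test_1_idx ON dbo.test_1 (name)")
except psycopg2.Error as e:
    print("rejected:", e)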
[
{
"msg_contents": "Hi,\n\nThat is an attempt number N+1 to relax checks for a temporary table access\nin a transaction that is going to be prepared.\n\nOne of the problems regarding the use of temporary tables in prepared transactions\nis that such transaction will hold locks for a temporary table after being prepared.\nThat locks will prevent the backend from exiting since it will fail to acquire lock\nneeded to delete temp table during exit. Also, re-acquiring such lock after server\nrestart seems like an ill-defined operation.\n\nI tried to allow prepared transactions that opened a temporary relation only in\nAccessShare mode and then neither transfer this lock to a dummy PGPROC nor include\nit in a 'prepare' record. Such prepared transaction will not prevent the backend from\nexiting and can be committed from other backend or after a restart.\n\nHowever, that modification allows new DDL-related serialization anomaly: it will be\npossible to prepare transaction which read table A; then drop A; then commit the\ntransaction. I not sure whether that is worse than not being able to access temp\nrelations or not. On the other hand, it is possible to drop AccessShare locks only for\ntemporary relation and don't change behavior for an ordinary table (in the attached\npatch this is done for all tables).\n\nAlso, I slightly modified ON COMMIT DELETE code path. Right now all ON COMMIT DELETE\ntemp tables are linked in a static list and if transaction accessed any temp table\nin any mode then during commit all tables from that list will be truncated. For a\ngiven patch that means that even if a transaction only did read from a temp table it\nanyway can access other temp tables with high lock mode during commit. I've added\nhashtable that tracks higher-than-AccessShare action with a temp table during\ncurrent transaction, so during commit, only tables from that hash will be truncated.\nThat way ON COMMIT DELETE tables in the backend will not prevent read-only access to\nsome other table in a given backend.\n\nAny thoughts?\n\n--\nStas Kelvich\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 14 May 2019 12:53:31 +0300",
"msg_from": "Stas Kelvich <s.kelvich@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "> On 14 May 2019, at 12:53, Stas Kelvich <s.kelvich@postgrespro.ru> wrote:\n> \n> Hi,\n> \n> That is an attempt number N+1 to relax checks for a temporary table access\n> in a transaction that is going to be prepared.\n> \n\nKonstantin Knizhnik made off-list review of this patch and spotted\nfew problems.\n\n* Incorrect reasoning that ON COMMIT DELETE truncate mechanism\nshould be changed in order to allow preparing transactions with\nread-only access to temp relations. It actually can be be leaved\nas is. Things done in previous patch for ON COMMIT DELETE may be\na performance win, but not directly related to this topic so I've\ndeleted that part.\n\n* Copy-paste error with check conditions in\nrelation_open/relation_try_open.\n\nFixed version attached.\n\n--\nStas Kelvich\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 22 May 2019 18:41:39 +0300",
"msg_from": "Stas Kelvich <s.kelvich@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "On Tue, 14 May 2019 at 10:53, Stas Kelvich <s.kelvich@postgrespro.ru> wrote:\n\n\n> One of the problems regarding the use of temporary tables in prepared\n> transactions\n> is that such transaction will hold locks for a temporary table after being\n> prepared.\n> That locks will prevent the backend from exiting since it will fail to\n> acquire lock\n> needed to delete temp table during exit. Also, re-acquiring such lock\n> after server\n> restart seems like an ill-defined operation.\n>\n...\n\n> Any thoughts?\n>\n\nIt occurs to me that there is no problem to solve here.\n\nWhen we PREPARE, it is because we expect to COMMIT or ABORT soon afterwards.\n\nIf we are using an external transaction manager, the session is idle while\nwe wait for the manager to decide whether to commit or abort. Or the\nsession is disconnected or server is crashes. Either way, nothing happens\nbetween PREPARE and resolution. So there is no need at all for locking of\ntemporary tables after the prepare.\n\nThe ONLY case where this matters is if someone does a PREPARE and then\nstarts doing other work on the session. Which makes no sense in the normal\nworkflow of a session. I'm sure there are tests that do that, but those\ntests are unrepresentative of sensible usage.\n\nIf you were using session temporary tables while using a transaction mode\npool then you're already going to have problems, so that aspect is a\nnon-issue.\n\nSo I think we should ban this by definition. Say that we expect that you\nwon't do any work on the session until COMMIT/ABORT. That means we can then\ndrop locks on sesion temporary tables and drop on-commit temp tables when\nwe hit the prepare, not try and hold them for later.\n\nA patch is needed to implement the above, but I think we can forget the\ncurrent patch as not needed.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Tue, 14 May 2019 at 10:53, Stas Kelvich <s.kelvich@postgrespro.ru> wrote: One of the problems regarding the use of temporary tables in prepared transactions\nis that such transaction will hold locks for a temporary table after being prepared.\nThat locks will prevent the backend from exiting since it will fail to acquire lock\nneeded to delete temp table during exit. Also, re-acquiring such lock after server\nrestart seems like an ill-defined operation.... \nAny thoughts?It occurs to me that there is no problem to solve here.When we PREPARE, it is because we expect to COMMIT or ABORT soon afterwards.If we are using an external transaction manager, the session is idle while we wait for the manager to decide whether to commit or abort. Or the session is disconnected or server is crashes. Either way, nothing happens between PREPARE and resolution. So there is no need at all for locking of temporary tables after the prepare.The ONLY case where this matters is if someone does a PREPARE and then starts doing other work on the session. Which makes no sense in the normal workflow of a session. I'm sure there are tests that do that, but those tests are unrepresentative of sensible usage.If you were using session temporary tables while using a transaction mode pool then you're already going to have problems, so that aspect is a non-issue.So I think we should ban this by definition. Say that we expect that you won't do any work on the session until COMMIT/ABORT. 
That means we can then drop locks on sesion temporary tables and drop on-commit temp tables when we hit the prepare, not try and hold them for later.A patch is needed to implement the above, but I think we can forget the current patch as not needed.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 23 May 2019 12:36:09 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n> The ONLY case where this matters is if someone does a PREPARE and then\n> starts doing other work on the session. Which makes no sense in the normal\n> workflow of a session. I'm sure there are tests that do that, but those\n> tests are unrepresentative of sensible usage.\n\nThat's extremely common.\n\nThere's no way we can forbid using session after 2PC unconditionally,\nit'd break most users of 2PC.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 08:54:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "On Thu, May 23, 2019 at 08:54:59AM -0700, Andres Freund wrote:\n> On 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n>> The ONLY case where this matters is if someone does a PREPARE and then\n>> starts doing other work on the session. Which makes no sense in the normal\n>> workflow of a session. I'm sure there are tests that do that, but those\n>> tests are unrepresentative of sensible usage.\n> \n> That's extremely common.\n> \n> There's no way we can forbid using session after 2PC unconditionally,\n> it'd break most users of 2PC.\n\nThis does not break Postgres-XC or XL as their inner parts with a\nCOMMIT involving multiple write nodes issue a set of PREPARE\nTRANSACTION followed by an immediate COMMIT PREPARED which are\ntransparent for the user, so the point of Simon looks sensible from\nthis angle. Howewer, I much agree with Andres that it is very common\nto have PREPARE and COMMIT PREPARED issued with different sessions. I\nam not much into the details of XA-compliant drivers, but I think that\nhaving us lose this property would be the source of many complaints.\n--\nMichael",
"msg_date": "Fri, 24 May 2019 09:37:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "On Thu, 23 May 2019 at 16:55, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n> > The ONLY case where this matters is if someone does a PREPARE and then\n> > starts doing other work on the session. Which makes no sense in the\n> normal\n> > workflow of a session. I'm sure there are tests that do that, but those\n> > tests are unrepresentative of sensible usage.\n>\n> That's extremely common.\n>\n\nNot at all.\n\n\n> There's no way we can forbid using session after 2PC unconditionally,\n> it'd break most users of 2PC.\n>\n\nSince we disagree, can you provide more information about this usage\npattern?\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Thu, 23 May 2019 at 16:55, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n> The ONLY case where this matters is if someone does a PREPARE and then\n> starts doing other work on the session. Which makes no sense in the normal\n> workflow of a session. I'm sure there are tests that do that, but those\n> tests are unrepresentative of sensible usage.\n\nThat's extremely common.Not at all. \nThere's no way we can forbid using session after 2PC unconditionally,\nit'd break most users of 2PC.Since we disagree, can you provide more information about this usage pattern? -- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 24 May 2019 09:30:22 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "On Fri, 24 May 2019 at 01:39, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, May 23, 2019 at 08:54:59AM -0700, Andres Freund wrote:\n> > On 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n> >> The ONLY case where this matters is if someone does a PREPARE and then\n> >> starts doing other work on the session. Which makes no sense in the\n> normal\n> >> workflow of a session. I'm sure there are tests that do that, but those\n> >> tests are unrepresentative of sensible usage.\n> >\n> > That's extremely common.\n> >\n> > There's no way we can forbid using session after 2PC unconditionally,\n> > it'd break most users of 2PC.\n>\n> This does not break Postgres-XC or XL as their inner parts with a\n> COMMIT involving multiple write nodes issue a set of PREPARE\n> TRANSACTION followed by an immediate COMMIT PREPARED which are\n> transparent for the user, so the point of Simon looks sensible from\n> this angle.\n\n\nMaybe, but I am not discussing other products since they can be changed\nwithout discussion here.\n\n\n> Howewer, I much agree with Andres that it is very common\n> to have PREPARE and COMMIT PREPARED issued with different sessions. I\n> am not much into the details of XA-compliant drivers, but I think that\n> having us lose this property would be the source of many complaints.\n>\n\nYes, it is *very* common to have PREPARE and COMMIT PREPARED issued from\ndifferent sessions. That is the main usage in a session pool and not the\npoint I made.\n\nThere are two usage patterns, with a correlation between the way 2PC and\ntemp tables work:\n\nTransaction-mode session-pool: (Most common usage mode)\n* No usage of session-level temp tables (because that wouldn't work)\n* 2PC with PREPARE and COMMIT PREPARED on different sessions\n* No reason at all to hold locks on temp table after PREPARE\n\nSession-mode (Less frequent usage mode)\n* Usage of session-level temp tables\n* 2PC on same session only, i.e. no usage of session between PREPARE and\nCOMMIT PREPARED (Simon's observation)\n* No reason at all to hold locks on temp table after PREPARE (Simon's\nconclusion)\n\nI'd like to hear from anyone that thinks my observation is incorrect and to\nexplain their usage pattern so we can understand why they think they would\nexecute further SQL between PREPARE and COMMIT PREPARED when using a single\nsession, while at the same time using temp tables.\n\nIf there really is a usage pattern there we should take note of, then I\nsuggest we introduce a parameter that allows temp table locks to be dropped\nat PREPARE, so that we can use 2PC and Temp Tables with ease, for those\nthat want it.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, 24 May 2019 at 01:39, Michael Paquier <michael@paquier.xyz> wrote:On Thu, May 23, 2019 at 08:54:59AM -0700, Andres Freund wrote:\n> On 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n>> The ONLY case where this matters is if someone does a PREPARE and then\n>> starts doing other work on the session. Which makes no sense in the normal\n>> workflow of a session. 
I'm sure there are tests that do that, but those\n>> tests are unrepresentative of sensible usage.\n> \n> That's extremely common.\n> \n> There's no way we can forbid using session after 2PC unconditionally,\n> it'd break most users of 2PC.\n\nThis does not break Postgres-XC or XL as their inner parts with a\nCOMMIT involving multiple write nodes issue a set of PREPARE\nTRANSACTION followed by an immediate COMMIT PREPARED which are\ntransparent for the user, so the point of Simon looks sensible from\nthis angle. Maybe, but I am not discussing other products since they can be changed without discussion here. Howewer, I much agree with Andres that it is very common\nto have PREPARE and COMMIT PREPARED issued with different sessions. I\nam not much into the details of XA-compliant drivers, but I think that\nhaving us lose this property would be the source of many complaints.Yes, it is *very* common to have PREPARE and COMMIT PREPARED issued from different sessions. That is the main usage in a session pool and not the point I made.There are two usage patterns, with a correlation between the way 2PC and temp tables work:Transaction-mode session-pool: (Most common usage mode)* No usage of session-level temp tables (because that wouldn't work)* 2PC with PREPARE and COMMIT PREPARED on different sessions* No reason at all to hold locks on temp table after PREPARESession-mode (Less frequent usage mode)* Usage of session-level temp tables* 2PC on same session only, i.e. no usage of session between PREPARE and COMMIT PREPARED (Simon's observation)* No reason at all to hold locks on temp table after PREPARE (Simon's conclusion)I'd like to hear from anyone that thinks my observation is incorrect and to explain their usage pattern so we can understand why they think they would execute further SQL between PREPARE and COMMIT PREPARED when using a single session, while at the same time using temp tables.If there really is a usage pattern there we should take note of, then I suggest we introduce a parameter that allows temp table locks to be dropped at PREPARE, so that we can use 2PC and Temp Tables with ease, for those that want it.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 24 May 2019 09:52:49 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "On 24.05.2019 11:52, Simon Riggs wrote:\n> On Fri, 24 May 2019 at 01:39, Michael Paquier <michael@paquier.xyz \n> <mailto:michael@paquier.xyz>> wrote:\n>\n> On Thu, May 23, 2019 at 08:54:59AM -0700, Andres Freund wrote:\n> > On 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n> >> The ONLY case where this matters is if someone does a PREPARE\n> and then\n> >> starts doing other work on the session. Which makes no sense in\n> the normal\n> >> workflow of a session. I'm sure there are tests that do that,\n> but those\n> >> tests are unrepresentative of sensible usage.\n> >\n> > That's extremely common.\n> >\n> > There's no way we can forbid using session after 2PC\n> unconditionally,\n> > it'd break most users of 2PC.\n>\n> This does not break Postgres-XC or XL as their inner parts with a\n> COMMIT involving multiple write nodes issue a set of PREPARE\n> TRANSACTION followed by an immediate COMMIT PREPARED which are\n> transparent for the user, so the point of Simon looks sensible from\n> this angle. \n>\n>\n> Maybe, but I am not discussing other products since they can be \n> changed without discussion here.\n>\n> Howewer, I much agree with Andres that it is very common\n> to have PREPARE and COMMIT PREPARED issued with different sessions. I\n> am not much into the details of XA-compliant drivers, but I think that\n> having us lose this property would be the source of many complaints.\n>\n>\n> Yes, it is *very* common to have PREPARE and COMMIT PREPARED issued \n> from different sessions. That is the main usage in a session pool and \n> not the point I made.\n>\n> There are two usage patterns, with a correlation between the way 2PC \n> and temp tables work:\n>\n> Transaction-mode session-pool: (Most common usage mode)\n> * No usage of session-level temp tables (because that wouldn't work)\n> * 2PC with PREPARE and COMMIT PREPARED on different sessions\n> * No reason at all to hold locks on temp table after PREPARE\n>\n> Session-mode (Less frequent usage mode)\n> * Usage of session-level temp tables\n> * 2PC on same session only, i.e. 
no usage of session between PREPARE \n> and COMMIT PREPARED (Simon's observation)\n> * No reason at all to hold locks on temp table after PREPARE (Simon's \n> conclusion)\n>\n> I'd like to hear from anyone that thinks my observation is incorrect \n> and to explain their usage pattern so we can understand why they think \n> they would execute further SQL between PREPARE and COMMIT PREPARED \n> when using a single session, while at the same time using temp tables.\n>\n> If there really is a usage pattern there we should take note of, then \n> I suggest we introduce a parameter that allows temp table locks to be \n> dropped at PREPARE, so that we can use 2PC and Temp Tables with ease, \n> for those that want it.\n>\n> -- \n> Simon Riggs http://www.2ndQuadrant.com/ <http://www.2ndquadrant.com/>\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n From my point of view releasing all temporary table locks after \npreparing of 2PC transaction is not technically possible:\nassume that this transaction has updated some tuples of temporary table \n- them are not visible to other transactions until 2PC is committed,\nbut since lock is removed, other transactions can update the same tuple.\n\nProhibiting transaction to do anything else except COMMIT/ROLLBACK \nPREPARED after preparing transaction seems to be too voluntaristic decision.\nI do not think that \"That's extremely common\", but I almost sure that \nthere are such cases.\n\nThe safe scenario is when temporary table is created and dropped inside \ntransaction (table created with ON COMMIT DROP). But there is still one \nissue with this scenario: first creation of temporary table cause \ncreation of\npg_temp namespace and it can not be undone. Another possible scenario is \ntemporary table created outside transaction with ON COMMIT DELETE. In \nthis case truncation of table on prepare will also release all locks.\n\nPure read-only access to temporary tables seems to be not so useful, \nbecause before reading something from temporary table, we have to write \nsomething to it. And if reading of temporary table is wrapped in 2PC,\nthen most likely writing to temporary table also has to be wrapped in \n2PC, which is not possible with the proposed solution.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 24.05.2019 11:52, Simon Riggs wrote:\n\n\n\n\n\n\nOn Fri, 24 May 2019 at 01:39, Michael Paquier\n <michael@paquier.xyz>\n wrote:\n\n\nOn\n Thu, May 23, 2019 at 08:54:59AM -0700, Andres Freund\n wrote:\n > On 2019-05-23 12:36:09 +0100, Simon Riggs wrote:\n >> The ONLY case where this matters is if someone\n does a PREPARE and then\n >> starts doing other work on the session. Which\n makes no sense in the normal\n >> workflow of a session. I'm sure there are tests\n that do that, but those\n >> tests are unrepresentative of sensible usage.\n > \n > That's extremely common.\n > \n > There's no way we can forbid using session after\n 2PC unconditionally,\n > it'd break most users of 2PC.\n\n This does not break Postgres-XC or XL as their inner\n parts with a\n COMMIT involving multiple write nodes issue a set of\n PREPARE\n TRANSACTION followed by an immediate COMMIT PREPARED\n which are\n transparent for the user, so the point of Simon looks\n sensible from\n this angle. 
\n\n\nMaybe, but I am not discussing other products since\n they can be changed without discussion here.\n \nHowewer,\n I much agree with Andres that it is very common\n to have PREPARE and COMMIT PREPARED issued with\n different sessions. I\n am not much into the details of XA-compliant drivers,\n but I think that\n having us lose this property would be the source of many\n complaints.\n\n\n\nYes, it is *very* common to have PREPARE and COMMIT\n PREPARED issued from different sessions. That is the\n main usage in a session pool and not the point I made.\n\n\nThere are two usage patterns, with a correlation\n between the way 2PC and temp tables work:\n\n\nTransaction-mode session-pool: (Most common usage\n mode)\n\n* No usage of session-level temp tables (because\n that wouldn't work)\n\n* 2PC with PREPARE and COMMIT PREPARED on different\n sessions\n* No reason at all to hold locks on temp table after\n PREPARE\n\n\nSession-mode (Less frequent usage mode)\n* Usage of session-level temp tables\n* 2PC on same session only, i.e. no usage of session\n between PREPARE and COMMIT PREPARED (Simon's\n observation)\n* No reason at all to hold locks on temp table after\n PREPARE (Simon's conclusion)\n\n\n\nI'd like to hear from anyone that thinks my\n observation is incorrect and to explain their usage\n pattern so we can understand why they think they would\n execute further SQL between PREPARE and COMMIT PREPARED\n when using a single session, while at the same time\n using temp tables.\n\n\nIf there really is a usage pattern there we should\n take note of, then I suggest we introduce a parameter\n that allows temp table locks to be dropped at PREPARE,\n so that we can use 2PC and Temp Tables with ease, for\n those that want it.\n\n\n\n -- \n\nSimon\n Riggs http://www.2ndQuadrant.com/\nPostgreSQL\n Development, 24x7 Support, Remote DBA, Training &\n Services\n\n\n\n\n\n\n\n From my point of view releasing all temporary table locks after\n preparing of 2PC transaction is not technically possible:\n assume that this transaction has updated some tuples of temporary\n table - them are not visible to other transactions until 2PC is\n committed,\n but since lock is removed, other transactions can update the same\n tuple.\n\n Prohibiting transaction to do anything else except COMMIT/ROLLBACK\n PREPARED after preparing transaction seems to be too voluntaristic\n decision.\n I do not think that \"That's extremely common\", but I almost sure\n that there are such cases.\n\n The safe scenario is when temporary table is created and dropped\n inside transaction (table created with ON COMMIT DROP). But there is\n still one issue with this scenario: first creation of temporary\n table cause creation of \n pg_temp namespace and it can not be undone. Another possible\n scenario is temporary table created outside transaction with ON\n COMMIT DELETE. In this case truncation of table on prepare will also\n release all locks.\n\n Pure read-only access to temporary tables seems to be not so\n useful, because before reading something from temporary table, we\n have to write something to it. And if reading of temporary table is\n wrapped in 2PC,\n then most likely writing to temporary table also has to be wrapped\n in 2PC, which is not possible with the proposed solution.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 24 May 2019 19:37:15 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-24 19:37:15 +0300, Konstantin Knizhnik wrote:\n> From my point of view releasing all temporary table locks after preparing of\n> 2PC transaction is not technically possible:\n> assume that this transaction has� updated some tuples of temporary table - them\n> are not visible to other transactions until 2PC is committed,\n> but since lock is removed, other transactions can update the same tuple.\n\nI don't think tuple level actions are the problem? Those doesn't require\ntable level locks to be held.\n\nGenerally, I fail to see how locks themselves are the problem. The\nproblem are the catalog entries for the temp table, the relation forks,\nand the fact that a session basically couldn't drop (and if created in\nthat transaction, use) etc the temp table after the PREPARE.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 May 2019 10:09:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
},
{
"msg_contents": "On Fri, 24 May 2019 at 18:09, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-05-24 19:37:15 +0300, Konstantin Knizhnik wrote:\n> > From my point of view releasing all temporary table locks after\n> preparing of\n> > 2PC transaction is not technically possible:\n> > assume that this transaction has updated some tuples of temporary table\n> - them\n> > are not visible to other transactions until 2PC is committed,\n> > but since lock is removed, other transactions can update the same tuple.\n>\n> I don't think tuple level actions are the problem? Those doesn't require\n> table level locks to be held.\n>\n> Generally, I fail to see how locks themselves are the problem.\n\n\nAgreed\n\n\n> The\n> problem are the catalog entries for the temp table, the relation forks,\n> and the fact that a session basically couldn't drop (and if created in\n> that transaction, use) etc the temp table after the PREPARE.\n>\n\nI don't see there is a problem here, but run out of time to explain more,\nfor a week.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nOn Fri, 24 May 2019 at 18:09, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-05-24 19:37:15 +0300, Konstantin Knizhnik wrote:\n> From my point of view releasing all temporary table locks after preparing of\n> 2PC transaction is not technically possible:\n> assume that this transaction has updated some tuples of temporary table - them\n> are not visible to other transactions until 2PC is committed,\n> but since lock is removed, other transactions can update the same tuple.\n\nI don't think tuple level actions are the problem? Those doesn't require\ntable level locks to be held.\n\nGenerally, I fail to see how locks themselves are the problem.Agreed The\nproblem are the catalog entries for the temp table, the relation forks,\nand the fact that a session basically couldn't drop (and if created in\nthat transaction, use) etc the temp table after the PREPARE.I don't see there is a problem here, but run out of time to explain more, for a week.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 26 May 2019 09:45:55 +0100",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read-only access to temp tables for 2PC transactions"
}
] |
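
A minimal SQL reproduction of the restriction this thread revolves around, against a stock server (the table name and transaction GID below are made up):

```sql
-- Two-phase commit is currently refused for transactions that have
-- operated on temporary objects; this is what the proposals above
-- try to relax for the read-only / ON COMMIT DROP cases.
BEGIN;
CREATE TEMP TABLE scratch (id int) ON COMMIT DROP;
INSERT INTO scratch VALUES (1);
PREPARE TRANSACTION 'demo_gid';
-- ERROR:  cannot PREPARE a transaction that has operated on temporary objects
-- (exact wording varies across versions)
```
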
[
{
"msg_contents": "The 'succeeded' argument seems backwards here:\n\n> static void\n> heapam_tuple_complete_speculative(Relation relation, TupleTableSlot *slot,\n> \t\t\t\t\t\t\t\t uint32 spekToken, bool succeeded)\n> {\n> \tbool\t\tshouldFree = true;\n> \tHeapTuple\ttuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);\n> \n> \t/* adjust the tuple's state accordingly */\n> \tif (!succeeded)\n> \t\theap_finish_speculative(relation, &slot->tts_tid);\n> \telse\n> \t\theap_abort_speculative(relation, &slot->tts_tid);\n> \n> \tif (shouldFree)\n> \t\tpfree(tuple);\n> }\n\nAccording to the comments, if \"succeeded = true\", the insertion is \ncompleted, and otherwise it's killed. It works, because the only caller \nis also passing the argument wrong.\n\nBarring objections, I'll push the attached patch to fix that.\n\n- Heikki",
"msg_date": "Tue, 14 May 2019 14:29:01 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Table AM callback table_complete_speculative()'s succeeded argument\n is reversed"
},
{
"msg_contents": "Hi,\n\n\nOn May 14, 2019 4:29:01 AM PDT, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>The 'succeeded' argument seems backwards here:\n>\n>> static void\n>> heapam_tuple_complete_speculative(Relation relation, TupleTableSlot\n>*slot,\n>> \t\t\t\t\t\t\t\t uint32 spekToken, bool succeeded)\n>> {\n>> \tbool\t\tshouldFree = true;\n>> \tHeapTuple\ttuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);\n>> \n>> \t/* adjust the tuple's state accordingly */\n>> \tif (!succeeded)\n>> \t\theap_finish_speculative(relation, &slot->tts_tid);\n>> \telse\n>> \t\theap_abort_speculative(relation, &slot->tts_tid);\n>> \n>> \tif (shouldFree)\n>> \t\tpfree(tuple);\n>> }\n>\n>According to the comments, if \"succeeded = true\", the insertion is \n>completed, and otherwise it's killed. It works, because the only caller\n>\n>is also passing the argument wrong.\n\nThanks for finding.\n\n\n>Barring objections, I'll push the attached patch to fix that.\n\nPlease hold off - your colleagues found this before, and I worked on getting test coverage for the code. It's scheduled for commit together today. Unfortunately nobody looked at the test much...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 14 May 2019 07:06:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Table AM callback table_complete_speculative()'s succeeded\n argument is reversed"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-14 07:06:34 -0700, Andres Freund wrote:\n> On May 14, 2019 4:29:01 AM PDT, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> >The 'succeeded' argument seems backwards here:\n> >\n> >> static void\n> >> heapam_tuple_complete_speculative(Relation relation, TupleTableSlot\n> >*slot,\n> >> \t\t\t\t\t\t\t\t uint32 spekToken, bool succeeded)\n> >> {\n> >> \tbool\t\tshouldFree = true;\n> >> \tHeapTuple\ttuple = ExecFetchSlotHeapTuple(slot, true, &shouldFree);\n> >> \n> >> \t/* adjust the tuple's state accordingly */\n> >> \tif (!succeeded)\n> >> \t\theap_finish_speculative(relation, &slot->tts_tid);\n> >> \telse\n> >> \t\theap_abort_speculative(relation, &slot->tts_tid);\n> >> \n> >> \tif (shouldFree)\n> >> \t\tpfree(tuple);\n> >> }\n> >\n> >According to the comments, if \"succeeded = true\", the insertion is \n> >completed, and otherwise it's killed. It works, because the only caller\n> >\n> >is also passing the argument wrong.\n> \n> Thanks for finding.\n> \n> \n> >Barring objections, I'll push the attached patch to fix that.\n> \n> Please hold off - your colleagues found this before, and I worked on getting test coverage for the code. It's scheduled for commit together today. Unfortunately nobody looked at the test much...\n\n\\\nAnd pushed, as https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=aa4b8c61d2cd57b53be03defb04d59b232a0e150\nwith the part that wasn't covered by tests now covered by\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=08e2edc0767ab6e619970f165cb34d4673105f23\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 12:23:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Table AM callback table_complete_speculative()'s succeeded\n argument is reversed"
}
] |
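
For context, speculative insertion is the path taken by INSERT ... ON CONFLICT, so a rough SQL sketch of what ends up exercising the callback discussed above (the table is a made-up example):

```sql
-- The second INSERT is performed speculatively: the tuple goes into the
-- heap, the unique index is checked, and table_complete_speculative()
-- then either confirms the insertion or kills it on a conflict.
CREATE TABLE demo (k int PRIMARY KEY, v text);
INSERT INTO demo VALUES (1, 'a');
INSERT INTO demo VALUES (1, 'b')
    ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v;
```
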
[
{
"msg_contents": "Hi there I am interested about the project and have gone through the\nproject idea.\nBut I would like to know more about the project and the organization\nexpectations\nthe tech writers .Apart from the skills and language mentioned..what more\nskills/language you are expecting from technical writers.\nplease let me know.\n\nHi there I am interested about the project and have gone through the project idea.But I would like to know more about the project and the organization expectations the tech writers .Apart from the skills and language mentioned..what more skills/language you are expecting from technical writers.please let me know.",
"msg_date": "Tue, 14 May 2019 18:22:29 +0530",
"msg_from": "\"Manvendra Singh 4-Yr B.Tech. Chemical Engg.,\n IIT (BHU) Varanasi\" <manvendra.singh.che17@itbhu.ac.in>",
"msg_from_op": true,
"msg_subject": "SEASON OF DOCS PROJECT"
}
] |
[
{
"msg_contents": "Hi,\n\nVACUUM fails to parse 0 and 1 as boolean value\n\nThe document for VACUUM explains\n\n boolean\n Specifies whether the selected option should be turned on or off.\n You can write TRUE, ON, or 1 to enable the option, and FALSE, OFF,\n or 0 to disable it.\n\nBut VACUUM fails to parse 0 and 1 as boolean value as follows.\n\n =# VACUUM (INDEX_CLEANUP 1);\n ERROR: syntax error at or near \"1\" at character 23\n STATEMENT: VACUUM (INDEX_CLEANUP 1);\n\nThis looks a bug. The cause of this is a lack of NumericOnly clause\nfor vac_analyze_option_arg in gram.y. The attached patch\nadds such NumericOnly. The bug exists only in 12dev.\n\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 15 May 2019 02:45:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 02:45:21 +0900, Fujii Masao wrote:\n> VACUUM fails to parse 0 and 1 as boolean value\n> \n> The document for VACUUM explains\n> \n> boolean\n> Specifies whether the selected option should be turned on or off.\n> You can write TRUE, ON, or 1 to enable the option, and FALSE, OFF,\n> or 0 to disable it.\n> \n> But VACUUM fails to parse 0 and 1 as boolean value as follows.\n> \n> =# VACUUM (INDEX_CLEANUP 1);\n> ERROR: syntax error at or near \"1\" at character 23\n> STATEMENT: VACUUM (INDEX_CLEANUP 1);\n> \n> This looks a bug. The cause of this is a lack of NumericOnly clause\n> for vac_analyze_option_arg in gram.y. The attached patch\n> adds such NumericOnly. The bug exists only in 12dev.\n> \n> Barring any objection, I will commit the patch.\n\nMight be worth having a common rule for such options, so we don't\nduplicate the knowledge between different places.\n\nCCing Robert and Sawada-san, who committed / authored that code.\n\ncommit 41b54ba78e8c4d64679ba4daf82e4e2efefe1922\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2019-03-29 08:22:49 -0400\n\n Allow existing VACUUM options to take a Boolean argument.\n \n This makes VACUUM work more like EXPLAIN already does without changing\n the meaning of any commands that already work. It is intended to\n facilitate the addition of future VACUUM options that may take\n non-Boolean parameters or that default to false.\n \n Masahiko Sawada, reviewed by me.\n \n Discussion: http://postgr.es/m/CA+TgmobpYrXr5sUaEe_T0boabV0DSm=utSOZzwCUNqfLEEm8Mw@mail.gmail.com\n Discussion: http://postgr.es/m/CAD21AoBaFcKBAeL5_++j+Vzir2vBBcF4juW7qH8b3HsQY=Q6+w@mail.gmail.com\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 10:52:23 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Tue, May 14, 2019 at 10:52:23AM -0700, Andres Freund wrote:\n> Might be worth having a common rule for such options, so we don't\n> duplicate the knowledge between different places.\n> \n> CCing Robert and Sawada-san, who committed / authored that code.\n\nHmn. I think that Robert's commit is right to rely on defGetBoolean()\nfor option parsing. That's what we use for anything from CREATE\nEXTENSION to CREATE SUBSCRIPTION, etc.\n--\nMichael",
"msg_date": "Wed, 15 May 2019 08:20:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 08:20:33 +0900, Michael Paquier wrote:\n> On Tue, May 14, 2019 at 10:52:23AM -0700, Andres Freund wrote:\n> > Might be worth having a common rule for such options, so we don't\n> > duplicate the knowledge between different places.\n> > \n> > CCing Robert and Sawada-san, who committed / authored that code.\n> \n> Hmn. I think that Robert's commit is right to rely on defGetBoolean()\n> for option parsing. That's what we use for anything from CREATE\n> EXTENSION to CREATE SUBSCRIPTION, etc.\n\nThat seems like a separate angle? What does that have to do with\naccepting 0/1 in the grammar? I mean, EXPLAIN also uses defGetBoolean(),\nwhile accepting NumericOnly for the option values?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 16:26:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Wed, May 15, 2019 at 08:20:33AM +0900, Michael Paquier wrote:\n> Hmn. I think that Robert's commit is right to rely on defGetBoolean()\n> for option parsing. That's what we use for anything from CREATE\n> EXTENSION to CREATE SUBSCRIPTION, etc.\n\nAnd I need more coffee at this time of the day... Because I have not\nlooked at the proposed patch.\n\nThe patch of Fujii-san does its job as far as it goes, but we have\nmore parsing nodes with the same logic:\n- explain_option_arg, which is the same.\n- copy_generic_opt_arg, which shares the same root.\n\nSo there is room for a common rule, still it does not impact that many\nplaces. I would have believed that more commands use that.\n--\nMichael",
"msg_date": "Wed, 15 May 2019 08:29:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Wed, May 15, 2019 at 2:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-05-15 02:45:21 +0900, Fujii Masao wrote:\n> > VACUUM fails to parse 0 and 1 as boolean value\n> >\n> > The document for VACUUM explains\n> >\n> > boolean\n> > Specifies whether the selected option should be turned on or off.\n> > You can write TRUE, ON, or 1 to enable the option, and FALSE, OFF,\n> > or 0 to disable it.\n> >\n> > But VACUUM fails to parse 0 and 1 as boolean value as follows.\n> >\n> > =# VACUUM (INDEX_CLEANUP 1);\n> > ERROR: syntax error at or near \"1\" at character 23\n> > STATEMENT: VACUUM (INDEX_CLEANUP 1);\n> >\n> > This looks a bug. The cause of this is a lack of NumericOnly clause\n> > for vac_analyze_option_arg in gram.y. The attached patch\n> > adds such NumericOnly. The bug exists only in 12dev.\n\nThank you for reporting and the patch.\n\n> >\n> > Barring any objection, I will commit the patch.\n>\n> Might be worth having a common rule for such options, so we don't\n> duplicate the knowledge between different places.\n\n+1 for committing this patch.\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 May 2019 10:46:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Wed, May 15, 2019 at 2:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-05-15 02:45:21 +0900, Fujii Masao wrote:\n> > VACUUM fails to parse 0 and 1 as boolean value\n> >\n> > The document for VACUUM explains\n> >\n> > boolean\n> > Specifies whether the selected option should be turned on or off.\n> > You can write TRUE, ON, or 1 to enable the option, and FALSE, OFF,\n> > or 0 to disable it.\n> >\n> > But VACUUM fails to parse 0 and 1 as boolean value as follows.\n> >\n> > =# VACUUM (INDEX_CLEANUP 1);\n> > ERROR: syntax error at or near \"1\" at character 23\n> > STATEMENT: VACUUM (INDEX_CLEANUP 1);\n> >\n> > This looks a bug. The cause of this is a lack of NumericOnly clause\n> > for vac_analyze_option_arg in gram.y. The attached patch\n> > adds such NumericOnly. The bug exists only in 12dev.\n> >\n> > Barring any objection, I will commit the patch.\n>\n> Might be worth having a common rule for such options, so we don't\n> duplicate the knowledge between different places.\n\nYes. Thanks for the comment!\nAttached is the updated version of the patch.\nIt adds such common rule.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Fri, 17 May 2019 03:56:17 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Thu, May 16, 2019 at 2:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> Yes. Thanks for the comment!\n> Attached is the updated version of the patch.\n> It adds such common rule.\n\nI'm not sure how much value it really has to define\nopt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n3 places, but costs 6 lines of code to have it.\n\nPerhaps we could try to unify at a higher level. Like can we merge\nvac_analyze_option_list with explain_option_list?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 May 2019 15:29:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "We now have several syntax elements seemingly the same but behave\ndifferent way.\n\nAt Thu, 16 May 2019 15:29:36 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmobK1ngid9Pxs7g8RFQDH+O1X4yyL+vMQtaV7i6m-Xn0rw@mail.gmail.com>\n> On Thu, May 16, 2019 at 2:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > Yes. Thanks for the comment!\n> > Attached is the updated version of the patch.\n> > It adds such common rule.\n> \n> I'm not sure how much value it really has to define\n> opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> 3 places, but costs 6 lines of code to have it.\n\nANALYZE (options) desn't accept 1/0 but only accepts true/false\nor on/off. Why are we going to make VACUUM differently?\n\nAnd the documentation for ANALYZE doesn't mention the change.\n\nI think we don't need to support 1/0 as boolean here (it's\nunnatural) and the documentation of VACUUM/ANALYZE should be\nfixed.\n\n> Perhaps we could try to unify at a higher level. Like can we merge\n> vac_analyze_option_list with explain_option_list?\n\nAlso REINDEX (VERBOSE) doesn't accept explict argument as of\nnow. (reindex_option_list)\n\nI'm not sure about FDW/SERVER/CREATE USER MAPPING but perhaps\nit's a different from this.\n\nCOPY .. WITH (options) doesn't accpet 1/0 as boolean.\n\ncopy_generic_opt_arg:\n opt_boolean_or_string { $$ = (Node *) makeString($1); }\n | NumericOnly { $$ = (Node *) $1; }\n | '*' { $$ = (Node *) makeNode(A_Star); }\n | '(' copy_generic_opt_arg_list ')' { $$ = (Node *) $2; }\n | /* EMPTY */ { $$ = NULL; }\n ;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Fri, 17 May 2019 10:21:21 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "Mmm. It has gone before complete.\n\nAt Fri, 17 May 2019 10:21:21 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in <20190517.102121.72558057.horiguchi.kyotaro@lab.ntt.co.jp>\n> We now have several syntax elements seemingly the same but behave\n> different way.\n> \n> At Thu, 16 May 2019 15:29:36 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmobK1ngid9Pxs7g8RFQDH+O1X4yyL+vMQtaV7i6m-Xn0rw@mail.gmail.com>\n> > On Thu, May 16, 2019 at 2:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > Yes. Thanks for the comment!\n> > > Attached is the updated version of the patch.\n> > > It adds such common rule.\n> > \n> > I'm not sure how much value it really has to define\n> > opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> > 3 places, but costs 6 lines of code to have it.\n> \n> ANALYZE (options) desn't accept 1/0 but only accepts true/false\n> or on/off. Why are we going to make VACUUM differently?\n\nThe patch changes the behvaior of ANALYZE together. Please ignore\nthis.\n\n> And the documentation for ANALYZE doesn't mention the change.\n> \n> I think we don't need to support 1/0 as boolean here (it's\n> unnatural) and the documentation of VACUUM/ANALYZE should be\n> fixed.\n> \n> > Perhaps we could try to unify at a higher level. Like can we merge\n> > vac_analyze_option_list with explain_option_list?\n> \n> Also REINDEX (VERBOSE) doesn't accept explict argument as of\n> now. (reindex_option_list)\n> \n> I'm not sure about FDW/SERVER/CREATE USER MAPPING but perhaps\n> it's a different from this.\n> \n> COPY .. WITH (options) doesn't accpet 1/0 as boolean.\n> \n> copy_generic_opt_arg:\n> opt_boolean_or_string { $$ = (Node *) makeString($1); }\n> | NumericOnly { $$ = (Node *) $1; }\n> | '*' { $$ = (Node *) makeNode(A_Star); }\n> | '(' copy_generic_opt_arg_list ')' { $$ = (Node *) $2; }\n> | /* EMPTY */ { $$ = NULL; }\n> ;\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Fri, 17 May 2019 10:24:25 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Thu, May 16, 2019 at 03:29:36PM -0400, Robert Haas wrote:\n> I'm not sure how much value it really has to define\n> opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> 3 places, but costs 6 lines of code to have it.\n> \n> Perhaps we could try to unify at a higher level. Like can we merge\n> vac_analyze_option_list with explain_option_list?\n\nvar_value has also similar semantics, and it uses makeAConst(). At\nthis point of the game, I'd like to think that it would be just better\nto leave all the refactoring behind us on HEAD, to apply the first\npatch presented on this thread, and to work more on that for v13 once\nit opens for business if there is a patch to discuss about.\n--\nMichael",
"msg_date": "Fri, 17 May 2019 10:34:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Fri, May 17, 2019 at 10:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 16, 2019 at 03:29:36PM -0400, Robert Haas wrote:\n> > I'm not sure how much value it really has to define\n> > opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> > 3 places, but costs 6 lines of code to have it.\n> >\n> > Perhaps we could try to unify at a higher level. Like can we merge\n> > vac_analyze_option_list with explain_option_list?\n>\n> var_value has also similar semantics, and it uses makeAConst(). At\n> this point of the game, I'd like to think that it would be just better\n> to leave all the refactoring behind us on HEAD, to apply the first\n> patch presented on this thread, and to work more on that for v13 once\n> it opens for business if there is a patch to discuss about.\n\nWe can refactor the gram.y several ways and it's not good to\nrush the partial refactoring code into v12 before beta.\nI'm ok to apply the first patch and focus on the bugfix at this moment.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Mon, 20 May 2019 11:47:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Thu, May 16, 2019 at 9:21 PM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> I think we don't need to support 1/0 as boolean here (it's\n> unnatural) and the documentation of VACUUM/ANALYZE should be\n> fixed.\n\nWell, it's confusing that we're not consistent about which spellings\nare accepted. The GUC system accepts true/false, on/off, and 0/1, so\nit seems reasonable to me to standardize on that treatment across the\nboard. That's not necessarily something we have to do for v12, but\nlonger-term, consistency is of value.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 May 2019 09:55:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Fri, May 17, 2019 at 10:21 AM Kyotaro HORIGUCHI\n<horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n>\n> We now have several syntax elements seemingly the same but behave\n> different way.\n>\n> At Thu, 16 May 2019 15:29:36 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmobK1ngid9Pxs7g8RFQDH+O1X4yyL+vMQtaV7i6m-Xn0rw@mail.gmail.com>\n> > On Thu, May 16, 2019 at 2:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > Yes. Thanks for the comment!\n> > > Attached is the updated version of the patch.\n> > > It adds such common rule.\n> >\n> > I'm not sure how much value it really has to define\n> > opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> > 3 places, but costs 6 lines of code to have it.\n>\n> ANALYZE (options) desn't accept 1/0 but only accepts true/false\n> or on/off. Why are we going to make VACUUM differently?\n>\n> And the documentation for ANALYZE doesn't mention the change.\n\nCommit 41b54ba78e seems to affect also ANALYZE syntax.\nIf it's intentional, IMO we should apply the attached patch.\nThought?\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Tue, 21 May 2019 02:10:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "> On Thu, May 16, 2019 at 8:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> Yes. Thanks for the comment!\n> Attached is the updated version of the patch.\n> It adds such common rule.\n\nIf I understand correctly, it resulted in the commit fc7c281f8. For some reason\nit breaks vacuum tests for me, is it expected?\n\n ANALYZE (nonexistent-arg) does_not_exist;\n -ERROR: syntax error at or near \"-\"\n +ERROR: syntax error at or near \"arg\"\n LINE 1: ANALYZE (nonexistent-arg) does_not_exist;\n - ^\n + ^\n ANALYZE (nonexistentarg) does_not_exit;\n\n\n",
"msg_date": "Mon, 20 May 2019 21:14:30 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "Hi,\n\nOn May 20, 2019 12:14:30 PM PDT, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> On Thu, May 16, 2019 at 8:56 PM Fujii Masao <masao.fujii@gmail.com>\n>wrote:\n>>\n>> Yes. Thanks for the comment!\n>> Attached is the updated version of the patch.\n>> It adds such common rule.\n>\n>If I understand correctly, it resulted in the commit fc7c281f8. For\n>some reason\n>it breaks vacuum tests for me, is it expected?\n>\n> ANALYZE (nonexistent-arg) does_not_exist;\n> -ERROR: syntax error at or near \"-\"\n> +ERROR: syntax error at or near \"arg\"\n> LINE 1: ANALYZE (nonexistent-arg) does_not_exist;\n> - ^\n> + ^\n> ANALYZE (nonexistentarg) does_not_exit;\n\nThat has since been fixed, right?\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Mon, 20 May 2019 12:20:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "> On Mon, May 20, 2019 at 9:20 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On May 20, 2019 12:14:30 PM PDT, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> >> On Thu, May 16, 2019 at 8:56 PM Fujii Masao <masao.fujii@gmail.com>\n> >wrote:\n> >>\n> >> Yes. Thanks for the comment!\n> >> Attached is the updated version of the patch.\n> >> It adds such common rule.\n> >\n> >If I understand correctly, it resulted in the commit fc7c281f8. For\n> >some reason\n> >it breaks vacuum tests for me, is it expected?\n> >\n> > ANALYZE (nonexistent-arg) does_not_exist;\n> > -ERROR: syntax error at or near \"-\"\n> > +ERROR: syntax error at or near \"arg\"\n> > LINE 1: ANALYZE (nonexistent-arg) does_not_exist;\n> > - ^\n> > + ^\n> > ANALYZE (nonexistentarg) does_not_exit;\n>\n> That has since been fixed, right?\n\nYep, right, after I've checkout 47a14c99e471. Sorry for the noise.\n\n\n",
"msg_date": "Mon, 20 May 2019 21:22:34 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Tue, May 21, 2019 at 2:10 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Fri, May 17, 2019 at 10:21 AM Kyotaro HORIGUCHI\n> <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> >\n> > We now have several syntax elements seemingly the same but behave\n> > different way.\n> >\n> > At Thu, 16 May 2019 15:29:36 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmobK1ngid9Pxs7g8RFQDH+O1X4yyL+vMQtaV7i6m-Xn0rw@mail.gmail.com>\n> > > On Thu, May 16, 2019 at 2:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > > Yes. Thanks for the comment!\n> > > > Attached is the updated version of the patch.\n> > > > It adds such common rule.\n> > >\n> > > I'm not sure how much value it really has to define\n> > > opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> > > 3 places, but costs 6 lines of code to have it.\n> >\n> > ANALYZE (options) desn't accept 1/0 but only accepts true/false\n> > or on/off. Why are we going to make VACUUM differently?\n> >\n> > And the documentation for ANALYZE doesn't mention the change.\n>\n> Commit 41b54ba78e seems to affect also ANALYZE syntax.\n> If it's intentional, IMO we should apply the attached patch.\n> Thought?\n>\n\n+1\nThank you for the patch!\n\nRegards,\n\n--\nMasahiko Sawada\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 May 2019 11:41:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Mon, May 20, 2019 at 09:55:59AM -0400, Robert Haas wrote:\n> Well, it's confusing that we're not consistent about which spellings\n> are accepted. The GUC system accepts true/false, on/off, and 0/1, so\n> it seems reasonable to me to standardize on that treatment across the\n> board. That's not necessarily something we have to do for v12, but\n> longer-term, consistency is of value.\n\n+1.\n\nNote: boolean GUCs accept a bit more: yes, no, tr, fa, and their upper\ncase flavors, etc. These are everything parse_bool():bool.c accepts\nas valid values.\n--\nMichael",
"msg_date": "Tue, 21 May 2019 14:31:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "At Tue, 21 May 2019 14:31:32 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190521053132.GG1921@paquier.xyz>\n> On Mon, May 20, 2019 at 09:55:59AM -0400, Robert Haas wrote:\n> > Well, it's confusing that we're not consistent about which spellings\n> > are accepted. The GUC system accepts true/false, on/off, and 0/1, so\n> > it seems reasonable to me to standardize on that treatment across the\n> > board. That's not necessarily something we have to do for v12, but\n> > longer-term, consistency is of value.\n> \n> +1.\n> \n> Note: boolean GUCs accept a bit more: yes, no, tr, fa, and their upper\n> case flavors, etc. These are everything parse_bool():bool.c accepts\n> as valid values.\n\nYeah, I agree for longer-term. The opinion was short-term\nconsideration on v12. We would be able to achieve full\nunification on sub-applications in v13 in that direction. (But I\ndon't think it's good that apps pass-through options then server\ncheckes them..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n",
"msg_date": "Tue, 21 May 2019 16:00:25 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 20, 2019 at 09:55:59AM -0400, Robert Haas wrote:\n>> Well, it's confusing that we're not consistent about which spellings\n>> are accepted. The GUC system accepts true/false, on/off, and 0/1, so\n>> it seems reasonable to me to standardize on that treatment across the\n>> board. That's not necessarily something we have to do for v12, but\n>> longer-term, consistency is of value.\n\n> +1.\n\n> Note: boolean GUCs accept a bit more: yes, no, tr, fa, and their upper\n> case flavors, etc. These are everything parse_bool():bool.c accepts\n> as valid values.\n\nI'm not excited about allowing abbreviated keywords here, though.\nAllowing true/false, on/off, and 0/1 seems reasonable but let's\nnot go overboard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 09:40:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 16:00:25 +0900, Kyotaro HORIGUCHI wrote:\n> At Tue, 21 May 2019 14:31:32 +0900, Michael Paquier <michael@paquier.xyz> wrote in <20190521053132.GG1921@paquier.xyz>\n> > On Mon, May 20, 2019 at 09:55:59AM -0400, Robert Haas wrote:\n> > > Well, it's confusing that we're not consistent about which spellings\n> > > are accepted. The GUC system accepts true/false, on/off, and 0/1, so\n> > > it seems reasonable to me to standardize on that treatment across the\n> > > board. That's not necessarily something we have to do for v12, but\n> > > longer-term, consistency is of value.\n> > \n> > +1.\n> > \n> > Note: boolean GUCs accept a bit more: yes, no, tr, fa, and their upper\n> > case flavors, etc. These are everything parse_bool():bool.c accepts\n> > as valid values.\n> \n> Yeah, I agree for longer-term. The opinion was short-term\n> consideration on v12. We would be able to achieve full\n> unification on sub-applications in v13 in that direction. (But I\n> don't think it's good that apps pass-through options then server\n> checkes them..)\n\nTo me it is odd to introduce an option, just to revamp the accepted\nstyle of arguments in the next release. I think we ought to just clean\nthis up now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 08:19:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 21, 2019 at 2:10 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > On Fri, May 17, 2019 at 10:21 AM Kyotaro HORIGUCHI\n> > <horiguchi.kyotaro@lab.ntt.co.jp> wrote:\n> > >\n> > > We now have several syntax elements seemingly the same but behave\n> > > different way.\n> > >\n> > > At Thu, 16 May 2019 15:29:36 -0400, Robert Haas <robertmhaas@gmail.com> wrote in <CA+TgmobK1ngid9Pxs7g8RFQDH+O1X4yyL+vMQtaV7i6m-Xn0rw@mail.gmail.com>\n> > > > On Thu, May 16, 2019 at 2:56 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > > > Yes. Thanks for the comment!\n> > > > > Attached is the updated version of the patch.\n> > > > > It adds such common rule.\n> > > >\n> > > > I'm not sure how much value it really has to define\n> > > > opt_boolean_or_string_or_numeric. It saves 1 line of code in each of\n> > > > 3 places, but costs 6 lines of code to have it.\n> > >\n> > > ANALYZE (options) desn't accept 1/0 but only accepts true/false\n> > > or on/off. Why are we going to make VACUUM differently?\n> > >\n> > > And the documentation for ANALYZE doesn't mention the change.\n> >\n> > Commit 41b54ba78e seems to affect also ANALYZE syntax.\n> > If it's intentional, IMO we should apply the attached patch.\n> > Thought?\n> >\n>\n> +1\n> Thank you for the patch!\n\nI found that tab-completion also needs to be updated for ANALYZE\nboolean options. I added that change for tab-completion into\nthe patch and am thinking to apply the attached patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 22 May 2019 04:32:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Wed, May 22, 2019 at 04:32:38AM +0900, Fujii Masao wrote:\n> I found that tab-completion also needs to be updated for ANALYZE\n> boolean options. I added that change for tab-completion into\n> the patch and am thinking to apply the attached patch.\n\nLooks fine to me at quick glance.\n--\nMichael",
"msg_date": "Wed, 22 May 2019 16:46:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
},
{
"msg_contents": "On Wed, May 22, 2019 at 4:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 22, 2019 at 04:32:38AM +0900, Fujii Masao wrote:\n> > I found that tab-completion also needs to be updated for ANALYZE\n> > boolean options. I added that change for tab-completion into\n> > the patch and am thinking to apply the attached patch.\n>\n> Looks fine to me at quick glance.\n\nThanks! Committed.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 23 May 2019 01:20:53 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM fails to parse 0 and 1 as boolean value"
}
] |
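
To recap what the committed fix accepts, a quick sketch of the boolean spellings in the parenthesized syntax (the table name is illustrative):

```sql
VACUUM (INDEX_CLEANUP TRUE) pgbench_accounts;
VACUUM (INDEX_CLEANUP on)   pgbench_accounts;
VACUUM (INDEX_CLEANUP 1)    pgbench_accounts;  -- rejected before the fix
VACUUM (INDEX_CLEANUP 0)    pgbench_accounts;  -- rejected before the fix
VACUUM (INDEX_CLEANUP)      pgbench_accounts;  -- bare option defaults to true
```
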
[
{
"msg_contents": "Hi,\n\nThere's a new set of CPU vulnerabilities, so far only affecting intel\nCPUs. Cribbing from the linux-kernel announcement I'm referring to\nhttps://xenbits.xen.org/xsa/advisory-297.html\nfor details.\n\nThe \"fix\" is for the OS to perform some extra mitigations:\nhttps://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html\nhttps://www.kernel.org/doc/html/latest/x86/mds.html#mds\n\n*And* SMT/hyperthreading needs to be disabled, to be fully safe.\n\nFun.\n\nI've run a quick pgbench benchmark:\n\n*Without* disabling SMT, for readonly pgbench, I'm seeing regressions\nbetween 7-11%, depending on the size of shared_buffers (and some runtime\nvariations). That's just on my laptop, with an i7-6820HQ / Haswell CPU.\nI'd be surprised if there weren't adversarial loads with bigger\nslowdowns - what gets more expensive with the mitigations is syscalls.\n\n\nMost OSs / distributions either have rolled these changes out already,\nor will do so soon. So it's likely that most of us and our users will be\naffected by this soon. At least on linux the part of the mitigation\nthat makes syscalls slower (blowing away buffers at the end of a sycall)\nis enabled by default, but SMT is not disabled by default.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 15:30:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "On Wed, May 15, 2019 at 10:31 AM Andres Freund <andres@anarazel.de> wrote:\n> *Without* disabling SMT, for readonly pgbench, I'm seeing regressions\n> between 7-11%, depending on the size of shared_buffers (and some runtime\n> variations). That's just on my laptop, with an i7-6820HQ / Haswell CPU.\n> I'd be surprised if there weren't adversarial loads with bigger\n> slowdowns - what gets more expensive with the mitigations is syscalls.\n\nYikes. This all in warm shared buffers, right? So effectively this\nis the cost of recvfrom() and sendto() going up? Did you use -M\nprepared? If not, there would also be a couple of lseek(SEEK_END)\ncalls in between for planning... I wonder how many more\nsyscall-taxing mitigations we need before relation size caching pays\noff.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 May 2019 12:52:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 12:52:47 +1200, Thomas Munro wrote:\n> On Wed, May 15, 2019 at 10:31 AM Andres Freund <andres@anarazel.de> wrote:\n> > *Without* disabling SMT, for readonly pgbench, I'm seeing regressions\n> > between 7-11%, depending on the size of shared_buffers (and some runtime\n> > variations). That's just on my laptop, with an i7-6820HQ / Haswell CPU.\n> > I'd be surprised if there weren't adversarial loads with bigger\n> > slowdowns - what gets more expensive with the mitigations is syscalls.\n> \n> Yikes. This all in warm shared buffers, right?\n\nNot initially, but it ought to warm up quite quickly. I ran something\nboiling down to pgbench -q -i -s 200; psql -c 'vacuum (freeze, analyze,\nverbose)'; pgbench -n -S -c 32 -j 32 -S -M prepared -T 100 -P1. As both\npgbench -i's COPY and VACUUM use ringbuffers, initially s_b will\neffectively be empty.\n\n\n> So effectively this is the cost of recvfrom() and sendto() going up?\n\nPlus epoll_wait(). And read(), for the cases where s_b was smaller than\nthe data.\n\n\n> Did you use -M prepared?\n\nYes.\n\n\n> If not, there would also be a couple of lseek(SEEK_END) calls in\n> between for planning... I wonder how many more syscall-taxing\n> mitigations we need before relation size caching pays off.\n\nYea, I suspect we're going to have to go there soon for a number of\nreasons.\n\n- Andres\n\n\n",
"msg_date": "Tue, 14 May 2019 18:06:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-14 15:30:52 -0700, Andres Freund wrote:\n> There's a new set of CPU vulnerabilities, so far only affecting intel\n> CPUs. Cribbing from the linux-kernel announcement I'm referring to\n> https://xenbits.xen.org/xsa/advisory-297.html\n> for details.\n> \n> The \"fix\" is for the OS to perform some extra mitigations:\n> https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html\n> https://www.kernel.org/doc/html/latest/x86/mds.html#mds\n> \n> *And* SMT/hyperthreading needs to be disabled, to be fully safe.\n> \n> Fun.\n> \n> I've run a quick pgbench benchmark:\n> \n> *Without* disabling SMT, for readonly pgbench, I'm seeing regressions\n> between 7-11%, depending on the size of shared_buffers (and some runtime\n> variations). That's just on my laptop, with an i7-6820HQ / Haswell CPU.\n> I'd be surprised if there weren't adversarial loads with bigger\n> slowdowns - what gets more expensive with the mitigations is syscalls.\n\nThe profile after the mitigations looks like:\n\n+ 3.62% postgres [kernel.vmlinux] [k] do_syscall_64\n+ 2.99% postgres postgres [.] _bt_compare\n+ 2.76% postgres postgres [.] hash_search_with_hash_value\n+ 2.33% postgres [kernel.vmlinux] [k] entry_SYSCALL_64\n+ 1.69% pgbench [kernel.vmlinux] [k] do_syscall_64\n+ 1.61% postgres postgres [.] AllocSetAlloc\n 1.41% postgres postgres [.] PostgresMain\n+ 1.22% pgbench [kernel.vmlinux] [k] entry_SYSCALL_64\n+ 1.11% postgres postgres [.] LWLockAcquire\n+ 0.86% postgres postgres [.] PinBuffer\n+ 0.80% postgres postgres [.] LockAcquireExtended\n+ 0.78% postgres [kernel.vmlinux] [k] psi_task_change\n 0.76% pgbench pgbench [.] threadRun\n 0.69% postgres postgres [.] LWLockRelease\n+ 0.69% postgres postgres [.] SearchCatCache1\n 0.66% postgres postgres [.] LockReleaseAll\n+ 0.65% postgres postgres [.] GetSnapshotData\n+ 0.58% postgres postgres [.] hash_seq_search\n 0.54% postgres postgres [.] hash_search\n+ 0.53% postgres [kernel.vmlinux] [k] __switch_to\n+ 0.53% postgres postgres [.] hash_any\n 0.52% pgbench libpq.so.5.12 [.] pqParseInput3\n 0.50% pgbench [kernel.vmlinux] [k] do_raw_spin_lock\n\nwhere do_syscall_64 show this instruction profile:\n\n │ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)\n │ {\n │ asm_volatile_goto(\"1:\"\n 1.58 │ ↓ jmpq bd\n │ mds_clear_cpu_buffers():\n │ * Works with any segment selector, but a valid writable\n │ * data segment is the fastest variant.\n │ *\n │ * \"cc\" clobber is required because VERW modifies ZF.\n │ */\n │ asm volatile(\"verw %[ds]\" : : [ds] \"m\" (ds) : \"cc\");\n 77.38 │ verw 0x13fea53(%rip) # ffffffff82400ee0 <ds.4768>\n │ do_syscall_64():\n │ }\n │\n │ syscall_return_slowpath(regs);\n │ }\n 13.18 │ bd: pop %rbx\n 0.08 │ pop %rbp\n │ ← retq\n │ nr = syscall_trace_enter(regs);\n │ c0: mov %rbp,%rdi\n │ → callq syscall_trace_enter\n\n\nWhere verw is the instruction that was recycled to now have the\nside-effect of flushing CPU buffers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 May 2019 18:13:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "On Wed, May 15, 2019 at 1:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > I've run a quick pgbench benchmark:\n> >\n> > *Without* disabling SMT, for readonly pgbench, I'm seeing regressions\n> > between 7-11%, depending on the size of shared_buffers (and some runtime\n> > variations). That's just on my laptop, with an i7-6820HQ / Haswell CPU.\n> > I'd be surprised if there weren't adversarial loads with bigger\n> > slowdowns - what gets more expensive with the mitigations is syscalls.\n\nThis stuff landed in my FreeBSD 13.0-CURRENT kernel, so I was curious\nto measure it with and without the earlier mitigations. On my humble\ni7-8550U laptop with the new 1.22 microcode installed, with my usual\nsettings of PTI=on and IBRS=off, so far MDS=VERW gives me ~1.5% loss\nof TPS with a single client, up to 4.3% loss of TPS for 16 clients,\nbut it didn't go higher when I tried 32 clients. This was a tiny\nscale 10 database, though in a quick test it didn't look like it was\nworse with scale 100.\n\nWith all three mitigations activated, my little dev machine has gone\nfrom being able to do ~11.8 million baseline syscalls per second to\n~1.6 million, or ~1.4 million with the AVX variant of the mitigation.\n\nRaw getuid() syscalls per second:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 11798658 4764159 3274043\n off on 2652564 1941606 1655356\n on off 4973053 2932906 2339779\n on on 1988527 1556922 1378798\n\npgbench read-only transactions per second, 1 client thread:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 19393 18949 18615\n off on 17946 17586 17323\n on off 19381 19015 18696\n on on 18045 17709 17418\n\npgbench -M prepared read-only transactions per second, 1 client thread:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 35020 34049 33200\n off on 31658 30902 30229\n on off 35445 34353 33415\n on on 32415 31599 30712\n\npgbench -M prepared read-only transactions per second, 4 client threads:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 79515 76898 76465\n off on 63608 62220 61952\n on off 77863 75431 74847\n on on 62709 60790 60575\n\npgbench -M prepared read-only transactions per second, 16 client threads:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 125984 121164 120468\n off on 112884 108346 107984\n on off 121032 116156 115462\n on on 108889 104636 104027\n\ntime gmake -s check:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 16.78 16.85 17.03\n off on 18.19 18.81 19.08\n on off 16.67 16.86 17.33\n on on 18.58 18.83 18.99\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2019 23:08:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "Missatge de Thomas Munro <thomas.munro@gmail.com> del dia dj., 16 de\nmaig 2019 a les 13:09:\n>\n> On Wed, May 15, 2019 at 1:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I've run a quick pgbench benchmark:\n> > >\n> > > *Without* disabling SMT, for readonly pgbench, I'm seeing regressions\n> > > between 7-11%, depending on the size of shared_buffers (and some runtime\n> > > variations). That's just on my laptop, with an i7-6820HQ / Haswell CPU.\n> > > I'd be surprised if there weren't adversarial loads with bigger\n> > > slowdowns - what gets more expensive with the mitigations is syscalls.\n>\n> This stuff landed in my FreeBSD 13.0-CURRENT kernel, so I was curious\n> to measure it with and without the earlier mitigations. On my humble\n> i7-8550U laptop with the new 1.22 microcode installed, with my usual\n> settings of PTI=on and IBRS=off, so far MDS=VERW gives me ~1.5% loss\n> of TPS with a single client, up to 4.3% loss of TPS for 16 clients,\n> but it didn't go higher when I tried 32 clients. This was a tiny\n> scale 10 database, though in a quick test it didn't look like it was\n> worse with scale 100.\n>\n> With all three mitigations activated, my little dev machine has gone\n> from being able to do ~11.8 million baseline syscalls per second to\n\nDid you mean \"1.8\"?\n\n> ~1.6 million, or ~1.4 million with the AVX variant of the mitigation.\n>\n> Raw getuid() syscalls per second:\n>\n> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> ===== ===== ======== ======== ========\n> off off 11798658 4764159 3274043\n> off on 2652564 1941606 1655356\n> on off 4973053 2932906 2339779\n> on on 1988527 1556922 1378798\n>\n> pgbench read-only transactions per second, 1 client thread:\n>\n> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> ===== ===== ======== ======== ========\n> off off 19393 18949 18615\n> off on 17946 17586 17323\n> on off 19381 19015 18696\n> on on 18045 17709 17418\n>\n> pgbench -M prepared read-only transactions per second, 1 client thread:\n>\n> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> ===== ===== ======== ======== ========\n> off off 35020 34049 33200\n> off on 31658 30902 30229\n> on off 35445 34353 33415\n> on on 32415 31599 30712\n>\n> pgbench -M prepared read-only transactions per second, 4 client threads:\n>\n> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> ===== ===== ======== ======== ========\n> off off 79515 76898 76465\n> off on 63608 62220 61952\n> on off 77863 75431 74847\n> on on 62709 60790 60575\n>\n> pgbench -M prepared read-only transactions per second, 16 client threads:\n>\n> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> ===== ===== ======== ======== ========\n> off off 125984 121164 120468\n> off on 112884 108346 107984\n> on off 121032 116156 115462\n> on on 108889 104636 104027\n>\n> time gmake -s check:\n>\n> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> ===== ===== ======== ======== ========\n> off off 16.78 16.85 17.03\n> off on 18.19 18.81 19.08\n> on off 16.67 16.86 17.33\n> on on 18.58 18.83 18.99\n>\n> --\n> Thomas Munro\n> https://enterprisedb.com\n>\n>\n\n\n-- \nAlbert Cervera i Areny\nhttp://www.NaN-tic.com\nTel. 93 553 18 03\n\n\n",
"msg_date": "Thu, 16 May 2019 18:24:40 +0200",
"msg_from": "Albert Cervera i Areny <albert@nan-tic.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "On 5/16/19 12:24 PM, Albert Cervera i Areny wrote:\n> Missatge de Thomas Munro <thomas.munro@gmail.com> del dia dj., 16 de\n> maig 2019 a les 13:09:\n>> With all three mitigations activated, my little dev machine has gone\n>> from being able to do ~11.8 million baseline syscalls per second to\n> \n> Did you mean \"1.8\"?\n\nNot in what I thought I saw:\n\n>> ~1.6 million, or ~1.4 million ...\n>>\n>> PTI IBRS MDS=off MDS=VERW MDS=AVX\n>> ===== ===== ======== ======== ========\n>> off off 11798658 4764159 3274043\n ^^^^^^^^\n>> off on 2652564 1941606 1655356\n>> on off 4973053 2932906 2339779\n>> on on 1988527 1556922 1378798\n ^^^^^^^ ^^^^^^^\n\n-Chap\n\n\n",
"msg_date": "Thu, 16 May 2019 13:26:41 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "On Fri, May 17, 2019 at 5:26 AM Chapman Flack <chap@anastigmatix.net> wrote:\n> On 5/16/19 12:24 PM, Albert Cervera i Areny wrote:\n> > Missatge de Thomas Munro <thomas.munro@gmail.com> del dia dj., 16 de\n> > maig 2019 a les 13:09:\n> >> With all three mitigations activated, my little dev machine has gone\n> >> from being able to do ~11.8 million baseline syscalls per second to\n> >\n> > Did you mean \"1.8\"?\n>\n> Not in what I thought I saw:\n>\n> >> ~1.6 million, or ~1.4 million ...\n> >>\n> >> PTI IBRS MDS=off MDS=VERW MDS=AVX\n> >> ===== ===== ======== ======== ========\n> >> off off 11798658 4764159 3274043\n> ^^^^^^^^\n> >> off on 2652564 1941606 1655356\n> >> on off 4973053 2932906 2339779\n> >> on on 1988527 1556922 1378798\n> ^^^^^^^ ^^^^^^^\n\nRight. Actually it's worse than that -- after I posted I realised\nthat I had some debug stuff enabled in my kernel that was slowing\nthings down a bit, so I reran the tests overnight with a production\nkernel and here is what I see this morning. It's actually ~17.8\nmillion syscalls/sec -> ~1.7 million syscalls/sec, if you go from all\nmitigations off to all mitigations on, or -> ~3.2 million for just PTI\n+ MDS. And the loss of TPS is ~5% for the case I was most interested\nin, just turning on MDS=VERW if you already had PTI on and IBRS off.\n\nRaw getuid() syscalls per second:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 17771744 5372032 3575035\n off on 3060923 2166527 1817052\n on off 5622591 3150883 2463934\n on on 2213190 1687748 1475605\n\npgbench read-only transactions per second, 1 client thread:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 22414 22103 21571\n off on 21298 20817 20418\n on off 22473 22080 21550\n on on 21286 20850 20386\n\npgbench -M prepared read-only transactions per second, 1 client thread:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 43508 42476 41123\n off on 40729 39483 38555\n on off 44110 42989 42012\n on on 41143 39990 38798\n\npgbench -M prepared read-only transactions per second, 4 client threads:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 100735 97689 96662\n off on 80142 77804 77064\n on off 100540 97010 95827\n on on 79492 76976 76226\n\npgbench -M prepared read-only transactions per second, 16 client threads:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 161015 152978 152556\n off on 145605 139438 139179\n on off 155359 147691 146987\n on on 140976 134978 134177\n\npgbench -M prepared read-only transactions per second, 16 client threads:\n\n PTI IBRS MDS=off MDS=VERW MDS=AVX\n ===== ===== ======== ======== ========\n off off 157986 150132 149436\n off on 142618 136220 135901\n on off 153482 146214 145839\n on on 138650 133074 132142\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2019 09:42:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
},
{
"msg_contents": "On Fri, May 17, 2019 at 9:42 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> pgbench -M prepared read-only transactions per second, 16 client threads:\n\n(That second \"16 client threads\" line should read \"32 client threads\".)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2019 09:46:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PSA: New intel MDS vulnerability mitigations cause measurable\n slowdown"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have just finished my annual set of checks with\nwal_consistency_checking enabled based on f4125278, and I am seeing no\nfailures when replaying comparison pages on a standby.\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 15 May 2019 13:10:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "wal_consistency_checking clean on HEAD (f4125278)"
}
] |
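A rough SQL sketch of the kind of check described above (table and workload are illustrative assumptions, not Michael's actual script; it assumes a primary with a streaming standby attached):

    -- On the primary, as superuser; applies to WAL generated from here on
    SET wal_consistency_checking = 'all';

    -- Generate WAL touching several access methods; during replay the
    -- standby compares each modified page against the full-page image
    -- carried in the WAL and fails with "inconsistent page found" on a
    -- mismatch
    CREATE TABLE consistency_t (i int PRIMARY KEY, t text);
    INSERT INTO consistency_t SELECT g, md5(g::text) FROM generate_series(1, 100000) g;
    UPDATE consistency_t SET t = t || 'x' WHERE i % 10 = 0;
    DELETE FROM consistency_t WHERE i % 7 = 0;
    VACUUM consistency_t;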
[
{
"msg_contents": "Hi,\r\nCurrently any DDL operations (Create Indexes, Drop Indexes etc.) when run during an existing concurrent index build on the same table causes the index build to fail with “deadlock detected”. This is a pain-point specially when we want to kick-off multiple concurrent index builds on the same table; the index build will reach phase 3 (consuming resources) and then fail with deadlock errors.\r\n\r\nI have a patch that might improve the build times and reduce deadlock occurrences. Is this something the community would be interested in? I might be missing some documentation changes in the patch but wanted to get some feedback on the functional aspect of the patch first.\r\n\r\nProblem:\r\nIn the Concurrent Index creation implementation there are three waits that are relevant:\r\n\r\n 1. Wait 1 at start of Phase 2: Postgres waits for all transactions that started before this transaction and conflict with “Share Lock” on this relation. This is to make sure from this point forward all HOT updates to the table will be compatible with the new index.\r\n 2. Wait 2 at the start of Phase 3: Postgres waits for all transactions that started before this transaction and conflict with “Share Lock” on this relation.\r\n 3. Wait 3 at the end of Phase 3: PG waits for all transactions that started before this transaction primarily because they should not start using the index as they might be using an older snapshot and the index does not have all the entries (missing deleted tuples) for snapshot.\r\n\r\nTypically, all the three wait states can cause deadlocks. Deadlocks due to the third wait state is reproduced by transactions that are waiting for a lock to be freed from “CREATE INDEX CONCURRENTLY” will cause deadlocks (primarily DDLs). The former 2 waits are much harder to reproduce with the test case being a Insert/Update/Delete as first statement of the transaction and then another DDL which causes lock escalation.\r\n\r\nProposed Solution:\r\nWe remove the third wait state completely from the concurrent index build. When we mark the index as ready, we also mark “indcheckxmin” to true which essentially enforces Postgres to not use this index for older snapshots.\r\n\r\nTests:\r\nAdded an isolation test which breaks without the patch. Manual test with a Repeatable Read Transaction that has an older snapshot with a tuple that has been deleted since and not part of the index.\r\n\r\n\r\nMay the force be with you,\r\nDhruv",
"msg_date": "Wed, 15 May 2019 08:15:04 +0000",
"msg_from": "\"Goel, Dhruv\" <goeldhru@amazon.com>",
"msg_from_op": true,
"msg_subject": "Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
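A minimal SQL sketch of the third-wait deadlock described above (assuming a pre-existing table t(i int); the two sessions are shown interleaved):

    -- session 1: leave open a transaction that CIC's final wait must wait for
    BEGIN;
    INSERT INTO t VALUES (1);

    -- session 2: the concurrent build reaches the end of phase 3 and
    -- blocks waiting for session 1's transaction to finish
    CREATE INDEX CONCURRENTLY t_i_idx ON t (i);

    -- session 1: a DDL that needs a lock conflicting with the
    -- ShareUpdateExclusiveLock held by session 2's CREATE INDEX
    -- CONCURRENTLY; each session now waits on the other
    ALTER TABLE t ADD COLUMN j int;   -- ERROR:  deadlock detected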
{
"msg_contents": "Hello,\n\nOn Wed, May 15, 2019 at 1:45 PM Goel, Dhruv <goeldhru@amazon.com> wrote:\n\n>\n>\n> Proposed Solution:\n>\n> We remove the third wait state completely from the concurrent index build.\n> When we mark the index as ready, we also mark “indcheckxmin” to true which\n> essentially enforces Postgres to not use this index for older snapshots.\n>\n>\n>\nI think there is a problem in the proposed solution. When phase 3 is\nreached, the index is valid. But it might not contain tuples deleted\njust before the reference snapshot was taken. Hence, we wait for those\ntransactions that might have older snapshot. The TransactionXmin of these\ntransactions can be greater than the xmin of the pg_index entry for this\nindex.\nInstead of waiting in the third phase, if we just set indcheckxmin as true,\nthe above transactions will be able to use the index which is wrong.\n(because they won't find the recently deleted tuples from the index that\nare still live according to their snapshots)\n\nThe respective code from get_relation_info:\nif (index->indcheckxmin &&\n\n\n\n !TransactionIdPrecedes(HeapTupleHeaderGetXmin(indexRelation->rd_indextuple->t_data),\nTransactionXmin))\n { /* don't use this index */ }\n\nPlease let me know if I'm missing something.\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\nHello,On Wed, May 15, 2019 at 1:45 PM Goel, Dhruv <goeldhru@amazon.com> wrote:\n \nProposed Solution:\nWe remove the third wait state completely from the concurrent index build. When we mark the index as ready, we also mark “indcheckxmin” to true which essentially enforces Postgres to not use this\n index for older snapshots.\n \n\n\nI think there is a problem in the proposed solution. When phase 3 is reached, the index is valid. But it might not contain tuples deleted just before the reference snapshot was taken. Hence, we wait for those transactions that might have older snapshot. The TransactionXmin of these transactions can be greater than the xmin of the pg_index entry for this index.Instead of waiting in the third phase, if we just set indcheckxmin as true, the above transactions will be able to use the index which is wrong. (because they won't find the recently deleted tuples from the index that are still live according to their snapshots)The respective code from get_relation_info:if (index->indcheckxmin && !TransactionIdPrecedes(HeapTupleHeaderGetXmin(indexRelation->rd_indextuple->t_data), TransactionXmin)) { /* don't use this index */ }Please let me know if I'm missing something.-- Thanks & Regards,Kuntal GhoshEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 15 May 2019 16:14:23 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
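The indcheckxmin flag under discussion is visible from SQL; a quick way to see which indexes carry the extra xmin test (table name is illustrative):

    SELECT indexrelid::regclass AS index_name, indcheckxmin
    FROM pg_index
    WHERE indrelid = 't'::regclass;

When indcheckxmin is true, the planner applies the get_relation_info check quoted above before considering the index for a given snapshot.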
{
"msg_contents": "Yes, you are correct. The test case here was that if a tuple is inserted after the reference snapshot is taken in Phase 2 and before the index is marked ready. If this tuple is deleted before the reference snapshot of Phase 3, it will never make it to the index. I have fixed this problem by making pg_index tuple updates transactional (I believe there is no reason why it has to be in place now) so that the xmin of the pg_index tuple is same the xmin of the snapshot in Phase 3.\r\n\r\nAttached the amended patch.\r\n\r\nFrom: Kuntal Ghosh <kuntalghosh.2007@gmail.com>\r\nDate: Wednesday, May 15, 2019 at 3:45 AM\r\nTo: \"Goel, Dhruv\" <goeldhru@amazon.com>\r\nCc: \"pgsql-hackers@postgresql.org\" <pgsql-hackers@postgresql.org>\r\nSubject: Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY\r\n\r\nHello,\r\n\r\nOn Wed, May 15, 2019 at 1:45 PM Goel, Dhruv <goeldhru@amazon.com<mailto:goeldhru@amazon.com>> wrote:\r\n\r\nProposed Solution:\r\nWe remove the third wait state completely from the concurrent index build. When we mark the index as ready, we also mark “indcheckxmin” to true which essentially enforces Postgres to not use this index for older snapshots.\r\n\r\nI think there is a problem in the proposed solution. When phase 3 is reached, the index is valid. But it might not contain tuples deleted just before the reference snapshot was taken. Hence, we wait for those transactions that might have older snapshot. The TransactionXmin of these transactions can be greater than the xmin of the pg_index entry for this index.\r\nInstead of waiting in the third phase, if we just set indcheckxmin as true, the above transactions will be able to use the index which is wrong. (because they won't find the recently deleted tuples from the index that are still live according to their snapshots)\r\n\r\nThe respective code from get_relation_info:\r\nif (index->indcheckxmin &&\r\n !TransactionIdPrecedes(HeapTupleHeaderGetXmin(indexRelation->rd_indextuple->t_data), TransactionXmin))\r\n { /* don't use this index */ }\r\n\r\nPlease let me know if I'm missing something.\r\n\r\n--\r\nThanks & Regards,\r\nKuntal Ghosh\r\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 6 Jun 2019 22:13:14 +0000",
"msg_from": "\"Goel, Dhruv\" <goeldhru@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "\"Goel, Dhruv\" <goeldhru@amazon.com> writes:\n> Yes, you are correct. The test case here was that if a tuple is inserted after the reference snapshot is taken in Phase 2 and before the index is marked ready. If this tuple is deleted before the reference snapshot of Phase 3, it will never make it to the index. I have fixed this problem by making pg_index tuple updates transactional (I believe there is no reason why it has to be in place now) so that the xmin of the pg_index tuple is same the xmin of the snapshot in Phase 3.\n\nI think you are mistaken that doing transactional updates in pg_index\nis OK. If memory serves, we rely on xmin of the pg_index row for purposes\nsuch as detecting whether a concurrently-created index is safe to use yet.\nSo a transactional update would restart that clock and result in temporary\ndenial of service.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jun 2019 11:36:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "Hi,\n\nOn June 9, 2019 8:36:37 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\"Goel, Dhruv\" <goeldhru@amazon.com> writes:\n>I think you are mistaken that doing transactional updates in pg_index\n>is OK. If memory serves, we rely on xmin of the pg_index row for\n>purposes\n>such as detecting whether a concurrently-created index is safe to use\n>yet.\n\nWe could replace that with storing a 64 xid in a normal column nowadays.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sun, 09 Jun 2019 08:40:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On June 9, 2019 8:36:37 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think you are mistaken that doing transactional updates in pg_index\n>> is OK. If memory serves, we rely on xmin of the pg_index row for\n>> purposes such as detecting whether a concurrently-created index is safe\n>> to use yet.\n\n> We could replace that with storing a 64 xid in a normal column nowadays.\n\nPerhaps, but that's a nontrivial change that'd be prerequisite to\ndoing what's suggested in this thread.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Jun 2019 20:33:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "\n> On Jun 9, 2019, at 5:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andres Freund <andres@anarazel.de> writes:\n>> On June 9, 2019 8:36:37 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I think you are mistaken that doing transactional updates in pg_index\n>>> is OK. If memory serves, we rely on xmin of the pg_index row for\n>>> purposes such as detecting whether a concurrently-created index is safe\n>>> to use yet.\n\nI took a deeper look regarding this use case but was unable to find more evidence. As part of this patch, we essentially make concurrently-created index safe to use only if transaction started after the xmin of Phase 3. Even today concurrent indexes can not be used for transactions before this xmin because of the wait (which I am trying to get rid of in this patch), is there any other denial of service you are talking about? Both the other states indislive, indisready can be transactional updates as far as I understand. Is there anything more I am missing here?\n\n",
"msg_date": "Mon, 10 Jun 2019 20:22:16 +0000",
"msg_from": "\"Goel, Dhruv\" <goeldhru@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "\n> On Jun 10, 2019, at 1:20 PM, Goel, Dhruv <goeldhru@amazon.com> wrote:\n> \n> \n>> On Jun 9, 2019, at 5:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Andres Freund <andres@anarazel.de> writes:\n>>> On June 9, 2019 8:36:37 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> I think you are mistaken that doing transactional updates in pg_index\n>>>> is OK. If memory serves, we rely on xmin of the pg_index row for\n>>>> purposes such as detecting whether a concurrently-created index is safe\n>>>> to use yet.\n> \n> I took a deeper look regarding this use case but was unable to find more evidence. As part of this patch, we essentially make concurrently-created index safe to use only if transaction started after the xmin of Phase 3. Even today concurrent indexes can not be used for transactions before this xmin because of the wait (which I am trying to get rid of in this patch), is there any other denial of service you are talking about? Both the other states indislive, indisready can be transactional updates as far as I understand. Is there anything more I am missing here?\n\n\nHi,\n\nI did some more concurrency testing here through some python scripts which compare the end state of the concurrently created indexes. I also back-ported this patch to PG 9.6 and ran some custom concurrency tests (Inserts, Deletes, and Create Index Concurrently) which seem to succeed. The intermediate states unfortunately are not easy to test in an automated manner, but to be fair concurrent indexes could never be used for older transactions. Do you have more inputs/ideas on this patch?\n\nThanks,\nDhruv\n\n",
"msg_date": "Sun, 30 Jun 2019 07:30:01 +0000",
"msg_from": "\"Goel, Dhruv\" <goeldhru@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "On Sun, Jun 30, 2019 at 7:30 PM Goel, Dhruv <goeldhru@amazon.com> wrote:\n> > On Jun 10, 2019, at 1:20 PM, Goel, Dhruv <goeldhru@amazon.com> wrote:\n> >> On Jun 9, 2019, at 5:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> On June 9, 2019 8:36:37 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>>> I think you are mistaken that doing transactional updates in pg_index\n> >>>> is OK. If memory serves, we rely on xmin of the pg_index row for\n> >>>> purposes such as detecting whether a concurrently-created index is safe\n> >>>> to use yet.\n> >\n> > I took a deeper look regarding this use case but was unable to find more evidence. As part of this patch, we essentially make concurrently-created index safe to use only if transaction started after the xmin of Phase 3. Even today concurrent indexes can not be used for transactions before this xmin because of the wait (which I am trying to get rid of in this patch), is there any other denial of service you are talking about? Both the other states indislive, indisready can be transactional updates as far as I understand. Is there anything more I am missing here?\n>\n> I did some more concurrency testing here through some python scripts which compare the end state of the concurrently created indexes. I also back-ported this patch to PG 9.6 and ran some custom concurrency tests (Inserts, Deletes, and Create Index Concurrently) which seem to succeed. The intermediate states unfortunately are not easy to test in an automated manner, but to be fair concurrent indexes could never be used for older transactions. Do you have more inputs/ideas on this patch?\n\nI noticed that check-world passed several times with this patch\napplied, but the most recent CI run failed in multiple-cic:\n\n+error in steps s2i s1i: ERROR: cache lookup failed for index 26303\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/555472214\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 09:51:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "On Mon, Jul 8, 2019 at 9:51 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I noticed that check-world passed several times with this patch\n> applied, but the most recent CI run failed in multiple-cic:\n>\n> +error in steps s2i s1i: ERROR: cache lookup failed for index 26303\n>\n> https://travis-ci.org/postgresql-cfbot/postgresql/builds/555472214\n\nAnd in another run, this time on Windows, create_index failed:\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46455\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 10:15:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "Hi,\n\nThank you Thomas.\n\nOn Jul 7, 2019, at 3:15 PM, Thomas Munro <thomas.munro@gmail.com<mailto:thomas.munro@gmail.com>> wrote:\n\nOn Mon, Jul 8, 2019 at 9:51 AM Thomas Munro <thomas.munro@gmail.com<mailto:thomas.munro@gmail.com>> wrote:\nI noticed that check-world passed several times with this patch\napplied, but the most recent CI run failed in multiple-cic:\n\n+error in steps s2i s1i: ERROR: cache lookup failed for index 26303\n\nhttps://travis-ci.org/postgresql-cfbot/postgresql/builds/555472214\n\nAnd in another run, this time on Windows, create_index failed:\n\nhttps://ci.appveyor.com/project/postgresql-cfbot/postgresql/build/1.0.46455\n\n--\nThomas Munro\nhttps://enterprisedb.com\nI have attached the revised patch. I ran check-world multiple times on my machine and it seems to succeed now. Do you mind kicking-off the CI build with the latest patch?",
"msg_date": "Mon, 8 Jul 2019 22:33:34 +0000",
"msg_from": "\"Goel, Dhruv\" <goeldhru@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 10:33 AM Goel, Dhruv <goeldhru@amazon.com> wrote:\n> I have attached the revised patch. I ran check-world multiple times on my machine and it seems to succeed now. Do you mind kicking-off the CI build with the latest patch?\n\nThanks.\n\nIt's triggered automatically when you post patches to the thread and\nalso once a day, though it took ~35 minutes to get around to noticing\nyour new version due to other activity in other threads, and general\nlack of horsepower. I'm planning to fix that with more horses.\n\nIt passed on both OSes. See here:\n\nhttp://cfbot.cputube.org/dhruv-goel.html\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Jul 2019 11:22:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "Hi Dhruv,\n\nOn Sun, June 30, 2019 at 7:30 AM, Goel, Dhruv wrote:\n> > On Jun 10, 2019, at 1:20 PM, Goel, Dhruv <goeldhru@amazon.com> wrote:\n> >> On Jun 9, 2019, at 5:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> On June 9, 2019 8:36:37 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>>> I think you are mistaken that doing transactional updates in\n> >>>> pg_index is OK. If memory serves, we rely on xmin of the pg_index\n> >>>> row for purposes such as detecting whether a concurrently-created\n> >>>> index is safe to use yet.\n> >\n> > I took a deeper look regarding this use case but was unable to find more evidence. As part of this patch, we essentially\n> make concurrently-created index safe to use only if transaction started after the xmin of Phase 3. Even today concurrent\n> indexes can not be used for transactions before this xmin because of the wait (which I am trying to get rid of in this\n> patch), is there any other denial of service you are talking about? Both the other states indislive, indisready can\n> be transactional updates as far as I understand. Is there anything more I am missing here?\n> \n> \n> Hi,\n> \n> I did some more concurrency testing here through some python scripts which compare the end state of the concurrently\n> created indexes. I also back-ported this patch to PG 9.6 and ran some custom concurrency tests (Inserts, Deletes, and\n> Create Index Concurrently) which seem to succeed. The intermediate states unfortunately are not easy to test in an\n> automated manner, but to be fair concurrent indexes could never be used for older transactions. Do you have more\n> inputs/ideas on this patch?\n\nAccording to the commit 3c8404649 [1], transactional update in pg_index is not safe in non-MVCC catalog scans before PG9.4.\nBut it seems to me that we can use transactional update in pg_index after the commit 813fb03155 [2] which got rid of SnapshotNow. \n\nIf we apply this patch back to 9.3 or earlier, we might need to consider another way or take the Andres suggestion (which I don't understand the way fully though), but which version do you want/do we need to apply this patch?\n\nAlso, if we apply this patch in this way, there are several comments to be fixed which state the method of CREATE INDEX CONCURRENTLY.\n\nex.\n[index.c]\n/*\n* validate_index - support code for concurrent index builds\n...\n* After completing validate_index(), we wait until all transactions that\n* were alive at the time of the reference snapshot are gone; this is\n* necessary to be sure there are none left with a transaction snapshot\n* older than the reference (and hence possibly able to see tuples we did\n* not index). Then we mark the index \"indisvalid\" and commit. Subsequent\n* transactions will be able to use it for queries.\n...\nvaliate_index()\n{\n}\n\n\n[1] https://github.com/postgres/postgres/commit/3c84046490bed3c22e0873dc6ba492e02b8b9051#diff-b279fc6d56760ed80ce4178de1401d2c\n[2] https://github.com/postgres/postgres/commit/813fb0315587d32e3b77af1051a0ef517d187763#diff-b279fc6d56760ed80ce4178de1401d2c\n\n--\nYoshikazu Imai\n\n\n",
"msg_date": "Mon, 28 Oct 2019 05:17:52 +0000",
"msg_from": "\"imai.yoshikazu@fujitsu.com\" <imai.yoshikazu@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "On Mon, Oct 28, 2019 at 05:17:52AM +0000, imai.yoshikazu@fujitsu.com wrote:\n> According to the commit 3c8404649 [1], transactional update in\n> pg_index is not safe in non-MVCC catalog scans before PG9.4.\n> But it seems to me that we can use transactional update in pg_index\n> after the commit 813fb03155 [2] which got rid of SnapshotNow.\n\nThat's actually this part of the patch:\n- /* Assert that current xact hasn't done any transactional updates */\n- Assert(GetTopTransactionIdIfAny() == InvalidTransactionId);\nAnd this thread (for commit 3c84046):\nhttps://www.postgresql.org/message-id/19082.1349481400@sss.pgh.pa.us \n\nAnd while looking at this patch, I have doubts that what you are doing\nis actually safe either.\n\n> If we apply this patch back to 9.3 or earlier, we might need to\n> consider another way or take the Andres suggestion (which I don't\n> understand the way fully though), but which version do you want/do\n> we need to apply this patch?\n\nPer the arguments of upthread, storing a 64-bit XID would require a\ncatalog change and you cannot backpatch that. I would suggest to keep\nthis patch focused on HEAD, and keep it as an improvement of the\nexisting features. Concurrent deadlock risks caused by CCI exist\nsince the feature came to life.\n\n> Also, if we apply this patch in this way, there are several comments\n> to be fixed which state the method of CREATE INDEX CONCURRENTLY.\n\nAre we sure as well that all the cache lookup failures are addressed?\nThe CF robot does not complain per its latest status, but are we sure\nto be out of the ground here?\n\nThe indentation of your patch is wrong in some places by the way.\n--\nMichael",
"msg_date": "Fri, 8 Nov 2019 10:30:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
},
{
"msg_contents": "On Fri, Nov 08, 2019 at 10:30:39AM +0900, Michael Paquier wrote:\n> Per the arguments of upthread, storing a 64-bit XID would require a\n> catalog change and you cannot backpatch that. I would suggest to keep\n> this patch focused on HEAD, and keep it as an improvement of the\n> existing features. Concurrent deadlock risks caused by CCI exist\n> since the feature came to life.\n\nMarked as returned with feedback per lack of activity and the patch\nwas waiting on author for a bit more than two weeks.\n--\nMichael",
"msg_date": "Mon, 25 Nov 2019 17:01:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding deadlock errors in CREATE INDEX CONCURRENTLY"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhen we build hash table for a hash join node etc., we split tuples into\ndifferent hash buckets. Since tuples could not all be held in memory.\nPostgres splits each bucket into batches, only the current batch of bucket\nis in memory while other batches are written to disk.\n\nDuring ExecHashTableInsert(), if the memory cost exceeds the operator\nallowed limit(hashtable->spaceAllowed), batches will be split on the fly by\ncalling ExecHashIncreaseNumBatches().\n\nIn past, if data is distributed unevenly, the split of batch may failed(All\nthe tuples falls into one split batch and the other batch is empty) Then\nPostgres will set hashtable->growEnable to false. And never expand batch\nnumber any more.\n\nIf tuples become diverse in future, spliting batch is still valuable and\ncould avoid the current batch become too big and finally OOM.\n\nTo fix this, we introduce a penalty on hashtable->spaceAllowed, which is\nthe threshold to determine whether to increase batch number.\nIf batch split failed, we increase the penalty instead of just turn off the\ngrowEnable flag.\n\nAny comments?\n\n\n-- \nThanks\n\nHubert Zhang",
"msg_date": "Wed, 15 May 2019 18:19:38 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Replace hashtable growEnable flag"
},
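The batch growth being discussed is observable with EXPLAIN ANALYZE; a small sketch (table contents and work_mem are illustrative):

    SET work_mem = '1MB';
    CREATE TABLE hj_big AS
        SELECT g AS k, repeat('x', 100) AS pad FROM generate_series(1, 200000) g;
    CREATE TABLE hj_probe AS
        SELECT g AS k FROM generate_series(1, 200000) g;
    ANALYZE hj_big; ANALYZE hj_probe;

    -- The Hash node's "Batches: N (originally M)" line, with N > M, shows
    -- that ExecHashIncreaseNumBatches() ran during the build; with a badly
    -- skewed key (e.g. a single repeated value) a split moves no tuples
    -- and growEnable is turned off, as described above
    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT count(*) FROM hj_probe p JOIN hj_big b USING (k);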
{
"msg_contents": "On Wed, May 15, 2019 at 06:19:38PM +0800, Hubert Zhang wrote:\n>Hi all,\n>\n>When we build hash table for a hash join node etc., we split tuples into\n>different hash buckets. Since tuples could not all be held in memory.\n>Postgres splits each bucket into batches, only the current batch of bucket\n>is in memory while other batches are written to disk.\n>\n>During ExecHashTableInsert(), if the memory cost exceeds the operator\n>allowed limit(hashtable->spaceAllowed), batches will be split on the fly by\n>calling ExecHashIncreaseNumBatches().\n>\n>In past, if data is distributed unevenly, the split of batch may failed(All\n>the tuples falls into one split batch and the other batch is empty) Then\n>Postgres will set hashtable->growEnable to false. And never expand batch\n>number any more.\n>\n>If tuples become diverse in future, spliting batch is still valuable and\n>could avoid the current batch become too big and finally OOM.\n>\n>To fix this, we introduce a penalty on hashtable->spaceAllowed, which is\n>the threshold to determine whether to increase batch number.\n>If batch split failed, we increase the penalty instead of just turn off the\n>growEnable flag.\n>\n>Any comments?\n>\n\nThere's already another thread discussing various issues with how hashjoin\nincreases the number of batches, including various issues with how/when we\ndisable adding more batches.\n\n https://commitfest.postgresql.org/23/2109/\n\nIn general I think you're right something like this is necessary, but I\nthink we may need to rethink growEnable a bit more.\n\nFor example, the way you implemented it, after reaching the increased\nlimit, we just increase the number of batches just like today, and then\ndecide whether it actually helped. But that means we double the number of\nBufFile entries, which uses more and more memory (because each is 8kB and\nwe need 1 per batch). I think in this case (after increasing the limit) we\nshould check whether increasing batches makes sense or not. And only do it\nif it helps. Otherwise we'll double the amount of memory for BufFile(s)\nand also the work_mem. That's not a good idea.\n\nBut as I said, there are other issues discussed on the other thread. For\nexample we only disable the growth when all rows fall into the same batch.\nBut that's overly strict.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 15 May 2019 21:58:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace hashtable growEnable flag"
},
{
"msg_contents": "Thanks Tomas.\nI will follow this problem on your thread. This thread could be terminated.\n\nOn Thu, May 16, 2019 at 3:58 AM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, May 15, 2019 at 06:19:38PM +0800, Hubert Zhang wrote:\n> >Hi all,\n> >\n> >When we build hash table for a hash join node etc., we split tuples into\n> >different hash buckets. Since tuples could not all be held in memory.\n> >Postgres splits each bucket into batches, only the current batch of bucket\n> >is in memory while other batches are written to disk.\n> >\n> >During ExecHashTableInsert(), if the memory cost exceeds the operator\n> >allowed limit(hashtable->spaceAllowed), batches will be split on the fly\n> by\n> >calling ExecHashIncreaseNumBatches().\n> >\n> >In past, if data is distributed unevenly, the split of batch may\n> failed(All\n> >the tuples falls into one split batch and the other batch is empty) Then\n> >Postgres will set hashtable->growEnable to false. And never expand batch\n> >number any more.\n> >\n> >If tuples become diverse in future, spliting batch is still valuable and\n> >could avoid the current batch become too big and finally OOM.\n> >\n> >To fix this, we introduce a penalty on hashtable->spaceAllowed, which is\n> >the threshold to determine whether to increase batch number.\n> >If batch split failed, we increase the penalty instead of just turn off\n> the\n> >growEnable flag.\n> >\n> >Any comments?\n> >\n>\n> There's already another thread discussing various issues with how hashjoin\n> increases the number of batches, including various issues with how/when we\n> disable adding more batches.\n>\n> https://commitfest.postgresql.org/23/2109/\n>\n> In general I think you're right something like this is necessary, but I\n> think we may need to rethink growEnable a bit more.\n>\n> For example, the way you implemented it, after reaching the increased\n> limit, we just increase the number of batches just like today, and then\n> decide whether it actually helped. But that means we double the number of\n> BufFile entries, which uses more and more memory (because each is 8kB and\n> we need 1 per batch). I think in this case (after increasing the limit) we\n> should check whether increasing batches makes sense or not. And only do it\n> if it helps. Otherwise we'll double the amount of memory for BufFile(s)\n> and also the work_mem. That's not a good idea.\n>\n> But as I said, there are other issues discussed on the other thread. For\n> example we only disable the growth when all rows fall into the same batch.\n> But that's overly strict.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> https://urldefense.proofpoint.com/v2/url?u=http-3A__www.2ndQuadrant.com&d=DwIBAg&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=lz-kpGdw_rtpgYV2ho3DjDSB5Psxis_b-3VZKON7K7c&m=y2bI6_b4EPRd9aTQGv9Pio3c_ZtCWs_jzKd4t8CtJEI&s=XHLORM8U7I6XR_EDkgSFtJDvhxIVd2rDA7r-xvJa278&e=\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n\n-- \nThanks\n\nHubert Zhang\n\nThanks Tomas.I will follow this problem on your thread. This thread could be terminated.On Thu, May 16, 2019 at 3:58 AM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:On Wed, May 15, 2019 at 06:19:38PM +0800, Hubert Zhang wrote:\n>Hi all,\n>\n>When we build hash table for a hash join node etc., we split tuples into\n>different hash buckets. 
Since tuples could not all be held in memory.\n>Postgres splits each bucket into batches, only the current batch of bucket\n>is in memory while other batches are written to disk.\n>\n>During ExecHashTableInsert(), if the memory cost exceeds the operator\n>allowed limit(hashtable->spaceAllowed), batches will be split on the fly by\n>calling ExecHashIncreaseNumBatches().\n>\n>In past, if data is distributed unevenly, the split of batch may failed(All\n>the tuples falls into one split batch and the other batch is empty) Then\n>Postgres will set hashtable->growEnable to false. And never expand batch\n>number any more.\n>\n>If tuples become diverse in future, spliting batch is still valuable and\n>could avoid the current batch become too big and finally OOM.\n>\n>To fix this, we introduce a penalty on hashtable->spaceAllowed, which is\n>the threshold to determine whether to increase batch number.\n>If batch split failed, we increase the penalty instead of just turn off the\n>growEnable flag.\n>\n>Any comments?\n>\n\nThere's already another thread discussing various issues with how hashjoin\nincreases the number of batches, including various issues with how/when we\ndisable adding more batches.\n\n https://commitfest.postgresql.org/23/2109/\n\nIn general I think you're right something like this is necessary, but I\nthink we may need to rethink growEnable a bit more.\n\nFor example, the way you implemented it, after reaching the increased\nlimit, we just increase the number of batches just like today, and then\ndecide whether it actually helped. But that means we double the number of\nBufFile entries, which uses more and more memory (because each is 8kB and\nwe need 1 per batch). I think in this case (after increasing the limit) we\nshould check whether increasing batches makes sense or not. And only do it\nif it helps. Otherwise we'll double the amount of memory for BufFile(s)\nand also the work_mem. That's not a good idea.\n\nBut as I said, there are other issues discussed on the other thread. For\nexample we only disable the growth when all rows fall into the same batch.\nBut that's overly strict.\n\n\nregards\n\n-- \nTomas Vondra https://urldefense.proofpoint.com/v2/url?u=http-3A__www.2ndQuadrant.com&d=DwIBAg&c=lnl9vOaLMzsy2niBC8-h_K-7QJuNJEsFrzdndhuJ3Sw&r=lz-kpGdw_rtpgYV2ho3DjDSB5Psxis_b-3VZKON7K7c&m=y2bI6_b4EPRd9aTQGv9Pio3c_ZtCWs_jzKd4t8CtJEI&s=XHLORM8U7I6XR_EDkgSFtJDvhxIVd2rDA7r-xvJa278&e= \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n-- ThanksHubert Zhang",
"msg_date": "Thu, 16 May 2019 17:57:17 +0800",
"msg_from": "Hubert Zhang <hzhang@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Replace hashtable growEnable flag"
}
] |
[
{
"msg_contents": "Hi all, I’m working on an FDW that would benefit greatly from parallel foreign scan. I have implemented the callbacks described here:https://www.postgresql.org/docs/devel/fdw-callbacks.html#FDW-CALLBACKS-PARALLEL. and I see a big improvement in certain plans.\n\nMy problem is that I can’t seem to get a parallel foreign scan in a query that does not contain an aggregate.\n\nFor example:\n SELECT count(*) FROM foreign table;\nGives me a parallel scan, but\n SELECT * FROM foreign table;\nDoes not.\n\nI’ve been fiddling with the costing GUCs, foreign scan row estimates, and foreign scan cost estimates - I can force the cost of a partial path to be much lower than a sequential foreign scan, but no luck.\n\nAny troubleshooting advice?\n\nA second related question - how can I find the actual number of workers chose for my ForeignScan? At the moment, I looking at ParallelContext->nworkers (inside of the InitializeDSMForeignScan() callback) because that seems to be the first callback function that might provide the worker count. I need the *actual* worker count in order to evenly distribute my workload. I can’t use the usual trick of having each worker grab the next available chunk (because I have to avoid seek operations on compressed data). In other words, it is of great advantage for each worker to read contiguous chunks of data - seeking to another part of the file is prohibitively expensive.\n\nThanks for all help.\n\n — Korry\n\n\n\n",
"msg_date": "Wed, 15 May 2019 12:55:33 -0400",
"msg_from": "Korry Douglas <korry@me.com>",
"msg_from_op": true,
"msg_subject": "Parallel Foreign Scans - need advice"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 12:55:33 -0400, Korry Douglas wrote:\n> Hi all, I’m working on an FDW that would benefit greatly from parallel foreign scan. I have implemented the callbacks described here:https://www.postgresql.org/docs/devel/fdw-callbacks.html#FDW-CALLBACKS-PARALLEL. and I see a big improvement in certain plans.\n> \n> My problem is that I can’t seem to get a parallel foreign scan in a query that does not contain an aggregate.\n> \n> For example:\n> SELECT count(*) FROM foreign table;\n> Gives me a parallel scan, but\n> SELECT * FROM foreign table;\n> Does not.\n\nWell, that'd be bound by the cost of transferring tuples between workers\nand leader. You don't get, unless you fiddle heavily with the cost, a\nparallel scan for the equivalent local table scan either. You can\nprobably force the planner's hand by setting parallel_setup_cost,\nparallel_tuple_cost very low - but it's unlikely to be beneficial.\n\nIf you added a where clause that needs to be evaluated outside the FDW,\nyou'd probably see parallel scans without fiddling with the costs.\n\n\n> A second related question - how can I find the actual number of\n> workers chose for my ForeignScan? At the moment, I looking at\n> ParallelContext->nworkers (inside of the InitializeDSMForeignScan()\n> callback) because that seems to be the first callback function that\n> might provide the worker count. I need the *actual* worker count in\n> order to evenly distribute my workload. I can’t use the usual trick\n> of having each worker grab the next available chunk (because I have to\n> avoid seek operations on compressed data). In other words, it is of\n> great advantage for each worker to read contiguous chunks of data -\n> seeking to another part of the file is prohibitively expensive.\n\nDon't think - but am not sure - that there's a nicer way\ncurrently. Although I'd use nworkers_launched, rather than nworkers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2019 10:08:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
},
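A sketch of the costing effect Andres describes, using a local table since the same GUCs drive the plan choice for a parallel-safe foreign scan (table and settings are illustrative):

    CREATE TABLE par_t AS
        SELECT g AS i, md5(g::text) AS t FROM generate_series(1, 1000000) g;
    ANALYZE par_t;

    -- A bare SELECT * funnels every tuple through the Gather node, so at
    -- the default parallel_tuple_cost of 0.1 a serial plan usually wins
    EXPLAIN SELECT * FROM par_t;

    -- Making tuple transfer look nearly free can flip it to a parallel plan
    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0.0001;
    EXPLAIN SELECT * FROM par_t;

    -- A selective qual evaluated below the Gather reduces the tuples
    -- transferred, so parallelism can win even at default costs
    RESET parallel_setup_cost;
    RESET parallel_tuple_cost;
    EXPLAIN SELECT * FROM par_t WHERE t LIKE '00%';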
{
"msg_contents": "Thanks for the quick answer Andres. You’re right - it was parallel_tuple_cost that was getting in my way; my query returns about 6 million rows so I guess that can add up.\n\nIf I change parallel_tuple_scan from 0.1 to 0.0001, I get a parallel foreign scan.\n\nWith 4 workers, that reduces my execution time by about half. \n\nBut, nworkers_launched is always set to 0 in InitializeDSMForeignScan(), so that won’t work. Any other ideas?\n\n — Korry\n\n> On May 15, 2019, at 1:08 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2019-05-15 12:55:33 -0400, Korry Douglas wrote:\n>> Hi all, I’m working on an FDW that would benefit greatly from parallel foreign scan. I have implemented the callbacks described here:https://www.postgresql.org/docs/devel/fdw-callbacks.html#FDW-CALLBACKS-PARALLEL. and I see a big improvement in certain plans.\n>> \n>> My problem is that I can’t seem to get a parallel foreign scan in a query that does not contain an aggregate.\n>> \n>> For example:\n>> SELECT count(*) FROM foreign table;\n>> Gives me a parallel scan, but\n>> SELECT * FROM foreign table;\n>> Does not.\n> \n> Well, that'd be bound by the cost of transferring tuples between workers\n> and leader. You don't get, unless you fiddle heavily with the cost, a\n> parallel scan for the equivalent local table scan either. You can\n> probably force the planner's hand by setting parallel_setup_cost,\n> parallel_tuple_cost very low - but it's unlikely to be beneficial.\n> \n> If you added a where clause that needs to be evaluated outside the FDW,\n> you'd probably see parallel scans without fiddling with the costs.\n> \n> \n>> A second related question - how can I find the actual number of\n>> workers chose for my ForeignScan? At the moment, I looking at\n>> ParallelContext->nworkers (inside of the InitializeDSMForeignScan()\n>> callback) because that seems to be the first callback function that\n>> might provide the worker count. I need the *actual* worker count in\n>> order to evenly distribute my workload. I can’t use the usual trick\n>> of having each worker grab the next available chunk (because I have to\n>> avoid seek operations on compressed data). In other words, it is of\n>> great advantage for each worker to read contiguous chunks of data -\n>> seeking to another part of the file is prohibitively expensive.\n> \n> Don't think - but am not sure - that there's a nicer way\n> currently. Although I'd use nworkers_launched, rather than nworkers.\n> \n> Greetings,\n> \n> Andres Freund\n\n\n\n",
"msg_date": "Wed, 15 May 2019 13:31:59 -0400",
"msg_from": "Korry Douglas <korry@me.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
},
{
"msg_contents": "Hi,\n\nDon't top quote on these list...\n\nOn 2019-05-15 13:31:59 -0400, Korry Douglas wrote:\n> Thanks for the quick answer Andres. You’re right - it was parallel_tuple_cost that was getting in my way; my query returns about 6 million rows so I guess that can add up.\n> \n> If I change parallel_tuple_scan from 0.1 to 0.0001, I get a parallel foreign scan.\n> \n> With 4 workers, that reduces my execution time by about half. \n\nThen you probably need to adjust the scan costs you have.\n\n\n> But, nworkers_launched is always set to 0 in\n> InitializeDSMForeignScan(), so that won’t work. Any other ideas?\n\nAt that state it's simply not yet known how many workers will be\nactually launched (they might not start successfully or such). Why do\nyou need to know it there and not later?\n\n- Andres\n\n\n",
"msg_date": "Wed, 15 May 2019 10:34:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
},
{
"msg_contents": "\n>> But, nworkers_launched is always set to 0 in\n>> InitializeDSMForeignScan(), so that won’t work. Any other ideas?\n> \n> At that state it's simply not yet known how many workers will be\n> actually launched (they might not start successfully or such). Why do\n> you need to know it there and not later?\n> \n> - Andres\n\nI need to know at some point *before* I actually start scanning. The ParallelContext pointer is only available in EstimateDSMForeignScan(), InitializeDSMForeignScan(), and ReInitializeDSMForeignScan(). \n\nIf there is some other way to discover the actual worker count, I’m open to that. The three functions above are not particularly helpful to me so I’m happy to look somewhere else.\n\n — Korry\n\n",
"msg_date": "Wed, 15 May 2019 13:45:45 -0400",
"msg_from": "Korry Douglas <korry@me.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
},
{
"msg_contents": "On Thu, May 16, 2019 at 5:46 AM Korry Douglas <korry@me.com> wrote:\n> >> But, nworkers_launched is always set to 0 in\n> >> InitializeDSMForeignScan(), so that won’t work. Any other ideas?\n> >\n> > At that state it's simply not yet known how many workers will be\n> > actually launched (they might not start successfully or such). Why do\n> > you need to know it there and not later?\n> >\n> > - Andres\n>\n> I need to know at some point *before* I actually start scanning. The ParallelContext pointer is only available in EstimateDSMForeignScan(), InitializeDSMForeignScan(), and ReInitializeDSMForeignScan().\n\nHi Korry,\n\nThat's only a superficial problem. You don't even know if or when the\nworkers that are launched will all finish up running your particular\nnode, because (for example) they might be sent to different children\nof a Parallel Append node above you (AFAICS there is no way for a\nparticipant to indicate \"I've finished all the work allocated to me,\nbut I happen to know that some other worker #3 is needed here\" -- as\nsoon as any participant reports that it has executed the plan to\ncompletion, pa_finished[] will prevent new workers from picking that\nnode to execute). Suppose we made a rule that *every* worker must\nvisit *every* partial child of a Parallel Append and run it to\ncompletion (and any similar node in the future must do the same)...\nthen I think there is still a higher level design problem: if you do\nallocate work up front rather than on demand, then work could be\nunevenly distributed, and parallel query would be weakened.\n\nSo I think you ideally need a simple get-next-chunk work allocator\n(like Parallel Seq Scan and like the file_fdw patch I posted[1]), or a\npass-the-baton work allocator when there is a dependency between\nchunks (like Parallel Index Scan for btrees), or a more complicated\nmulti-phase system that counts participants arriving and joining in\n(like Parallel Hash) so that participants can coordinate and wait for\neach other in controlled circumstances.\n\nIf this compressed data doesn't have natural chunks designed for this\npurpose (like, say, ORC stripes), perhaps you could have a dedicated\nworkers streaming data (compressed? decompressed?) into shared memory,\nand parallel query participants coordinating to consume chunks of\nthat?\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BqK3E2RF75PKfsV0sn2s018%2Bft--hUuCmd2R_yQ9tmPQ%40mail.gmail.com\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 May 2019 15:17:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
},
{
"msg_contents": "\n> That's only a superficial problem. You don't even know if or when the\n> workers that are launched will all finish up running your particular\n> node, because (for example) they might be sent to different children\n> of a Parallel Append node above you (AFAICS there is no way for a\n> participant to indicate \"I've finished all the work allocated to me,\n> but I happen to know that some other worker #3 is needed here\" -- as\n> soon as any participant reports that it has executed the plan to\n> completion, pa_finished[] will prevent new workers from picking that\n> node to execute). Suppose we made a rule that *every* worker must\n> visit *every* partial child of a Parallel Append and run it to\n> completion (and any similar node in the future must do the same)...\n> then I think there is still a higher level design problem: if you do\n> allocate work up front rather than on demand, then work could be\n> unevenly distributed, and parallel query would be weakened.\n\nWhat I really need (for the scheme I’m using at the moment) is to know how many workers will be used to execute my particular Plan. I understand that some workers will naturally end up idle while the last (busy) worker finishes up. I’m dividing the workload (the number of row groups to scan) by the number of workers to get an even distribution. \n\nI’m willing to pay that price (at least, I haven’t seen a problem so far… famous last words)\n\nI do plan to switch over to get-next-chunk allocator as you mentioned below, but I’d like to get the minimized-seek mechanism working first.\n\nIt sounds like there is no reliable way to get the information that I’m looking for, is that right?\n\n> So I think you ideally need a simple get-next-chunk work allocator\n> (like Parallel Seq Scan and like the file_fdw patch I posted[1]), or a\n> pass-the-baton work allocator when there is a dependency between\n> chunks (like Parallel Index Scan for btrees), or a more complicated\n> multi-phase system that counts participants arriving and joining in\n> (like Parallel Hash) so that participants can coordinate and wait for\n> each other in controlled circumstances.\n\nI haven’t looked at Parallel Hash - will try to understand that next.\n\n> If this compressed data doesn't have natural chunks designed for this\n> purpose (like, say, ORC stripes), perhaps you could have a dedicated\n> workers streaming data (compressed? decompressed?) into shared memory,\n> and parallel query participants coordinating to consume chunks of\n> that?\n\n\nI’ll give that some thought. Thanks for the ideas.\n\n — Korry\n\n\n\n",
"msg_date": "Thu, 16 May 2019 08:45:00 -0400",
"msg_from": "Korry Douglas <korry@me.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
},
{
"msg_contents": "On Fri, May 17, 2019 at 12:45 AM Korry Douglas <korry@me.com> wrote:\n> It sounds like there is no reliable way to get the information that I’m looking for, is that right?\n\nCorrect. And if there were, it could only be used to write bugs. Let\nme see if I can demonstrate... I'll use the file_fdw patch from the\nlink I gave before, and I'll add an elog(LOG) message to show when\nfileIterateForeignScan() runs.\n\n$ echo 1 > /tmp/t2\n\npostgres=# create table t1 as select generate_series(1, 1000000)::int i;\nSELECT 1000000\npostgres=# create server files foreign data wrapper file_fdw;\nCREATE SERVER\npostgres=# create foreign table t2 (n int) server files\n options (filename '/tmp/t2', format 'csv');\nCREATE FOREIGN TABLE\n\nThe relevant EXPLAIN output is harder to understand if the parallel\nleader participates, but it changes nothing important, so I'll turn\nthat off first, and then see how it is run:\n\npostgres=# set parallel_leader_participation = off;\nSET\npostgres=# explain (analyze, verbose) select count(*) from (select *\nfrom t1 union all select * from t2) ss;\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=14176.32..14176.33 rows=1 width=8) (actual\ntime=234.023..234.023 rows=1 loops=1)\n Output: count(*)\n -> Gather (cost=14176.10..14176.31 rows=2 width=8) (actual\ntime=233.840..235.079 rows=2 loops=1)\n Output: (PARTIAL count(*))\n Workers Planned: 2\n Workers Launched: 2\n -> Partial Aggregate (cost=13176.10..13176.11 rows=1\nwidth=8) (actual time=223.550..223.555 rows=1 loops=2)\n Output: PARTIAL count(*)\n Worker 0: actual time=223.432..223.443 rows=1 loops=1\n Worker 1: actual time=223.667..223.668 rows=1 loops=1\n -> Parallel Append (cost=0.00..11926.10 rows=500000\nwidth=0) (actual time=0.087..166.669 rows=500000 loops=2)\n Worker 0: actual time=0.083..166.366 rows=499687 loops=1\n Worker 1: actual time=0.092..166.972 rows=500314 loops=1\n -> Parallel Seq Scan on public.t1\n(cost=0.00..9425.00 rows=500000 width=0) (actual time=0.106..103.384\nrows=500000 loops=2)\n Worker 0: actual time=0.123..103.106\nrows=499686 loops=1\n Worker 1: actual time=0.089..103.662\nrows=500314 loops=1\n -> Parallel Foreign Scan on public.t2\n(cost=0.00..1.10 rows=1 width=0) (actual time=0.079..0.096 rows=1\nloops=1)\n Foreign File: /tmp/numbers\n Foreign File Size: 2 b\n Worker 0: actual time=0.079..0.096 rows=1 loops=1\n Planning Time: 0.219 ms\n Execution Time: 235.262 ms\n(22 rows)\n\nYou can see the that Parallel Foreign Scan was only actually run by\none worker. So if you were somehow expecting both of them to show up\nin order to produce the correct results, you have a bug. The reason\nthat happened is because Parallal Append sent one worker to chew on\nt1, and another to chew on t2, but the scan of t2 was finished very\nquickly, so that worker then went to help out with t1. 
And for\nfurther proof of that, here's what I see in my server log (note only\never called twice, and in the same process):\n\n2019-05-17 10:51:42.248 NZST [52158] LOG: fileIterateForeignScan\n2019-05-17 10:51:42.248 NZST [52158] STATEMENT: explain analyze\nselect count(*) from (select * from t1 union all select * from t2) ss;\n2019-05-17 10:51:42.249 NZST [52158] LOG: fileIterateForeignScan\n2019-05-17 10:51:42.249 NZST [52158] STATEMENT: explain analyze\nselect count(*) from (select * from t1 union all select * from t2) ss;\n\nTherefore you can't allocate the work up front based on expected\nnumber of workers, even if it works in simple examples. Your node\nisn't necessarily the only node in the plan, and higher up nodes get\nto decide when, if at all, you run, in each worker.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2019 11:05:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Foreign Scans - need advice"
}
] |
[
{
"msg_contents": "Here's a bunch of message fixes in the postgres.po module. Please\ncomment if anything seems amiss. This is not a final patch, since\nregression output has not been adjusted; I only verified that the\nbackend still compiles cleanly. Some of the changes are going from this\nstyle of message:\n You need an unconditional ON DELETE DO INSTEAD rule with a RETURNING clause.\nto this:\n You need an unconditional %s rule with a RETURNING clause.\nwhere the ON DELETE DO INSTEAD part is inserted at execution time, and\ncan be things like ON UPDATE DO INSTEAD of ON INSERT DO INSTEAD. If the\nreduced string context causes inappropriate changes for any language, I\nsuppose we shouldn't make this kind of change, but I hope not.\n\nI'm also changing\n \"ucnv_fromUChars failed: %s\"\nto this:\n \"%s failed: %s\", \"ucnv_fromUChars\"\nso it essentially reduces the number of translated strings, because we\nalready have \"%s failed: %s\" in other parts of the backend. I think\nthis is not an issue. Alternatively, we could just remove that message\nfrom translation altogether, and have it emit the English version\nalways, by changing it from errmsg() to errmsg_internal().\n\nThe bulk of the changes are much less interesting that those.\n\nI'm proposing changes in a lot of files:\n\n src/backend/commands/copy.c | 6 +++---\n src/backend/commands/publicationcmds.c | 2 +-\n src/backend/commands/subscriptioncmds.c | 32 ++++++++++++++++++++------------\n src/backend/commands/tablecmds.c | 9 +++++----\n src/backend/parser/analyze.c | 2 +-\n src/backend/parser/parse_oper.c | 1 +\n src/backend/postmaster/postmaster.c | 7 ++++---\n src/backend/replication/basebackup.c | 17 +++++++----------\n src/backend/replication/walsender.c | 20 ++++++++++----------\n src/backend/rewrite/rewriteHandler.c | 19 +++++++++++++------\n src/backend/utils/adt/jsonpath.c | 3 ++-\n src/backend/utils/adt/jsonpath_exec.c | 2 +-\n src/backend/utils/adt/jsonpath_scan.l | 10 +++++-----\n src/backend/utils/adt/pg_locale.c | 10 ++++++----\n src/backend/utils/adt/regexp.c | 14 ++++++++++----\n 15 files changed, 89 insertions(+), 65 deletions(-)\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 15 May 2019 14:30:05 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "more message fixes"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Here's a bunch of message fixes in the postgres.po module. Please\n> comment if anything seems amiss.\n\nThese sorts of changes trouble me a bit from a translatability standpoint:\n\n- errmsg(\"connect = false and enabled = true are mutually exclusive options\")));\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"connect = false\", \"enabled = true\")));\n\n- (errmsg(\"CREATE_REPLICATION_SLOT ... USE_SNAPSHOT \"\n- \"must not be called in a subtransaction\")));\n+ (errmsg(\"%s must not be called in a subtransaction\",\n+ \"CREATE_REPLICATION_SLOT ... USE_SNAPSHOT\")));\n\nA translator might expect the %s's to represent single words.\nI think at least you'd want a translator: comment to warn about\nwhat the insertion will be.\n\n+ /* XXX is it okay to use %d for BlockNumber everywhere? */\n\nBlockNumber should be %u, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 17:48:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: more message fixes"
},
{
"msg_contents": "On 2019-May-15, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Here's a bunch of message fixes in the postgres.po module. Please\n> > comment if anything seems amiss.\n> \n> These sorts of changes trouble me a bit from a translatability standpoint:\n> \n> - errmsg(\"connect = false and enabled = true are mutually exclusive options\")));\n> + errmsg(\"%s and %s are mutually exclusive options\",\n> + \"connect = false\", \"enabled = true\")));\n> \n> - (errmsg(\"CREATE_REPLICATION_SLOT ... USE_SNAPSHOT \"\n> - \"must not be called in a subtransaction\")));\n> + (errmsg(\"%s must not be called in a subtransaction\",\n> + \"CREATE_REPLICATION_SLOT ... USE_SNAPSHOT\")));\n> \n> A translator might expect the %s's to represent single words.\n> I think at least you'd want a translator: comment to warn about\n> what the insertion will be.\n\nFair point, I can add that. (As a translator, I know I have to\nreference the source files more often than I would like.) :-(\n\n> + /* XXX is it okay to use %d for BlockNumber everywhere? */\n> \n> BlockNumber should be %u, no?\n\nYeah. It's %d in basebackup.c, hence the comment. I think technically\nit's okay most of the time, because it's only used to reference to block\nnumbers in a *file*, not a relation; however, I fear it might still\nbreak in cases of a very large --with-segsize option.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 18:25:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: more message fixes"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at [1] I was rephrasing this comment + chck in\nheap_get_latest_tid():\n\n-\t * Since this can be called with user-supplied TID, don't trust the input\n-\t * too much. (RelationGetNumberOfBlocks is an expensive check, so we\n-\t * don't check t_ctid links again this way. Note that it would not do to\n-\t * call it just once and save the result, either.)\n \t */\n-\tblk = ItemPointerGetBlockNumber(tid);\n-\tif (blk >= RelationGetNumberOfBlocks(relation))\n-\t\telog(ERROR, \"block number %u is out of range for relation \\\"%s\\\"\",\n-\t\t\t blk, RelationGetRelationName(relation));\n\nWhich I dutifully rewrote. But I'm actually not sure it's safe at all\nfor heap to rely on t_ctid links to be valid. What prevents a ctid link\nto point to a page that's since been truncated away?\n\nAnd it's not just heap_get_latest_tid() afaict. As far as I can tell\njust about every ctid chaining code ought to test the t_ctid link\nagainst the relation size - otherwise it seems entirely possible to get\n\"could not read block %u in file \\\"%s\\\": %m\" or\n\"could not read block %u in file \\\"%s\\\": read only 0 of %d bytes\"\nstyle errors, no?\n\nThese loops are of such long-standing vintage, that I feel like I must\nbe missing something.\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20190515185447.gno2jtqxyktylyvs%40alap3.anarazel.de\n\n\n",
"msg_date": "Wed, 15 May 2019 12:02:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Are ctid chaining loops safe without relation size checks?"
},
{
"msg_contents": "On 2019-May-15, Andres Freund wrote:\n\n> -\tblk = ItemPointerGetBlockNumber(tid);\n> -\tif (blk >= RelationGetNumberOfBlocks(relation))\n> -\t\telog(ERROR, \"block number %u is out of range for relation \\\"%s\\\"\",\n> -\t\t\t blk, RelationGetRelationName(relation));\n> \n> Which I dutifully rewrote. But I'm actually not sure it's safe at all\n> for heap to rely on t_ctid links to be valid. What prevents a ctid link\n> to point to a page that's since been truncated away?\n\nUmm .. IIUC all index entries for truncated pages should have been\nremoved prior to the truncation. Otherwise, how would those index\nentries not become immediately data corruption the instant the heap is\nre-grown to cover those truncated pages? So I think if the TID comes\ndirectly from user then this is a check worth doing, but if the TID\ncomes from an index, then it isn't.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 15:07:13 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Are ctid chaining loops safe without relation size checks?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Which I dutifully rewrote. But I'm actually not sure it's safe at all\n> for heap to rely on t_ctid links to be valid. What prevents a ctid link\n> to point to a page that's since been truncated away?\n\nNothing, but when would the issue come up? The updated tuple must be\nnewer than the one pointing at it, so if it's dead then the one pointing\nat it must be too, no?\n\n(If we're not checking liveness of x_max before following the link,\nwe'd have trouble ...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 15:09:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Are ctid chaining loops safe without relation size checks?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 15:09:34 -0400, Tom Lane wrote:\n> (If we're not checking liveness of x_max before following the link,\n> we'd have trouble ...)\n\nI don't think we do everywhere - e.g. in heap_get_latest_tid() case that\nmade me think about this there's only this as an xmax based loop\ntermination:\n\n /*\n * After following a t_ctid link, we might arrive at an unrelated\n * tuple. Check for XMIN match.\n */\n if (TransactionIdIsValid(priorXmax) &&\n !TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(tp.t_data)))\n {\n UnlockReleaseBuffer(buffer);\n break;\n }\n\nbut that's after we already followed the link, and read the page (and it\nobviously isn't a liveliness check).\n\nAdditionally, note that heap_get_latest_tid() does *not* terminate when\nit finds a visible tuple:\n\n /*\n * Check tuple visibility; if visible, set it as the new result\n * candidate.\n */\n valid = HeapTupleSatisfiesVisibility(&tp, snapshot, buffer);\n CheckForSerializableConflictOut(valid, relation, &tp, buffer, snapshot);\n if (valid)\n *tid = ctid;\n\nit just continues to the next tuple version, if any.\n\nEvalPlanQualFetch() in <= 11 and heapam_tuple_lock() in master and\nheap_lock_updated_tuple_rec() don't have the problem that xmax might\nsuddenly abort while following the chain, because they have code like:\n\n /*\n * If tuple is being updated by other transaction then we\n * have to wait for its commit/abort, or die trying.\n */\n if (TransactionIdIsValid(SnapshotDirty.xmax))\n {\n ReleaseBuffer(buffer);\n switch (wait_policy)\n {\n case LockWaitBlock:\n XactLockTableWait(SnapshotDirty.xmax,\n relation, &tuple->t_self,\n XLTW_FetchUpdated);\n break;\n case LockWaitSkip:\n if (!ConditionalXactLockTableWait(SnapshotDirty.xmax))\n /* skip instead of waiting */\n return TM_WouldBlock;\n break;\n case LockWaitError:\n if (!ConditionalXactLockTableWait(SnapshotDirty.xmax))\n ereport(ERROR,\n (errcode(ERRCODE_LOCK_NOT_AVAILABLE),\n errmsg(\"could not obtain lock on row in relation \\\"%s\\\"\",\n RelationGetRelationName(relation))));\n break;\n }\n continue; /* loop back to repeat heap_fetch */\n }\n\nbut heap_get_latest_tid() doesn't have that logic.\n\n\nSo I think the problem is just that heap_get_latest_tid() is missing\nthis type of check. The reason for which presumably is this piece of\nintended functionality:\n\n * Actually, this gets the latest version that is visible according to\n * the passed snapshot. You can pass SnapshotDirty to get the very latest,\n * possibly uncommitted version.\n\nwhich means that neither can it block when xmax is still running, nor\ncan it terminate when HeapTupleSatisfiesVisibility() returns true.\nThere's no core code using a !mvcc snapshot however.\n\n\n\n\nBecause it's relevant for other work we've talked about for v13, and for\na potential fix:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > Which I dutifully rewrote. But I'm actually not sure it's safe at all\n> > for heap to rely on t_ctid links to be valid. What prevents a ctid link\n> > to point to a page that's since been truncated away?\n>\n> Nothing, but when would the issue come up? The updated tuple must be\n> newer than the one pointing at it, so if it's dead then the one pointing\n> at it must be too, no?\n\nWell, the current tuple might not be dead, it might be\nUPDATE_IN_PROGRESS when start following the ctid chain. 
By the time we\nget to the next tuple, that UPDATE might have rolled back, vacuum came\nalong, removed the new version of thew tuple (which then becomes DEAD,\nnot RECENTLY_DEAD) and then truncated the relation. Currently that's\nnot possible in the nodeTidscan.c case, because we'll have a lock\npreventing truncations. But if we were to allow truncations without an\nAEL, that'd be different.\n\n\nI went through a few possible ways to fix this:\n\n1) Break out of heap_get_latest_tid()/ loop, if the ctid to be chained\n to is bigger than the block length. It can't be visible by any\n definition except SnapshotAny, and we could just disallow that. As\n there's a lock on the relation heap_get_latest_tid() is operating on,\n we can rely on that value not getting too outdated.\n\n But I don't think that's correct, because the newest version of the\n tuple *actually* might be beyond the end of the table at the\n beginning of the scan - we *do* allow extension of the table while\n somebody holds a lock on the table after all.\n\n I also don't like adding more assumptions that depend on preventing\n truncations while any other lock is held.\n\n2) Just disallow SnapshotDirty/SnapshotAny for heap_get_latest_tid(),\n and break out of the loop if the current tuple is visible. There\n can't be a newer visible version anyway, and as long as the input tid\n for heap_get_latest_tid() points to something the calling transaction\n could see (even if in an earlier snapshot), there has to be a visible\n version somewhere.\n\n That'd not fix the tid.c callers, but they're essentially unused and\n weird anyway.\n\n I'm not sure if WHERE CURRENT OF with a WITH HOLD cursor is possible\n / would be a problem. In that case we might need to add a\n XactLockTableWait too.\n\n3) Declare these problems as esoteric, and don't care.\n\nGreetings,\n\nAndres Freund\n\n\n",
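For illustration, option 2 might amount to something like this in the loop quoted above (a sketch; only the break and its justification are new relative to the existing code):

    /*
     * Sketch of option (2): with an MVCC snapshot, a visible version
     * cannot have a visible successor (the successor's xmin is the
     * updater's xid, which would also be the predecessor's xmax), so
     * the chain walk can stop at the first visible tuple.
     */
    valid = HeapTupleSatisfiesVisibility(&tp, snapshot, buffer);
    CheckForSerializableConflictOut(valid, relation, &tp, buffer, snapshot);
    if (valid)
    {
        *tid = ctid;
        UnlockReleaseBuffer(buffer);
        break;      /* newest visible version found; no need to chain on */
    }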
"msg_date": "Wed, 15 May 2019 14:44:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Are ctid chaining loops safe without relation size checks?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-15 15:07:13 -0400, Alvaro Herrera wrote:\n> On 2019-May-15, Andres Freund wrote:\n> \n> > -\tblk = ItemPointerGetBlockNumber(tid);\n> > -\tif (blk >= RelationGetNumberOfBlocks(relation))\n> > -\t\telog(ERROR, \"block number %u is out of range for relation \\\"%s\\\"\",\n> > -\t\t\t blk, RelationGetRelationName(relation));\n> > \n> > Which I dutifully rewrote. But I'm actually not sure it's safe at all\n> > for heap to rely on t_ctid links to be valid. What prevents a ctid link\n> > to point to a page that's since been truncated away?\n> \n> Umm .. IIUC all index entries for truncated pages should have been\n> removed prior to the truncation. Otherwise, how would those index\n> entries not become immediately data corruption the instant the heap is\n> re-grown to cover those truncated pages? So I think if the TID comes\n> directly from user then this is a check worth doing, but if the TID\n> comes from an index, then it isn't.\n\nI'm not sure how indexes come into play here? For one, I don't think\nheap_get_latest_tid() is called straight on a tuple returned from an\nindex scan. But also, I don't think that'd change much - it's not the\ntid that's passed to heap_get_latest_tid() that's the problem, it's the\ntuples it chains to via t_ctid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 May 2019 14:47:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Are ctid chaining loops safe without relation size checks?"
}
] |
[
{
"msg_contents": "catalog/pg_constraint.h defines a typedef ClonedConstraint,\nwhich AFAICS is no longer referenced anywhere. Is there a\nreason not to remove it?\n\n(I noticed this while eyeballing a test pgindent run.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 15:05:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "ClonedConstraint typedef is dead code?"
},
{
"msg_contents": "On 2019-May-15, Tom Lane wrote:\n\n> catalog/pg_constraint.h defines a typedef ClonedConstraint,\n> which AFAICS is no longer referenced anywhere. Is there a\n> reason not to remove it?\n\nOh, I didn't realize it had become completely unused! It was used for\nFK creation in partitioned tables, but we rewrote that code completely\nand I don't foresee needing that struct for anything in the future, so\nit seems safe to remove.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 May 2019 15:19:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ClonedConstraint typedef is dead code?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-May-15, Tom Lane wrote:\n>> catalog/pg_constraint.h defines a typedef ClonedConstraint,\n>> which AFAICS is no longer referenced anywhere. Is there a\n>> reason not to remove it?\n\n> Oh, I didn't realize it had become completely unused! It was used for\n> FK creation in partitioned tables, but we rewrote that code completely\n> and I don't foresee needing that struct for anything in the future, so\n> it seems safe to remove.\n\nThanks, done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 May 2019 17:27:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: ClonedConstraint typedef is dead code?"
}
] |
[
{
"msg_contents": "Hello,\n\nAs discussed elsewhere[1][2], our algorithm for deciding when to give\nup on repartitioning (AKA increasing the number of batches) tends to\nkeep going until it has a number of batches that is a function of the\nnumber of distinct well distributed keys. I wanted to move this minor\nissue away from Tomas Vondra's thread[2] since it's a mostly\nindependent problem.\n\nSET max_parallel_workers_per_gather = 0;\nSET synchronize_seqscans = off;\nSET work_mem = '4MB';\n\nCREATE TABLE r AS SELECT generate_series(1, 10000000)::int i;\nANALYZE r;\n\n-- 1k uniform keys + 1m duplicates\nCREATE TABLE s1k (i int);\nINSERT INTO s1k SELECT generate_series(1, 1000)::int i;\nALTER TABLE s1k SET (autovacuum_enabled = off);\nANALYZE s1k;\nINSERT INTO s1k SELECT 42 FROM generate_series(1, 1000000);\n\nEXPLAIN ANALYZE SELECT COUNT(*) FROM r JOIN s1k USING (i);\n\n Buckets: 1048576 (originally 1048576)\n Batches: 4096 (originally 16)\n Memory Usage: 35157kB\n\n-- 10k uniform keys + 1m duplicates\nCREATE TABLE s10k (i int);\nINSERT INTO s10k SELECT generate_series(1, 10000)::int i;\nALTER TABLE s10k SET (autovacuum_enabled = off);\nANALYZE s10k;\nINSERT INTO s10k SELECT 42 FROM generate_series(1, 1000000);\n\nEXPLAIN ANALYZE SELECT COUNT(*) FROM r JOIN s10k USING (i);\n\n Buckets: 131072 (originally 131072)\n Batches: 32768 (originally 16)\n Memory Usage: 35157kB\n\nSee how the number of batches is determined by the number of uniform\nkeys in r? That's because the explosion unfolds until there is\n*nothing left* but keys that hash to the same value in the problem\nbatch, which means those uniform keys have to keep spreading out until\nthere is something on the order of two batches per key. The point is\nthat it's bounded only by input data (or eventually INT_MAX / 2 and\nMaxAllocSize), and as Tomas has illuminated, batches eat unmetered\nmemory. Ouch.\n\nHere's a quick hack to show that a 95% cut-off fixes those examples.\nI don't really know how to choose the number, but I suspect it should\nbe much closer to 100 than 50. I think this is the easiest of three\nfundamental problems that need to be solved in this area. The others\nare: accounting for per-partition overheads as Tomas pointed out, and\nproviding an actual fallback strategy that respects work_mem when\nextreme skew is detected OR per-partition overheads dominate. I plan\nto experiment with nested loop hash join (or whatever you want to call\nit: the thing where you join every arbitrary fragment of the hash\ntable against the outer batch, and somehow deal with outer match\nflags) when time permits.\n\n[1] https://www.postgresql.org/message-id/flat/CAG_%3D8kBoWY4AXwW%3DCj44xe13VZnYohV9Yr-_hvZdx2xpiipr9w%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/20190504003414.bulcbnge3rhwhcsh%40development\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 16 May 2019 13:22:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoiding hash join batch explosions with extreme skew and weird stats"
},
{
"msg_contents": "On Thu, May 16, 2019 at 01:22:31PM +1200, Thomas Munro wrote:\n> ...\n>\n>Here's a quick hack to show that a 95% cut-off fixes those examples.\n>I don't really know how to choose the number, but I suspect it should\n>be much closer to 100 than 50. I think this is the easiest of three\n>fundamental problems that need to be solved in this area. The others\n>are: accounting for per-partition overheads as Tomas pointed out, and\n>providing an actual fallback strategy that respects work_mem when\n>extreme skew is detected OR per-partition overheads dominate. I plan\n>to experiment with nested loop hash join (or whatever you want to call\n>it: the thing where you join every arbitrary fragment of the hash\n>table against the outer batch, and somehow deal with outer match\n>flags) when time permits.\n>\n\nI think this is a step in the right direction, but as I said on the other\nthread(s), I think we should not disable growth forever and recheck once\nin a while. Otherwise we'll end up in sad situation with non-uniform data\nsets, as poined out by Hubert Zhang in [1]. It's probably even truer with\nthis less strict logic, using 95% as a threshold (instead of 100%).\n\nI kinda like the idea with increasing the spaceAllowed value. Essentially,\nif we decide adding batches would be pointless, increasing the memory\nbudget is the only thing we can do anyway.\n\nThe problem however is that we only really look at a single bit - it may\nbe that doubling the batches would not help, but doing it twice would\nactually reduce the memory usage. For example, assume there are 2 distinct\nvalues in the batch, with hash values (in binary)\n\n 101010000\n 101010111\n\nand assume we currently. Clearly, splitting batches is going to do nothing\nuntil we get to the 000 vs. 111 parts.\n\nAt first I thought this is rather unlikely and we can ignore that, but I'm\nnot really sure about that - it may actually be pretty likely. We may get\nto 101010 bucket with sufficiently large data set, and then it's ~50%\nprobability the next bit is the same (assuming two distinct values). So\nthis may be quite an issue, I think.\n\nregards\n\n\n[1] https://www.postgresql.org/message-id/CAB0yrekv%3D6_T_eUe2kOEvWUMwufcvfd15SFmCABtYFOkxCFdfA%40mail.gmail.com\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Thu, 16 May 2019 18:39:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, May 17, 2019 at 4:39 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I think this is a step in the right direction, but as I said on the other\n> thread(s), I think we should not disable growth forever and recheck once\n> in a while. Otherwise we'll end up in sad situation with non-uniform data\n> sets, as poined out by Hubert Zhang in [1]. It's probably even truer with\n> this less strict logic, using 95% as a threshold (instead of 100%).\n>\n> I kinda like the idea with increasing the spaceAllowed value. Essentially,\n> if we decide adding batches would be pointless, increasing the memory\n> budget is the only thing we can do anyway.\n\nBut that's not OK, we need to fix THAT.\n\n> The problem however is that we only really look at a single bit - it may\n> be that doubling the batches would not help, but doing it twice would\n> actually reduce the memory usage. For example, assume there are 2 distinct\n> values in the batch, with hash values (in binary)\n\nYes, that's a good point, and not a case that we should ignore. But\nif we had a decent fall-back strategy that respected work_mem, we\nwouldn't care so much if we get it wrong in a corner case. I'm\narguing that we should use Grace partitioning as our primary\npartitioning strategy, but fall back to looping (or possibly\nsort-merging) for the current batch if Grace doesn't seem to be\nworking. You'll always be able to find cases where if you'd just\ntried one more round, Grace would work, but that seems acceptable to\nme, because getting it wrong doesn't melt your computer, it just\nprobably takes longer. Or maybe it doesn't. How much longer would it\ntake to loop twice? Erm, twice as long, and each loop makes actual\nprogress, unlike extra speculative Grace partition expansions which\napply not just to the current batch but all batches, might not\nactually work, and you *have* to abandon at some point. The more I\nthink about it, the more I think that a loop-base escape valve, though\nunpalatably quadratic, is probably OK because we're in a sink-or-swim\nsituation at this point, and our budget is work_mem, not work_time.\n\nI'm concerned that we're trying to find ways to treat the symptoms,\nallowing us to exceed work_mem but maybe not so much, instead of\nfocusing on the fundamental problem, which is that we don't yet have\nan algorithm that is guaranteed to respect work_mem.\n\nAdmittedly I don't have a patch, just a bunch of handwaving. One\nreason I haven't attempted to write it is because although I know how\nto do the non-parallel version using a BufFile full of match bits in\nsync with the tuples for outer joins, I haven't figured out how to do\nit for parallel-aware hash join, because then each loop over the outer\nbatch could see different tuples in each participant. You could use\nthe match bit in HashJoinTuple header, but then you'd have to write\nall the tuples out again, which is more IO than I want to do. I'll\nprobably start another thread about that.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2019 10:21:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, May 17, 2019 at 10:21:56AM +1200, Thomas Munro wrote:\n>On Fri, May 17, 2019 at 4:39 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> I think this is a step in the right direction, but as I said on the other\n>> thread(s), I think we should not disable growth forever and recheck once\n>> in a while. Otherwise we'll end up in sad situation with non-uniform data\n>> sets, as poined out by Hubert Zhang in [1]. It's probably even truer with\n>> this less strict logic, using 95% as a threshold (instead of 100%).\n>>\n>> I kinda like the idea with increasing the spaceAllowed value. Essentially,\n>> if we decide adding batches would be pointless, increasing the memory\n>> budget is the only thing we can do anyway.\n>\n>But that's not OK, we need to fix THAT.\n>\n\nI agree increasing the budget is not ideal, althought at the moment it's\nthe only thing we can do. If we can improve that, great.\n\n>> The problem however is that we only really look at a single bit - it may\n>> be that doubling the batches would not help, but doing it twice would\n>> actually reduce the memory usage. For example, assume there are 2 distinct\n>> values in the batch, with hash values (in binary)\n>\n>Yes, that's a good point, and not a case that we should ignore. But\n>if we had a decent fall-back strategy that respected work_mem, we\n>wouldn't care so much if we get it wrong in a corner case. I'm\n>arguing that we should use Grace partitioning as our primary\n>partitioning strategy, but fall back to looping (or possibly\n>sort-merging) for the current batch if Grace doesn't seem to be\n>working. You'll always be able to find cases where if you'd just\n>tried one more round, Grace would work, but that seems acceptable to\n>me, because getting it wrong doesn't melt your computer, it just\n>probably takes longer. Or maybe it doesn't. How much longer would it\n>take to loop twice? Erm, twice as long, and each loop makes actual\n>progress, unlike extra speculative Grace partition expansions which\n>apply not just to the current batch but all batches, might not\n>actually work, and you *have* to abandon at some point. The more I\n>think about it, the more I think that a loop-base escape valve, though\n>unpalatably quadratic, is probably OK because we're in a sink-or-swim\n>situation at this point, and our budget is work_mem, not work_time.\n>\n\nTrue.\n\n>I'm concerned that we're trying to find ways to treat the symptoms,\n>allowing us to exceed work_mem but maybe not so much, instead of\n>focusing on the fundamental problem, which is that we don't yet have\n>an algorithm that is guaranteed to respect work_mem.\n>\n\nYes, that's a good point.\n\n>Admittedly I don't have a patch, just a bunch of handwaving. One\n>reason I haven't attempted to write it is because although I know how\n>to do the non-parallel version using a BufFile full of match bits in\n>sync with the tuples for outer joins, I haven't figured out how to do\n>it for parallel-aware hash join, because then each loop over the outer\n>batch could see different tuples in each participant. You could use\n>the match bit in HashJoinTuple header, but then you'd have to write\n>all the tuples out again, which is more IO than I want to do. I'll\n>probably start another thread about that.\n>\n\nThat pesky parallelism ;-)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 17 May 2019 00:54:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, May 17, 2019 at 4:39 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n>> I kinda like the idea with increasing the spaceAllowed value. Essentially,\n>> if we decide adding batches would be pointless, increasing the memory\n>> budget is the only thing we can do anyway.\n\n> But that's not OK, we need to fix THAT.\n\nI don't think it's necessarily a good idea to suppose that we MUST\nfit in work_mem come what may. It's likely impossible to guarantee\nthat in all cases. Even if we can, a query that runs for eons will\nhelp nobody.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 May 2019 18:58:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, May 16, 2019 at 06:58:43PM -0400, Tom Lane wrote:\n>Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Fri, May 17, 2019 at 4:39 AM Tomas Vondra\n>> <tomas.vondra@2ndquadrant.com> wrote:\n>>> I kinda like the idea with increasing the spaceAllowed value. Essentially,\n>>> if we decide adding batches would be pointless, increasing the memory\n>>> budget is the only thing we can do anyway.\n>\n>> But that's not OK, we need to fix THAT.\n>\n>I don't think it's necessarily a good idea to suppose that we MUST\n>fit in work_mem come what may. It's likely impossible to guarantee\n>that in all cases. Even if we can, a query that runs for eons will\n>help nobody.\n>\n\nI kinda agree with Thomas - arbitrarily increasing work_mem is something\nwe should not do unless abosolutely necessary. If the query is slow, it's\nup to the user to bump the value up, if deemed appropriate.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 17 May 2019 01:46:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, May 17, 2019 at 11:46 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Thu, May 16, 2019 at 06:58:43PM -0400, Tom Lane wrote:\n> >Thomas Munro <thomas.munro@gmail.com> writes:\n> >> On Fri, May 17, 2019 at 4:39 AM Tomas Vondra\n> >> <tomas.vondra@2ndquadrant.com> wrote:\n> >>> I kinda like the idea with increasing the spaceAllowed value. Essentially,\n> >>> if we decide adding batches would be pointless, increasing the memory\n> >>> budget is the only thing we can do anyway.\n> >\n> >> But that's not OK, we need to fix THAT.\n> >\n> >I don't think it's necessarily a good idea to suppose that we MUST\n> >fit in work_mem come what may. It's likely impossible to guarantee\n> >that in all cases. Even if we can, a query that runs for eons will\n> >help nobody.\n>\n> I kinda agree with Thomas - arbitrarily increasing work_mem is something\n> we should not do unless abosolutely necessary. If the query is slow, it's\n> up to the user to bump the value up, if deemed appropriate.\n\n+1\n\nI think we can gaurantee that we can fit in work_mem with only one\nexception: we have to allow work_mem to be exceeded when we otherwise\ncouldn't fit a single tuple.\n\nThen the worst possible case with the looping algorithm is that we\ndegrade to loading just one inner tuple at a time into the hash table,\nat which point we effectively have a nested loop join (except (1) it's\nflipped around: for each tuple on the inner side, we scan the outer\nside; and (2) we can handle full outer joins). In any reasonable case\nyou'll have a decent amount of tuples at a time, so you won't have to\nloop too many times so it's not really quadratic in the number of\ntuples. The realisation that it's a nested loop join in the extreme\ncase is probably why the MySQL people called it 'block nested loop\njoin' (and as far as I can tell from quick googling, it might be their\n*primary* strategy for hash joins that don't fit in memory, not just a\nsecondary strategy after Grace fails, but I might be wrong about\nthat). Unlike plain old single-tuple nested loop join, it works in\narbitrary sized blocks (the hash table). What we would call a regular\nhash join, they call a BNL that just happens to have only one loop. I\nthink Grace is probably a better primary strategy, but loops are a\ngood fallback.\n\nThe reason I kept mentioning sort-merge in earlier threads is because\nit'd be better in the worst cases. Unfortunately it would be worse in\nthe best case (smallish numbers of loops) and I suspect many real\nworld cases. It's hard to decide, so perhaps we should be happy that\nsort-merge can't be considered currently because the join conditions\nmay not be merge-joinable.\n\n\n--\nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2019 12:26:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Admittedly I don't have a patch, just a bunch of handwaving. One\n> reason I haven't attempted to write it is because although I know how\n> to do the non-parallel version using a BufFile full of match bits in\n> sync with the tuples for outer joins, I haven't figured out how to do\n> it for parallel-aware hash join, because then each loop over the outer\n> batch could see different tuples in each participant. You could use\n> the match bit in HashJoinTuple header, but then you'd have to write\n> all the tuples out again, which is more IO than I want to do. I'll\n> probably start another thread about that.\n>\n>\nCould you explain more about the implementation you are suggesting?\n\nSpecifically, what do you mean \"BufFile full of match bits in sync with the\ntuples for outer joins?\"\n\nIs the implementation you are thinking of one which falls back to NLJ on a\nbatch-by-batch basis decided during the build phase?\nIf so, why do you need to keep track of the outer tuples seen?\nIf you are going to loop through the whole outer side for each tuple on the\ninner side, it seems like you wouldn't need to.\n\nCould you make an outer \"batch\" which is the whole of the outer relation?\nThat\nis, could you do something like: when hashing the inner side, if\nre-partitioning\nis resulting in batches that will overflow spaceAllowed, could you set a\nflag on\nthat batch use_NLJ and when making batches for the outer side, make one\n\"batch\"\nthat has all the tuples from the outer side which the inner side batch\nwhich was\nflagged will do NLJ with.\n\n-- \nMelanie Plageman\n\nOn Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\nAdmittedly I don't have a patch, just a bunch of handwaving. One\nreason I haven't attempted to write it is because although I know how\nto do the non-parallel version using a BufFile full of match bits in\nsync with the tuples for outer joins, I haven't figured out how to do\nit for parallel-aware hash join, because then each loop over the outer\nbatch could see different tuples in each participant. You could use\nthe match bit in HashJoinTuple header, but then you'd have to write\nall the tuples out again, which is more IO than I want to do. I'll\nprobably start another thread about that.\n\nCould you explain more about the implementation you are suggesting?Specifically, what do you mean \"BufFile full of match bits in sync with thetuples for outer joins?\"Is the implementation you are thinking of one which falls back to NLJ on abatch-by-batch basis decided during the build phase?If so, why do you need to keep track of the outer tuples seen?If you are going to loop through the whole outer side for each tuple on theinner side, it seems like you wouldn't need to.Could you make an outer \"batch\" which is the whole of the outer relation? Thatis, could you do something like: when hashing the inner side, if re-partitioningis resulting in batches that will overflow spaceAllowed, could you set a flag onthat batch use_NLJ and when making batches for the outer side, make one \"batch\"that has all the tuples from the outer side which the inner side batch which wasflagged will do NLJ with.-- Melanie Plageman",
"msg_date": "Fri, 17 May 2019 17:14:59 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Sat, May 18, 2019 at 12:15 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Admittedly I don't have a patch, just a bunch of handwaving. One\n>> reason I haven't attempted to write it is because although I know how\n>> to do the non-parallel version using a BufFile full of match bits in\n>> sync with the tuples for outer joins, I haven't figured out how to do\n>> it for parallel-aware hash join, because then each loop over the outer\n>> batch could see different tuples in each participant. You could use\n>> the match bit in HashJoinTuple header, but then you'd have to write\n>> all the tuples out again, which is more IO than I want to do. I'll\n>> probably start another thread about that.\n>\n> Could you explain more about the implementation you are suggesting?\n>\n> Specifically, what do you mean \"BufFile full of match bits in sync with the\n> tuples for outer joins?\"\n\nFirst let me restate the PostgreSQL terminology for this stuff so I\ndon't get confused while talking about it:\n\n* The inner side of the join = the right side = the side we use to\nbuild a hash table. Right and full joins emit inner tuples when there\nis no matching tuple on the outer side.\n\n* The outer side of the join = the left side = the side we use to\nprobe the hash table. Left and full joins emit outer tuples when\nthere is no matching tuple on the inner side.\n\n* Semi and anti joins emit exactly one instance of each outer tuple if\nthere is/isn't at least one match on the inner side.\n\nWe have a couple of relatively easy cases:\n\n* Inner joins: for every outer tuple, we try to find a match in the\nhash table, and if we find one we emit a tuple. To add looping\nsupport, if we run out of memory when loading the hash table we can\njust proceed to probe the fragment we've managed to load so far, and\nthen rewind the outer batch, clear the hash table and load in the next\nwork_mem-sized fragment and do it again... rinse and repeat until\nwe've eventually processed the whole inner batch. After we've\nfinished looping, we move on to the next batch.\n\n* For right and full joins (\"HJ_FILL_INNER\"), we also need to emit an\ninner tuple for every tuple that was loaded into the hash table but\nnever matched. That's done using a flag HEAP_TUPLE_HAS_MATCH in the\nheader of the tuples of the hash table, and a scan through the whole\nhash table at the end of each batch to look for unmatched tuples\n(ExecScanHashTableForUnmatched()). To add looping support, that just\nhas to be done at the end of every inner batch fragment, that is,\nafter every loop.\n\nAnd now for the cases that need a new kind of match bit, as far as I can see:\n\n* For left and full joins (\"HJ_FILL_OUTER\"), we also need to emit an\nouter tuple for every tuple that didn't find a match in the hash\ntable. Normally that is done while probing, without any need for\nmemory or match flags: if we don't find a match, we just spit out an\nouter tuple immediately. But that simple strategy won't work if the\nhash table holds only part of the inner batch. Since we'll be\nrewinding and looping over the outer batch again for the next inner\nbatch fragment, we can't yet say if there will be a match in a later\nloop. But the later loops don't know on their own either. So we need\nsome kind of cumulative memory between loops, and we only know which\nouter tuples have a match after we've finished all loops. 
So there\nwould need to be a new function ExecScanOuterBatchForUnmatched().\n\n* For semi joins, we need to emit exactly one outer tuple whenever\nthere is one or more match on the inner side. To add looping support,\nwe need to make sure that we don't emit an extra copy of the outer\ntuple if there is a second match in another inner batch fragment.\nAgain, this implies some kind of memory between loops, so we can\nsuppress later matches.\n\n* For anti joins, we need to emit an outer tuple whenever there is no\nmatch. To add looping support, we need to wait until we've seen all\nthe inner batch fragments before we know that a given outer tuple has\nno match, perhaps with the same new function\nExecScanOuterBatchForUnmatched().\n\nSo, we need some kind of inter-loop memory, but we obviously don't\nwant to create another source of unmetered RAM gobbling. So one idea\nis a BufFile that has one bit per outer tuple in the batch. In the\nfirst loop, we just stream out the match results as we go, and then\nsomehow we OR the bitmap with the match results in subsequent loops.\nAfter the last loop, we have a list of unmatched tuples -- just scan\nit in lock-step with the outer batch and look for 0 bits.\n\nUnfortunately that bits-in-order scheme doesn't work for parallel\nhash, where the SharedTuplestore tuples seen by each worker are\nnon-deterministic. So perhaps in that case we could use the\nHEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write\nthe whole outer batch back out each time through the loop. That'd\nkeep the tuples and match bits together, but it seems like a lot of\nIO... Note that parallel hash doesn't support right/full joins today,\nbecause of some complications about waiting and deadlocks that might\nturn out to be relevant here too, and might be solvable (I should\nprobably write about that in another email), but left joins *are*\nsupported today so would need to be desupported if we wanted to add a\nloop-based escape valve but not deal with these problems. That\ndoesn't seem acceptable, which is why I'm a bit stuck on this point,\nand unfortunately it may be a while before I have time to tackle any\nof that personally.\n\n> Is the implementation you are thinking of one which falls back to NLJ on a\n> batch-by-batch basis decided during the build phase?\n\nYeah.\n\n> If so, why do you need to keep track of the outer tuples seen?\n> If you are going to loop through the whole outer side for each tuple on the\n> inner side, it seems like you wouldn't need to.\n\nThe idea is to loop through the whole outer batch for every\nwork_mem-sized inner batch fragment, not every tuple. Though in\ntheory it could be as small as a single tuple.\n\n> Could you make an outer \"batch\" which is the whole of the outer relation? That\n> is, could you do something like: when hashing the inner side, if re-partitioning\n> is resulting in batches that will overflow spaceAllowed, could you set a flag on\n> that batch use_NLJ and when making batches for the outer side, make one \"batch\"\n> that has all the tuples from the outer side which the inner side batch which was\n> flagged will do NLJ with.\n\nI didn't understand this... you always need to make one outer batch\ncorresponding to every inner batch. The problem is the tricky\nleft/full/anti/semi join cases when joining against fragments holding\nless than the full inner batch: we still need some way to implement\njoin logic that depends on knowing whether there is a match in *any*\nof the inner fragments/loops.\n\nAbout the question of when exactly to set the \"use_NLJ\" flag: I had\noriginally been thinking of this only as a way to deal with the\nextreme skew problem. But in light of Tomas's complaints about\nunmetered per-batch memory overheads, I had a new thought: it should\nalso be triggered whenever doubling the number of batches would halve\nthe amount of memory left for the hash table (after including the size\nof all those BufFile objects in the computation as Tomas proposes). I\nthink that might be exactly the right cut-off if you want to do\nas much Grace partitioning as your work_mem can afford, and therefore\nas little looping as possible to complete the join while respecting\nwork_mem.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
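The serial version of that match-bit scheme, in rough pseudocode (all helper names invented; the bitmap would live in a BufFile kept in step with the outer batch file):

    /*
     * Cumulative match bits for left/full/semi/anti joins with
     * inner-fragment looping; purely a sketch.
     */
    for (int frag = 0; frag < nfragments; frag++)
    {
        load_inner_fragment_into_hashtable(frag);
        rewind_outer_batch();
        for (long i = 0; next_outer_tuple(&tuple); i++)
            if (probe_hashtable(&tuple))    /* semi: emit only if bit was 0 */
                set_bit(matchbits, i);      /* OR into the cumulative bitmap */
        /* right/full: scan the hash table for unmatched inner tuples here */
    }

    /* After the final loop, bit i == 0 means outer tuple i never matched. */
    rewind_outer_batch();
    for (long i = 0; next_outer_tuple(&tuple); i++)
        if (!test_bit(matchbits, i))
            emit_null_extended(&tuple);     /* left/full/anti cases */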
"msg_date": "Mon, 20 May 2019 11:07:03 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, May 20, 2019 at 11:07:03AM +1200, Thomas Munro wrote:\n>On Sat, May 18, 2019 at 12:15 PM Melanie Plageman\n><melanieplageman@gmail.com> wrote:\n>> On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> Admittedly I don't have a patch, just a bunch of handwaving. One\n>>> reason I haven't attempted to write it is because although I know how\n>>> to do the non-parallel version using a BufFile full of match bits in\n>>> sync with the tuples for outer joins, I haven't figured out how to do\n>>> it for parallel-aware hash join, because then each loop over the outer\n>>> batch could see different tuples in each participant. You could use\n>>> the match bit in HashJoinTuple header, but then you'd have to write\n>>> all the tuples out again, which is more IO than I want to do. I'll\n>>> probably start another thread about that.\n>>\n>> Could you explain more about the implementation you are suggesting?\n>>\n>> Specifically, what do you mean \"BufFile full of match bits in sync with the\n>> tuples for outer joins?\"\n>\n>First let me restate the PostgreSQL terminology for this stuff so I\n>don't get confused while talking about it:\n>\n>* The inner side of the join = the right side = the side we use to\n>build a hash table. Right and full joins emit inner tuples when there\n>is no matching tuple on the outer side.\n>\n>* The outer side of the join = the left side = the side we use to\n>probe the hash table. Left and full joins emit outer tuples when\n>there is no matching tuple on the inner side.\n>\n>* Semi and anti joins emit exactly one instance of each outer tuple if\n>there is/isn't at least one match on the inner side.\n>\n\nI think you're conflating inner/outer side and left/right, or rather\nassuming it's always left=inner and right=outer.\n\n> ... snip ...\n>\n>> Could you make an outer \"batch\" which is the whole of the outer relation? That\n>> is, could you do something like: when hashing the inner side, if re-partitioning\n>> is resulting in batches that will overflow spaceAllowed, could you set a flag on\n>> that batch use_NLJ and when making batches for the outer side, make one \"batch\"\n>> that has all the tuples from the outer side which the inner side batch which was\n>> flagged will do NLJ with.\n>\n>I didn't understand this... you always need to make one outer batch\n>corresponding to every inner batch. The problem is the tricky\n>left/full/anti/semi join cases when joining against fragments holding\n>less that the full inner batch: we still need some way to implement\n>join logic that depends on knowing whether there is a match in *any*\n>of the inner fragments/loops.\n>\n>About the question of when exactly to set the \"use_NLJ\" flag: I had\n>originally been thinking of this only as a way to deal with the\n>extreme skew problem. But in light of Tomas's complaints about\n>unmetered per-batch memory overheads, I had a new thought: it should\n>also be triggered whenever doubling the number of batches would halve\n>the amount of memory left for the hash table (after including the size\n>of all those BufFile objects in the computation as Tomas proposes). 
I\n>think that might be exactly the right cut-off if you want to do\n>as much Grace partitioning as your work_mem can afford, and therefore\n>as little looping as possible to complete the join while respecting\n>work_mem.\n>\n\nNot sure what NLJ flag rule you propose, exactly.\n\nRegarding the threshold value - once the space for BufFiles (and other\noverhead) gets over work_mem/2, it does not make any sense to increase\nthe number of batches because then the work_mem would be entirely\noccupied by BufFiles.\n\nThe WIP patches don't actually do exactly that though - they just check\nif the incremented size would be over work_mem/2. I think we should\ninstead allow up to work_mem*2/3, i.e. stop adding batches after the\nBufFiles start consuming more than work_mem/3 memory.\n\nI think that's actually what you mean by \"halving the amount of memory\nleft for the hash table\" because that's what happens after reaching the\nwork_mem/3.\n\nBut I think that rule is irrelevant here, really, because this thread\nwas discussing cases where adding batches is futile due to skew, no? In\nwhich case we should stop adding batches after reaching some % of tuples\nnot moving from the batch.\n\nOr are you suggesting we should remove that rule, and instead rely on\nthis rule about halving the hash table space? That might work too, I\nguess.\n\nOTOH I'm not sure it's a good idea to handle both those cases the same\nway - the \"overflow file\" idea works pretty well for cases where the hash\ntable actually can be split into batches, and I'm afraid NLJ will be\nmuch less efficient for those cases.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
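In code form, the stopping rule sketched above would be something like this (illustrative only: the variable names are invented, and per_buffile_overhead stands in for the real per-file cost, since struct BufFile is private to buffile.c):

    /*
     * Stop doubling nbatch once the per-batch files would claim more
     * than a third of the budget; each batch needs an inner and an
     * outer file, so past this point the BufFiles alone would eat the
     * hash table's share of work_mem.
     */
    Size buffile_space = (Size) nbatch * 2 * per_buffile_overhead;

    if (buffile_space > work_mem_bytes / 3)
        hashtable->growEnabled = false;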
"msg_date": "Mon, 20 May 2019 02:22:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, May 20, 2019 at 12:22 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Mon, May 20, 2019 at 11:07:03AM +1200, Thomas Munro wrote:\n> >First let me restate the PostgreSQL terminology for this stuff so I\n> >don't get confused while talking about it:\n> >\n> >* The inner side of the join = the right side = the side we use to\n> >build a hash table. Right and full joins emit inner tuples when there\n> >is no matching tuple on the outer side.\n> >\n> >* The outer side of the join = the left side = the side we use to\n> >probe the hash table. Left and full joins emit outer tuples when\n> >there is no matching tuple on the inner side.\n> >\n> >* Semi and anti joins emit exactly one instance of each outer tuple if\n> >there is/isn't at least one match on the inner side.\n> >\n>\n> I think you're conflating inner/outer side and left/right, or rather\n> assuming it's always left=inner and right=outer.\n\nIn PostgreSQL, it's always inner = right, outer = left. You can see\nthat reflected in plannodes.h and elsewhere:\n\n/* ----------------\n * these are defined to avoid confusion problems with \"left\"\n * and \"right\" and \"inner\" and \"outer\". The convention is that\n * the \"left\" plan is the \"outer\" plan and the \"right\" plan is\n * the inner plan, but these make the code more readable.\n * ----------------\n */\n#define innerPlan(node) (((Plan *)(node))->righttree)\n#define outerPlan(node) (((Plan *)(node))->lefttree)\n\nI'm not sure you think it's not always like that: are you referring to\nthe fact that the planner can choose to reverse the join (compared to\nthe SQL LEFT|RIGHT JOIN that appeared in the query), creating an extra\nlayer of confusion? In my email I was talking only about left and\nright as seen by the executor.\n\n> >About the question of when exactly to set the \"use_NLJ\" flag: I had\n> >originally been thinking of this only as a way to deal with the\n> >extreme skew problem. But in light of Tomas's complaints about\n> >unmetered per-batch memory overheads, I had a new thought: it should\n> >also be triggered whenever doubling the number of batches would halve\n> >the amount of memory left for the hash table (after including the size\n> >of all those BufFile objects in the computation as Tomas proposes). I\n> >think that might be exactly the right right cut-off if you want to do\n> >as much Grace partitioning as your work_mem can afford, and therefore\n> >as little looping as possible to complete the join while respecting\n> >work_mem.\n> >\n>\n> Not sure what NLJ flag rule you propose, exactly.\n>\n> Regarding the threshold value - once the space for BufFiles (and other\n> overhead) gets over work_mem/2, it does not make any sense to increase\n> the number of batches because then the work_mem would be entirely\n> occupied by BufFiles.\n>\n> The WIP patches don't actually do exactly that though - they just check\n> if the incremented size would be over work_mem/2. I think we should\n> instead allow up to work_mem*2/3, i.e. 
stop adding batches after the\n> BufFiles start consuming more than work_mem/3 memory.\n>\n> I think that's actually what you mean by \"halving the amount of memory\n> left for the hash table\" because that's what happens after reaching the\n> work_mem/3.\n\nWell, instead of an arbitrary number like work_mem/2 or work_mem *\n2/3, I was trying to figure out the precise threshold beyond which it\ndoesn't make sense to expend more memory on BufFile objects, even if\nthe keys are uniformly distributed so that splitting batches halves\nthe expect tuple count per batch. Let work_mem_for_hash_table =\nwork_mem - nbatch * sizeof(BufFile). Whenever you increase nbatch,\nwork_mem_for_hash_table goes down, but it had better be more than half\nwhat it was before, or we expect to run out of memory again (if the\nbatch didn't fit before, and we're now splitting it so that we'll try\nto load only half of it, we'd better have more than half the budget\nfor the hash table than we had before). Otherwise you'd be making\nmatters worse, and this process probably won't terminate.\n\n> But I think that rule is irrelevant here, really, because this thread\n> was discussing cases where adding batches is futile due to skew, no? In\n> which case we should stop adding batches after reaching some % of tuples\n> not moving from the batch.\n\nYeah, this thread started off just about the 95% thing, but veered off\ncourse since these topics are tangled up. Sorry.\n\n> Or are you suggesting we should remove that rule, and instead realy on\n> this rule about halving the hash table space? That might work too, I\n> guess.\n\nNo, I suspect you need both rules. We still want to detect extreme\nskew soon as possible, even though the other rule will eventually\nfire; might as well do it sooner in clear-cut cases.\n\n> OTOH I'm not sure it's a good idea to handle both those cases the same\n> way - \"overflow file\" idea works pretty well for cases where the hash\n> table actually can be split into batches, and I'm afraid NLJ will be\n> much less efficient for those cases.\n\nYeah, you might be right about that, and everything I'm describing is\npure vapourware anyway. But your overflow file scheme isn't exactly\nfree of IO-amplification and multiple-processing of input data\neither... and I haven't yet grokked how it would work for parallel\nhash. Parallel hash generally doesn't have the\n'throw-the-tuples-forward' concept. which is inherently based on\nsequential in-order processing of batches.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
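Written out, the condition Thomas describes reduces neatly to the work_mem/3 rule (a worked derivation, not from the mail):

    /*
     * H(n) = work_mem - n * overhead    (memory left for the hash table)
     *
     * Doubling nbatch only helps while  H(2n) > H(n) / 2:
     *
     *   work_mem - 2n*overhead > (work_mem - n*overhead) / 2
     *   2*work_mem - 4n*overhead > work_mem - n*overhead
     *   work_mem > 3n*overhead
     *   n * overhead < work_mem / 3
     *
     * i.e. exactly the "stop once BufFiles exceed work_mem/3" threshold
     * proposed in the preceding message.
     */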
"msg_date": "Mon, 20 May 2019 13:25:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-20 13:25:52 +1200, Thomas Munro wrote:\n> In PostgreSQL, it's always inner = right, outer = left. You can see\n> that reflected in plannodes.h and elsewhere:\n> \n> /* ----------------\n> * these are defined to avoid confusion problems with \"left\"\n> * and \"right\" and \"inner\" and \"outer\". The convention is that\n> * the \"left\" plan is the \"outer\" plan and the \"right\" plan is\n> * the inner plan, but these make the code more readable.\n> * ----------------\n> */\n> #define innerPlan(node) (((Plan *)(node))->righttree)\n> #define outerPlan(node) (((Plan *)(node))->lefttree)\n\nI really don't understand why we don't just rename those fields.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 18:33:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, May 20, 2019 at 01:25:52PM +1200, Thomas Munro wrote:\n>On Mon, May 20, 2019 at 12:22 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Mon, May 20, 2019 at 11:07:03AM +1200, Thomas Munro wrote:\n>> >First let me restate the PostgreSQL terminology for this stuff so I\n>> >don't get confused while talking about it:\n>> >\n>> >* The inner side of the join = the right side = the side we use to\n>> >build a hash table. Right and full joins emit inner tuples when there\n>> >is no matching tuple on the outer side.\n>> >\n>> >* The outer side of the join = the left side = the side we use to\n>> >probe the hash table. Left and full joins emit outer tuples when\n>> >there is no matching tuple on the inner side.\n>> >\n>> >* Semi and anti joins emit exactly one instance of each outer tuple if\n>> >there is/isn't at least one match on the inner side.\n>> >\n>>\n>> I think you're conflating inner/outer side and left/right, or rather\n>> assuming it's always left=inner and right=outer.\n>\n>In PostgreSQL, it's always inner = right, outer = left. You can see\n>that reflected in plannodes.h and elsewhere:\n>\n>/* ----------------\n> * these are defined to avoid confusion problems with \"left\"\n> * and \"right\" and \"inner\" and \"outer\". The convention is that\n> * the \"left\" plan is the \"outer\" plan and the \"right\" plan is\n> * the inner plan, but these make the code more readable.\n> * ----------------\n> */\n>#define innerPlan(node) (((Plan *)(node))->righttree)\n>#define outerPlan(node) (((Plan *)(node))->lefttree)\n>\n>I'm not sure you think it's not always like that: are you referring to\n>the fact that the planner can choose to reverse the join (compared to\n>the SQL LEFT|RIGHT JOIN that appeared in the query), creating an extra\n>layer of confusion? In my email I was talking only about left and\n>right as seen by the executor.\n>\n\nIt might be my lack of understanding, but I'm not sure how we map\nLEFT/RIGHT JOIN to left/righttree and inner/outer at plan level. My\nassumption was that for \"a LEFT JOIN b\" then \"a\" and \"b\" can end up\nboth as inner and outer (sub)tree.\n\nBut I haven't checked so I may easily be wrong. Maybe the comment you\nquoted clarifies that, not sure.\n\n>> >About the question of when exactly to set the \"use_NLJ\" flag: I had\n>> >originally been thinking of this only as a way to deal with the\n>> >extreme skew problem. But in light of Tomas's complaints about\n>> >unmetered per-batch memory overheads, I had a new thought: it should\n>> >also be triggered whenever doubling the number of batches would halve\n>> >the amount of memory left for the hash table (after including the size\n>> >of all those BufFile objects in the computation as Tomas proposes). I\n>> >think that might be exactly the right right cut-off if you want to do\n>> >as much Grace partitioning as your work_mem can afford, and therefore\n>> >as little looping as possible to complete the join while respecting\n>> >work_mem.\n>> >\n>>\n>> Not sure what NLJ flag rule you propose, exactly.\n>>\n>> Regarding the threshold value - once the space for BufFiles (and other\n>> overhead) gets over work_mem/2, it does not make any sense to increase\n>> the number of batches because then the work_mem would be entirely\n>> occupied by BufFiles.\n>>\n>> The WIP patches don't actually do exactly that though - they just check\n>> if the incremented size would be over work_mem/2. I think we should\n>> instead allow up to work_mem*2/3, i.e. 
stop adding batches after the\n>> BufFiles start consuming more than work_mem/3 memory.\n>>\n>> I think that's actually what you mean by \"halving the amount of memory\n>> left for the hash table\" because that's what happens after reaching the\n>> work_mem/3.\n>\n>Well, instead of an arbitrary number like work_mem/2 or work_mem *\n>2/3, I was trying to figure out the precise threshold beyond which it\n>doesn't make sense to expend more memory on BufFile objects, even if\n>the keys are uniformly distributed so that splitting batches halves\n>the expected tuple count per batch. Let work_mem_for_hash_table =\n>work_mem - nbatch * sizeof(BufFile). Whenever you increase nbatch,\n>work_mem_for_hash_table goes down, but it had better be more than half\n>what it was before, or we expect to run out of memory again (if the\n>batch didn't fit before, and we're now splitting it so that we'll try\n>to load only half of it, we'd better have more than half the budget\n>for the hash table that we had before). Otherwise you'd be making\n>matters worse, and this process probably won't terminate.\n>\n\nBut the work_mem/3 does exactly that.\n\nLet's say BufFiles need a bit less than work_mem/3. That means we have\na bit more than 2*work_mem/3 for the hash table. If you double the number\nof batches, then you'll end up with a bit more than work_mem/3. That is,\nwe've not halved the hash table size.\n\nIf BufFiles need a bit more memory than work_mem/3, then after doubling\nthe number of batches we'll end up with less than half the initial hash\ntable space.\n\nSo I think work_mem/3 is the threshold we're looking for.\n
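\nIn code, the decision rule would look something like this (a rough sketch\njust to illustrate the arithmetic; buffile_size stands in for the real\nper-BufFile memory footprint, it is not an existing symbol):\n\n#include <stdbool.h>\n#include <stddef.h>\n\nstatic bool\nworth_doubling_nbatch(size_t work_mem, size_t nbatch, size_t buffile_size)\n{\n    size_t old_buffiles = nbatch * buffile_size;\n    size_t new_buffiles = 2 * nbatch * buffile_size;\n\n    if (new_buffiles >= work_mem)\n        return false; /* BufFiles alone would exceed work_mem */\n\n    /*\n     * Doubling only helps if the hash table keeps more than half of its\n     * previous budget, which works out to requiring the current BufFile\n     * memory to stay under work_mem/3.\n     */\n    return 2 * (work_mem - new_buffiles) > work_mem - old_buffiles;\n}\n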
\n>> But I think that rule is irrelevant here, really, because this thread\n>> was discussing cases where adding batches is futile due to skew, no? In\n>> which case we should stop adding batches after reaching some % of tuples\n>> not moving from the batch.\n>\n>Yeah, this thread started off just about the 95% thing, but veered off\n>course since these topics are tangled up. Sorry.\n>\n>> Or are you suggesting we should remove that rule, and instead rely on\n>> this rule about halving the hash table space? That might work too, I\n>> guess.\n>\n>No, I suspect you need both rules. We still want to detect extreme\n>skew as soon as possible, even though the other rule will eventually\n>fire; might as well do it sooner in clear-cut cases.\n>\n\nRight, I agree. I think we need the 95% rule (or whatever) to handle the\ncases with skew / many duplicates, and then the overflow files to handle\nunderestimates with uniform distribution (or some other solution).\n\n>> OTOH I'm not sure it's a good idea to handle both those cases the same\n>> way - \"overflow file\" idea works pretty well for cases where the hash\n>> table actually can be split into batches, and I'm afraid NLJ will be\n>> much less efficient for those cases.\n>\n>Yeah, you might be right about that, and everything I'm describing is\n>pure vapourware anyway. But your overflow file scheme isn't exactly\n>free of IO-amplification and multiple-processing of input data\n>either... and I haven't yet grokked how it would work for parallel\n>hash. Parallel hash generally doesn't have the\n>'throw-the-tuples-forward' concept, which is inherently based on\n>sequential in-order processing of batches.\n>\n\nSure, let's do some math.\n\nWith the overflow scheme, the amplification is roughly ~2x (relative to\nmaster), because we need to write data for most batches first into the\noverflow file and then to the correct one. Master has write amplification\nof about 1.25x (due to the gradual increase of batches), so the \"total\"\namplification is ~2.5x.\n\nFor the NLJ, the amplification fully depends on what fraction of the hash\ntable fits into work_mem. For example, when it needs to be split into 32\nfragments, we have ~32x amplification. It might affect just some batches,\nof course.\n\nSo I still think those approaches are complementary and we need both.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Mon, 20 May 2019 16:31:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Sun, May 19, 2019 at 4:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sat, May 18, 2019 at 12:15 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >> Admittedly I don't have a patch, just a bunch of handwaving. One\n> >> reason I haven't attempted to write it is because although I know how\n> >> to do the non-parallel version using a BufFile full of match bits in\n> >> sync with the tuples for outer joins, I haven't figured out how to do\n> >> it for parallel-aware hash join, because then each loop over the outer\n> >> batch could see different tuples in each participant. You could use\n> >> the match bit in HashJoinTuple header, but then you'd have to write\n> >> all the tuples out again, which is more IO than I want to do. I'll\n> >> probably start another thread about that.\n> >\n> > Could you explain more about the implementation you are suggesting?\n> >\n> > Specifically, what do you mean \"BufFile full of match bits in sync with\n> the\n> > tuples for outer joins?\"\n>\n> First let me restate the PostgreSQL terminology for this stuff so I\n> don't get confused while talking about it:\n>\n> * The inner side of the join = the right side = the side we use to\n> build a hash table. Right and full joins emit inner tuples when there\n> is no matching tuple on the outer side.\n>\n> * The outer side of the join = the left side = the side we use to\n> probe the hash table. Left and full joins emit outer tuples when\n> there is no matching tuple on the inner side.\n>\n> * Semi and anti joins emit exactly one instance of each outer tuple if\n> there is/isn't at least one match on the inner side.\n>\n> We have a couple of relatively easy cases:\n>\n> * Inner joins: for every outer tuple, we try to find a match in the\n> hash table, and if we find one we emit a tuple. To add looping\n> support, if we run out of memory when loading the hash table we can\n> just proceed to probe the fragment we've managed to load so far, and\n> then rewind the outer batch, clear the hash table and load in the next\n> work_mem-sized fragment and do it again... rinse and repeat until\n> we've eventually processed the whole inner batch. After we've\n> finished looping, we move on to the next batch.\n>\n> * For right and full joins (\"HJ_FILL_INNER\"), we also need to emit an\n> inner tuple for every tuple that was loaded into the hash table but\n> never matched. That's done using a flag HEAP_TUPLE_HAS_MATCH in the\n> header of the tuples of the hash table, and a scan through the whole\n> hash table at the end of each batch to look for unmatched tuples\n> (ExecScanHashTableForUnmatched()). To add looping support, that just\n> has to be done at the end of every inner batch fragment, that is,\n> after every loop.\n>\n> And now for the cases that need a new kind of match bit, as far as I can\n> see:\n>\n> * For left and full joins (\"HJ_FILL_OUTER\"), we also need to emit an\n> outer tuple for every tuple that didn't find a match in the hash\n> table. Normally that is done while probing, without any need for\n> memory or match flags: if we don't find a match, we just spit out an\n> outer tuple immediately. But that simple strategy won't work if the\n> hash table holds only part of the inner batch. Since we'll be\n> rewinding and looping over the outer batch again for the next inner\n> batch fragment, we can't yet say if there will be a match in a later\n> loop. 
But the later loops don't know on their own either. So we need\n> some kind of cumulative memory between loops, and we only know which\n> outer tuples have a match after we've finished all loops. So there\n> would need to be a new function ExecScanOuterBatchForUnmatched().\n>\n> * For semi joins, we need to emit exactly one outer tuple whenever\n> there is at least one match on the inner side. To add looping support,\n> we need to make sure that we don't emit an extra copy of the outer\n> tuple if there is a second match in another inner batch fragment.\n> Again, this implies some kind of memory between loops, so we can\n> suppress later matches.\n>\n> * For anti joins, we need to emit an outer tuple whenever there is no\n> match. To add looping support, we need to wait until we've seen all\n> the inner batch fragments before we know that a given outer tuple has\n> no match, perhaps with the same new function\n> ExecScanOuterBatchForUnmatched().\n>\n> So, we need some kind of inter-loop memory, but we obviously don't\n> want to create another source of unmetered RAM gobbling. So one idea\n> is a BufFile that has one bit per outer tuple in the batch. In the\n> first loop, we just stream out the match results as we go, and then\n> somehow we OR the bitmap with the match results in subsequent loops.\n> After the last loop, we have a list of unmatched tuples -- just scan\n> it in lock-step with the outer batch and look for 0 bits.\n>\n\nThat makes sense. Thanks for the detailed explanation.\n
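\nTo check my understanding, the bookkeeping for the serial case would be\nroughly this (my own sketch with made-up names; a plain in-memory buffer\nstands in for the real BufFile):\n\n#include <stdbool.h>\n#include <stddef.h>\n#include <stdint.h>\n\ntypedef struct OuterMatchBits\n{\n    uint8_t *bits;   /* one bit per outer tuple in the batch */\n    size_t   ntuples;\n} OuterMatchBits;\n\n/* In every loop, OR in the match result for outer tuple 'tupno'. */\nstatic void\nrecord_match(OuterMatchBits *m, size_t tupno)\n{\n    m->bits[tupno / 8] |= (uint8_t) (1 << (tupno % 8));\n}\n\n/* After the last loop, scan the outer batch in lock-step: a 0 bit means\n * the tuple never matched and must be emitted NULL-extended. */\nstatic bool\nneeds_null_extension(const OuterMatchBits *m, size_t tupno)\n{\n    return (m->bits[tupno / 8] & (1 << (tupno % 8))) == 0;\n}\n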
\n>\n> Unfortunately that bits-in-order scheme doesn't work for parallel\n> hash, where the SharedTuplestore tuples seen by each worker are\n> non-deterministic. So perhaps in that case we could use the\n> HEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write\n> the whole outer batch back out each time through the loop. That'd\n> keep the tuples and match bits together, but it seems like a lot of\n> IO...\n\nIf you set the has_match flag in the tuple header itself, wouldn't you only\nneed to write the tuples from the outer batch back out that don't have\nmatches?\n\n> > If so, why do you need to keep track of the outer tuples seen?\n> > If you are going to loop through the whole outer side for each tuple\n> > on the inner side, it seems like you wouldn't need to.\n>\n> The idea is to loop through the whole outer batch for every\n> work_mem-sized inner batch fragment, not every tuple. Though in\n> theory it could be as small as a single tuple.\n>\n> > Could you make an outer \"batch\" which is the whole of the outer\n> > relation? That is, could you do something like: when hashing the inner\n> > side, if re-partitioning is resulting in batches that will overflow\n> > spaceAllowed, could you set a flag use_NLJ on that batch and, when\n> > making batches for the outer side, make one \"batch\" that has all the\n> > tuples from the outer side that the flagged inner side batch will do\n> > NLJ with.\n>\n> I didn't understand this... you always need to make one outer batch\n> corresponding to every inner batch. The problem is the tricky\n> left/full/anti/semi join cases when joining against fragments holding\n> less than the full inner batch: we still need some way to implement\n> join logic that depends on knowing whether there is a match in *any*\n> of the inner fragments/loops.\n>\n\nSorry, my suggestion was inaccurate and unclear: I was basically suggesting\nthat once you have all batches created for outer and inner sides, for a\ngiven inner side batch that does not fit in memory, for each outer tuple in\nthe corresponding outer batch file, load and join all of the chunks of the\ninner batch file. That way, before you emit that tuple, you have checked\nall of the corresponding inner batch.\n\nThinking about it now, I realize that that would be worse in all cases than\nwhat you are thinking of -- joining the outer side batch with the inner\nside batch chunk that fits in memory and marking the BufFile bit\nrepresenting that outer side tuple as \"matched\" and only emitting it with a\nNULL from the inner side after all chunks have been processed.\n\n-- \nMelanie Plageman",
"msg_date": "Mon, 20 May 2019 12:05:56 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Sun, May 19, 2019 at 4:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Sat, May 18, 2019 at 12:15 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >> Admittedly I don't have a patch, just a bunch of handwaving. One\n> >> reason I haven't attempted to write it is because although I know how\n> >> to do the non-parallel version using a BufFile full of match bits in\n> >> sync with the tuples for outer joins, I haven't figured out how to do\n> >> it for parallel-aware hash join, because then each loop over the outer\n> >> batch could see different tuples in each participant. You could use\n> >> the match bit in HashJoinTuple header, but then you'd have to write\n> >> all the tuples out again, which is more IO than I want to do. I'll\n> >> probably start another thread about that.\n> >\n> > Could you explain more about the implementation you are suggesting?\n> >\n> > Specifically, what do you mean \"BufFile full of match bits in sync with\n> the\n> > tuples for outer joins?\"\n>\n> First let me restate the PostgreSQL terminology for this stuff so I\n> don't get confused while talking about it:\n>\n> * The inner side of the join = the right side = the side we use to\n> build a hash table. Right and full joins emit inner tuples when there\n> is no matching tuple on the outer side.\n>\n> * The outer side of the join = the left side = the side we use to\n> probe the hash table. Left and full joins emit outer tuples when\n> there is no matching tuple on the inner side.\n>\n> * Semi and anti joins emit exactly one instance of each outer tuple if\n> there is/isn't at least one match on the inner side.\n>\n> We have a couple of relatively easy cases:\n>\n> * Inner joins: for every outer tuple, we try to find a match in the\n> hash table, and if we find one we emit a tuple. To add looping\n> support, if we run out of memory when loading the hash table we can\n> just proceed to probe the fragment we've managed to load so far, and\n> then rewind the outer batch, clear the hash table and load in the next\n> work_mem-sized fragment and do it again... rinse and repeat until\n> we've eventually processed the whole inner batch. After we've\n> finished looping, we move on to the next batch.\n>\n> * For right and full joins (\"HJ_FILL_INNER\"), we also need to emit an\n> inner tuple for every tuple that was loaded into the hash table but\n> never matched. That's done using a flag HEAP_TUPLE_HAS_MATCH in the\n> header of the tuples of the hash table, and a scan through the whole\n> hash table at the end of each batch to look for unmatched tuples\n> (ExecScanHashTableForUnmatched()). To add looping support, that just\n> has to be done at the end of every inner batch fragment, that is,\n> after every loop.\n>\n> And now for the cases that need a new kind of match bit, as far as I can\n> see:\n>\n> * For left and full joins (\"HJ_FILL_OUTER\"), we also need to emit an\n> outer tuple for every tuple that didn't find a match in the hash\n> table. Normally that is done while probing, without any need for\n> memory or match flags: if we don't find a match, we just spit out an\n> outer tuple immediately. But that simple strategy won't work if the\n> hash table holds only part of the inner batch. Since we'll be\n> rewinding and looping over the outer batch again for the next inner\n> batch fragment, we can't yet say if there will be a match in a later\n> loop. 
But the later loops don't know on their own either. So we need\n> some kind of cumulative memory between loops, and we only know which\n> outer tuples have a match after we've finished all loops. So there\n> would need to be a new function ExecScanOuterBatchForUnmatched().\n>\n> * For semi joins, we need to emit exactly one outer tuple whenever\n> there is at least one match on the inner side. To add looping support,\n> we need to make sure that we don't emit an extra copy of the outer\n> tuple if there is a second match in another inner batch fragment.\n> Again, this implies some kind of memory between loops, so we can\n> suppress later matches.\n>\n> * For anti joins, we need to emit an outer tuple whenever there is no\n> match. To add looping support, we need to wait until we've seen all\n> the inner batch fragments before we know that a given outer tuple has\n> no match, perhaps with the same new function\n> ExecScanOuterBatchForUnmatched().\n>\n> So, we need some kind of inter-loop memory, but we obviously don't\n> want to create another source of unmetered RAM gobbling. So one idea\n> is a BufFile that has one bit per outer tuple in the batch. In the\n> first loop, we just stream out the match results as we go, and then\n> somehow we OR the bitmap with the match results in subsequent loops.\n> After the last loop, we have a list of unmatched tuples -- just scan\n> it in lock-step with the outer batch and look for 0 bits.\n>\n> Unfortunately that bits-in-order scheme doesn't work for parallel\n> hash, where the SharedTuplestore tuples seen by each worker are\n> non-deterministic. So perhaps in that case we could use the\n> HEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write\n> the whole outer batch back out each time through the loop. That'd\n> keep the tuples and match bits together, but it seems like a lot of\n> IO... Note that parallel hash doesn't support right/full joins today,\n> because of some complications about waiting and deadlocks that might\n> turn out to be relevant here too, and might be solvable (I should\n> probably write about that in another email), but left joins *are*\n> supported today so would need to be desupported if we wanted to add\n> a loop-based escape valve but not deal with these problems. That\n> doesn't seem acceptable, which is why I'm a bit stuck on this point,\n> and unfortunately it may be a while before I have time to tackle any\n> of that personally.\n>\n>\nThere was an off-list discussion at PGCon last week about doing this\nhash looping strategy using the bitmap with match bits and solving the\nparallel hashjoin problem by having tuple-identifying information\nencoded in the bitmap, which allowed each worker to indicate that an\nouter tuple had a match when processing that inner side chunk; then,\nat the end of the scan of the outer side, the bitmaps would be\nOR'd together to represent a single view of the unmatched tuples from\nthat iteration.\n\nI was talking to Jeff Davis about this on Saturday, and he felt that\nthere might be a way to solve the problem differently if we thought of\nthe left join case as performing an inner join and an antijoin\ninstead.\n\nRiffing on this idea a bit, I started trying to write a patch that\nwould basically emit a tuple if it matches and write the tuple out to\na file if it does not match. 
Then, after iterating through the outer\nbatch the first time for the first inner chunk, any tuples which do\nnot yet have a match are the only ones which need to be joined against\nthe other inner chunks. Instead of iterating through the outer side\noriginal batch file, use the unmatched outer tuples file to do the\njoin against the next chunk. Repeat this for all chunks.\n\nCould we not do this and avoid using the match bit? In the worst case,\nyou would have to write out all the tuples on the outer side (if none\nmatch) nchunks times (a chunk being the work_mem-sized chunk of inner\nloaded into the hashtable).\n\n-- \nMelanie Plageman",
"msg_date": "Mon, 3 Jun 2019 14:10:21 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 5:10 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I was talking to Jeff Davis about this on Saturday, and, he felt that\n> there might be a way to solve the problem differently if we thought of\n> the left join case as performing an inner join and an antijoin\n> instead.\n>\n> Riffing on this idea a bit, I started trying to write a patch that\n> would basically emit a tuple if it matches and write the tuple out to\n> a file if it does not match. Then, after iterating through the outer\n> batch the first time for the first inner chunk, any tuples which do\n> not yet have a match are the only ones which need to be joined against\n> the other inner chunks. Instead of iterating through the outer side\n> original batch file, use the unmatched outer tuples file to do the\n> join against the next chunk. Repeat this for all chunks.\n\nI'm not sure that I understanding this proposal correctly, but if I am\nthen I think it doesn't work in the case where a single outer row\nmatches rows in many different inner chunks. When you \"use the\nunmatched outer tuples file to do the join against the next chunk,\"\nyou deny any rows that have already matched the chance to produce\nadditional matches.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jun 2019 08:43:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Sun, May 19, 2019 at 7:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Unfortunately that bits-in-order scheme doesn't work for parallel\n> hash, where the SharedTuplestore tuples seen by each worker are\n> non-deterministic. So perhaps in that case we could use the\n> HEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write\n> the whole outer batch back out each time through the loop. That'd\n> keep the tuples and match bits together, but it seems like a lot of\n> IO...\n\nSo, I think the case you're worried about here is something like:\n\nGather\n-> Parallel Hash Left Join\n -> Parallel Seq Scan on a\n -> Parallel Hash\n -> Parallel Seq Scan on b\n\nIf I understand ExecParallelHashJoinPartitionOuter correctly, we're\ngoing to hash all of a and put it into a set of batch files before we\neven get started, so it's possible to identify precisely which tuple\nwe're talking about by just giving the batch number and the position\nof the tuple within that batch. So while it's true that the\nindividual workers can't use the number of tuples they've read to know\nwhere they are in the SharedTuplestore, maybe the SharedTuplestore\ncould just tell them. Then they could maintain a paged bitmap of the\ntuples that they've matched to something, indexed by\nposition-within-the-tuplestore, and those bitmaps could be OR'd\ntogether at the end.\n\nCrazy idea, or...?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jun 2019 09:05:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 5:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jun 3, 2019 at 5:10 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I was talking to Jeff Davis about this on Saturday, and, he felt that\n> > there might be a way to solve the problem differently if we thought of\n> > the left join case as performing an inner join and an antijoin\n> > instead.\n> >\n> > Riffing on this idea a bit, I started trying to write a patch that\n> > would basically emit a tuple if it matches and write the tuple out to\n> > a file if it does not match. Then, after iterating through the outer\n> > batch the first time for the first inner chunk, any tuples which do\n> > not yet have a match are the only ones which need to be joined against\n> > the other inner chunks. Instead of iterating through the outer side\n> > original batch file, use the unmatched outer tuples file to do the\n> > join against the next chunk. Repeat this for all chunks.\n>\n> I'm not sure that I understanding this proposal correctly, but if I am\n> then I think it doesn't work in the case where a single outer row\n> matches rows in many different inner chunks. When you \"use the\n> unmatched outer tuples file to do the join against the next chunk,\"\n> you deny any rows that have already matched the chance to produce\n> additional matches.\n>\n>\nOops! You are totally right.\nI will amend the idea:\nFor each chunk on the inner side, loop through both the original batch\nfile and the unmatched outer tuples file created for the last chunk.\nEmit any matches and write out any unmatched tuples to a new unmatched\nouter tuples file.\n\nI think, in the worst case, if no tuples from the outer have a match,\nyou end up writing out all of the outer tuples for each chunk on the\ninner side. However, using the match bit in the tuple header solution\nwould require this much writing.\nProbably the bigger problem is that in this worst case you would also\nneed to read double the number of outer tuples for each inner chunk.\n\nHowever, in the best case it seems like it would be better than the\nmatch bit/write everything from the outer side out solution.\n\n-- \nMelanie Plageman\n\nOn Tue, Jun 4, 2019 at 5:43 AM Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Jun 3, 2019 at 5:10 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I was talking to Jeff Davis about this on Saturday, and, he felt that\n> there might be a way to solve the problem differently if we thought of\n> the left join case as performing an inner join and an antijoin\n> instead.\n>\n> Riffing on this idea a bit, I started trying to write a patch that\n> would basically emit a tuple if it matches and write the tuple out to\n> a file if it does not match. Then, after iterating through the outer\n> batch the first time for the first inner chunk, any tuples which do\n> not yet have a match are the only ones which need to be joined against\n> the other inner chunks. Instead of iterating through the outer side\n> original batch file, use the unmatched outer tuples file to do the\n> join against the next chunk. Repeat this for all chunks.\n\nI'm not sure that I understanding this proposal correctly, but if I am\nthen I think it doesn't work in the case where a single outer row\nmatches rows in many different inner chunks. When you \"use the\nunmatched outer tuples file to do the join against the next chunk,\"\nyou deny any rows that have already matched the chance to produce\nadditional matches.\nOops! 
"msg_date": "Tue, 4 Jun 2019 11:47:46 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 2:47 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Oops! You are totally right.\n> I will amend the idea:\n> For each chunk on the inner side, loop through both the original batch\n> file and the unmatched outer tuples file created for the last chunk.\n> Emit any matches and write out any unmatched tuples to a new unmatched\n> outer tuples file.\n>\n> I think, in the worst case, if no tuples from the outer have a match,\n> you end up writing out all of the outer tuples for each chunk on the\n> inner side. However, using the match bit in the tuple header solution\n> would require this much writing.\n> Probably the bigger problem is that in this worst case you would also\n> need to read double the number of outer tuples for each inner chunk.\n>\n> However, in the best case it seems like it would be better than the\n> match bit/write everything from the outer side out solution.\n\nI guess so, but the downside of needing to read twice as many outer\ntuples for each inner chunk seems pretty large. It would be a lot\nnicer if we could find a way to store the matched-bits someplace other\nthan where we are storing the tuples, what Thomas called a\nbits-in-order scheme, because then the amount of additional read and\nwrite I/O would be tiny -- one bit per tuple doesn't add up very fast.\n\nIn the scheme you propose here, I think that after you read the\noriginal outer tuples for each chunk and the unmatched outer tuples\nfor each chunk, you'll have to match up the unmatched tuples to the\noriginal tuples, probably by using memcmp() or something. Otherwise,\nwhen a new match occurs, you won't know which tuple should now not be\nemitted into the new unmatched outer tuples file that you're going to\nproduce. So I think what's going to happen is that you'll read the\noriginal batch file, then read the unmatched tuples file and use that\nto set or not set a bit on each tuple in memory, then do the real work\nsetting more bits, then write out a new unmatched-tuples file with the\ntuples that still don't have the bit set. So your unmatched tuple\nfile is basically a list of tuple identifiers in the least compact\nform imaginable: the tuple is identified by the entire tuple contents.\nThat doesn't seem very appealing, although I expect that it would\nstill win for some queries.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jun 2019 15:08:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 6:05 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, May 19, 2019 at 7:07 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > Unfortunately that bits-in-order scheme doesn't work for parallel\n> > hash, where the SharedTuplestore tuples seen by each worker are\n> > non-deterministic. So perhaps in that case we could use the\n> > HEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write\n> > the whole outer batch back out each time through the loop. That'd\n> > keep the tuples and match bits together, but it seems like a lot of\n> > IO...\n>\n> So, I think the case you're worried about here is something like:\n>\n> Gather\n> -> Parallel Hash Left Join\n> -> Parallel Seq Scan on a\n> -> Parallel Hash\n> -> Parallel Seq Scan on b\n>\n> If I understand ExecParallelHashJoinPartitionOuter correctly, we're\n> going to hash all of a and put it into a set of batch files before we\n> even get started, so it's possible to identify precisely which tuple\n> we're talking about by just giving the batch number and the position\n> of the tuple within that batch. So while it's true that the\n> individual workers can't use the number of tuples they've read to know\n> where they are in the SharedTuplestore, maybe the SharedTuplestore\n> could just tell them. Then they could maintain a paged bitmap of the\n> tuples that they've matched to something, indexed by\n> position-within-the-tuplestore, and those bitmaps could be OR'd\n> together at the end.\n>\n> Crazy idea, or...?\n>\n>\nThat idea does sound like it could work. Basically a worker is given a\ntuple and a bit index (process this tuple and if it matches go flip\nthe bit at position 30) in its own bitmap, right?\n\nI need to spend some time understanding how SharedTupleStore works and\nhow workers get tuples, so what I'm saying might not make sense.\n\nOne question I have is, how would the OR'd together bitmap be\npropagated to workers after the first chunk? That is, when there are\nno tuples left in the outer bunch, for a given inner chunk, would you\nload the bitmaps from each worker into memory, OR them together, and\nthen write the updated bitmap back out so that each worker starts with\nthe updated bitmap?\n\n-- \nMelanie Plageman\n\nOn Tue, Jun 4, 2019 at 6:05 AM Robert Haas <robertmhaas@gmail.com> wrote:On Sun, May 19, 2019 at 7:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Unfortunately that bits-in-order scheme doesn't work for parallel\n> hash, where the SharedTuplestore tuples seen by each worker are\n> non-deterministic. So perhaps in that case we could use the\n> HEAP_TUPLE_HAS_MATCH bit in the outer tuple header itself, and write\n> the whole outer batch back out each time through the loop. That'd\n> keep the tuples and match bits together, but it seems like a lot of\n> IO...\n\nSo, I think the case you're worried about here is something like:\n\nGather\n-> Parallel Hash Left Join\n -> Parallel Seq Scan on a\n -> Parallel Hash\n -> Parallel Seq Scan on b\n\nIf I understand ExecParallelHashJoinPartitionOuter correctly, we're\ngoing to hash all of a and put it into a set of batch files before we\neven get started, so it's possible to identify precisely which tuple\nwe're talking about by just giving the batch number and the position\nof the tuple within that batch. So while it's true that the\nindividual workers can't use the number of tuples they've read to know\nwhere they are in the SharedTuplestore, maybe the SharedTuplestore\ncould just tell them. 
\nI need to spend some time understanding how SharedTuplestore works and\nhow workers get tuples, so what I'm saying might not make sense.\n\nOne question I have is, how would the OR'd together bitmap be\npropagated to workers after the first chunk? That is, when there are\nno tuples left in the outer batch, for a given inner chunk, would you\nload the bitmaps from each worker into memory, OR them together, and\nthen write the updated bitmap back out so that each worker starts with\nthe updated bitmap?\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 4 Jun 2019 12:08:57 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 3:09 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> One question I have is, how would the OR'd together bitmap be\n> propagated to workers after the first chunk? That is, when there are\n> no tuples left in the outer bunch, for a given inner chunk, would you\n> load the bitmaps from each worker into memory, OR them together, and\n> then write the updated bitmap back out so that each worker starts with\n> the updated bitmap?\n\nI was assuming we'd elect one participant to go read all the bitmaps,\nOR them together, and generate all the required null-extended tuples,\nsort of like the PHJ_BUILD_ALLOCATING, PHJ_GROW_BATCHES_ALLOCATING,\nPHJ_GROW_BUCKETS_ALLOCATING, and/or PHJ_BATCH_ALLOCATING states only\ninvolve one participant being active at a time. Now you could hope for\nsomething better -- why not parallelize that work? But on the other\nhand, why not start simple and worry about that in some future patch\ninstead of right away? A committed patch that does something good is\nbetter than an uncommitted patch that does something AWESOME.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 Jun 2019 15:15:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 03:08:24PM -0400, Robert Haas wrote:\n>On Tue, Jun 4, 2019 at 2:47 PM Melanie Plageman\n><melanieplageman@gmail.com> wrote:\n>> Oops! You are totally right.\n>> I will amend the idea:\n>> For each chunk on the inner side, loop through both the original batch\n>> file and the unmatched outer tuples file created for the last chunk.\n>> Emit any matches and write out any unmatched tuples to a new unmatched\n>> outer tuples file.\n>>\n>> I think, in the worst case, if no tuples from the outer have a match,\n>> you end up writing out all of the outer tuples for each chunk on the\n>> inner side. However, using the match bit in the tuple header solution\n>> would require this much writing.\n>> Probably the bigger problem is that in this worst case you would also\n>> need to read double the number of outer tuples for each inner chunk.\n>>\n>> However, in the best case it seems like it would be better than the\n>> match bit/write everything from the outer side out solution.\n>\n>I guess so, but the downside of needing to read twice as many outer\n>tuples for each inner chunk seems pretty large. It would be a lot\n>nicer if we could find a way to store the matched-bits someplace other\n>than where we are storing the tuples, what Thomas called a\n>bits-in-order scheme, because then the amount of additional read and\n>write I/O would be tiny -- one bit per tuple doesn't add up very fast.\n>\n>In the scheme you propose here, I think that after you read the\n>original outer tuples for each chunk and the unmatched outer tuples\n>for each chunk, you'll have to match up the unmatched tuples to the\n>original tuples, probably by using memcmp() or something. Otherwise,\n>when a new match occurs, you won't know which tuple should now not be\n>emitted into the new unmatched outer tuples file that you're going to\n>produce. So I think what's going to happen is that you'll read the\n>original batch file, then read the unmatched tuples file and use that\n>to set or not set a bit on each tuple in memory, then do the real work\n>setting more bits, then write out a new unmatched-tuples file with the\n>tuples that still don't have the bit set. So your unmatched tuple\n>file is basically a list of tuple identifiers in the least compact\n>form imaginable: the tuple is identified by the entire tuple contents.\n>That doesn't seem very appealing, although I expect that it would\n>still win for some queries.\n>\n\nI wonder how big of an issue that actually is in practice. If this is \nmeant for significantly skewed data sets, which may easily cause OOM\n(e.g. per the recent report, which restarted this discussion). So if we\nstill only expect to use this for rare cases, which may easily end up\nwith an OOM at the moment, the extra cost might be acceptable.\n\nBut if we plan to use this more widely (say, allow hashjoins even for\ncases that we know won't fit into work_mem), then the extra cost would\nbe an issue. But even then it should be included in the cost estimate, \nand switch the plan to a merge join when appropriate.\n\nOf course, maybe there are many data sets with enough skew to consume \nexplosive growth and consume a lot of memory, but not enough to trigger \nOOM. Those cases may get slower, but I think that's OK. If appropriate,\nthe user can increase work_mem and get the \"good\" plan.\n\nFWIW this is a challenge for all approaches discussed in this thread,\nnot just this particular one. 
We're restricting the resources available\nto the query, switching to something (likely) slower.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 5 Jun 2019 00:31:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 12:08 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jun 4, 2019 at 2:47 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > Oops! You are totally right.\n> > I will amend the idea:\n> > For each chunk on the inner side, loop through both the original batch\n> > file and the unmatched outer tuples file created for the last chunk.\n> > Emit any matches and write out any unmatched tuples to a new unmatched\n> > outer tuples file.\n> >\n> > I think, in the worst case, if no tuples from the outer have a match,\n> > you end up writing out all of the outer tuples for each chunk on the\n> > inner side. However, using the match bit in the tuple header solution\n> > would require this much writing.\n> > Probably the bigger problem is that in this worst case you would also\n> > need to read double the number of outer tuples for each inner chunk.\n> >\n> > However, in the best case it seems like it would be better than the\n> > match bit/write everything from the outer side out solution.\n>\n> I guess so, but the downside of needing to read twice as many outer\n> tuples for each inner chunk seems pretty large. It would be a lot\n> nicer if we could find a way to store the matched-bits someplace other\n> than where we are storing the tuples, what Thomas called a\n> bits-in-order scheme, because then the amount of additional read and\n> write I/O would be tiny -- one bit per tuple doesn't add up very fast.\n>\n> In the scheme you propose here, I think that after you read the\n> original outer tuples for each chunk and the unmatched outer tuples\n> for each chunk, you'll have to match up the unmatched tuples to the\n> original tuples, probably by using memcmp() or something. Otherwise,\n> when a new match occurs, you won't know which tuple should now not be\n> emitted into the new unmatched outer tuples file that you're going to\n> produce. So I think what's going to happen is that you'll read the\n> original batch file, then read the unmatched tuples file and use that\n> to set or not set a bit on each tuple in memory, then do the real work\n> setting more bits, then write out a new unmatched-tuples file with the\n> tuples that still don't have the bit set. 
So your unmatched tuple\n> file is basically a list of tuple identifiers in the least compact\n> form imaginable: the tuple is identified by the entire tuple contents.\n> That doesn't seem very appealing, although I expect that it would\n> still win for some queries.\n>\n>\nI'm not sure I understand why you would need to compare the original\ntuples to the unmatched tuples file.\n\nThis is the example I used to try and reason through it.\n\nlet's say you have a batch (you are joining two single column tables)\nand your outer side is:\n5,7,9,11,10,11\nand your inner is:\n7,10,7,12,5,9\nand for the inner, let's say that only two values can fit in memory,\nso it is split into 3 chunks:\n7,10 | 7,12 | 5,9\nThe first time you iterate through the outer side (joining it to the\nfirst chunk), you emit as matched\n7,7\n10,10\nand write to unmatched tuples file\n5\n9\n11\n11\nThe second time you iterate through the outer side (joining it to the\nsecond chunk) you emit as matched\n7,7\nThen, you iterate again through the outer side a third time to join it\nto the unmatched tuples in the unmatched tuples file (from the first\nchunk) and write the following to a new unmatched tuples file:\n5\n9\n11\n11\nThe fourth time you iterate through the outer side (joining it to the\nthird chunk), you emit as matched\n5,5\n9,9\nThen you iterate a fifth time through the outer side to join it to the\nunmatched tuples in the unmatched tuples file (from the second chunk)\nand write the following to a new unmatched tuples file:\n11\n11\nNow that all chunks from the inner side have been processed, you can\nloop through the final unmatched tuples file, NULL-extend, and emit\nthem\n\nWouldn't that work?\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 6 Jun 2019 16:31:46 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 12:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jun 4, 2019 at 3:09 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > One question I have is, how would the OR'd together bitmap be\n> > propagated to workers after the first chunk? That is, when there are\n> > no tuples left in the outer bunch, for a given inner chunk, would you\n> > load the bitmaps from each worker into memory, OR them together, and\n> > then write the updated bitmap back out so that each worker starts with\n> > the updated bitmap?\n>\n> I was assuming we'd elect one participant to go read all the bitmaps,\n> OR them together, and generate all the required null-extended tuples,\n> sort of like the PHJ_BUILD_ALLOCATING, PHJ_GROW_BATCHES_ALLOCATING,\n> PHJ_GROW_BUCKETS_ALLOCATING, and/or PHJ_BATCH_ALLOCATING states only\n> involve one participant being active at a time. Now you could hope for\n> something better -- why not parallelize that work? But on the other\n> hand, why not start simple and worry about that in some future patch\n> instead of right away? A committed patch that does something good is\n> better than an uncommitted patch that does something AWESOME.\n>\n>\nWhat if you have a lot of tuples -- couldn't the bitmaps get pretty\nbig? And then you have to OR them all together and if you can't put\nthe whole bitmap from each worker into memory at once to do it, it\nseems like it would be pretty slow. (I mean maybe not as slow as\nreading the outer side 5 times when you only have 3 chunks on the\ninner + all the extra writes from my unmatched tuple file idea, but\nstill...)\n\n-- \nMelanie Plageman\n\nOn Tue, Jun 4, 2019 at 12:15 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jun 4, 2019 at 3:09 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> One question I have is, how would the OR'd together bitmap be\n> propagated to workers after the first chunk? That is, when there are\n> no tuples left in the outer bunch, for a given inner chunk, would you\n> load the bitmaps from each worker into memory, OR them together, and\n> then write the updated bitmap back out so that each worker starts with\n> the updated bitmap?\n\nI was assuming we'd elect one participant to go read all the bitmaps,\nOR them together, and generate all the required null-extended tuples,\nsort of like the PHJ_BUILD_ALLOCATING, PHJ_GROW_BATCHES_ALLOCATING,\nPHJ_GROW_BUCKETS_ALLOCATING, and/or PHJ_BATCH_ALLOCATING states only\ninvolve one participant being active at a time. Now you could hope for\nsomething better -- why not parallelize that work? But on the other\nhand, why not start simple and worry about that in some future patch\ninstead of right away? A committed patch that does something good is\nbetter than an uncommitted patch that does something AWESOME.\nWhat if you have a lot of tuples -- couldn't the bitmaps get prettybig? And then you have to OR them all together and if you can't putthe whole bitmap from each worker into memory at once to do it, itseems like it would be pretty slow. (I mean maybe not as slow asreading the outer side 5 times when you only have 3 chunks on theinner + all the extra writes from my unmatched tuple file idea, but still...) -- Melanie Plageman",
"msg_date": "Thu, 6 Jun 2019 16:33:31 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Admittedly I don't have a patch, just a bunch of handwaving. One\n> reason I haven't attempted to write it is because although I know how\n> to do the non-parallel version using a BufFile full of match bits in\n> sync with the tuples for outer joins, I haven't figured out how to do\n> it for parallel-aware hash join, because then each loop over the outer\n> batch could see different tuples in each participant. You could use\n> the match bit in HashJoinTuple header, but then you'd have to write\n> all the tuples out again, which is more IO than I want to do. I'll\n> probably start another thread about that.\n>\n>\nGoing back to the idea of using the match bit in the HashJoinTuple header\nand writing out all of the outer side for every chunk of the inner\nside, I was wondering if there was something we could do that was kind\nof like mmap'ing the outer side file to give the workers in parallel\nhashjoin the ability to update a match bit in the tuple in place and\navoid writing the whole outer side out each time.\n\n-- \nMelanie Plageman\n\nOn Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\nAdmittedly I don't have a patch, just a bunch of handwaving. One\nreason I haven't attempted to write it is because although I know how\nto do the non-parallel version using a BufFile full of match bits in\nsync with the tuples for outer joins, I haven't figured out how to do\nit for parallel-aware hash join, because then each loop over the outer\nbatch could see different tuples in each participant. You could use\nthe match bit in HashJoinTuple header, but then you'd have to write\nall the tuples out again, which is more IO than I want to do. I'll\nprobably start another thread about that.Going back to the idea of using the match bit in the HashJoinTuple headerand writing out all of the outer side for every chunk of the innerside, I was wondering if there was something we could do that was kindof like mmap'ing the outer side file to give the workers in parallelhashjoin the ability to update a match bit in the tuple in place andavoid writing the whole outer side out each time.-- Melanie Plageman",
"msg_date": "Thu, 6 Jun 2019 16:37:19 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 04:37:19PM -0700, Melanie Plageman wrote:\n>On Thu, May 16, 2019 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n>> Admittedly I don't have a patch, just a bunch of handwaving. One\n>> reason I haven't attempted to write it is because although I know how\n>> to do the non-parallel version using a BufFile full of match bits in\n>> sync with the tuples for outer joins, I haven't figured out how to do\n>> it for parallel-aware hash join, because then each loop over the outer\n>> batch could see different tuples in each participant. You could use\n>> the match bit in HashJoinTuple header, but then you'd have to write\n>> all the tuples out again, which is more IO than I want to do. I'll\n>> probably start another thread about that.\n>>\n>>\n>Going back to the idea of using the match bit in the HashJoinTuple header\n>and writing out all of the outer side for every chunk of the inner\n>side, I was wondering if there was something we could do that was kind\n>of like mmap'ing the outer side file to give the workers in parallel\n>hashjoin the ability to update a match bit in the tuple in place and\n>avoid writing the whole outer side out each time.\n>\n\nI think this was one of the things we discussed in Ottawa - we could pass\nindex of the tuple (in the batch) along with the tuple, so that each\nworker know which bit to set.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 7 Jun 2019 16:05:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 04:33:31PM -0700, Melanie Plageman wrote:\n>On Tue, Jun 4, 2019 at 12:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Tue, Jun 4, 2019 at 3:09 PM Melanie Plageman\n>> <melanieplageman@gmail.com> wrote:\n>> > One question I have is, how would the OR'd together bitmap be\n>> > propagated to workers after the first chunk? That is, when there are\n>> > no tuples left in the outer bunch, for a given inner chunk, would you\n>> > load the bitmaps from each worker into memory, OR them together, and\n>> > then write the updated bitmap back out so that each worker starts with\n>> > the updated bitmap?\n>>\n>> I was assuming we'd elect one participant to go read all the bitmaps,\n>> OR them together, and generate all the required null-extended tuples,\n>> sort of like the PHJ_BUILD_ALLOCATING, PHJ_GROW_BATCHES_ALLOCATING,\n>> PHJ_GROW_BUCKETS_ALLOCATING, and/or PHJ_BATCH_ALLOCATING states only\n>> involve one participant being active at a time. Now you could hope for\n>> something better -- why not parallelize that work? But on the other\n>> hand, why not start simple and worry about that in some future patch\n>> instead of right away? A committed patch that does something good is\n>> better than an uncommitted patch that does something AWESOME.\n>>\n>>\n>What if you have a lot of tuples -- couldn't the bitmaps get pretty\n>big? And then you have to OR them all together and if you can't put\n>the whole bitmap from each worker into memory at once to do it, it\n>seems like it would be pretty slow. (I mean maybe not as slow as\n>reading the outer side 5 times when you only have 3 chunks on the\n>inner + all the extra writes from my unmatched tuple file idea, but\n>still...)\n>\n\nYes, they could get quite big, and I think you're right we need to\nkeep that in mind, because it's on the outer (often quite large) side of\nthe join. And if we're aiming to restrict memory usage, it'd be weird to\njust ignore this.\n\nBut I think Thomas Munro originally proposed to treat this as a separate\nBufFile, so my assumption was each worker would simply rewrite the bitmap\nrepeatedly for each hash table fragment. That means a bit more I/O, but as\nthose files are buffered and written in 8kB pages, with just 1 bit per\ntuple. I think that's pretty OK and way cheaper that rewriting the whole\nbatch, where each tuple can be hundreds of bytes.\n\nAlso, it does not require any concurrency control, which rewriting the\nbatches themselves probably does (because we'd be feeding the tuples into\nsome shared file, I suppose). Except for the final step when we need to\nmerge the bitmaps, of course.\n\nSo I think this would work, it does not have the issue with using too much\nmemory, and I don't think the overhead is too bad.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Fri, 7 Jun 2019 16:17:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, Jun 6, 2019 at 7:31 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I'm not sure I understand why you would need to compare the original\n> tuples to the unmatched tuples file.\n\nI think I was confused. Actually, I'm still not sure I understand this part:\n\n> Then, you iterate again through the outer side a third time to join it\n> to the unmatched tuples in the unmatched tuples file (from the first\n> chunk) and write the following to a new unmatched tuples file:\n> 5\n> 9\n> 11\n> 11\n\nand likewise here\n\n> Then you iterate a fifth time through the outer side to join it to the\n> unmatched tuples in the unmatched tuples file (from the second chunk)\n> and write the following to a new unmatched tuples file:\n> 11\n> 11\n\nSo you refer to joining the outer side to the unmatched tuples file,\nbut how would that tell you which outer tuples had no matches on the\ninner side? I think what you'd need to do is anti-join the unmatched\ntuples file to the current inner batch. So the algorithm would be\nsomething like:\n\nfor each inner batch:\n for each outer tuple:\n if tuple matches inner batch then emit match\n if tuple does not match inner batch and this is the first inner batch:\n write tuple to unmatched tuples file\n if this is not the first inner batch:\n for each tuple from the unmatched tuples file:\n if tuple does not match inner batch:\n write to new unmatched tuples file\n discard previous unmatched tuples file and use the new one for the\nnext iteration\n\nfor each tuple in the final unmatched tuples file:\n null-extend and emit\n\nIf that's not what you have in mind, maybe you could provide some\nsimilar pseudocode? Or you can just ignore me. I'm not trying to\ninterfere with an otherwise-fruitful discussion by being the only one\nin the room who is confused...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 10:30:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 10:17 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Yes, they could get quite big, and I think you're right we need to\n> keep that in mind, because it's on the outer (often quite large) side of\n> the join. And if we're aiming to restrict memory usage, it'd be weird to\n> just ignore this.\n>\n> But I think Thomas Munro originally proposed to treat this as a separate\n> BufFile, so my assumption was each worker would simply rewrite the bitmap\n> repeatedly for each hash table fragment. That means a bit more I/O, but as\n> those files are buffered and written in 8kB pages, with just 1 bit per\n> tuple. I think that's pretty OK and way cheaper that rewriting the whole\n> batch, where each tuple can be hundreds of bytes.\n\nYes, this is also my thought. I'm not 100% sure I understand\nMelanie's proposal, but I think that it involves writing every\nstill-unmatched outer tuple for every inner batch. This proposal --\nassuming we can get the tuple numbering worked out -- involves writing\na bit for every outer tuple for every inner batch. So each time you\ndo an inner batch, you write either (a) one bit for EVERY outer tuple\nor (b) the entirety of each unmatched tuple. It's possible for the\nlatter to be cheaper if the number of unmatched tuples is really,\nreally tiny, but it's not very likely.\n\nFor example, suppose that you've got 4 batches and each batch matches\n99% of the tuples, which are each 50 bytes wide. After each batch,\napproach A writes 1 bit per tuple, so a total of 4 bits per tuple\nafter 4 batches. Approach B writes a different amount of data after\neach batch. After the first batch, it writes 1% of the tuples, and\nfor each one written it writes 50 bytes, so it writes 50 bytes * 0.01\n= ~4 bits/tuple. That's already equal to what approach A wrote after\nall 4 batches, and it's going to do a little more I/O over the course\nof the remaining matches - although not much, because the unmatched\ntuples file will be very very tiny after we eliminate 99% of the 1%\nthat survived the first batch. However, these are extremely favorable\nassumptions for approach B. If the tuples are wider or the batches\nmatch only say 20% of the tuples, approach B is going to be waaaay\nmore I/O.\n\nAssuming I understand correctly, which I may not.\n\n> Also, it does not require any concurrency control, which rewriting the\n> batches themselves probably does (because we'd be feeding the tuples into\n> some shared file, I suppose). Except for the final step when we need to\n> merge the bitmaps, of course.\n\nI suppose that rewriting the batches -- or really the unmatched tuples\nfile -- could just use a SharedTuplestore, so we probably wouldn't\nneed a lot of new code for this. I don't know whether contention\nwould be a problem or not.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 7 Jun 2019 10:47:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, Jun 7, 2019 at 7:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jun 6, 2019 at 7:31 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I'm not sure I understand why you would need to compare the original\n> > tuples to the unmatched tuples file.\n>\n> I think I was confused. Actually, I'm still not sure I understand this\n> part:\n>\n> > Then, you iterate again through the outer side a third time to join it\n> > to the unmatched tuples in the unmatched tuples file (from the first\n> > chunk) and write the following to a new unmatched tuples file:\n> > 5\n> > 9\n> > 11\n> > 11\n>\n> and likewise here\n>\n> > Then you iterate a fifth time through the outer side to join it to the\n> > unmatched tuples in the unmatched tuples file (from the second chunk)\n> > and write the following to a new unmatched tuples file:\n> > 11\n> > 11\n>\n> So you refer to joining the outer side to the unmatched tuples file,\n> but how would that tell you which outer tuples had no matches on the\n> inner side? I think what you'd need to do is anti-join the unmatched\n> tuples file to the current inner batch. So the algorithm would be\n> something like:\n>\n> for each inner batch:\n> for each outer tuple:\n> if tuple matches inner batch then emit match\n> if tuple does not match inner batch and this is the first inner batch:\n> write tuple to unmatched tuples file\n> if this is not the first inner batch:\n> for each tuple from the unmatched tuples file:\n> if tuple does not match inner batch:\n> write to new unmatched tuples file\n> discard previous unmatched tuples file and use the new one for the\n> next iteration\n>\n> for each tuple in the final unmatched tuples file:\n> null-extend and emit\n>\n> If that's not what you have in mind, maybe you could provide some\n> similar pseudocode? Or you can just ignore me. I'm not trying to\n> interfere with an otherwise-fruitful discussion by being the only one\n> in the room who is confused...\n>\n>\nYep, the pseudo-code you have above is exactly what I was thinking. I\nhave been hacking around on my fork implementing this for the\nnon-parallel hashjoin (my idea was to implement a parallel-friendly\ndesign but for the non-parallel-aware case and then go back and\nimplement it for the parallel-aware hashjoin later) and have some\nthoughts.\n\nI'll call the whole adaptive hashjoin fallback strategy \"chunked\nhashloop join\" for the purposes of this description.\nI'll abbreviate the three approaches we've discussed like this:\n\nApproach A is using a separate data structure (a bitmap was the\nsuggested pick) to track the match status of each outer tuple\n\nApproach B is the inner-join + anti-join writing out unmatched tuples\nto a new file for every iteration through the outer side batch (for\neach chunk of inner)\n\nApproach C is setting a match bit in the tuple and then writing all\nouter side tuples out for every iteration through the outer side (for\neach chunk of inner)\n\nTo get started with I implemented the inner side chunking logic which\nis required for all of the approaches. 
I did a super basic version\nwhich only allows nbatches to be increased during the initial\nhashtable build, not during loading of subsequent batches. If a batch\nafter batch 0 runs out of work_mem, it just loads what will fit and\nsaves the inner page offset in the hashjoin state.\n\nPart of the allure of approaches B and C for me was that they seemed\nlike they would require less code complexity and concurrency control\nbecause you could just write out the unmatched tuples (to probably a\nSharedTupleStore) without having to care about their original order or\npage offset. It seemed like it didn't require treating a spill file\nlike it permits random access nor treating the tuples as ordered in a\nSharedTupleStore.\n\nThe benefit I saw of approach B over approach C was that, in the case\nwhere more tuples are matches, it requires fewer writes than approach\nC--at the cost of additional reads. It would require at most the same\nnumber of writes as approach C.\n\nApproach B turned out to be problematic for many reasons. First of\nall, with approach B, you end up having to keep track of an additional\nnew spill file for unmatched outer tuples for every chunk of the inner\nside. Each spill file could have a different number of tuples, so any\nreuse of the file seems difficult to get right. For approach C (which\nI did not try to implement), it seems like you could get away with\nonly maintaining two spill files for the outer side--one to be read\nfrom and one to write to. I'm sure it is more complicated than this.\nHowever, it seemed like, for approach B, you would need to create and\ndestroy entirely new unmatched tuple spill files for every chunk.\n\nApproach B was not simpler when it came to the code complexity of the\nstate machine either -- you have to do something different for the\nfirst chunk than the other chunks (write to the unmatched tups file\nbut read from the original spill file, whereas other chunks require\nwriting to the unmatched tups file and reading from the unmatched tups\nfile), which requires complexity in the state machine (and, I imagine,\nworker orchestration in the parallel implementation). And, you still\nhave to process all of the unmatched tups, null-extend them, and emit\nthem before advancing the batch.\n\nSo, I decided to try out approach A. The crux of the idea (my\nunderstanding of it, at least) is to keep a separate data structure\nwhich has the match status of each outer tuple in the batch. The\ndiscussion was to do this with a bitmap in a file, but I started\nwith a list in memory.\n\nWhat I have so far is a list of structs--one for each outer\ntuple--where each struct has a match flag and the page offset of that\ntuple in the outer spill file. 
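Roughly like this (just a sketch; the struct and field names here are
made up):

    typedef struct OuterTupleStatus
    {
        bool        match;      /* has this outer tuple matched yet? */
        off_t       offset;     /* tuple's offset in the outer spill file */
    } OuterTupleStatus;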
I add each struct to the list when I am\ngetting each tuple from a spill file in HJ_NEED_NEW_OUTER state to\njoin to the first chunk of the inner, and, since I only do this when I\nam getting an outer tuple from the spill file, I also grab the page\noffset and set it in the struct in the list.\n\nAs I am creating the list, and, while processing each subsequent chunk\nof the inner, if the tuple is a match, I set the match flag to true in\nthat outer tuple's member of the list.\n\nThen, after finishing the whole inner batch, I loop through the list,\nand, for each unmatched tuple, I go to that offset in the spill file\nand get that tuple and NULL-extend and emit it.\n\n(Currently, I have a problem with the list and it doesn't produce\ncorrect results yet.)\n\nThinking about how to move from my list of offsets to using a bitmap,\nI got confused.\n\nLet me try to articulate what I think the bitmap implementation would look\nlike:\n\nBefore doing chunked hashloop join for any batch, we would need to\nknow how many tuples are in the outer batch to make the bitmap the\ncorrect size.\n\nWe could do this either with one loop through the whole outer batch\nfile right before joining it to the inner batch (an extra loop).\n\nOr we could try and do it during the first read of the outer relation\nwhen processing batch 0 and keep a data structure with each batch\nnumber mapped to the number of outer tuples spilled to that batch.\n\nThen, once we have this number, before joining the outer to the first\nchunk of the inner, we would generate a bitmap with ntuples in outer\nbatch number of bits and save it somewhere (eventually in a file,\ninitially in the hjstate).\n\nNow, I am back to the original problem--how do you know which bit to\nset without somehow numbering the tuples with a unique identifier? Is\nthere anything that uniquely identifies a spill file tuple except its\noffset?\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 11 Jun 2019 11:35:23 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 2:35 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Let me try to articulate what I think the bitmap implementation would look\n> like:\n>\n> Before doing chunked hashloop join for any batch, we would need to\n> know how many tuples are in the outer batch to make the bitmap the\n> correct size.\n\nI was thinking that we wouldn't need to know this, because if the\nbitmap is in a file, we can always extend it. To imagine a needlessly\ndumb implementation, consider:\n\nset-bit(i):\n let b = i / 8\n while (b <= length of file in bytes)\n append '\\0' to file\n read byte b from the file\n modify the byte you read by setting bit i % 8\n write the modified byte back to the file\n\nIn reality, we'd have some kind of buffer. I imagine locality of\nreference would be pretty good, because the outer tuples are coming to\nus in increasing-tuple-number order.\n\nIf you want to prototype with an in-memory implementation, I'd suggest\njust pallocing 8kB initially and repallocing when the tuple number\ngets too big. It'll be sorta inefficient, but who cares? It's\ncertainly way cheaper than an extra pass over the data, and for a POC\nit should be fine.\n\n> Now, I am back to the original problem--how do you know which bit to\n> set without somehow numbering the tuples with a unique identifier? Is\n> there anything that uniquely identifies a spill file tuple except its\n> offset?\n\nI don't think so. Approach A hinges on being able to get the tuple\nnumber reliably and without contortions, and I have not tried to make\nthat work. So maybe it's really hard or not possible or something.\nMy intuition is that it ought to work, but that and a dollar will get\nyou cup of coffee, so...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jun 2019 10:09:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, Jun 13, 2019 at 7:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jun 11, 2019 at 2:35 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > Let me try to articulate what I think the bitmap implementation would\n> look\n> > like:\n> >\n> > Before doing chunked hashloop join for any batch, we would need to\n> > know how many tuples are in the outer batch to make the bitmap the\n> > correct size.\n>\n> I was thinking that we wouldn't need to know this, because if the\n> bitmap is in a file, we can always extend it. To imagine a needlessly\n> dumb implementation, consider:\n>\n> set-bit(i):\n> let b = i / 8\n> while (b <= length of file in bytes)\n> append '\\0' to file\n> read byte b from the file\n> modify the byte you read by setting bit i % 8\n> write the modified byte back to the file\n>\n> In reality, we'd have some kind of buffer. I imagine locality of\n> reference would be pretty good, because the outer tuples are coming to\n> us in increasing-tuple-number order.\n>\n> If you want to prototype with an in-memory implementation, I'd suggest\n> just pallocing 8kB initially and repallocing when the tuple number\n> gets too big. It'll be sorta inefficient, but who cares? It's\n> certainly way cheaper than an extra pass over the data, and for a POC\n> it should be fine.\n>\n>\nThat approach makes sense. I have attached the first draft of a patch\nI wrote to do parallel-oblivious hashjoin fallback. I haven't switched\nto using the approach with a bitmap (or bytemap :) yet because I found\nthat using a linked list was easier to debug for now.\n\n(Also, I did things like include the value of the outer tuple\nattribute in the linked list nodes and assumed it was an int because\nthat is what I have been testing with--this would definitely be blown\naway with everything else that is just there to help me with debugging\nright now).\n\nI am refactoring it now to change the state machine to make more sense\nbefore changing the representation of the match statuses.\n\nSo, specifically, I am interested in high-level gut checks on the\nstate machine I am currently implementing (not reflected in this\npatch).\n\nThis patch adds only one state -- HJ_ADAPTIVE_EMIT_UNMATCHED-- which\nduplicates the logic of HJ_FILL_OUTER_TUPLE. Also, in this patch, the\nexisting HJ_NEED_NEW_BATCH state is used for new chunks. After\nseparating the logic that advanced the batches from that which loaded\na batch, it felt like NEED_NEW_CHUNK did not need to be its own state.\nWhen a new chunk is required, if more exist, then the next one should\nbe loaded and outer should be rewound. Rewinding of outer was already\nbeing done (seek to the beginning of the outer spill file is the\nequivalent of \"loading\" it).\n\nCurrently, I am tracking a lot of state in the HashJoinState, which is\nfiddly and error-prone.\n\nNew state machine (questions posed below):\nTo refactor the state machine, I am thinking of adding a new state\nHJ_NEED_NEW_INNER_CHUNK which we would transition to when outer batch\nis over. We would load the new chunk, rewind the outer, and transition\nto HJ_NEED_NEW_OUTER. However, we would have to emit unmatched inner\ntuples for that chunk (in case of ROJ) before that transition to\nHJ_NEED_NEW_OUTER. This feels a little less clean because the\nHJ_FILL_INNER_TUPLES state is transitioned into when the inner batch\nis over as well. 
And, in the current flow I am sketching out, if the\ninner batch is exhausted, we check if we should emit NULL-extended\ninner tuples and then check if we should emit NULL-extended outer\ntuples (since both batches are exhausted), whereas when a single inner\nchunk is done being processed, we only want to emit NULL-extended\ntuples for the inner side. Not to mention HJ_NEED_NEW_INNER_CHUNK\nwould transition to HJ_NEED_NEW_OUTER directly instead of first\nadvancing the batches. This can all be hacked around with if\nstatements, but, my point here is that if I am refactoring the state\nmachine to be more clear, ideally, it would be more clear.\n\nA similar problem happens with HJ_FILL_OUTER_TUPLE and the\nnon-fallback case. For the fallback case, with this implementation,\nyou must wait until after exhausting the inner side to emit\nNULL-extended outer tuples. In the non-fallback case -- a batch which\ncan fit in memory or, always, for batch 0 -- the unmatched outer\ntuples are emitted as they are encountered.\n\nIt makes most sense in the context of the state machine, as far as I\ncan tell, after exhausting both outer and inner batch, to emit\nNULL-extended inner tuples for that chunk and then emit NULL-extended\nouter tuples for that batch.\n\nSo, requiring an additional read of the outer side to emit\nNULL-extended tuples at the end of the inner batch would slow things\ndown for the non-fallback case, however, it seems like special casing\nthe fallback case would make the state machine much more confusing --\nbasically like mashing two totally different state machines together.\n\nThese questions will probably make a lot more sense with corresponding\ncode, so I will follow up with the second version of the state machine\npatch once I finish it.\n\n-- \nMelanie Plageman",
"msg_date": "Tue, 18 Jun 2019 15:24:08 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 18, 2019 at 3:24 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n>\n> These questions will probably make a lot more sense with corresponding\n> code, so I will follow up with the second version of the state machine\n> patch once I finish it.\n>\n>\nI have changed the state machine and resolved the questions I had\nraised in the previous email. This seems to work for the parallel and\nnon-parallel cases. I have not yet rewritten the unmatched outer tuple\nstatus as a bitmap in a spill file (for ease of debugging).\n\nBefore doing that, I wanted to ask what a desirable fallback condition\nwould be. In this patch, fallback to hashloop join happens only when\ninserting tuples into the hashtable after batch 0 when inserting\nanother tuple from the batch file would exceed work_mem. This means\nyou can't increase nbatches, which, I would think is undesirable.\n\nI thought a bit about when fallback should happen. So, let's say that\nwe would like to fallback to hashloop join when we have increased\nnbatches X times. At that point, since we do not want to fall back to\nhashloop join for all batches, we have to make a decision. After\nincreasing nbatches the Xth time, do we then fall back for all batches\nfor which inserting inner tuples exceeds work_mem? Do we use this\nstrategy but work_mem + some fudge factor?\n\nOr, do we instead try to determine if data skew led us to increase\nnbatches both times and then determine which batch, given new\nnbatches, contains that data, set fallback to true only for that\nbatch, and let all other batches use the existing logic (with no\nfallback option) unless they contain a value which leads to increasing\nnbatches X number of times?\n\n-- \nMelanie Plageman",
"msg_date": "Wed, 3 Jul 2019 14:22:09 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Wed, Jul 03, 2019 at 02:22:09PM -0700, Melanie Plageman wrote:\n>On Tue, Jun 18, 2019 at 3:24 PM Melanie Plageman <melanieplageman@gmail.com>\n>wrote:\n>\n>>\n>> These questions will probably make a lot more sense with corresponding\n>> code, so I will follow up with the second version of the state machine\n>> patch once I finish it.\n>>\n>>\n>I have changed the state machine and resolved the questions I had\n>raised in the previous email. This seems to work for the parallel and\n>non-parallel cases. I have not yet rewritten the unmatched outer tuple\n>status as a bitmap in a spill file (for ease of debugging).\n>\n>Before doing that, I wanted to ask what a desirable fallback condition\n>would be. In this patch, fallback to hashloop join happens only when\n>inserting tuples into the hashtable after batch 0 when inserting\n>another tuple from the batch file would exceed work_mem. This means\n>you can't increase nbatches, which, I would think is undesirable.\n>\n\nYes, I think that's undesirable.\n\n>I thought a bit about when fallback should happen. So, let's say that\n>we would like to fallback to hashloop join when we have increased\n>nbatches X times. At that point, since we do not want to fall back to\n>hashloop join for all batches, we have to make a decision. After\n>increasing nbatches the Xth time, do we then fall back for all batches\n>for which inserting inner tuples exceeds work_mem? Do we use this\n>strategy but work_mem + some fudge factor?\n>\n>Or, do we instead try to determine if data skew led us to increase\n>nbatches both times and then determine which batch, given new\n>nbatches, contains that data, set fallback to true only for that\n>batch, and let all other batches use the existing logic (with no\n>fallback option) unless they contain a value which leads to increasing\n>nbatches X number of times?\n>\n\nI think we should try to detect the skew and use this hashloop logic\nonly for the one batch. That's based on the assumption that the hashloop\nis less efficient than the regular hashjoin.\n\nWe may need to apply it even for some non-skewed (but misestimated)\ncases, though. At some point we'd need more than work_mem for BufFiles,\nat which point we ought to use this hashloop.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 14 Jul 2019 01:44:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "So, I've rewritten the patch to use a BufFile for the outer table\nbatch file tuples' match statuses and write bytes to and from the file\nwhich start as 0 and, upon encountering a match for a tuple, I set its\nbit in the file to 1 (also rebased with current master).\n\nIt, of course, only works for parallel-oblivious hashjoin -- it relies\non deterministic order of tuples encountered in the outer side batch\nfile to set the right match bit and uses a counter to decide which bit\nto set.\n\nI did the \"needlessly dumb implementation\" Robert mentioned, though,\nI thought about it and couldn't come up with a much smarter way to\nwrite match bits to a file. I think there might be an optimization\nopportunity in not writing the current_byte to the file each time that\nthe outer tuple matches and only doing this once we have advanced to a\ntuple number that wouldn't have its match bit in the current_byte. I\ndidn't do that to keep it simple, and, I suspect there might be a bit\nof gymnastics needed to make sure that that byte is actually written\nto the file in case we exit from some other state before we encounter\nthe tuple represented in the last bit in that byte.\n\nI plan to work on a separate implementation for parallel hashjoin\nnext--to understand what is required. I believe the logic to decide\nwhen to fall back should be fairly easy to slot in at the end once\nwe've decided what that logic is.\n\nOn Sat, Jul 13, 2019 at 4:44 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Wed, Jul 03, 2019 at 02:22:09PM -0700, Melanie Plageman wrote:\n> >On Tue, Jun 18, 2019 at 3:24 PM Melanie Plageman <\n> melanieplageman@gmail.com>\n> >\n> >Before doing that, I wanted to ask what a desirable fallback condition\n> >would be. In this patch, fallback to hashloop join happens only when\n> >inserting tuples into the hashtable after batch 0 when inserting\n> >another tuple from the batch file would exceed work_mem. This means\n> >you can't increase nbatches, which, I would think is undesirable.\n> >\n>\n> Yes, I think that's undesirable.\n>\n> >I thought a bit about when fallback should happen. So, let's say that\n> >we would like to fallback to hashloop join when we have increased\n> >nbatches X times. At that point, since we do not want to fall back to\n> >hashloop join for all batches, we have to make a decision. After\n> >increasing nbatches the Xth time, do we then fall back for all batches\n> >for which inserting inner tuples exceeds work_mem? Do we use this\n> >strategy but work_mem + some fudge factor?\n> >\n> >Or, do we instead try to determine if data skew led us to increase\n> >nbatches both times and then determine which batch, given new\n> >nbatches, contains that data, set fallback to true only for that\n> >batch, and let all other batches use the existing logic (with no\n> >fallback option) unless they contain a value which leads to increasing\n> >nbatches X number of times?\n> >\n>\n> I think we should try to detect the skew and use this hashloop logic\n> only for the one batch. That's based on the assumption that the hashloop\n> is less efficient than the regular hashjoin.\n>\n\n> We may need to apply it even for some non-skewed (but misestimated)\n> cases, though. At some point we'd need more than work_mem for BufFiles,\n> at which point we ought to use this hashloop.\n>\n>\nI have not yet changed the logic for deciding to fall back from\nmy original design. 
It will still only fall back for a given batch if\nthat batch's inner batch file doesn't fit in memory. I haven't,\nhowever, changed the logic to allow it to increase the number of\nbatches some number of times or according to some criteria before\nfalling back for that batch.\n\n-- \nMelanie Plageman",
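For reference, the byte-at-a-time update boils down to something like this
(a sketch with made-up names; it assumes the status file was already
extended with a zero byte for every eight outer tuples in the batch):

#include "postgres.h"
#include "storage/buffile.h"

/* set the match bit for the outer tuple numbered tupleno */
static void
set_match_bit(BufFile *statusFile, uint64 tupleno)
{
    off_t       byteno = (off_t) (tupleno / 8);
    unsigned char cur_byte;

    if (BufFileSeek(statusFile, 0, byteno, SEEK_SET) != 0)
        elog(ERROR, "could not seek in match status file");
    if (BufFileRead(statusFile, &cur_byte, 1) != 1)
        elog(ERROR, "could not read match status byte");

    cur_byte |= (unsigned char) (1 << (tupleno % 8));

    if (BufFileSeek(statusFile, 0, byteno, SEEK_SET) != 0)
        elog(ERROR, "could not seek in match status file");
    BufFileWrite(statusFile, &cur_byte, 1);
}
\n\n-- \nMelanie Plageman",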
"msg_date": "Tue, 30 Jul 2019 11:46:59 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 2:47 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I did the \"needlessly dumb implementation\" Robert mentioned, though,\n> I thought about it and couldn't come up with a much smarter way to\n> write match bits to a file. I think there might be an optimization\n> opportunity in not writing the current_byte to the file each time that\n> the outer tuple matches and only doing this once we have advanced to a\n> tuple number that wouldn't have its match bit in the current_byte. I\n> didn't do that to keep it simple, and, I suspect there might be a bit\n> of gymnastics needed to make sure that that byte is actually written\n> to the file in case we exit from some other state before we encounter\n> the tuple represented in the last bit in that byte.\n\nI mean, I was assuming we'd write in like 8kB blocks or something.\nDoing it a byte at a time seems like it'd produce way too many\nsyscals.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 30 Jul 2019 19:35:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 4:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 30, 2019 at 2:47 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I did the \"needlessly dumb implementation\" Robert mentioned, though,\n> > I thought about it and couldn't come up with a much smarter way to\n> > write match bits to a file. I think there might be an optimization\n> > opportunity in not writing the current_byte to the file each time that\n> > the outer tuple matches and only doing this once we have advanced to a\n> > tuple number that wouldn't have its match bit in the current_byte. I\n> > didn't do that to keep it simple, and, I suspect there might be a bit\n> > of gymnastics needed to make sure that that byte is actually written\n> > to the file in case we exit from some other state before we encounter\n> > the tuple represented in the last bit in that byte.\n>\n> I mean, I was assuming we'd write in like 8kB blocks or something.\n> Doing it a byte at a time seems like it'd produce way too many\n> syscals.\n>\n>\nFor the actual write to disk, I'm pretty sure I get that for free from\nthe BufFile API, no?\nI was more thinking about optimizing when I call BufFileWrite at all.\n\n-- \nMelanie Plageman\n\nOn Tue, Jul 30, 2019 at 4:36 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jul 30, 2019 at 2:47 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I did the \"needlessly dumb implementation\" Robert mentioned, though,\n> I thought about it and couldn't come up with a much smarter way to\n> write match bits to a file. I think there might be an optimization\n> opportunity in not writing the current_byte to the file each time that\n> the outer tuple matches and only doing this once we have advanced to a\n> tuple number that wouldn't have its match bit in the current_byte. I\n> didn't do that to keep it simple, and, I suspect there might be a bit\n> of gymnastics needed to make sure that that byte is actually written\n> to the file in case we exit from some other state before we encounter\n> the tuple represented in the last bit in that byte.\n\nI mean, I was assuming we'd write in like 8kB blocks or something.\nDoing it a byte at a time seems like it'd produce way too many\nsyscals.For the actual write to disk, I'm pretty sure I get that for free fromthe BufFile API, no?I was more thinking about optimizing when I call BufFileWrite at all.-- Melanie Plageman",
"msg_date": "Tue, 30 Jul 2019 20:07:21 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jul 30, 2019 at 8:07 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> For the actual write to disk, I'm pretty sure I get that for free from\n> the BufFile API, no?\n> I was more thinking about optimizing when I call BufFileWrite at all.\n\nRight. Clearly several existing buffile.c users regularly have very\nsmall BufFileWrite() size arguments. tuplestore.c, for one.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Jul 2019 20:11:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Wed, Jul 31, 2019 at 6:47 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> So, I've rewritten the patch to use a BufFile for the outer table\n> batch file tuples' match statuses and write bytes to and from the file\n> which start as 0 and, upon encountering a match for a tuple, I set its\n> bit in the file to 1 (also rebased with current master).\n>\n> It, of course, only works for parallel-oblivious hashjoin -- it relies\n> on deterministic order of tuples encountered in the outer side batch\n> file to set the right match bit and uses a counter to decide which bit\n> to set.\n>\n> I did the \"needlessly dumb implementation\" Robert mentioned, though,\n> I thought about it and couldn't come up with a much smarter way to\n> write match bits to a file. I think there might be an optimization\n> opportunity in not writing the current_byte to the file each time that\n> the outer tuple matches and only doing this once we have advanced to a\n> tuple number that wouldn't have its match bit in the current_byte. I\n> didn't do that to keep it simple, and, I suspect there might be a bit\n> of gymnastics needed to make sure that that byte is actually written\n> to the file in case we exit from some other state before we encounter\n> the tuple represented in the last bit in that byte.\n\nThanks for working on this! I plan to poke at it a bit in the next few weeks.\n\n> I plan to work on a separate implementation for parallel hashjoin\n> next--to understand what is required. I believe the logic to decide\n> when to fall back should be fairly easy to slot in at the end once\n> we've decided what that logic is.\n\nSeems like a good time for me to try to summarise what I think the\nmain problems are here:\n\n1. The match-bit storage problem already discussed. The tuples that\neach process receives while reading from SharedTupleStore are\nnon-deterministic (like other parallel scans). To use a bitmap-based\napproach, I guess we'd need to invent some way to give the tuples a\nstable identifier within some kind of densely packed number space that\nwe could use to address the bitmap, or take the IO hit and write all\nthe tuples back. That might involve changing the way SharedTupleStore\nholds data.\n\n2. Tricky problems relating to barriers and flow control. First, let\nme explain why PHJ doesn't support full/right outer joins yet. At\nfirst I thought it was going to be easy, because, although the shared\nmemory hash table is read-only after it has been built, it seems safe\nto weaken that only slightly and let the match flag be set by any\nprocess during probing: it's OK if two processes clobber each other's\nwrites, as the only transition is a single bit going strictly from 0\nto 1, and there will certainly be a full memory barrier before anyone\ntries to read those match bits. Then during the scan for unmatched,\nyou just have to somehow dole out hash table buckets or ranges of\nbuckets to processes on a first-come-first-served basis. But.... 
then\nI crashed into the following problem:\n\n* You can't begin the scan for unmatched tuples until every process\nhas finished probing (ie until you have the final set of match bits).\n* You can't wait for every process to finish probing, because any\nprocess that has emitted a tuple might never come back if there is\nanother node that is also waiting for all processes (ie deadlock\nagainst another PHJ doing the same thing), and probing is a phase that\nemits tuples.\n\nGenerally, it's not safe to emit tuples while you are attached to a\nBarrier, unless you're only going to detach from it, not wait at it,\nbecause emitting tuples lets the program counter escape your control.\nGenerally, it's not safe to detach from a Barrier while accessing\nresources whose lifetime it controls, such as a hash table, because\nthen it might go away underneath you.\n\nThe PHJ plans that are supported currently adhere to that programming\nrule and so don't have a problem: after the Barrier reaches the\nprobing phase, processes never wait for each other again so they're\nfree to begin emitting tuples. They just detach when they're done\nprobing, and the last to detach cleans up (frees the hash table etc).\nIf there is more than one batch, they detach from one batch and attach\nto another when they're ready (each batch has its own Barrier), so we\ncan consider the batches to be entirely independent.\n\nThere is probably a way to make a scan-for-unmatched-inner phase work,\npossibly involving another Barrier or something like that, but I ran\nout of time trying to figure it out and wanted to ship a working PHJ\nfor the more common plan types. I suppose PHLJ will face two variants\nof this problem: (1) you need to synchronise the loops (you can't dump\nthe hash table in preparation for the next loop until all have\nfinished probing for the current loop), and yet you've already emitted\ntuples, so you're not allowed to wait for other processes and they're\nnot allowed to wait for you, and (2) you can't start the\nscan-for-unmatched-outer until all the probe loops belonging to one\nbatch are done. The first problem is sort of analogous to a problem I\nfaced with batches in the first place, which Robert and I found a\nsolution to by processing the batches in parallel, and could perhaps\nbe solved in the same way: run the loops in parallel (if that sounds\ncrazy, recall that every worker has its own quota of work_mem and the\ndata is entirely prepartitioned up front, which is why we are able to\nrun the batches in parallel; in constrast, single-batch mode makes a\nhash table with a quota of nparticipants * work_mem). The second\nproblem is sort of analogous to the existing scan-for-unmatched-inner\nproblem that I haven't solved.\n\nI think there may be ways to make that general class of deadlock\nproblem go away in a future asynchronous executor model where N\nstreams conceptually run concurrently in event-driven nodes so that\ncontrol never gets stuck in a node, but that seems quite far off and I\nhaven't worked out the details. The same problem comes up in a\nhypothetical Parallel Repartition node: you're not done with your\npartition until all processes have run out of input tuples, so you\nhave to wait for all of them to send an EOF, so you risk deadlock if\nthey are waiting for you elsewhere in the tree. 
A stupid version of\nthe idea is to break the node up into a consumer part and a producer\npart, and put the producer into a subprocess so that its program\ncounter can never escape and deadlock somewhere in the consumer part\nof the plan. Obviously we don't want to have loads of extra OS\nprocesses all over the place, but I think you can get the same effect\nusing a form of asynchronous execution where the program counter jumps\nbetween nodes and streams based on readiness, and yields control\ninstead of blocking. Similar ideas have been proposed to deal with\nasynchronous IO.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
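The two "Generally" rules above are the crux of the whole thread, so a schematic may help. The following is not real executor code, just the safe pattern the supported PHJ plans follow; BarrierPhase(), BarrierArriveAndWait() and BarrierArriveAndDetach() are the real barrier.c primitives, while PHASE_BUILD, PHASE_PROBE, WAIT_EVENT_X and free_hash_table() are placeholders.

#include "storage/barrier.h"

/* Schematic of the deadlock-avoidance rule, not actual executor code. */
static void
safe_barrier_usage(Barrier *batch_barrier)
{
    switch (BarrierPhase(batch_barrier))
    {
        case PHASE_BUILD:
            /* No tuples emitted yet, so waiting here is safe. */
            BarrierArriveAndWait(batch_barrier, WAIT_EVENT_X);
            /* FALLTHROUGH */
        case PHASE_PROBE:
            /*
             * From here on tuples are emitted and the program counter
             * can escape to other nodes: never wait on this barrier
             * again, only detach.
             */
            break;
    }

    /* Done probing: detach without waiting; last one out cleans up. */
    if (BarrierArriveAndDetach(batch_barrier))
        free_hash_table();
}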
"msg_date": "Fri, 6 Sep 2019 17:34:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 10:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Seems like a good time for me to try to summarise what I think the\n> main problems are here:\n>\n> 1. The match-bit storage problem already discussed. The tuples that\n> each process receives while reading from SharedTupleStore are\n> non-deterministic (like other parallel scans). To use a bitmap-based\n> approach, I guess we'd need to invent some way to give the tuples a\n> stable identifier within some kind of densely packed number space that\n> we could use to address the bitmap, or take the IO hit and write all\n> the tuples back. That might involve changing the way SharedTupleStore\n> holds data.\n>\n\nThis I've dealt with by adding a tuplenum to the SharedTupleStore\nitself which I atomically increment in sts_puttuple().\nIn ExecParallelHashJoinPartitionOuter(), as each worker writes tuples\nto the batch files, they call sts_puttuple() and this increments the\nnumber so each tuple has a unique number.\nFor persisting this number, I added the tuplenum to the meta data\nsection of the MinimalTuple (along with the hashvalue -- there was a\ncomment about this meta data that said it could be used for other\nthings in the future, so this seemed like a good place to put it) and\nwrite that out to the batch file.\n\nAt the end of ExecParallelHashJoinPartitionOuter(), I make the outer\nmatch status bitmap file. I use the final tuplenum count to determine\nthe number of bytes to write to it. Each worker has a file with a\nbitmap which has the number of bytes required to represent the number\nof tuples in that batch.\n\nBecause one worker may beat the other(s) and build the whole batch\nfile for a batch before the others have a chance, I also make the\nouter match status bitmap file for workers who missed out in\nExecParallelHashJoinOuterGetTuple() using the final tuplenum as well.\n\n\n>\n> 2. Tricky problems relating to barriers and flow control. First, let\n> me explain why PHJ doesn't support full/right outer joins yet. At\n> first I thought it was going to be easy, because, although the shared\n> memory hash table is read-only after it has been built, it seems safe\n> to weaken that only slightly and let the match flag be set by any\n> process during probing: it's OK if two processes clobber each other's\n> writes, as the only transition is a single bit going strictly from 0\n> to 1, and there will certainly be a full memory barrier before anyone\n> tries to read those match bits. Then during the scan for unmatched,\n> you just have to somehow dole out hash table buckets or ranges of\n> buckets to processes on a first-come-first-served basis. But.... 
then\n> I crashed into the following problem:\n>\n> * You can't begin the scan for unmatched tuples until every process\n> has finished probing (ie until you have the final set of match bits).\n> * You can't wait for every process to finish probing, because any\n> process that has emitted a tuple might never come back if there is\n> another node that is also waiting for all processes (ie deadlock\n> against another PHJ doing the same thing), and probing is a phase that\n> emits tuples.\n>\n> Generally, it's not safe to emit tuples while you are attached to a\n> Barrier, unless you're only going to detach from it, not wait at it,\n> because emitting tuples lets the program counter escape your control.\n> Generally, it's not safe to detach from a Barrier while accessing\n> resources whose lifetime it controls, such as a hash table, because\n> then it might go away underneath you.\n>\n> The PHJ plans that are supported currently adhere to that programming\n> rule and so don't have a problem: after the Barrier reaches the\n> probing phase, processes never wait for each other again so they're\n> free to begin emitting tuples. They just detach when they're done\n> probing, and the last to detach cleans up (frees the hash table etc).\n> If there is more than one batch, they detach from one batch and attach\n> to another when they're ready (each batch has its own Barrier), so we\n> can consider the batches to be entirely independent.\n>\n> There is probably a way to make a scan-for-unmatched-inner phase work,\n> possibly involving another Barrier or something like that, but I ran\n> out of time trying to figure it out and wanted to ship a working PHJ\n> for the more common plan types. I suppose PHLJ will face two variants\n> of this problem: (1) you need to synchronise the loops (you can't dump\n> the hash table in preparation for the next loop until all have\n> finished probing for the current loop), and yet you've already emitted\n> tuples, so you're not allowed to wait for other processes and they're\n> not allowed to wait for you, and (2) you can't start the\n> scan-for-unmatched-outer until all the probe loops belonging to one\n> batch are done. The first problem is sort of analogous to a problem I\n> faced with batches in the first place, which Robert and I found a\n> solution to by processing the batches in parallel, and could perhaps\n> be solved in the same way: run the loops in parallel (if that sounds\n> crazy, recall that every worker has its own quota of work_mem and the\n> data is entirely prepartitioned up front, which is why we are able to\n> run the batches in parallel; in constrast, single-batch mode makes a\n> hash table with a quota of nparticipants * work_mem). The second\n> problem is sort of analogous to the existing scan-for-unmatched-inner\n> problem that I haven't solved.\n>\n>\nI \"solved\" these problem for now by having all workers except for one\ndetach from the outer batch file after finishing probing. The last\nworker to arrive does not detach from the batch and instead iterates\nthrough all of the workers' outer match status files per participant\nshared mem SharedTuplestoreParticipant) and create a single unified\nbitmap. 
All the other workers continue to wait at the barrier until\nthe sole remaining worker has finished with iterating through the\nouter match status bitmap files.\n\nAdmittedly, I'm still fighting with this step a bit, but, my intent is\nto have all the backends wait until the lone remaining worker has\ncreated the unified bitmap, then, that worker, which is still attached\nto the outer batch will scan the outer batch file and the unified\nouter match status bitmap and emit unmatched tuples.\n\nI thought that the other workers can move on and stop waiting at the\nbarrier once the lone remaining worker has scanned their outer match\nstatus files. All the probe loops would be done, and the worker that\nis emitting tuples is not referencing the inner side hashtable at all\nand only the outer batch file and the combined bitmap.\n\n-- \nMelanie Plageman\n\nOn Thu, Sep 5, 2019 at 10:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\nSeems like a good time for me to try to summarise what I think the\nmain problems are here:\n\n1. The match-bit storage problem already discussed. The tuples that\neach process receives while reading from SharedTupleStore are\nnon-deterministic (like other parallel scans). To use a bitmap-based\napproach, I guess we'd need to invent some way to give the tuples a\nstable identifier within some kind of densely packed number space that\nwe could use to address the bitmap, or take the IO hit and write all\nthe tuples back. That might involve changing the way SharedTupleStore\nholds data.This I've dealt with by adding a tuplenum to the SharedTupleStoreitself which I atomically increment in sts_puttuple().In ExecParallelHashJoinPartitionOuter(), as each worker writes tuplesto the batch files, they call sts_puttuple() and this increments thenumber so each tuple has a unique number.For persisting this number, I added the tuplenum to the meta datasection of the MinimalTuple (along with the hashvalue -- there was acomment about this meta data that said it could be used for otherthings in the future, so this seemed like a good place to put it) andwrite that out to the batch file.At the end of ExecParallelHashJoinPartitionOuter(), I make the outermatch status bitmap file. I use the final tuplenum count to determinethe number of bytes to write to it. Each worker has a file with abitmap which has the number of bytes required to represent the numberof tuples in that batch.Because one worker may beat the other(s) and build the whole batchfile for a batch before the others have a chance, I also make theouter match status bitmap file for workers who missed out inExecParallelHashJoinOuterGetTuple() using the final tuplenum as well. \n\n2. Tricky problems relating to barriers and flow control. First, let\nme explain why PHJ doesn't support full/right outer joins yet. At\nfirst I thought it was going to be easy, because, although the shared\nmemory hash table is read-only after it has been built, it seems safe\nto weaken that only slightly and let the match flag be set by any\nprocess during probing: it's OK if two processes clobber each other's\nwrites, as the only transition is a single bit going strictly from 0\nto 1, and there will certainly be a full memory barrier before anyone\ntries to read those match bits. Then during the scan for unmatched,\nyou just have to somehow dole out hash table buckets or ranges of\nbuckets to processes on a first-come-first-served basis. But.... 
then\nI crashed into the following problem:\n\n* You can't begin the scan for unmatched tuples until every process\nhas finished probing (ie until you have the final set of match bits).\n* You can't wait for every process to finish probing, because any\nprocess that has emitted a tuple might never come back if there is\nanother node that is also waiting for all processes (ie deadlock\nagainst another PHJ doing the same thing), and probing is a phase that\nemits tuples.\n\nGenerally, it's not safe to emit tuples while you are attached to a\nBarrier, unless you're only going to detach from it, not wait at it,\nbecause emitting tuples lets the program counter escape your control.\nGenerally, it's not safe to detach from a Barrier while accessing\nresources whose lifetime it controls, such as a hash table, because\nthen it might go away underneath you.\n\nThe PHJ plans that are supported currently adhere to that programming\nrule and so don't have a problem: after the Barrier reaches the\nprobing phase, processes never wait for each other again so they're\nfree to begin emitting tuples. They just detach when they're done\nprobing, and the last to detach cleans up (frees the hash table etc).\nIf there is more than one batch, they detach from one batch and attach\nto another when they're ready (each batch has its own Barrier), so we\ncan consider the batches to be entirely independent.\n\nThere is probably a way to make a scan-for-unmatched-inner phase work,\npossibly involving another Barrier or something like that, but I ran\nout of time trying to figure it out and wanted to ship a working PHJ\nfor the more common plan types. I suppose PHLJ will face two variants\nof this problem: (1) you need to synchronise the loops (you can't dump\nthe hash table in preparation for the next loop until all have\nfinished probing for the current loop), and yet you've already emitted\ntuples, so you're not allowed to wait for other processes and they're\nnot allowed to wait for you, and (2) you can't start the\nscan-for-unmatched-outer until all the probe loops belonging to one\nbatch are done. The first problem is sort of analogous to a problem I\nfaced with batches in the first place, which Robert and I found a\nsolution to by processing the batches in parallel, and could perhaps\nbe solved in the same way: run the loops in parallel (if that sounds\ncrazy, recall that every worker has its own quota of work_mem and the\ndata is entirely prepartitioned up front, which is why we are able to\nrun the batches in parallel; in constrast, single-batch mode makes a\nhash table with a quota of nparticipants * work_mem). The second\nproblem is sort of analogous to the existing scan-for-unmatched-inner\nproblem that I haven't solved.\nI \"solved\" these problem for now by having all workers except for onedetach from the outer batch file after finishing probing. The lastworker to arrive does not detach from the batch and instead iteratesthrough all of the workers' outer match status files per participantshared mem SharedTuplestoreParticipant) and create a single unifiedbitmap. 
All the other workers continue to wait at the barrier untilthe sole remaining worker has finished with iterating through theouter match status bitmap files.Admittedly, I'm still fighting with this step a bit, but, my intent isto have all the backends wait until the lone remaining worker hascreated the unified bitmap, then, that worker, which is still attachedto the outer batch will scan the outer batch file and the unifiedouter match status bitmap and emit unmatched tuples.I thought that the other workers can move on and stop waiting at thebarrier once the lone remaining worker has scanned their outer matchstatus files. All the probe loops would be done, and the worker thatis emitting tuples is not referencing the inner side hashtable at alland only the outer batch file and the combined bitmap. -- Melanie Plageman",
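A compressed sketch of the stable-identifier mechanism described above. The names StsControl, ntuples, sts_next_tuplenum and match_bitmap_nbytes are illustrative rather than the patch's; pg_atomic_fetch_add_u64() is the real atomics primitive that would do the work inside sts_puttuple().

#include "port/atomics.h"

/* Hypothetical counter living in the shared tuplestore control object. */
typedef struct StsControl
{
    pg_atomic_uint64 ntuples;   /* next tuple number to hand out */
    /* ... the existing shared state ... */
} StsControl;

/* Claim a unique, densely packed tuple number for each written tuple. */
static inline uint64
sts_next_tuplenum(StsControl *ctl)
{
    return pg_atomic_fetch_add_u64(&ctl->ntuples, 1);
}

/* Size of a per-worker match status bitmap: one bit per tuple. */
static inline size_t
match_bitmap_nbytes(uint64 ntuples)
{
    return (size_t) ((ntuples + 7) / 8);
}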
"msg_date": "Fri, 6 Sep 2019 10:54:13 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, Sep 06, 2019 at 10:54:13AM -0700, Melanie Plageman wrote:\n>On Thu, Sep 5, 2019 at 10:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n>> Seems like a good time for me to try to summarise what I think the\n>> main problems are here:\n>>\n>> 1. The match-bit storage problem already discussed. The tuples that\n>> each process receives while reading from SharedTupleStore are\n>> non-deterministic (like other parallel scans). To use a bitmap-based\n>> approach, I guess we'd need to invent some way to give the tuples a\n>> stable identifier within some kind of densely packed number space that\n>> we could use to address the bitmap, or take the IO hit and write all\n>> the tuples back. That might involve changing the way SharedTupleStore\n>> holds data.\n>>\n>\n>This I've dealt with by adding a tuplenum to the SharedTupleStore\n>itself which I atomically increment in sts_puttuple().\n>In ExecParallelHashJoinPartitionOuter(), as each worker writes tuples\n>to the batch files, they call sts_puttuple() and this increments the\n>number so each tuple has a unique number.\n>For persisting this number, I added the tuplenum to the meta data\n>section of the MinimalTuple (along with the hashvalue -- there was a\n>comment about this meta data that said it could be used for other\n>things in the future, so this seemed like a good place to put it) and\n>write that out to the batch file.\n>\n>At the end of ExecParallelHashJoinPartitionOuter(), I make the outer\n>match status bitmap file. I use the final tuplenum count to determine\n>the number of bytes to write to it. Each worker has a file with a\n>bitmap which has the number of bytes required to represent the number\n>of tuples in that batch.\n>\n>Because one worker may beat the other(s) and build the whole batch\n>file for a batch before the others have a chance, I also make the\n>outer match status bitmap file for workers who missed out in\n>ExecParallelHashJoinOuterGetTuple() using the final tuplenum as well.\n>\n\nThat seems like a perfectly sensible solution to me. I'm sure there are\nways to optimize it (say, having a bitmap optimized for sparse data, or\nbitmap shared by all the workers or something like that), but that's\ndefinitely not needed for v1.\n\nEven having a bitmap per worker is pretty cheap. Assume we have 1B rows,\nthe bitmap is 1B/8 bytes = ~120MB per worker. So with 16 workers that's\n~2GB, give or take. But with 100B rows, the original data is ~100GB. So\nthe bitmaps are not free, but it's not terrible either.\n\n>>\n>> 2. Tricky problems relating to barriers and flow control. First, let\n>> me explain why PHJ doesn't support full/right outer joins yet. At\n>> first I thought it was going to be easy, because, although the shared\n>> memory hash table is read-only after it has been built, it seems safe\n>> to weaken that only slightly and let the match flag be set by any\n>> process during probing: it's OK if two processes clobber each other's\n>> writes, as the only transition is a single bit going strictly from 0\n>> to 1, and there will certainly be a full memory barrier before anyone\n>> tries to read those match bits. Then during the scan for unmatched,\n>> you just have to somehow dole out hash table buckets or ranges of\n>> buckets to processes on a first-come-first-served basis. But.... 
then\n>> I crashed into the following problem:\n>>\n>> * You can't begin the scan for unmatched tuples until every process\n>> has finished probing (ie until you have the final set of match bits).\n>> * You can't wait for every process to finish probing, because any\n>> process that has emitted a tuple might never come back if there is\n>> another node that is also waiting for all processes (ie deadlock\n>> against another PHJ doing the same thing), and probing is a phase that\n>> emits tuples.\n>>\n>> Generally, it's not safe to emit tuples while you are attached to a\n>> Barrier, unless you're only going to detach from it, not wait at it,\n>> because emitting tuples lets the program counter escape your control.\n>> Generally, it's not safe to detach from a Barrier while accessing\n>> resources whose lifetime it controls, such as a hash table, because\n>> then it might go away underneath you.\n>>\n>> The PHJ plans that are supported currently adhere to that programming\n>> rule and so don't have a problem: after the Barrier reaches the\n>> probing phase, processes never wait for each other again so they're\n>> free to begin emitting tuples. They just detach when they're done\n>> probing, and the last to detach cleans up (frees the hash table etc).\n>> If there is more than one batch, they detach from one batch and attach\n>> to another when they're ready (each batch has its own Barrier), so we\n>> can consider the batches to be entirely independent.\n>>\n>> There is probably a way to make a scan-for-unmatched-inner phase work,\n>> possibly involving another Barrier or something like that, but I ran\n>> out of time trying to figure it out and wanted to ship a working PHJ\n>> for the more common plan types. I suppose PHLJ will face two variants\n>> of this problem: (1) you need to synchronise the loops (you can't dump\n>> the hash table in preparation for the next loop until all have\n>> finished probing for the current loop), and yet you've already emitted\n>> tuples, so you're not allowed to wait for other processes and they're\n>> not allowed to wait for you, and (2) you can't start the\n>> scan-for-unmatched-outer until all the probe loops belonging to one\n>> batch are done. The first problem is sort of analogous to a problem I\n>> faced with batches in the first place, which Robert and I found a\n>> solution to by processing the batches in parallel, and could perhaps\n>> be solved in the same way: run the loops in parallel (if that sounds\n>> crazy, recall that every worker has its own quota of work_mem and the\n>> data is entirely prepartitioned up front, which is why we are able to\n>> run the batches in parallel; in constrast, single-batch mode makes a\n>> hash table with a quota of nparticipants * work_mem). The second\n>> problem is sort of analogous to the existing scan-for-unmatched-inner\n>> problem that I haven't solved.\n>>\n>>\n>I \"solved\" these problem for now by having all workers except for one\n>detach from the outer batch file after finishing probing. The last\n>worker to arrive does not detach from the batch and instead iterates\n>through all of the workers' outer match status files per participant\n>shared mem SharedTuplestoreParticipant) and create a single unified\n>bitmap. All the other workers continue to wait at the barrier until\n>the sole remaining worker has finished with iterating through the\n>outer match status bitmap files.\n>\n\nWhy did you put solved in quotation marks? 
This seems like a reasonable\nsolution to me, at least for now, but the quotation marks kinda suggest\nyou think it's either not correct or not good enough. Or did I miss some\nflaw that makes this unacceptable?\n\n>Admittedly, I'm still fighting with this step a bit, but, my intent is\n>to have all the backends wait until the lone remaining worker has\n>created the unified bitmap, then, that worker, which is still attached\n>to the outer batch will scan the outer batch file and the unified\n>outer match status bitmap and emit unmatched tuples.\n>\n\nMakes sense, I think.\n\nThe one \"issue\" this probably has is that it serializes the last step, \ni.e. the search for unmatched tuples is done in a single process, instead\nof parallelized over multiple workers. That's certainly unfortunate, but \nis that really an issue in practice? Probably not for queries with just a\nsmall number of unmatched tuples. And for cases with many unmatched rows \nit's probably going to degrade to non-parallel case.\n\n>I thought that the other workers can move on and stop waiting at the\n>barrier once the lone remaining worker has scanned their outer match\n>status files. All the probe loops would be done, and the worker that\n>is emitting tuples is not referencing the inner side hashtable at all\n>and only the outer batch file and the combined bitmap.\n>\n\nWhy would the workers need to wait for the lone worker to scan their\nbitmap file? Or do the files disappear with the workers, or something\nlike that? \n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
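Spelling out the arithmetic behind those estimates: 10^9 tuples need one bit each, i.e. 10^9 / 8 = 125,000,000 bytes, roughly 120MB per per-worker bitmap, so 16 workers add about 16 x 120MB = ~1.9GB. Reading '100B rows' as 100-byte rows, the same 10^9 rows occupy about 100GB of base data, putting the bitmaps on the order of 2% overhead.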
"msg_date": "Tue, 10 Sep 2019 15:10:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "So, I finally have a prototype to share of parallel hashloop fallback.\n\nSee the commit message for a full description of the functionality of the\npatch.\n\nThis patch does contain refactoring of nodeHashjoin.\n\nI have split the Parallel HashJoin and Serial HashJoin state machines\nup, as they were diverging in my patch to a point that made for a\nreally cluttered ExecHashJoinImpl() (ExecHashJoinImpl() is now gone).\n\nThe reason I didn't do this refactoring in one patch and then put the\nadaptive hashjoin code on top of it is that I might like to make\nParallel HashJoin and Serial HashJoin different nodes.\n\nI think that has been discussed elsewhere and was looking to\nunderstand the rationale for keeping them in the same node.\n\nThe patch is a rough prototype. Below are some of the high-level\npieces of work that I plan to do next. (there are many TODOs in the\ncode as well).\n\nSome of the major outstanding work:\n\n- correctness:\n - haven't tried it with anti-joins and don't think it works\n - number of batches is not deterministic from run-to-run\n\n- performance:\n - join_hash.sql is *much* slower.\n While there are loads of performance fixes needed in the patch,\n the basic criteria for \"falling back\" is likely the culprit here.\n - There are many bottlenecks (there are several places where a\n barrier could be moved to somewhere less hot, an atomic used\n instead of a lock, or a method of coordination could be used to\n allow workers to do backend-local accounting and aggregate it)\n - need to make sure it does not create outer match status files when\n it shouldn't (inner joins, for example)\n\n- testing:\n - many unexercised cases\n - add number of chunks to EXPLAIN (for users and for testing)\n\n- refactoring:\n - The match status bitmap should have its own API or, at least,\n manipulation of it should be done in a centralized set of\n functions\n - Rename \"chunk\" (as in chunks of inner side) to something that is\n not already used in the context of memory chunks and, more\n importantly, SharedTuplestoreChunk\n - Make references to \"hashloop fallback\" and \"adaptive hashjoin\"\n more consistent\n - Rename adaptiveHashjoin.h/.c files and change what is in the files\n which are separate from nodeHashjoin.h/.c (depending on outcome of\n \"new node\")\n - The state machines are big and unwieldy now, so, there is probably\n some larger restructuring that could be done\n - Should probably use the ParallelHashJoinBatchAccessor to access\n the ParallelHashJoinBatch everywhere (realized this recently)\n\n-- \nMelanie Plageman",
"msg_date": "Sun, 29 Dec 2019 19:34:02 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 4:34 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> So, I finally have a prototype to share of parallel hashloop fallback.\n\nHi Melanie,\n\nThanks for all your continued work on this! I started looking at it\ntoday; it's a difficult project and I think it'll take me a while to\ngrok. I do have some early comments though:\n\n* I am uneasy about BarrierArriveExplicitAndWait() (a variant of\nBarrierArriveAndWait() that lets you skip directly to a given phase?);\nperhaps you only needed that for a circular phase system, which you\ncould do with modular phase numbers, like PHJ_GROW_BATCHES_PHASE? I\ntried to make the barrier interfaces look like the libraries in other\nparallel programming environments, and I'd be worried that the\nexplicit phase thing could easily lead to bugs.\n* It seems a bit strange to have \"outer_match_status_file\" in\nSharedTupleStore; something's gone awry layering-wise there.\n* I'm not sure it's OK to wait at the end of each loop, as described\nin the commit message:\n\n Workers probing a fallback batch will wait until all workers have\n finished probing before moving on so that an elected worker can read\n and combine the outer match status files into a single bitmap and use\n it to emit unmatched outer tuples after all chunks of the inner side\n have been processed.\n\nMaybe I misunderstood completely, but that seems to break the\nprogramming rule described in nodeHashjoin.c's comment beginning \"To\navoid deadlocks, ...\". To recap: (1) When you emit a tuple, the\nprogram counter escapes to some other node, and maybe that other node\nwaits for thee, (2) Maybe the leader is waiting for you but you're\nwaiting for it to drain its queue so you can emit a tuple (I learned a\nproper name for this: \"flow control deadlock\"). That's why the\ncurrent code only ever detaches (a non-waiting operation) after it's\nbegun emitting tuples (that is, the probing phase). It just moves\nonto another batch. That's not a solution here: you can't simply move\nto another loop, loops are not independent of each other like batches.\nIt's possible that barriers are not the right tool for this part of\nthe problem, or that there is a way to use a barrier that you don't\nremain attached to while emitting, or that we should remove the\ndeadlock risks another way entirely[1] but I'm not sure. Furthermore,\nthe new code in ExecParallelHashJoinNewBatch() appears to break the\nrule even in the non-looping case (it calls BarrierArriveAndWait() in\nExecParallelHashJoinNewBatch(), where the existing code just\ndetaches).\n\n> This patch does contain refactoring of nodeHashjoin.\n>\n> I have split the Parallel HashJoin and Serial HashJoin state machines\n> up, as they were diverging in my patch to a point that made for a\n> really cluttered ExecHashJoinImpl() (ExecHashJoinImpl() is now gone).\n\nHmm. I'm rather keen on extending that technique further: I'd like\nthere to be more configuration points in the form of parameters to\nthat function, so that we write the algorithm just once but we\ngenerate a bunch of specialised variants that are the best possible\nmachine code for each combination of parameters via constant-folding\nusing the \"always inline\" trick (steampunk C++ function templates).\nMy motivations for wanting to do that are: supporting different hash\nsizes (CF commit e69d6445), removing branches for unused optimisations\n(eg skew), and inlining common hash functions. 
That isn't to say we\ncouldn't have two different templatoid functions from which many\nothers are specialised, but I feel like that's going to lead to a lot\nof duplication.\n\n> The reason I didn't do this refactoring in one patch and then put the\n> adaptive hashjoin code on top of it is that I might like to make\n> Parallel HashJoin and Serial HashJoin different nodes.\n>\n> I think that has been discussed elsewhere and was looking to\n> understand the rationale for keeping them in the same node.\n\nWell, there is a discussion about getting rid of the Hash node, since\nit's so tightly coupled with Hash Join that it might as well not exist\nas a separate entity. (Incidentally, I noticed in someone's blog that\nMySQL now shows Hash separately in its PostgreSQL-style EXPLAIN\noutput; now we'll remove it, CF the Dr Seuss story about the\nSneetches). But as for Parallel Hash Join vs [Serial] Hash Join, I\nthink it makes sense to use the same node because they are\nsubstantially the same thing, with optional extra magic, and I think\nit's our job to figure out how to write code in a style that makes the\ndifferences maintainable. That fits into a general pattern that\n\"Parallel\" is a mode, not a different node. On the other hand, PHJ is\nby far the most different from the original code, compared to things\nlike Parallel Sequential Scan etc. FWIW I think we're probably in\nrelatively new territory here: as far as I know, other traditional\nRDBMSs didn't really seem to have a concept like parallel-aware\nexecutor nodes, because they tended to be based on partitioning, so\nthat the operators are all oblivious to parallelism and don't have to\nshare/coordinate anything at this level. It seems that everyone is\nnow coming around to the view that shared hash table hash joins are a\ngood idea now that we have so many cores connected up to shared\nmemory. Curiously, judging from another blog article I saw, on the\nsurface it looks like Oracle's brand new HASH JOIN SHARED is a\ndifferent operator than HASH JOIN (just an observation, I could be way\noff and I don't know or want to know how that's done under the covers\nin that system).\n\n> - number of batches is not deterministic from run-to-run\n\nYeah, I had a lot of fun with that sort of thing on the build farm\nwhen PHJ was first committed, and the effects were different on\nsystems I don't have access to that have different sizeof() for\ncertain types.\n\n> - Rename \"chunk\" (as in chunks of inner side) to something that is\n> not already used in the context of memory chunks and, more\n> importantly, SharedTuplestoreChunk\n\n+1. Fragments? Loops? Blocks (from\nhttps://en.wikipedia.org/wiki/Block_nested_loop, though, no, strike\nthat, blocks are also super overloaded).\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BA6ftXPz4oe92%2Bx8Er%2BxpGZqto70-Q_ERwRaSyA%3DafNg%40mail.gmail.com\n\n\n",
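For readers unfamiliar with the "always inline" trick Thomas describes, this is the shape nodeHashjoin.c already uses for its parallel flag; the second parameter below is hypothetical, added only to show how further configuration points would slot in. Each thin wrapper lets the compiler constant-fold the flags and emit a specialised, branch-reduced copy of the shared algorithm.

/*
 * Sketch of the constant-folding technique; "fallback" is an invented
 * parameter, not part of the actual executor.
 */
static pg_attribute_always_inline TupleTableSlot *
ExecHashJoinImpl(PlanState *pstate, bool parallel, bool fallback)
{
    /*
     * The algorithm is written once here; tests of "parallel" and
     * "fallback" fold away in each specialisation below.
     */
    return NULL;                /* body elided in this sketch */
}

static TupleTableSlot *
ExecHashJoin(PlanState *pstate)
{
    return ExecHashJoinImpl(pstate, false, false);
}

static TupleTableSlot *
ExecParallelHashJoin(PlanState *pstate)
{
    return ExecHashJoinImpl(pstate, true, false);
}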
"msg_date": "Wed, 8 Jan 2020 13:13:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 4:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> * I am uneasy about BarrierArriveExplicitAndWait() (a variant of\n> BarrierArriveAndWait() that lets you skip directly to a given phase?);\n> perhaps you only needed that for a circular phase system, which you\n> could do with modular phase numbers, like PHJ_GROW_BATCHES_PHASE? I\n> tried to make the barrier interfaces look like the libraries in other\n> parallel programming environments, and I'd be worried that the\n> explicit phase thing could easily lead to bugs.\n>\n\nSo, I actually use it to circle back up to the first phase while\nskipping the last phase.\nSo I couldn't do it with modular phase numbers and a loop.\nThe last phase detaches from the chunk barrier. I don't want to detach\nfrom the chunk barrier if there are more chunks.\nI basically need a way to only attach to the chunk barrier at the\nbegininng of the first chunk and only detach at the end of the last\nchunk--not in between chunks. I will return from the function and\nre-enter between chunks -- say between chunk 2 and chunk 3 of 5.\n\nHowever, could this be solved by having more than one chunk\nbarrier?\nA worker would attach to one chunk barrier and then when it moves to\nthe next chunk it would attach to the other chunk barrier and then\nswitch back when it switches to the next chunk. Then it could detach\nand attach each time it enters/leaves the function.\n\n\n> * I'm not sure it's OK to wait at the end of each loop, as described\n> in the commit message:\n>\n> Workers probing a fallback batch will wait until all workers have\n> finished probing before moving on so that an elected worker can read\n> and combine the outer match status files into a single bitmap and use\n> it to emit unmatched outer tuples after all chunks of the inner side\n> have been processed.\n>\n> Maybe I misunderstood completely, but that seems to break the\n> programming rule described in nodeHashjoin.c's comment beginning \"To\n> avoid deadlocks, ...\". To recap: (1) When you emit a tuple, the\n> program counter escapes to some other node, and maybe that other node\n> waits for thee, (2) Maybe the leader is waiting for you but you're\n> waiting for it to drain its queue so you can emit a tuple (I learned a\n> proper name for this: \"flow control deadlock\"). That's why the\n> current code only ever detaches (a non-waiting operation) after it's\n> begun emitting tuples (that is, the probing phase). It just moves\n> onto another batch. That's not a solution here: you can't simply move\n> to another loop, loops are not independent of each other like batches.\n> It's possible that barriers are not the right tool for this part of\n> the problem, or that there is a way to use a barrier that you don't\n> remain attached to while emitting, or that we should remove the\n> deadlock risks another way entirely[1] but I'm not sure. Furthermore,\n> the new code in ExecParallelHashJoinNewBatch() appears to break the\n> rule even in the non-looping case (it calls BarrierArriveAndWait() in\n> ExecParallelHashJoinNewBatch(), where the existing code just\n> detaches).\n>\n\nYea, I think I'm totally breaking that rule.\nJust to make sure I understand the way in which I am breaking that\nrule:\n\nIn my patch, while attached to a chunk_barrier, worker1 emits a\nmatched tuple (control leaves the current node). 
Meanwhile, worker2\nhas finished probing the chunk and is waiting on the chunk_barrier for\nworker1.\nHow though could worker1 be waiting for worker2?\n\nIs this only a problem when one of the barrier participants is the\nleader and is reading from the tuple queue? (reading your tuple queue\ndeadlock hazard example in the thread [1] you referred to).\nBasically is my deadlock hazard a tuple queue deadlock hazard?\n\nI thought maybe this could be a problem with nested HJ nodes, but I'm\nnot sure.\n\nAs I understand it, this isn't a problem with current master with\nbatch barriers because while attached to a batch_barrier, a worker can\nemit tuples. No other workers will wait on the batch barrier once they\nhave started probing.\n\nI need to think more about the suggestions you provided in [1] about\nnixing the tuple queue deadlock hazard.\n\nHowever, hypothetically, if we decide we don't want to break the no\nemitting tuples while attached to a barrier rule, how can we still\nallow workers to coordinate while probing chunks of the batch\nsequentially (1 chunk at a time)?\n\nI could think of two options (both sound slow and bad):\n\nOption 1:\nStash away the matched tuples in a tuplestore and emit them at the end\nof the batch (incurring more writes).\n\nOption 2:\nDegenerate to 1 worker for fallback batches\n\nAny other ideas?\n\n\n>\n> > - Rename \"chunk\" (as in chunks of inner side) to something that is\n> > not already used in the context of memory chunks and, more\n> > importantly, SharedTuplestoreChunk\n>\n> +1. Fragments? Loops? Blocks (from\n> https://en.wikipedia.org/wiki/Block_nested_loop, though, no, strike\n> that, blocks are also super overloaded).\n>\n\nHmmm. I think loop is kinda confusing. \"fragment\" has potential.\nI also thought of \"piece\". That is actually where I am leaning now.\nWhat do you think?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BA6ftXPz4oe92%2Bx8Er%2BxpGZqto70-Q_ERwRaSyA%3DafNg%40mail.gmail.com\n\n-- \nMelanie Plageman\n\nOn Tue, Jan 7, 2020 at 4:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n* I am uneasy about BarrierArriveExplicitAndWait() (a variant of\nBarrierArriveAndWait() that lets you skip directly to a given phase?);\nperhaps you only needed that for a circular phase system, which you\ncould do with modular phase numbers, like PHJ_GROW_BATCHES_PHASE? I\ntried to make the barrier interfaces look like the libraries in other\nparallel programming environments, and I'd be worried that the\nexplicit phase thing could easily lead to bugs.So, I actually use it to circle back up to the first phase whileskipping the last phase.So I couldn't do it with modular phase numbers and a loop.The last phase detaches from the chunk barrier. I don't want to detachfrom the chunk barrier if there are more chunks.I basically need a way to only attach to the chunk barrier at thebegininng of the first chunk and only detach at the end of the lastchunk--not in between chunks. I will return from the function andre-enter between chunks -- say between chunk 2 and chunk 3 of 5.However, could this be solved by having more than one chunkbarrier?A worker would attach to one chunk barrier and then when it moves tothe next chunk it would attach to the other chunk barrier and thenswitch back when it switches to the next chunk. Then it could detachand attach each time it enters/leaves the function. 
\n* I'm not sure it's OK to wait at the end of each loop, as described\nin the commit message:\n\n Workers probing a fallback batch will wait until all workers have\n finished probing before moving on so that an elected worker can read\n and combine the outer match status files into a single bitmap and use\n it to emit unmatched outer tuples after all chunks of the inner side\n have been processed.\n\nMaybe I misunderstood completely, but that seems to break the\nprogramming rule described in nodeHashjoin.c's comment beginning \"To\navoid deadlocks, ...\". To recap: (1) When you emit a tuple, the\nprogram counter escapes to some other node, and maybe that other node\nwaits for thee, (2) Maybe the leader is waiting for you but you're\nwaiting for it to drain its queue so you can emit a tuple (I learned a\nproper name for this: \"flow control deadlock\"). That's why the\ncurrent code only ever detaches (a non-waiting operation) after it's\nbegun emitting tuples (that is, the probing phase). It just moves\nonto another batch. That's not a solution here: you can't simply move\nto another loop, loops are not independent of each other like batches.\nIt's possible that barriers are not the right tool for this part of\nthe problem, or that there is a way to use a barrier that you don't\nremain attached to while emitting, or that we should remove the\ndeadlock risks another way entirely[1] but I'm not sure. Furthermore,\nthe new code in ExecParallelHashJoinNewBatch() appears to break the\nrule even in the non-looping case (it calls BarrierArriveAndWait() in\nExecParallelHashJoinNewBatch(), where the existing code just\ndetaches).Yea, I think I'm totally breaking that rule.Just to make sure I understand the way in which I am breaking thatrule:In my patch, while attached to a chunk_barrier, worker1 emits amatched tuple (control leaves the current node). Meanwhile, worker2has finished probing the chunk and is waiting on the chunk_barrier forworker1.How though could worker1 be waiting for worker2?Is this only a problem when one of the barrier participants is theleader and is reading from the tuple queue? (reading your tuple queuedeadlock hazard example in the thread [1] you referred to).Basically is my deadlock hazard a tuple queue deadlock hazard?I thought maybe this could be a problem with nested HJ nodes, but I'mnot sure.As I understand it, this isn't a problem with current master withbatch barriers because while attached to a batch_barrier, a worker canemit tuples. No other workers will wait on the batch barrier once theyhave started probing.I need to think more about the suggestions you provided in [1] aboutnixing the tuple queue deadlock hazard.However, hypothetically, if we decide we don't want to break the noemitting tuples while attached to a barrier rule, how can we stillallow workers to coordinate while probing chunks of the batchsequentially (1 chunk at a time)?I could think of two options (both sound slow and bad):Option 1:Stash away the matched tuples in a tuplestore and emit them at the endof the batch (incurring more writes).Option 2:Degenerate to 1 worker for fallback batches Any other ideas? \n\n> - Rename \"chunk\" (as in chunks of inner side) to something that is\n> not already used in the context of memory chunks and, more\n> importantly, SharedTuplestoreChunk\n\n+1. Fragments? Loops? Blocks (from\nhttps://en.wikipedia.org/wiki/Block_nested_loop, though, no, strike\nthat, blocks are also super overloaded).Hmmm. I think loop is kinda confusing. 
\"fragment\" has potential.I also thought of \"piece\". That is actually where I am leaning now.What do you think?[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BA6ftXPz4oe92%2Bx8Er%2BxpGZqto70-Q_ERwRaSyA%3DafNg%40mail.gmail.com-- Melanie Plageman",
"msg_date": "Thu, 9 Jan 2020 18:37:11 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 4:14 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> * I am uneasy about BarrierArriveExplicitAndWait() (a variant of\n> BarrierArriveAndWait() that lets you skip directly to a given phase?);\n> perhaps you only needed that for a circular phase system, which you\n> could do with modular phase numbers, like PHJ_GROW_BATCHES_PHASE? I\n> tried to make the barrier interfaces look like the libraries in other\n> parallel programming environments, and I'd be worried that the\n> explicit phase thing could easily lead to bugs.\n>\n\nBarrierArriveExplicitAndWait() is gone now due to the refactor to\naddress the barrier waiting deadlock hazard (mentioned below).\n\n\n> * It seems a bit strange to have \"outer_match_status_file\" in\n> SharedTupleStore; something's gone awry layering-wise there.\n>\n\nouter_match_status_file is now out of the SharedTuplestore. Jesse\nZhang and I worked on a new API, SharedBits, for workers to\ncollaboratively make a bitmap and then used it for the outer match\nstatus file and the combined bitmap file\n(v4-0004-Add-SharedBits-API.patch).\n\nThe SharedBits API is modeled closely after the SharedTuplestore API.\nIt uses a control object in shared memory to synchronize access to\nsome files in a SharedFileset and maintains some participant-specific\nshared state. The big difference (other than that the files are for\nbitmaps and not tuples) is that each backend writes to its file in one\nphase and a single backend reads from all of the files and combines\nthem in another phase.\nIn other words, it supports parallel write but not parallel scan (and\nnot concurrent read/write). This could definitely be modified in the\nfuture.\n\nAlso, the SharedBits uses a SharedFileset which uses BufFiles. This is\nnot the ideal API for the bitmap. The access pattern is small sequential\nwrites and random reads. It would also be nice to maintain the fixed\nsize buffer but have an API that let us write an arbitrary number of\nbytes to it in bufsize chunks without incurring additional function call\noverhead.\n\n\n> * I'm not sure it's OK to wait at the end of each loop, as described\n> in the commit message:\n>\n> Workers probing a fallback batch will wait until all workers have\n> finished probing before moving on so that an elected worker can read\n> and combine the outer match status files into a single bitmap and use\n> it to emit unmatched outer tuples after all chunks of the inner side\n> have been processed.\n>\n> Maybe I misunderstood completely, but that seems to break the\n> programming rule described in nodeHashjoin.c's comment beginning \"To\n> avoid deadlocks, ...\". To recap: (1) When you emit a tuple, the\n> program counter escapes to some other node, and maybe that other node\n> waits for thee, (2) Maybe the leader is waiting for you but you're\n> waiting for it to drain its queue so you can emit a tuple (I learned a\n> proper name for this: \"flow control deadlock\"). That's why the\n> current code only ever detaches (a non-waiting operation) after it's\n> begun emitting tuples (that is, the probing phase). It just moves\n> onto another batch. That's not a solution here: you can't simply move\n> to another loop, loops are not independent of each other like batches.\n> It's possible that barriers are not the right tool for this part of\n> the problem, or that there is a way to use a barrier that you don't\n> remain attached to while emitting, or that we should remove the\n> deadlock risks another way entirely[1] but I'm not sure. 
Furthermore,\n> the new code in ExecParallelHashJoinNewBatch() appears to break the\n> rule even in the non-looping case (it calls BarrierArriveAndWait() in\n> ExecParallelHashJoinNewBatch(), where the existing code just\n> detaches).\n>\n>\nSo, after a more careful reading of the parallel full hashjoin email\n[1], I think I understand the ways in which I am violating the rule in\nnodeHashJoin.c.\nI do have some questions about the potential solutions mentioned in\nthat thread, however, I'll pose those over there.\n\nFor adaptive hashjoin, for now, the options for addressing the barrier\nwait hazard that Jesse and I came up with based on the PFHJ thread are:\n- leader doesn't participate in fallback batches (has the downside of\n reduced parallelism and needing special casing when it ends up being\n the only worker because other workers get used for something else\n [like autovaccuum])\n- use some kind of spool to avoid deadlock\n- the original solution I proposed in which all workers detach from\n the batch barrier (instead of waiting)\n\nI revisited the original solution I proposed and realized that I had\nnot implemented it as advertised. By reverting to the original\ndesign, I can skirt the issue for now.\n\nIn the original solution I suggested, I mentioned all workers would\ndetach from the batch barrier and the last to detach would combine the\nbitmaps. That was not what I actually implemented (my patch had all\nthe workers wait on the barrier).\n\nI've changed to actually doing this--which addresses some of the\npotential deadlock hazard.\n\nThe two deadlock waits causing the deadlock hazard were waiting on the\nchunk barrier and waiting on the batch barrier. In order to fully\naddress the deadlock hazard, Jesse and I came up with the following\nsolution (in v4-0003-Address-barrier-wait-deadlock-hazard.patch in the\nattached patchset) to each:\n\nchunk barrier wait:\n- instead of waiting on the chunk barrier when it is not in its final\n state and then reusing it and jumping back to the initial state,\n initialize an array of chunk barriers, one per chunk, and, workers\n only wait on a chunk barrier when it is in its final state. The last\n worker to arrive will increment the chunk number. All workers detach\n from the chunk barrier they are attached to and select the next\n chunk barrier\n\nJesse brought up that there isn't a safe time to reinitialize the\nchunk barrier, so reusing it doesn't seem like a good idea.\n\nbatch barrier wait:\n- In order to mitigate the other cause of deadlock hazard (workers\n wait on the batch barrier after emitting tuples), now, in\n ExecParallelHashJoinNewBatch(), if we are attached to a batch\n barrier and it is a fallback batch, all workers will detach from the\n batch barrier and then end their scan of that batch. The last\n worker to detach will combine the outer match status files, then it\n will detach from the batch, clean up the hashtable, and end its scan\n of the inner side. Then it will return and proceed to emit\n unmatched outer tuples.\n\n\n> > This patch does contain refactoring of nodeHashjoin.\n> >\n> > I have split the Parallel HashJoin and Serial HashJoin state machines\n> > up, as they were diverging in my patch to a point that made for a\n> > really cluttered ExecHashJoinImpl() (ExecHashJoinImpl() is now gone).\n>\n> Hmm. 
I'm rather keen on extending that technique further: I'd like\n> there to be more configuration points in the form of parameters to\n> that function, so that we write the algorithm just once but we\n> generate a bunch of specialised variants that are the best possible\n> machine code for each combination of parameters via constant-folding\n> using the \"always inline\" trick (steampunk C++ function templates).\n> My motivations for wanting to do that are: supporting different hash\n> sizes (CF commit e69d6445), removing branches for unused optimisations\n> (eg skew), and inlining common hash functions. That isn't to say we\n> couldn't have two different templatoid functions from which many\n> others are specialised, but I feel like that's going to lead to a lot\n> of duplication.\n>\n>\nI'm okay with using templating. For now, while I am addressing large\nTODO items with the patchset, I will keep them as separate functions.\nOnce it is in a better state, I will look at the overlap and explore\ntemplating. The caveat here is if a lot of new commits start going\ninto nodeHashjoin.c and keeping this long-running branch rebased gets\npainful.\n\nThe patchset has also been run through pgindent, so,\nv4-0001-Implement-Adaptive-Hashjoin.patch will look a bit different\nthan v3-0001-hashloop-fallback.patch, but it is the same content.\nv4-0002-Fixup-tupleMetadata-struct-issues.patch is just some other\nfixups and small cosmetic changes.\n\nThe new big TODO is to make a file type that suits the SharedBits API\nbetter--but I don't want to do that unless the idea is validated.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CA+hUKG+A6ftXPz4oe92+x8Er+xpGZqto70-Q_ERwRaSyA=afNg@mail.gmail.com",
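For readers unfamiliar with the "always inline" trick Thomas describes: the committed nodeHashjoin.c already uses it for the parallel flag, and the idea generalises to any compile-time-constant parameter. A minimal, self-contained C sketch (illustrative names, not the patch's code; GCC/Clang attribute syntax assumed):

    #include <stdio.h>

    /*
     * One implementation, many specialisations: the wrappers pass
     * compile-time constants and the implementation is always inlined,
     * so the compiler constant-folds the branch away and emits separate,
     * branch-free machine code for each variant -- the same pattern as
     * nodeHashjoin.c's ExecHashJoinImpl(pstate, parallel).
     */
    static inline int __attribute__((always_inline))
    hash_join_impl(int key, int parallel)
    {
        if (parallel)
            return key * 31;    /* stands in for the parallel-aware path */
        return key * 17;        /* stands in for the serial path */
    }

    int exec_hash_join(int key)          { return hash_join_impl(key, 0); }
    int exec_parallel_hash_join(int key) { return hash_join_impl(key, 1); }

    int main(void)
    {
        printf("%d %d\n", exec_hash_join(3), exec_parallel_hash_join(3));
        return 0;
    }

Adding more such parameters (hash width, skew on/off) multiplies the specialisations without duplicating the algorithm's source.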
"msg_date": "Fri, 24 Jan 2020 18:22:30 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "I've implemented avoiding rescanning all inner tuples for each stripe\nin the attached patch:\nv5-0005-Avoid-rescanning-inner-tuples-per-stripe.patch\n\nPatchset is rebased--and I had my first merge conflicts as I contend\nwith maintaining this long-running branch with large differences\nbetween it and current hashjoin. I think I'll need to reconsider the\nchanges I've made if I want to make it maintainable.\n\nAs for patch 0005, not rescanning inner tuples for every stripe,\nbasically, instead of reinitializing the SharedTuplestore for the\ninner side for each stripe (I'm using \"stripe\" from now on, but I\nhaven't done any retroactive renaming yet) during fallback, each\nparticipant's read_page is set to the beginning of the\nSharedTuplestoreChunk which contains the end of one stripe and the\nbeginning of another.\n\nPreviously all inner tuples were scanned and only tuples from the\ncurrent stripe were loaded.\n\nEach SharedTuplestoreAccessor now has a variable \"start_page\", which\nis initialized when it is assigned its read_page (which will always be\nthe beginning of a SharedTuplestoreChunk).\n\nWhile loading tuples into the hashtable, if a tuple is from a past\nstripe, the worker skips it (that will happen when a stripe straddles\ntwo SharedTuplestoreChunks). If a tuple is from the future, the worker\nbacks that SharedTuplestoreChunk out and sets the shared read_page (in\nthe shared SharedTuplestoreParticipant) back to its start_page.\n\nThere are a couple mechanisms to provide for synchronization that\naddress specific race conditions/synchronization points -- those\nscenarios are laid out in the commit message.\n\nThe first is a rule that a worker can only set read_page to a\nstart_page which is less than the current value of read_page.\n\nThe second is a \"rewound\" flag in the SharedTuplestoreParticipant. It\nindicates if this participant has been rewound during loading of the\ncurrent stripe. If it has, a worker cannot be assigned a\nSharedTuplestoreChunk. This flag is reset between stripes.\n\nIn this patch, Hashjoin makes an unacceptable intrusion into the\nSharedTuplestore API. 
I am looking for feedback on how to solve this.\n\nBasically, because the SharedTuplestore does not know about stripes or\nabout HashJoin, the logic to decide if a tuple should be loaded into a\nhashtable or not is in the stripe phase machine where tuples are loaded\ninto the hashtable.\n\nSo, to ensure that workers have read from all participant files before\nassuming all tuples from a stripe are loaded, I have duplicated the\nlogic from sts_parallel_scan_next() which has workers get the next\nparticipant file and added it into the body of the tuple loading\nloop in the stripe phase machine (see sts_ready_for_next_stripe() and\nsts_seen_all_participants()).\n\nThis clearly needs to be fixed and it is arguable that there are other\nintrusions into the SharedTuplestore API in these patches.\n\nOne option is to write each stripe for each participant to a different\nfile, preserving the idea that a worker is done with a read_file when it\nis at EOF.\n\nOutside of addressing the relationship between SharedTuplestore,\nstripes, and Hashjoin, I have re-prioritized the next steps for the\npatch as follows:\n\nNext Steps:\n1) Rename \"chunk\" to \"stripe\"\n2) refine fallback logic\n3) refactor code to make it easier to keep it rebased\n4) EXPLAIN ANALYZE instrumentation to show stripes probed by workers\n5) anti/semi-join support\n\n1)\nThe chunk/stripe thing is becoming extremely confusing.\n\n2)\nI re-prioritized refining the fallback logic because the premature\ndisabling of growth in serial hashjoin is making the join_hash test so\nslow that it is slowing down iteration speed for me.\n\n3)\nI am wondering if Thomas Munro's idea to template-ize Hashjoin [1]\nwould make maintaining the diff easier, harder, or no different. The\ncode I've added made the main hashjoin state machine incredibly long,\nso I broke it up into Parallel Hashjoin and Serial Hashjoin to make it\nmore manageable. This, of course, lends itself to difficult rebasing\n(luckily only one small commit has been made to nodeHashjoin.c). If\nthe template-ization were to happen sooner, I could refactor my code\nso that there were at least the same function names and the diffs\nwould be more clear.\n\n4)\nIt is important that I have some way of knowing if I'm even exercising\ncode that I'm adding that involves multiple workers probing the same\nstripes. As I make changes to the code, even though it will not\nnecessarily be deterministic, I can change the tests if I am no longer\nable to get any of the concurrent behavior I'm looking for.\n\n5)\nSeems like it's time.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BhUKGJjs6H77u%2BPL3ovMSowFZ8nib9Z%2BnHGNF6YNmw6osUU%2BA%40mail.gmail.com\n\n--\nMelanie Plageman",
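To make the two synchronization rules in the message above concrete, here is a hypothetical C sketch (plain local state, all names illustrative -- the real SharedTuplestoreParticipant state lives in shared memory and is updated under a lock):

    #include <stdbool.h>

    typedef struct Participant
    {
        int  read_page;   /* next page any worker will be assigned */
        bool rewound;     /* rewound during the current stripe? */
    } Participant;

    /*
     * A worker that read past the stripe boundary backs its chunk out by
     * moving read_page back to its own start_page -- but only if that
     * actually moves read_page backwards (rule 1) -- and it sets the
     * rewound flag so no further chunks are handed out until the next
     * stripe (rule 2).
     */
    static bool
    try_rewind(Participant *p, int start_page)
    {
        if (start_page < p->read_page)
        {
            p->read_page = start_page;
            p->rewound = true;
            return true;
        }
        return false;   /* another worker already rewound at least this far */
    }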
"msg_date": "Tue, 11 Feb 2020 16:57:05 -0800",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "I've attached a patch which should address some of the previous feedback\nabout code complexity. Two of my co-workers and I wrote what is\nessentially a new prototype of the idea. It uses the main state machine\nto route emitting unmatched tuples instead of introducing a separate\nstate. The logic for falling back is also more developed.\n\nIn addition to many assorted TODOs in the code, there are a few major\nprojects left:\n- Batch 0 falling back\n- Stripe barrier deadlock\n- Performance improvements and testing\n\nI will address the stripe barrier deadlock here. David is going to send\na separate email about batch 0 falling back.\n\nThere is a deadlock hazard in parallel hashjoin (pointed out by Thomas\nMunro in the past). Workers attached to the stripe_barrier emit tuples\nand then wait on that barrier.\nI believe that that can be addressed starting with this\nrelatively unoptimized solution:\n- after probing a stripe in a batch, a worker sets the status of that\n batch to \"tentatively done\" and saves the stripe_barrier phase\n- if that worker is not the only worker attached to that batch, it\n detaches from both stripe and batch barriers and moves on to other\n batches\n- if that worker is the only worker attached to the batch, it will\n proceed to load the next stripe of that batch, and, once it has\n finished loading, it will set the status of the batch back to \"not\n done\" for itself\n- when the other worker encounters that batch again, if the\n stripe_barrier phase has not moved forward, it will mark that batch as\n done for itself. if the stripe_barrier phase has moved forward, it can\n join in in probing this batch for the current stripe.",
"msg_date": "Tue, 28 Apr 2020 19:03:53 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Wed, Apr 29, 2020 at 4:39 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> In addition to many assorted TODOs in the code, there are a few major\n> projects left:\n> - Batch 0 falling back\n> - Stripe barrier deadlock\n> - Performance improvements and testing\n>\n\nBatch 0 never spills. That behavior is an artifact of the existing design that\nas an optimization special cases batch 0 to fill the initial hash table. This\nmeans it can skip loading and doesn't need to create a batch file.\n\nHowever in the pathalogical case where all tuples hash to batch 0 there is no\nway to redistribute those tuples to other batches. So, existing hash join\nimplementation allows work_mem to be exceeded for batch 0.\n\nIn adaptive hash join approach, there is another way to deal with a batch that\nexceeds work_mem. If increasing the number of batches does not work then the\nbatch can be split into stripes that will not exceed work_mem. Doing this\nrequires spilling the excess tuples to batch files. Following patch adds logic\nto create a batch 0 file for serial hash join so that even in pathalogical case\nwe do not need to exceed work_mem.\n\nThanks,\nDavid",
"msg_date": "Wed, 29 Apr 2020 16:44:53 -0700",
"msg_from": "David Kimura <david.g.kimura@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Apr 28, 2020 at 11:50 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 29/04/2020 05:03, Melanie Plageman wrote:\n> > I've attached a patch which should address some of the previous feedback\n> > about code complexity. Two of my co-workers and I wrote what is\n> > essentially a new prototype of the idea. It uses the main state machine\n> > to route emitting unmatched tuples instead of introducing a separate\n> > state. The logic for falling back is also more developed.\n>\n> I haven't looked at the patch in detail, but thanks for the commit\n> message; it describes very well what this is all about. It would be nice\n> to copy that explanation to the top comment in nodeHashJoin.c in some\n> form. I think we're missing a high level explanation of how the batching\n> works even before this new patch, and that commit message does a good\n> job at it.\n>\n>\nThanks for taking a look, Heikki!\n\nI made a few edits to the message and threw it into a draft patch (on\ntop of master, of course). I didn't want to junk up peoples' inboxes, so\nI didn't start a separate thread, but, it will be pretty hard to\ncollaboratively edit the comment/ever register it for a commitfest if it\nis wedged into this thread. What do you think?\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 30 Apr 2020 07:30:35 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On 2020-Apr-30, Melanie Plageman wrote:\n\n> On Tue, Apr 28, 2020 at 11:50 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> > I haven't looked at the patch in detail, but thanks [...]\n\n> Thanks for taking a look, Heikki!\n\nHmm. We don't have Heikki's message in the archives. In fact, the last\nmessage from Heikki we seem to have in any list is\ncca4e4dc-32ac-b9ab-039d-98dcb5650791@iki.fi dated February 19 in\npgsql-bugs. I wonder if there's some problem between Heikki and the\nlists.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 30 Apr 2020 15:39:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Fri, May 1, 2020 at 2:30 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I made a few edits to the message and threw it into a draft patch (on\n> top of master, of course). I didn't want to junk up peoples' inboxes, so\n> I didn't start a separate thread, but, it will be pretty hard to\n> collaboratively edit the comment/ever register it for a commitfest if it\n> is wedged into this thread. What do you think?\n\n+1, this is a good description and I'm sure you're right about the\nname of the algorithm. It's a \"hybrid\" between a simple no partition\nhash join, and partitioning like the Grace machine, since batch 0 is\nprocessed directly without touching the disk.\n\nYou mention that PHJ finalises the number of batches during build\nphase while SHJ can extend it later. There's also a difference in the\nprobe phase: although inner batch 0 is loaded into the hash table\ndirectly and not written to disk during the build phase (= classic\nhybrid, just like the serial algorithm), outer batch 0 *is* written\nout to disk at the start of the probe phase (unlike classic hybrid at\nleast as we have it for serial hash join). That's because I couldn't\nfigure out how to begin emitting tuples before partitioning was\nfinished, without breaking the deadlock-avoidance programming rule\nthat you can't let the program counter escape from the node when\nsomeone might wait for you. So maybe it's erm, a hybrid between\nhybrid and Grace...\n\n\n",
"msg_date": "Fri, 1 May 2020 08:59:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Wed, Apr 29, 2020 at 4:44 PM David Kimura <david.g.kimura@gmail.com> wrote:\n>\n> Following patch adds logic to create a batch 0 file for serial hash join so\n> that even in pathalogical case we do not need to exceed work_mem.\n\nUpdated the patch to spill batch 0 tuples after it is marked as fallback.\n\nA couple questions from looking more at serial code:\n\n1) Does the current pattern to repartition batches *after* the previous\n hashtable insert exceeds work_mem still make sense?\n\n In that case we'd allow ourselves to exceed work_mem by one tuple. If that\n doesn't seem correct anymore then I think we can move the space exceeded\n check in ExecHashTableInsert() *before* actual hashtable insert.\n\n2) After batch 0 is marked fallback, does the logic to insert into its batch\n file fit more in MultiExecPrivateHash() or ExecHashTableInsert()?\n\n The latter already has logic to decide whether to insert into hashtable or\n batchfile\n\nThanks,\nDavid",
"msg_date": "Mon, 4 May 2020 13:39:36 -0700",
"msg_from": "David Kimura <david.g.kimura@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Apr 28, 2020 at 7:03 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n>\n> There is a deadlock hazard in parallel hashjoin (pointed out by Thomas\n> Munro in the past). Workers attached to the stripe_barrier emit tuples\n> and then wait on that barrier.\n> I believe that that can be addressed starting with this\n> relatively unoptimized solution:\n> - after probing a stripe in a batch, a worker sets the status of that\n> batch to \"tentatively done\" and saves the stripe_barrier phase\n> - if that worker is not the only worker attached to that batch, it\n> detaches from both stripe and batch barriers and moves on to other\n> batches\n> - if that worker is the only worker attached to the batch, it will\n> proceed to load the next stripe of that batch, and, once it has\n> finished loading, it will set the status of the batch back to \"not\n> done\" for itself\n> - when the other worker encounters that batch again, if the\n> stripe_barrier phase has not moved forward, it will mark that batch as\n> done for itself. if the stripe_barrier phase has moved forward, it can\n> join in in probing this batch for the current stripe.\n>\n\n\nJust to follow-up on the stripe barrier deadlock, I've implemented a\nsolution and attached it.\n\nThere are three solutions I've thought about so far:\n\n1) leaders don't participate in fallback batches\n2) serial after stripe 0\n no worker can join a batch after any worker has left and only one\n worker can work on stripes after stripe 0\n3) provisionally complete batches\n After the end of stripe 0, all workers except the last worker\n detach from the stripe barrier, mark the batch as provisionally\n done, save the stripe barrier phase, and move on to another batch.\n Later, when one of these workers returns to the batch, if it is\n not already done, the worker checks to see if the phase of the\n stripe barrier has advanced. If the phase has advanced, it means\n that no one is waiting for that worker. The worker can join that\n batch. If the phase hasn't advanced, the worker won't risk\n deadlock and will simply mark the batch as done. The last worker\n executes the normal path -- participating in each stripe.\n\nI've attached a patch to implement solution 3\nv7-0002-Provisionally-detach-unless-last-worker.patch\n\nThis isn't a very optimized version of this solution. It detaches from\nthe stripe barrier and closes the outer match status bitmap upon\nprovisional completion by a worker. However, I ran into some problems\nkeeping outer match status bitmaps open for multiple batches at a time.\n\nI've also attached the original adaptive hashjoin patch with a couple\nsmall tweaks (not quite meriting a patch version bump, but that seemed\nlike the easiest way).\n\n-- \nMelanie Plageman",
"msg_date": "Fri, 8 May 2020 18:58:10 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "I've attached a rebased patch which includes the \"provisionally detach\"\ndeadlock hazard fix approach as well as addresses some of the following\nfeedback from Jeff Davis provided off-list:\n\n> Can you add some high-level comments that describe the algorithm and\n> what the terms mean?\n\nI added to the large comment at the top of nodeHashjoin.c. I've also\nadded comments to a few of the new members in some structs. Plus I've\nadded some in-line comments to assist the reviewer that may or may not\nbe overkill in a final version.\n\n> Can you add some comments to describe what's happening when a batch is\n> entering fallback mode?\n...\n> Can you add some comments describing tuple relocation?\n...\n> Can you describe somewhere what all the bits for outer matches are for?\nAll three done.\n\nAlso, we kept the batch 0 spilling patch David Kimura authored [1]\nseparate so it could be discussed separately because we still had some\nquestions.\nIt would be great to discuss those, however, keeping them separate might\nbe more confusing -- I'm not sure.\n\n[1]\nhttps://www.postgresql.org/message-id/CAHnPFjQiYN83NjQ4KvjX19Wti%3D%3Duzyw8D24va56zJKzOt%2BB51A%40mail.gmail.com",
"msg_date": "Wed, 27 May 2020 19:25:50 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Wed, May 27, 2020 at 7:25 PM Melanie Plageman <melanieplageman@gmail.com>\nwrote:\n\n> I've attached a rebased patch which includes the \"provisionally detach\"\n> deadlock hazard fix approach\n>\n\nAlas, the \"provisional detach\" logic proved incorrect (see last point in\nthe list of changes included in the patch at bottom).\n\n\n> Also, we kept the batch 0 spilling patch David Kimura authored [1]\n> separate so it could be discussed separately because we still had some\n> questions.\n>\n\nThe serial batch 0 spilling is in the attached patch. Parallel batch 0\nspilling is still in a separate batch that David Kimura is working on.\n\nI've attached a rebased and updated patch with a few fixes:\n\n- semi-join fallback works now\n- serial batch 0 spilling in main patch\n- added instrumentation for stripes to the parallel case\n- SharedBits uses same SharedFileset as SharedTuplestore\n- reverted the optimization to allow workers to re-attach to a batch and\n help out with stripes if they are sure they pose no deadlock risk\n\nFor the last point, I discovered a pretty glaring problem with this\noptimization: I did not include the bitmap created by a worker while\nworking on its first participating stripe in the final combined bitmap.\nI only was combining the last bitmap file each worker worked on.\n\nI had the workers make new bitmaps for each time that they attached to\nthe batch and participated because having them keep an open file\ntracking information for a batch they are no longer attached to on the\nchance that they might return and work on that batch was a\nsynchronization nightmare. It was difficult to figure out when to close\nthe file if they never returned and hard to make sure that the combining\nworker is actually combining all the files from all participants who\nwere ever active.\n\nI am sure I can hack around those, but I think we need a better solution\noverall. After reverting those changes, loading and probing of stripes\nafter stripe 0 is serial. This is not only sub-optimal, it also means\nthat all the synchronization variables and code complexity around\ncoordinating work on fallback batches is practically wasted.\nSo, they have to be able to collaborate on stripes after the first\nstripe. This version of the patch has correct results and no deadlock\nhazard, however, it lacks parallelism on stripes after stripe 0.\nI am looking for ideas on how to address the deadlock hazard more\nefficiently.\n\nThe next big TODOs are:\n- come up with a better solution to the potential tuple emitting/barrier\n waiting deadlock issue\n- parallel batch 0 spilling complete\n\n-- \nMelanie Plageman",
"msg_date": "Mon, 8 Jun 2020 17:12:25 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, Jun 08, 2020 at 05:12:25PM -0700, Melanie Plageman wrote:\n>On Wed, May 27, 2020 at 7:25 PM Melanie Plageman <melanieplageman@gmail.com>\n>wrote:\n>\n>> I've attached a rebased patch which includes the \"provisionally detach\"\n>> deadlock hazard fix approach\n>>\n>\n>Alas, the \"provisional detach\" logic proved incorrect (see last point in\n>the list of changes included in the patch at bottom).\n>\n>\n>> Also, we kept the batch 0 spilling patch David Kimura authored [1]\n>> separate so it could be discussed separately because we still had some\n>> questions.\n>>\n>\n>The serial batch 0 spilling is in the attached patch. Parallel batch 0\n>spilling is still in a separate batch that David Kimura is working on.\n>\n>I've attached a rebased and updated patch with a few fixes:\n>\n>- semi-join fallback works now\n>- serial batch 0 spilling in main patch\n>- added instrumentation for stripes to the parallel case\n>- SharedBits uses same SharedFileset as SharedTuplestore\n>- reverted the optimization to allow workers to re-attach to a batch and\n> help out with stripes if they are sure they pose no deadlock risk\n>\n>For the last point, I discovered a pretty glaring problem with this\n>optimization: I did not include the bitmap created by a worker while\n>working on its first participating stripe in the final combined bitmap.\n>I only was combining the last bitmap file each worker worked on.\n>\n>I had the workers make new bitmaps for each time that they attached to\n>the batch and participated because having them keep an open file\n>tracking information for a batch they are no longer attached to on the\n>chance that they might return and work on that batch was a\n>synchronization nightmare. It was difficult to figure out when to close\n>the file if they never returned and hard to make sure that the combining\n>worker is actually combining all the files from all participants who\n>were ever active.\n>\n>I am sure I can hack around those, but I think we need a better solution\n>overall. After reverting those changes, loading and probing of stripes\n>after stripe 0 is serial. This is not only sub-optimal, it also means\n>that all the synchronization variables and code complexity around\n>coordinating work on fallback batches is practically wasted.\n>So, they have to be able to collaborate on stripes after the first\n>stripe. This version of the patch has correct results and no deadlock\n>hazard, however, it lacks parallelism on stripes after stripe 0.\n>I am looking for ideas on how to address the deadlock hazard more\n>efficiently.\n>\n>The next big TODOs are:\n>- come up with a better solution to the potential tuple emitting/barrier\n> waiting deadlock issue\n>- parallel batch 0 spilling complete\n>\n\n\nHi Melanie,\n\nI started looking at the patch to refresh my knowledge both of this\npatch and parallel hash join, but I think it needs a rebase. The\nchanges in 7897e3bb90 apparently touched some of the code. I assume\nyou're working on a patch addressing the remaining TODOS, right?\n\nI see you've switched to \"stripe\" naming - I find that a bit confusing,\nbecause when I hear stripe I think about RAID, where it means pieces of\ndata interleaved and stored on different devices. But maybe that's just\nme and it's a good name. 
Maybe it'd be better to keep the naming and\nonly tweak it at the end, not to disrupt reviews unnecessarily.\n\nNow, a couple of comments / questions about the code.\n\n\nnodeHash.c\n----------\n\n\n1) MultiExecPrivateHash says this\n\n /*\n * Not subject to skew optimization, so either insert normally\n * or save to batch file if it belongs to another stripe\n */\n\nI wonder what it means to \"belong to another stripe\". I understand what\nthat means for batches, which are identified by batchno computed from\nthe hash value. But I thought \"stripes\" are just work_mem-sized pieces\nof a batch, so I don't quite understand this. Especially when the code\ndoes not actually check \"which stripe\" the row belongs to.\n\n\n2) I find the fields hashloop_fallback rather confusing. We have one in\nHashJoinTable (and its array of BufFile items) and another one in\nParallelHashJoinBatch (this time just bool).\n\nI think HashJoinTable should be renamed to hashloopBatchFile (similarly\nto the other BufFile arrays). Although I'm not sure why we even need\nthis file, when we have innerBatchFile? BufFile(s) are not exactly free,\nin fact it's one of the problems for hashjoins with many batches.\n\n\n\n3) I'm a bit puzzled about this formula in ExecHashIncreaseNumBatches\n\n childbatch = (1U << (my_log2(hashtable->nbatch) - 1)) | hashtable->curbatch;\n\nand also about this comment\n\n /*\n * TODO: what to do about tuples that don't go to the child\n * batch or stay in the current batch? (this is why we are\n * counting tuples to child and curbatch with two diff\n * variables in case the tuples go to a batch that isn't the\n * child)\n */\n if (batchno == childbatch)\n childbatch_outgoing_tuples++;\n\nI thought each old batch is split into two new ones, and the tuples\neither stay in the current one, or are moved to the new one - which I\npresume is the childbatch, although I haven't tried to decode that\nformula. So where else could the tuple go, as the comment tried to\nsuggest?\n\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 24 Jun 2020 00:24:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "Hi Tomas,\n\nOn Tue, Jun 23, 2020 at 3:24 PM Tomas Vondra wrote:\n>\n> Now, a couple comments / questions about the code.\n>\n>\n> nodeHash.c\n> ----------\n>\n>\n> 1) MultiExecPrivateHash says this\n>\n> /*\n> * Not subject to skew optimization, so either insert normally\n> * or save to batch file if it belongs to another stripe\n> */\n>\n> I wonder what it means to \"belong to another stripe\". I understand what\n> that means for batches, which are identified by batchno computed from\n> the hash value. But I thought \"stripes\" are just work_mem-sized pieces\n> of a batch, so I don't quite understand this. Especially when the code\n> does not actually check \"which stripe\" the row belongs to.\n\nI have to concur that \"stripe\" did inspire a RAID vibe when I heard it,\nbut it seemed to be a better name than what it replaces\n\n> 3) I'm a bit puzzled about this formula in ExecHashIncreaseNumBatches\n>\n> childbatch = (1U << (my_log2(hashtable->nbatch) - 1)) | hashtable->curbatch;\n>\n> and also about this comment\n>\n> /*\n> * TODO: what to do about tuples that don't go to the child\n> * batch or stay in the current batch? (this is why we are\n> * counting tuples to child and curbatch with two diff\n> * variables in case the tuples go to a batch that isn't the\n> * child)\n> */\n> if (batchno == childbatch)\n> childbatch_outgoing_tuples++;\n>\n> I thought each old batch is split into two new ones, and the tuples\n> either stay in the current one, or are moved to the new one - which I\n> presume is the childbatch, although I haven't tried to decode that\n> formula. So where else could the tuple go, as the comment tried to\n> suggest?\n\nTrue, every old batch is split into two new ones, if you only consider\ntuples coming from the batch file that _still belong in there_. i.e.\nthere are tuples in the old batch file that belong to a future batch. As\nan example, if the current nbatch = 8, and we want to expand to nbatch =\n16, (old) batch 1 will split into (new) batch 1 and batch 9, but it can\nalready contain tuples that need to go into (current) batches 3, 5, and\n7 (soon-to-be batches 11, 13, and 15).\n\nCheers,\nJesse\n\n\n",
"msg_date": "Wed, 24 Jun 2020 08:55:11 -0700",
"msg_from": "Jesse Zhang <sbjesse@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Tue, Jun 23, 2020 at 3:24 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> I started looking at the patch to refresh my knowledge both of this\n> patch and parallel hash join, but I think it needs a rebase. The\n> changes in 7897e3bb90 apparently touched some of the code.\n\n\nThanks so much for the review, Tomas!\n\nI've attached a rebased patch which also contains updates discussed\nbelow.\n\n\n> I assume\n> you're working on a patch addressing the remaining TODOS, right?\n>\n\nI wanted to get some feedback on the patch before working through the\nTODOs to make sure I was on the right track.\nNow that you are reviewing this, I will focus all my attention\non addressing your feedback. If there are any TODOs that you feel are\nmost important, let me know, so I can start with those. Otherwise, I\nwill prioritize parallel batch 0 spilling.\n\nI wanted to get some feedback on the patch before working through the\nTODOs to make sure I was on the right track.\n\nNow that you are reviewing this, I will focus all my attention\non addressing your feedback. If there are any TODOs that you feel are\nmost important, let me know, so I can start with those.\n\nOtherwise, I will prioritize parallel batch 0 spilling.\nDavid Kimura plans to do a bit of work on on parallel hash join batch 0\nspilling tomorrow. Whatever is left after that, I will pick up next\nweek. Parallel hash join batch 0 spilling is the last large TODO that I\nhad.\n\nMy plan was to then focus on the feedback (either about which TODOs are\nmost important or outside of the TODOs I've identified) I get from you\nand anyone else who reviews this.\n\n\n>\n> I see you've switched to \"stripe\" naming - I find that a bit confusing,\n> because when I hear stripe I think about RAID, where it means pieces of\n> data interleaved and stored on different devices. But maybe that's just\n> me and it's a good name. Maybe it'd be better to keep the naming and\n> only tweak it at the end, not to disrupt reviews unnecessarily.\n>\n\nI hear you about \"stripe\". I still quite like it, especially as compared\nto its predecessor (originally, I called them chunks -- which is\nimpossible given that SharedTuplestoreChunks are a thing).\n\nFor ease of review, as you mentioned, I will keep the name for now. I am\nopen to changing it later, though.\n\nI've been soliciting ideas for alternatives and, so far, folks have\nsuggested \"stride\", \"step\", \"flock\", \"herd\", \"cohort\", and \"school\". I'm\nstill on team \"stripe\" though, as it stands.\n\n\n>\n> nodeHash.c\n> ----------\n>\n>\n> 1) MultiExecPrivateHash says this\n>\n> /*\n> * Not subject to skew optimization, so either insert normally\n> * or save to batch file if it belongs to another stripe\n> */\n>\n> I wonder what it means to \"belong to another stripe\". I understand what\n> that means for batches, which are identified by batchno computed from\n> the hash value. But I thought \"stripes\" are just work_mem-sized pieces\n> of a batch, so I don't quite understand this. Especially when the code\n> does not actually check \"which stripe\" the row belongs to.\n>\n>\nI agree this was confusing.\n\n\"belongs to another stripe\" meant here that if batch 0 falls back and we\nare still loading it, once we've filled up work_mem, we need to start\nsaving those tuples to a spill file for batch 0. 
I've changed the\ncomment to this:\n\n- * or save to batch file if it belongs to another stripe\n+ * or save to batch file if batch 0 falls back and we have\n+ * already filled the hashtable up to space_allowed.\n\n\n> 2) I find the fields hashloop_fallback rather confusing. We have one in\n> HashJoinTable (and it's array of BufFile items) and another one in\n> ParallelHashJoinBatch (this time just bool).\n>\n> I think HashJoinTable should be renamed to hashloopBatchFile (similarly\n> to the other BufFile arrays).\n\n\nI think you are right about the name. I've changed the name in\nHashJoinTableData to hashloopBatchFile.\n\nThe array of BufFiles hashloop_fallback was only used by serial\nhashjoin. The boolean hashloop_fallback variable is used only by\nparallel hashjoin.\n\nThe reason I had them named the same thing is that I thought it would be\nnice to have a variable with the same name to indicate if a batch \"fell\nback\" for both parallel and serial hashjoin--especially since we check\nit in the main hashjoin state machine used by parallel and serial\nhashjoin.\n\nIn serial hashjoin, the BufFiles aren't identified by name, so I kept\nthem in that array. In parallel hashjoin, each ParallelHashJoinBatch has\nthe status saved (in the struct).\nSo, both represented the fall back status of a batch.\n\nHowever, I agree with you, so I've renamed the serial one to\nhashloopBatchFile.\n\n>\n> Although I'm not sure why we even need\n> this file, when we have innerBatchFile? BufFile(s) are not exactly free,\n> in fact it's one of the problems for hashjoins with many batches.\n>\n>\nInteresting -- it didn't even occur to me to combine the bitmap with the\ninner side batch file data.\nIt definitely seems like a good idea to save the BufFile given that so\nlittle data will likely go in it and that it has a 1-1 relationship with\ninner side batches.\n\nHow might it work? Would you reserve some space at the beginning of the\nfile? When would you reserve the bytes (before adding tuples you won't\nknow how many bytes you need, so it might be hard to make sure there is\nenough space.) Would all inner side files have space reserved or just\nfallback batches?\n\n-- \nMelanie Plageman",
"msg_date": "Thu, 25 Jun 2020 15:09:44 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Thu, Jun 25, 2020 at 03:09:44PM -0700, Melanie Plageman wrote:\n>On Tue, Jun 23, 2020 at 3:24 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\n>wrote:\n>\n>> I started looking at the patch to refresh my knowledge both of this\n>> patch and parallel hash join, but I think it needs a rebase. The\n>> changes in 7897e3bb90 apparently touched some of the code.\n>\n>\n>Thanks so much for the review, Tomas!\n>\n>I've attached a rebased patch which also contains updates discussed\n>below.\n>\n\nThanks.\n\n>\n>> I assume\n>> you're working on a patch addressing the remaining TODOS, right?\n>>\n>\n>I wanted to get some feedback on the patch before working through the\n>TODOs to make sure I was on the right track.\n>\n>Now that you are reviewing this, I will focus all my attention\n>on addressing your feedback. If there are any TODOs that you feel are\n>most important, let me know, so I can start with those.\n>\n>Otherwise, I will prioritize parallel batch 0 spilling.\n\nFeel free to work on the batch 0 spilling, please. I still need to get\nfamiliar with various parts of the parallel hash join etc. so I don't\nhave any immediate feedback which TODOs to work on first.\n\n>David Kimura plans to do a bit of work on on parallel hash join batch 0\n>spilling tomorrow. Whatever is left after that, I will pick up next\n>week. Parallel hash join batch 0 spilling is the last large TODO that I\n>had.\n>\n>My plan was to then focus on the feedback (either about which TODOs are\n>most important or outside of the TODOs I've identified) I get from you\n>and anyone else who reviews this.\n>\n\nOK.\n\n>>\n>> I see you've switched to \"stripe\" naming - I find that a bit confusing,\n>> because when I hear stripe I think about RAID, where it means pieces of\n>> data interleaved and stored on different devices. But maybe that's just\n>> me and it's a good name. Maybe it'd be better to keep the naming and\n>> only tweak it at the end, not to disrupt reviews unnecessarily.\n>>\n>\n>I hear you about \"stripe\". I still quite like it, especially as compared\n>to its predecessor (originally, I called them chunks -- which is\n>impossible given that SharedTuplestoreChunks are a thing).\n>\n\nI don't think using chunks in one place means we can't use it elsewhere\nin a different context. I'm sure we have \"chunks\" in other places. But\nlet's not bikeshed on this too much.\n\n>For ease of review, as you mentioned, I will keep the name for now. I am\n>open to changing it later, though.\n>\n>I've been soliciting ideas for alternatives and, so far, folks have\n>suggested \"stride\", \"step\", \"flock\", \"herd\", \"cohort\", and \"school\". I'm\n>still on team \"stripe\" though, as it stands.\n>\n\n;-)\n\n>\n>>\n>> nodeHash.c\n>> ----------\n>>\n>>\n>> 1) MultiExecPrivateHash says this\n>>\n>> /*\n>> * Not subject to skew optimization, so either insert normally\n>> * or save to batch file if it belongs to another stripe\n>> */\n>>\n>> I wonder what it means to \"belong to another stripe\". I understand what\n>> that means for batches, which are identified by batchno computed from\n>> the hash value. But I thought \"stripes\" are just work_mem-sized pieces\n>> of a batch, so I don't quite understand this. 
Especially when the code\n>> does not actually check \"which stripe\" the row belongs to.\n>>\n>>\n>I agree this was confusing.\n>\n>\"belongs to another stripe\" meant here that if batch 0 falls back and we\n>are still loading it, once we've filled up work_mem, we need to start\n>saving those tuples to a spill file for batch 0. I've changed the\n>comment to this:\n>\n>- * or save to batch file if it belongs to another stripe\n>+ * or save to batch file if batch 0 falls back and we have\n>+ * already filled the hashtable up to space_allowed.\n>\n\nOK. Silly question - what does \"batch 0 falls back\" mean? Does it mean\nthat we realized the hash table for batch 0 would not fit into work_mem,\nso we switched to the \"hashloop\" strategy?\n\n>\n>> 2) I find the fields hashloop_fallback rather confusing. We have one in\n>> HashJoinTable (and its array of BufFile items) and another one in\n>> ParallelHashJoinBatch (this time just bool).\n>>\n>> I think HashJoinTable should be renamed to hashloopBatchFile (similarly\n>> to the other BufFile arrays).\n>\n>\n>I think you are right about the name. I've changed the name in\n>HashJoinTableData to hashloopBatchFile.\n>\n>The array of BufFiles hashloop_fallback was only used by serial\n>hashjoin. The boolean hashloop_fallback variable is used only by\n>parallel hashjoin.\n>\n>The reason I had them named the same thing is that I thought it would be\n>nice to have a variable with the same name to indicate if a batch \"fell\n>back\" for both parallel and serial hashjoin--especially since we check\n>it in the main hashjoin state machine used by parallel and serial\n>hashjoin.\n>\n>In serial hashjoin, the BufFiles aren't identified by name, so I kept\n>them in that array. In parallel hashjoin, each ParallelHashJoinBatch has\n>the status saved (in the struct).\n>So, both represented the fall back status of a batch.\n>\n>However, I agree with you, so I've renamed the serial one to\n>hashloopBatchFile.\n>\n\nOK\n\n>>\n>> Although I'm not sure why we even need\n>> this file, when we have innerBatchFile? BufFile(s) are not exactly free,\n>> in fact it's one of the problems for hashjoins with many batches.\n>>\n>>\n>Interesting -- it didn't even occur to me to combine the bitmap with the\n>inner side batch file data.\n>It definitely seems like a good idea to save the BufFile given that so\n>little data will likely go in it and that it has a 1-1 relationship with\n>inner side batches.\n>\n>How might it work? Would you reserve some space at the beginning of the\n>file? When would you reserve the bytes (before adding tuples you won't\n>know how many bytes you need, so it might be hard to make sure there is\n>enough space.) Would all inner side files have space reserved or just\n>fallback batches?\n>\n\nOh! So the hashloopBatchFile is only used for the bitmap? I hadn't\nrealized that. In that case it probably makes sense to keep it separate\nfrom the files with spilled tuples; interleaving that somehow would be\nway too complex, I think.\n\nHowever, do we need an array of those files? I thought we only need the\nbitmap until we process all rows from each \"stripe\" and then we can\nthrow it away, right? Which would also mean we don't need to worry about\nthe memory usage too much, because the 8kB buffer will go away after\ncalling BufFileClose.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
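Tomas's suggestion, sketched against the real BufFile API (BufFileCreateTemp and BufFileClose are the actual functions; the fragment assumes backend context and elides everything else):

    #include "storage/buffile.h"

    static void
    probe_fallback_batch(void)
    {
        /* one bitmap file per fallback batch, not an array of them */
        BufFile *bitmap = BufFileCreateTemp(false);

        /* ... probe every stripe of the batch, setting a bit for each
         * outer tuple that finds a match ... */

        /* ... scan the bitmap once, emitting unmatched outer tuples ... */

        BufFileClose(bitmap);   /* the ~8kB buffer is released here */
    }

The point is lifecycle, not mechanism: since the bitmap is only consulted after the last stripe of its batch, closing it right there bounds the number of open files and buffers to one per batch being processed.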
"msg_date": "Fri, 26 Jun 2020 02:22:32 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "Attached is the current version of adaptive hash join with two\nsignificant changes as compared to v10:\n\n1) Implements spilling of batch 0 for parallel-aware parallel hash join.\n2) Moves \"striping\" of fallback batches from \"build\" to \"load\" stage\nIt includes several smaller changes as well.\n\nBatch 0 spilling is necessary when the hash table for batch 0 cannot fit\nin memory and allows us to use the \"hashloop\" strategy for batch 0.\n\nSpilling of batch 0 necessitated the addition of a few new pieces of\ncode. The most noticeable one is probably the hash table eviction phase\nmachine. If batch 0 was marked as a \"fallback\" batch in\nExecParallelHashIncreaseNumBatches() PHJ_GROW_BATCHES_DECIDING phase,\nany future attempt to insert a tuple that would exceed the space_allowed\ntriggers eviction of the hash table.\nExecParallelHashTableEvictBatch0() will evict all batch 0 tuples in\nmemory into spill files in a batch 0 inner SharedTuplestore.\n\nThis means that when repartitioning batch 0 in the future, both the\nbatch 0 spill file and the hash table need to be drained and relocated\ninto the new generation of batches and the hash table. If enough memory\nis freed up from batch 0 tuples relocating to other batches, then it is\npossible that tuples from the batch 0 spill files will go back into the\nhash table.\nAfter batch 0 is evicted, the build stage proceeds as normal.\n\nThe main alternative to this design that we considered was to \"close\" the\nhash table after it is full. That is, if batch 0 has been marked to fall\nback, once it is full, all subsequent tuples pulled from the outer child\nwould bypass the hash table altogether and go directly into a spill\nfile.\n\nWe chose the hash table eviction route because I thought it might be\nbetter to write chunks of the hashtable into a file together rather than\nsporadically write new batch 0 tuples to spill files as they are\npulled out of the child node. However, since the same sts_puttuple() API\nis used in both cases, it is highly possible this won't actually matter\nand we will do the same amount of I/O.\nBoth designs involved changing the flow of the code for inserting and\nrepartitioning tuples, so I figured that I would choose one, do some\ntesting, and try the other one later after more discussion and review.\n\nThis patch also introduces a significant change to how tuples are split\ninto stripes. Previously, during the build stage, tuples were written to\nspill files in the SharedTuplestore with a stripe number in the metadata\nsection of the MinimalTuple.\nFor a batch that had been designated a \"fallback\" batch,\nonce the space_allowed had been exhausted, the shared stripe number\nwould be incremented and the new stripe number was written in the tuple\nmetadata to the files. Then, during loading, tuples were only loaded\ninto the hashtable if their stripe number matched the current stripe number.\n\nThis had several downsides. It introduced a couple new shared variables --\nthe current stripe number for the batch and its size.\nIn master, during the normal mode of the \"build\" stage, shared variables\nfor the size or estimated_size of the batch are checked on each\nallocation of a STS Chunk or HashMemoryChunk, however, during\nrepartitioning, because bailing out early was not an option, workers\ncould use backend-local variables to keep track of size and merge them\nat the end of repartitioning. This wasn't possible if we needed accurate\nstripe numbers written into the tuples. 
This meant that we had to add\nnew shared variable accesses to repartitioning.\n\nTo avoid this, Deep and I worked on moving the \"striping\" logic from the\n\"build\" stage to the \"load\" stage for batches. Serial hash join already\ndid striping in this way. This patch now pauses loading once the\nspace_allowed has been exhausted for parallel hash join as well. The\ntricky part was keeping track of multiple read_pages for a given file.\n\nWhen tuples had explicit stripe numbers, we simply rewound the read_page\nin the SharedTuplestoreParticipant to the earliest SharedTuplestoreChunk\nthat anyone had read and relied on the stripe numbers to avoid loading\ntuples more than once. Now, each worker participating in reading from\nthe SharedTuplestore could have received a read_page \"assignment\" (four\nblocks, currently) and then failed to allocate a HashMemoryChunk. We\ncannot risk rewinding the read_page because there could be\nSharedTuplestoreChunks that have already been loaded in between ones\nthat have not.\n\nThe design we went with was to \"overflow\" the tuples from this\nSharedTuplestoreChunk onto the end of the write_file which this worker\nwrote--if it participated in writing this STS--or by making a new\nwrite_file if it did not participate in writing. This entailed keeping\ntrack of who participated in the write phase. SharedTuplestore\nparticipation now has three \"modes\"-- reading, writing, and appending.\nDuring appending, workers can write to their own file and read from any\nfile.\n\nOne of the alternative designs I considered was to store the offset and\nlength of leftover blocks that still needed to be loaded into the hash\ntable in the SharedTuplestoreParticipant data structure. Then, workers\nwould pick up these \"assignments\". It is basically a\nSharedTuplestoreParticipant work queue.\nThe main stumbling block I faced here was allocating a variable number of\nthings in shared memory. You don't know how many read participants will\nread from the file and how many stripes there will be (until you've\nloaded the file). In the worst case, you would need space for\nnparticipants * nstripes - 1 offset/length combos.\nSince I don't know how many stripes I have until I've loaded the file, I\ncan't allocate shared memory for this up front.\n\nThe downside of the \"append overflow\" design is that, currently, all\nworkers participating in loading a fallback batch write an overflow\nchunk for every fallback stripe.\nIt seems like something could be done to check if there is space in the\nhashtable before accepting an assignment of blocks to read from the\nSharedTuplestore and moving the shared variable read_page. It might\nreduce instances in which workers have to overflow. However, I tried\nthis and it is very intrusive on the SharedTuplestore API (it would have\nto know about the hash table). Also, oversized tuples would not be\naddressed by this pre-assignment check since memory is allocated a\nHashMemoryChunk at a time. 
So, even if this was solved, you would need\noverflow functionality.\n\nOne note is that I had to comment out a test in join_hash.sql which\ninserts tuples larger than work_mem in size (each), because it no longer\nsuccessfully executes.\nAlso, the stripe number is not deterministic, so sometimes the tests that\ncompare fallback batches' number of stripes fail (also in join_hash.sql).\n\nMajor outstanding TODOs:\n--\n- Potential redesign of stripe loading pausing and resumption\n- The instrumentation for parallel fallback batches has some problems\n- Deadlock hazard avoidance design of the stripe barrier still needs work\n- Assorted smaller TODOs in the code\n\n\nOn Thu, Jun 25, 2020 at 5:22 PM Tomas Vondra <tomas.vondra@2ndquadrant.com>\nwrote:\n\n> On Thu, Jun 25, 2020 at 03:09:44PM -0700, Melanie Plageman wrote:\n> >On Tue, Jun 23, 2020 at 3:24 PM Tomas Vondra <\n> tomas.vondra@2ndquadrant.com>\n> >wrote:\n> >\n> >> I assume\n> >> you're working on a patch addressing the remaining TODOS, right?\n> >>\n> >\n> >I wanted to get some feedback on the patch before working through the\n> >TODOs to make sure I was on the right track.\n> >\n> >Now that you are reviewing this, I will focus all my attention\n> >on addressing your feedback. If there are any TODOs that you feel are\n> >most important, let me know, so I can start with those.\n> >\n> >Otherwise, I will prioritize parallel batch 0 spilling.\n>\n> Feel free to work on the batch 0 spilling, please. I still need to get\n> familiar with various parts of the parallel hash join etc. so I don't\n> have any immediate feedback which TODOs to work on first.\n>\n> >David Kimura plans to do a bit of work on parallel hash join batch 0\n> >spilling tomorrow. Whatever is left after that, I will pick up next\n> >week. Parallel hash join batch 0 spilling is the last large TODO that I\n> >had.\n> >\n> >My plan was to then focus on the feedback (either about which TODOs are\n> >most important or outside of the TODOs I've identified) I get from you\n> >and anyone else who reviews this.\n> >\n>\n> OK.\n>\n\nSee list of patch contents above.\n\nTomas, I wasn't sure if you would want a patchset which included a\ncommit with just the differences between this version and v10 since you\nhad already started reviewing it.\nThis commit [1] is on a branch off of my fork that has just the delta\nbetween v10 and v11.\nAs a warning, I have added a few updates to comments and such after\nsquashing the two in my current branch (which is what is in this patch).\nI didn't intend to maintain the commits separately as I felt it would be\nmore confusing for other reviewers.\n\n\n>\n> >\n> >>\n> >> nodeHash.c\n> >> ----------\n> >>\n> >>\n> >> 1) MultiExecPrivateHash says this\n> >>\n> >> /*\n> >> * Not subject to skew optimization, so either insert normally\n> >> * or save to batch file if it belongs to another stripe\n> >> */\n> >>\n> >> I wonder what it means to \"belong to another stripe\". I understand what\n> >> that means for batches, which are identified by batchno computed from\n> >> the hash value. But I thought \"stripes\" are just work_mem-sized pieces\n> >> of a batch, so I don't quite understand this. Especially when the code\n> >> does not actually check \"which stripe\" the row belongs to.\n> >>\n> >>\n> >I agree this was confusing.\n> >\n> >\"belongs to another stripe\" meant here that if batch 0 falls back and we\n> >are still loading it, once we've filled up work_mem, we need to start\n> >saving those tuples to a spill file for batch 0. 
I've changed the\n> >comment to this:\n> >\n> >- * or save to batch file if it belongs to another stripe\n> >+ * or save to batch file if batch 0 falls back and we have\n> >+ * already filled the hashtable up to space_allowed.\n> >\n>\n> OK. Silly question - what does \"batch 0 falls back\" mean? Does it mean\n> that we realized the hash table for batch 0 would not fit into work_mem,\n> so we switched to the \"hashloop\" strategy?\n>\n\nExactly.\n\n\n> >\n> >> 2) I find the fields hashloop_fallback rather confusing. We have one in\n> >> HashJoinTable (and it's array of BufFile items) and another one in\n> >> ParallelHashJoinBatch (this time just bool).\n> >>\n> >> I think HashJoinTable should be renamed to hashloopBatchFile (similarly\n> >> to the other BufFile arrays).\n> >\n> >\n> >I think you are right about the name. I've changed the name in\n> >HashJoinTableData to hashloopBatchFile.\n> >\n> >The array of BufFiles hashloop_fallback was only used by serial\n> >hashjoin. The boolean hashloop_fallback variable is used only by\n> >parallel hashjoin.\n> >\n> >The reason I had them named the same thing is that I thought it would be\n> >nice to have a variable with the same name to indicate if a batch \"fell\n> >back\" for both parallel and serial hashjoin--especially since we check\n> >it in the main hashjoin state machine used by parallel and serial\n> >hashjoin.\n> >\n> >In serial hashjoin, the BufFiles aren't identified by name, so I kept\n> >them in that array. In parallel hashjoin, each ParallelHashJoinBatch has\n> >the status saved (in the struct).\n> >So, both represented the fall back status of a batch.\n> >\n> >However, I agree with you, so I've renamed the serial one to\n> >hashloopBatchFile.\n> >\n>\n> OK\n>\n> >>\n> >> Although I'm not sure why we even need\n> >> this file, when we have innerBatchFile? BufFile(s) are not exactly free,\n> >> in fact it's one of the problems for hashjoins with many batches.\n> >>\n> >>\n> >Interesting -- it didn't even occur to me to combine the bitmap with the\n> >inner side batch file data.\n> >It definitely seems like a good idea to save the BufFile given that so\n> >little data will likely go in it and that it has a 1-1 relationship with\n> >inner side batches.\n> >\n> >How might it work? Would you reserve some space at the beginning of the\n> >file? When would you reserve the bytes (before adding tuples you won't\n> >know how many bytes you need, so it might be hard to make sure there is\n> >enough space.) Would all inner side files have space reserved or just\n> >fallback batches?\n> >\n>\n> Oh! So the hashloopBatchFile is only used for the bitmap? I haven't\n> realized that. In that case it probably makes sense to keep it separate\n> from the files with spilled tuples, interleaving that somehow would be\n> way too complex, I think.\n>\n> However, do we need an array of those files? I thought we only need the\n> bitmap until we process all rows from each \"stripe\" and then we can\n> throw it away, right? Which would also mean we don't need to worry about\n> the memory usage too much, because the 8kB buffer will go away after\n> calling BufFileClose.\n>\n>\n Good point! I will try this change.\n\nRegards,\nMelanie (VMWare)\n\n[1]\nhttps://github.com/melanieplageman/postgres/commit/c6843ef9e0767f80d928d87bdb1078c9d20346e3",
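To summarize the batch 0 eviction rule from the message above in code form -- a hypothetical sketch with illustrative names (the patch's actual entry point is ExecParallelHashTableEvictBatch0):

    #include <stdbool.h>
    #include <stddef.h>

    /* stands in for dumping every in-memory batch 0 chunk into the inner
     * SharedTuplestore spill files */
    static void evict_batch0_to_spill_files(void) { }

    /*
     * Once batch 0 is marked fallback, the first insert that would push the
     * hash table past space_allowed triggers a full eviction of batch 0's
     * in-memory tuples; the build then continues as normal, with batch 0
     * overflow going to its spill files.
     */
    static void
    maybe_evict_batch0(bool batch0_fallback, size_t used, size_t incoming,
                       size_t space_allowed)
    {
        if (batch0_fallback && used + incoming > space_allowed)
            evict_batch0_to_spill_files();
    }

The alternative design mentioned above ("close" the hash table and send all later batch 0 tuples straight to the spill file) differs only in when the I/O happens, which is why the message expects similar total I/O either way.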
"msg_date": "Mon, 31 Aug 2020 15:13:06 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
},
{
"msg_contents": "On Mon, Aug 31, 2020 at 03:13:06PM -0700, Melanie Plageman wrote:\n> Attached is the current version of adaptive hash join with two\n> significant changes as compared to v10:\n\nThe CF bot is complaining about a regression test failure:\n@@ -2465,7 +2465,7 @@\n Gather (actual rows=469 loops=1)\n Workers Planned: 1\n Workers Launched: 1\n- -> Parallel Hash Left Join (actual rows=234 loops=2)\n+ -> Parallel Hash Left Join (actual rows=235 loops=2)\n--\nMichael",
"msg_date": "Thu, 24 Sep 2020 12:39:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding hash join batch explosions with extreme skew and weird\n stats"
}
] |
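For readers who want to reproduce the batching behavior discussed in the thread above, the batch count of a hash join is visible in EXPLAIN ANALYZE output. A minimal SQL sketch, assuming two pre-populated tables (big_fact and big_dim are illustrative names, not from the patch's test suite):

    -- Force a small hash table so the join spills into multiple batches.
    SET work_mem = '64kB';
    EXPLAIN (ANALYZE, COSTS OFF)
    SELECT count(*)
    FROM big_fact f
    JOIN big_dim d ON d.id = f.dim_id;
    -- The Hash node reports "Buckets: ... Batches: ... Memory Usage: ...".
    -- With extreme skew on the join key, one batch can keep exceeding
    -- work_mem no matter how often it is split; that is the case the
    -- adaptive "hashloop" fallback discussed in this thread targets.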
[
{
"msg_contents": "Hi!\n\nGreetings from OSGeo Code sprint in Minneapolis :)\n\nWe're trying to make FULL JOIN on equality of geometry and can't figure out\nwhy it doesn't work.\n\nHere's reproducer, it works on bytea but not on PostGIS geometry throwing\nout\n\nERROR: FULL JOIN is only supported with merge-joinable or hash-joinable\njoin conditions\n\nhttps://trac.osgeo.org/postgis/ticket/4394\n\nWe already have a btree opclass with equality:\nhttps://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L420\n\n\nWe also have hash equality opclass:\nhttps://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L440\n\n\nReading through Postgres documentation I can't figure out what else shall\nwe do for this join to work. How do we make it work?\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nHi!Greetings from OSGeo Code sprint in Minneapolis :)We're trying to make FULL JOIN on equality of geometry and can't figure out why it doesn't work.Here's reproducer, it works on bytea but not on PostGIS geometry throwing out ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable join conditionshttps://trac.osgeo.org/postgis/ticket/4394 We already have a btree opclass with equality: https://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L420 We also have hash equality opclass:https://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L440 Reading through Postgres documentation I can't figure out what else shall we do for this join to work. How do we make it work?-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa",
"msg_date": "Thu, 16 May 2019 11:05:53 -0500",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "How do we support FULL JOIN on PostGIS types?"
},
{
"msg_contents": "Hi,\n\nThanks a lot RhodiumToad on IRC for suggestion of setting HASHES, MERGES on\nOPERATOR =.\n\nNow we have other problem: how do we set these flags on upgrade to new\nversion of extension? Dropping an OPERATOR = will drop all indexes an views\ndepending on it so isn't really an option.\n\nAlso, if someone can sneak \"ERROR: FULL JOIN is only supported with\nmerge-joinable or hash-joinable join conditions\" keywords into\nhttps://www.postgresql.org/docs/current/xoper-optimization.html#id-1.8.3.17.8\nit would greatly help future extension writers - it's not possible to\ngoogle this page out by the error message.\n\nOn Thu, May 16, 2019 at 7:05 PM Darafei \"Komяpa\" Praliaskouski <\nme@komzpa.net> wrote:\n\n> Hi!\n>\n> Greetings from OSGeo Code sprint in Minneapolis :)\n>\n> We're trying to make FULL JOIN on equality of geometry and can't figure\n> out why it doesn't work.\n>\n> Here's reproducer, it works on bytea but not on PostGIS geometry throwing\n> out\n>\n> ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable\n> join conditions\n>\n> https://trac.osgeo.org/postgis/ticket/4394\n>\n> We already have a btree opclass with equality:\n>\n> https://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L420\n>\n>\n> We also have hash equality opclass:\n>\n> https://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L440\n>\n>\n> Reading through Postgres documentation I can't figure out what else shall\n> we do for this join to work. How do we make it work?\n>\n> --\n> Darafei Praliaskouski\n> Support me: http://patreon.com/komzpa\n>\n\n\n-- \nDarafei Praliaskouski\nSupport me: http://patreon.com/komzpa\n\nHi,Thanks a lot RhodiumToad on IRC for suggestion of setting HASHES, MERGES on OPERATOR =.Now we have other problem: how do we set these flags on upgrade to new version of extension? Dropping an OPERATOR = will drop all indexes an views depending on it so isn't really an option.Also, if someone can sneak \"ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable join conditions\" keywords into https://www.postgresql.org/docs/current/xoper-optimization.html#id-1.8.3.17.8 it would greatly help future extension writers - it's not possible to google this page out by the error message.On Thu, May 16, 2019 at 7:05 PM Darafei \"Komяpa\" Praliaskouski <me@komzpa.net> wrote:Hi!Greetings from OSGeo Code sprint in Minneapolis :)We're trying to make FULL JOIN on equality of geometry and can't figure out why it doesn't work.Here's reproducer, it works on bytea but not on PostGIS geometry throwing out ERROR: FULL JOIN is only supported with merge-joinable or hash-joinable join conditionshttps://trac.osgeo.org/postgis/ticket/4394 We already have a btree opclass with equality: https://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L420 We also have hash equality opclass:https://github.com/postgis/postgis/blob/svn-trunk/postgis/postgis.sql.in#L440 Reading through Postgres documentation I can't figure out what else shall we do for this join to work. How do we make it work?-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa\n-- Darafei PraliaskouskiSupport me: http://patreon.com/komzpa",
"msg_date": "Mon, 3 Jun 2019 12:55:37 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": true,
"msg_subject": "Re: How do we support FULL JOIN on PostGIS types?"
},
{
"msg_contents": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net> writes:\n> Thanks a lot RhodiumToad on IRC for suggestion of setting HASHES, MERGES on\n> OPERATOR =.\n> Now we have other problem: how do we set these flags on upgrade to new\n> version of extension? Dropping an OPERATOR = will drop all indexes an views\n> depending on it so isn't really an option.\n\nI think you're going to have to use a direct UPDATE on pg_operator in\nthe extension update script :-(. Perhaps ALTER OPERATOR should be able\nto handle changing these flags, but for now it can't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 09:56:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How do we support FULL JOIN on PostGIS types?"
}
] |
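For context, the catalog tweak Tom Lane suggests would look roughly like the following in an extension update script. This is only a sketch, not the actual PostGIS script; it assumes PostGIS's geometry type is installed and that the operator to flag is "=" over two geometry operands:

    -- Mark the existing geometry equality operator as merge- and
    -- hash-joinable without dropping it (dropping would cascade to
    -- dependent indexes and views).
    UPDATE pg_catalog.pg_operator
       SET oprcanmerge = true,
           oprcanhash  = true
     WHERE oprname  = '='
       AND oprleft  = 'geometry'::regtype
       AND oprright = 'geometry'::regtype;

A direct catalog UPDATE requires superuser and bypasses the usual dependency checks, which is why ALTER OPERATOR support would be the cleaner long-term answer.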
[
{
"msg_contents": "Hi,\n\nConsider the below test:\n\ncreate tablespace mytbs location '/home/rushabh/mywork/workspace/pg/';\ncreate table test ( a int , b int ) partition by list (a);\n\nset default_tablespace to mytbs;\ncreate table test_p1 partition of test for values in (1);\n\nIn the above test, after the setting the default_tablespace I am creating\na partition table and expecting that to get created under \"mytbs\"\ntablespace.\n\nBut that is not the case:\n\npostgres@66247=#select relname, reltablespace from pg_class where relname =\n'test_p1';\n relname | reltablespace\n---------+---------------\n test_p1 | 0\n(1 row)\n\nI noticed the behaviour change for default_tablespace with partition table\nwith below commit.\n\ncommit 87259588d0ab0b8e742e30596afa7ae25caadb18\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Thu Apr 25 10:20:23 2019 -0400\n\n Fix tablespace inheritance for partitioned rels\n\n Commit ca4103025dfe left a few loose ends. The most important one\n (broken pg_dump output) is already fixed by virtue of commit\n 3b23552ad8bb, but some things remained:\n\nI don't think that new behaviour is intended and if it's an intended change\nthan need to fix pg_dump as well - other wise lets say,\n1) create the above test on v11\n2) take a dump\n3) restore on v12\n\nwill end up partition into wrong tablesapce.\n\nLooking at the commit changes, it seems like at condition when no other\ntablespace is specified, we default the tablespace to the parent partitioned\ntable's.\n\n else if (stmt->partbound)\n {\n /*\n * For partitions, when no other tablespace is specified, we default\n * the tablespace to the parent partitioned table's.\n */\n Assert(list_length(inheritOids) == 1);\n tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n }\n\nBut here it doesn't consider the default_tablespace if the parent\npartitioned\ntablespace is an InvalidOid (which was the care before this commit).\n\nPFA patch to fix the same.\n\nThanks,\n\n-- \nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Fri, 17 May 2019 09:10:10 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": true,
"msg_subject": "behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "Agree that this behavior change seems unintentional.\n\nOn 2019/05/17 12:40, Rushabh Lathia wrote:\n> Looking at the commit changes, it seems like at condition when no other\n> tablespace is specified, we default the tablespace to the parent partitioned\n> table's.\n> \n> else if (stmt->partbound)\n> {\n> /*\n> * For partitions, when no other tablespace is specified, we default\n> * the tablespace to the parent partitioned table's.\n> */\n> Assert(list_length(inheritOids) == 1);\n> tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n> }\n> \n> But here it doesn't consider the default_tablespace if the parent\n> partitioned\n> tablespace is an InvalidOid (which was the care before this commit).\n> \n> PFA patch to fix the same.\n\n+\n+\t\tif (!OidIsValid(tablespaceId))\n+\t\t\ttablespaceId = GetDefaultTablespace(stmt->relation->relpersistence,\npartitioned);\n \t}\n \telse\n \t\ttablespaceId = GetDefaultTablespace(stmt->relation->relpersistence,\n\nWhy not change it like this instead:\n\n@@ -681,7 +681,8 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid\nownerId,\n Assert(list_length(inheritOids) == 1);\n tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n }\n- else\n+\n+ if (!OidIsValid(tablespaceId))\n tablespaceId = GetDefaultTablespace(stmt->relation->relpersistence,\n partitioned);\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 17 May 2019 14:00:22 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On 2019/05/17 12:40, Rushabh Lathia wrote:\n> Hi,\n> \n> Consider the below test:\n> \n> create tablespace mytbs location '/home/rushabh/mywork/workspace/pg/';\n> create table test ( a int , b int ) partition by list (a);\n> \n> set default_tablespace to mytbs;\n> create table test_p1 partition of test for values in (1);\n> \n> In the above test, after the setting the default_tablespace I am creating\n> a partition table and expecting that to get created under \"mytbs\"\n> tablespace.\n> \n> But that is not the case:\n> \n> postgres@66247=#select relname, reltablespace from pg_class where relname =\n> 'test_p1';\n> relname | reltablespace\n> ---------+---------------\n> test_p1 | 0\n> (1 row)\n> \n> I noticed the behaviour change for default_tablespace with partition table\n> with below commit.\n> \n> commit 87259588d0ab0b8e742e30596afa7ae25caadb18\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> Date: Thu Apr 25 10:20:23 2019 -0400\n> \n> Fix tablespace inheritance for partitioned rels\n> \n> Commit ca4103025dfe left a few loose ends. The most important one\n> (broken pg_dump output) is already fixed by virtue of commit\n> 3b23552ad8bb, but some things remained:\n> \n> I don't think that new behaviour is intended\n\nShould we add this to open items?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 20 May 2019 13:30:30 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On Fri, May 17, 2019 at 10:30 AM Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>\nwrote:\n\n> Agree that this behavior change seems unintentional.\n>\n> On 2019/05/17 12:40, Rushabh Lathia wrote:\n> > Looking at the commit changes, it seems like at condition when no other\n> > tablespace is specified, we default the tablespace to the parent\n> partitioned\n> > table's.\n> >\n> > else if (stmt->partbound)\n> > {\n> > /*\n> > * For partitions, when no other tablespace is specified, we\n> default\n> > * the tablespace to the parent partitioned table's.\n> > */\n> > Assert(list_length(inheritOids) == 1);\n> > tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n> > }\n> >\n> > But here it doesn't consider the default_tablespace if the parent\n> > partitioned\n> > tablespace is an InvalidOid (which was the care before this commit).\n> >\n> > PFA patch to fix the same.\n>\n> +\n> + if (!OidIsValid(tablespaceId))\n> + tablespaceId =\n> GetDefaultTablespace(stmt->relation->relpersistence,\n> partitioned);\n> }\n> else\n> tablespaceId =\n> GetDefaultTablespace(stmt->relation->relpersistence,\n>\n> Why not change it like this instead:\n>\n> @@ -681,7 +681,8 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid\n> ownerId,\n> Assert(list_length(inheritOids) == 1);\n> tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n> }\n> - else\n> +\n> + if (!OidIsValid(tablespaceId))\n> tablespaceId =\n> GetDefaultTablespace(stmt->relation->relpersistence,\n> partitioned);\n>\n\n\nYes, sure we can do that. Here is the patch for the same.\n\n\n-- \nRushabh Lathia\nwww.EnterpriseDB.com",
"msg_date": "Mon, 20 May 2019 10:12:40 +0530",
"msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On 2019/05/20 13:42, Rushabh Lathia wrote:\n> On Fri, May 17, 2019 at 10:30 AM Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>\n>> Why not change it like this instead:\n>>\n>> @@ -681,7 +681,8 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid\n>> ownerId,\n>> Assert(list_length(inheritOids) == 1);\n>> tablespaceId = get_rel_tablespace(linitial_oid(inheritOids));\n>> }\n>> - else\n>> +\n>> + if (!OidIsValid(tablespaceId))\n>> tablespaceId =\n>> GetDefaultTablespace(stmt->relation->relpersistence,\n>> partitioned);\n> \n> Yes, sure we can do that. Here is the patch for the same.\n\nThanks Rushabh.\n\nRegards,\nAmit\n\n\n\n",
"msg_date": "Mon, 20 May 2019 13:45:10 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On 2019-May-20, Amit Langote wrote:\n\n> Should we add this to open items?\n\nYeah. I'm AFK this week, but can handle it afterwards. The fix already\nmissed beta1, so I don't think there's a problem with taking a little\nbit longer.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 May 2019 22:29:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On 2019/05/21 11:29, Alvaro Herrera wrote:\n> On 2019-May-20, Amit Langote wrote:\n> \n>> Should we add this to open items?\n> \n> Yeah. I'm AFK this week, but can handle it afterwards. The fix already\n> missed beta1, so I don't think there's a problem with taking a little\n> bit longer.\n\nOK, added.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 21 May 2019 12:00:36 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On 2019-May-20, Alvaro Herrera wrote:\n\n> On 2019-May-20, Amit Langote wrote:\n> \n> > Should we add this to open items?\n> \n> Yeah. I'm AFK this week, but can handle it afterwards. The fix already\n> missed beta1, so I don't think there's a problem with taking a little\n> bit longer.\n\nHere's my proposed patch. I changed/expanded the tests a little bit to\nensure more complete coverage.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 6 Jun 2019 18:52:24 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "On 2019-Jun-06, Alvaro Herrera wrote:\n\n> Here's my proposed patch. I changed/expanded the tests a little bit to\n> ensure more complete coverage.\n\nWell, revise the comments a little bit.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 6 Jun 2019 19:02:42 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Fri, Jun 7, 2019 at 8:02 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2019-Jun-06, Alvaro Herrera wrote:\n>\n> > Here's my proposed patch. I changed/expanded the tests a little bit to\n> > ensure more complete coverage.\n>\n> Well, revise the comments a little bit.\n\nThanks for adding the tests. Looks good.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Fri, 7 Jun 2019 09:51:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
},
{
"msg_contents": "Hi Amit, Rushabh,\n\nOn 2019-Jun-07, Amit Langote wrote:\n\n> Thanks for adding the tests. Looks good.\n\nThanks Amit, pushed now. Marking open item as done.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 01:10:07 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: behaviour change - default_tablesapce + partition table"
}
] |
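A quick way to verify the committed fix, reusing the reproducer from the start of this thread (tablespace mytbs, partitioned table test with partition test_p1). A sketch:

    -- After: create tablespace mytbs ...; set default_tablespace to mytbs;
    --        create table test_p1 partition of test for values in (1);
    SELECT c.relname, t.spcname
    FROM pg_class c
    LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
    WHERE c.relname = 'test_p1';
    -- Before the fix: test_p1 | (null)   -- reltablespace = 0
    -- After the fix:  test_p1 | mytbs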
[
{
"msg_contents": "Hi all,\n\nIt seems my approach was quite candid because, of all postgres client\napplications, some document usage of connection string whereas other don't.\nThen, several ways of using connection strings are involved.\n\nHere is a little digest:\n\n| Postgres Client Application | Connection string syntax\n | Documented ? |\n|-----------------------------|------------------------------------------------------------------------------------|--------------|\n| clusterdb | clusterdb -d <connection_string> or\nclusterdb <connection_string> | No |\n| createdb | createdb --maintenance-db\n<connection_string> | No |\n| createuser | Couldn't find if possible\n | No |\n| dropdb | dropdb --maintenance-db\n<connection_string> | No |\n| dropuser | Couldn't find if possible\n | No |\n| pg_basebackup | pg_basebackup -d <connection_string>\n | Yes |\n| pgbench | Couldn't find if possible\n | No |\n| pg_dump | pg_dump -d <connection_string>\n | Yes |\n| pg_dumpall | pg_dumpall -d <connection_string>\n | Yes |\n| pg_isready | pg_isready -d <connection_string>\n | Yes |\n| pg_receivewal | pg_receivewal -d <connection_string>\n | Yes |\n| pg_recvlogical | pg_recvlogical -d <connection_string>\n | Yes |\n| pg_restore | pg_restore -d <connection_string>\n | No |\n| psql | psql <connection_string> or psql -d\n<connection_string> | Yes |\n| reindexdb | reindexdb -d <connection_string> or\nreindexdb --maintenance-db <connection_string> | No |\n| vacuumdb | vacuumdb -d <connection_string> or vacuumdb\n--maintenance-db <connection_string> | No |\n\nAnd here are some statistics about connection string usage:\n\n| | Number of tool using that syntax |\n|------------------|----------------------------------|\n| No switch | 2 |\n| -d | 11 |\n| --maintenance-db | 4 |\n\n- Both tools that allow connection strings without strings also allow the\n-d switch.\n- From the 4 tools that use the --maintenance-db switch, only 2 won't allow\nthe -d switch. Those don't have a -d switch now.\n\nGiven that, I think it would be a good thing to generalize the -d switch\n(and maybe the --maintenance-db switch too).\n\nWhat do you think ?\n\nCheers,\n\nLætitia\n\nLe mar. 30 avr. 2019 à 19:10, Lætitia Avrot <laetitia.avrot@gmail.com> a\nécrit :\n\n> Hi all,\n>\n> I'm a big fan a service file to connect to PostgreSQL client applications.\n> However I know just a few people use them.\n>\n> I ran into an issue today: I wanted to user pg_restore with my service\n> file and couldn't find a way to do so.\n>\n> Documentation didn't help. It was all about \"basic\" options like providing\n> host, port, user and database... Nothing about how to connect using a\n> connection string.\n>\n> I tried `pg_restore service=my_db <other options> <dumpfile>`, but it\n> didn't work. `pg_restore` complaining about too many arguments.\n>\n> I had to ask people or IRC to find out that the `-d` switch accepted\n> connection strings.\n>\n> It's really disturbing because :\n> - It's undocumented\n> - It doesn't work the way it works with the other PostgreSQL client\n> applications (For example, `pg_dump` will accept `pg_dump service=my_db\n> <other_options>`)\n>\n> ***I write a quick patch to document that feature***, but maybe we could\n> go further. I suggest :\n>\n> - Creating a \"Connection Options\" section before the other options (as the\n> synopsis is pg_restore [*connection-option*...] [*option*...] 
[*filename*]\n> )\n> - Put all connection parameters here (including the -d switch witch is\n> somehow in the middle of the other options\n> - Change other PostgreSQL client application documentation accordingly\n> - As a bonus, I'd like pg_restore to accept connection strings just as\n> other client accept them (without a switch), but maybe it's too difficult\n>\n> Could you please tell me what you think about it before I make such a huge\n> change ?\n>\n> Cheers,\n>\n> Lætitia\n> --\n> *Paper doesn’t grow on trees. Please print responsibly.*\n>\n\n\n-- \n*Paper doesn’t grow on trees. Please print responsibly.*\n\nHi all,It seems my approach was quite candid because, of all postgres client applications, some document usage of connection string whereas other don't. Then, several ways of using connection strings are involved.Here is a little digest:| Postgres Client Application | Connection string syntax | Documented ? ||-----------------------------|------------------------------------------------------------------------------------|--------------|| clusterdb | clusterdb -d <connection_string> or clusterdb <connection_string> | No || createdb | createdb --maintenance-db <connection_string> | No || createuser | Couldn't find if possible | No || dropdb | dropdb --maintenance-db <connection_string> | No || dropuser | Couldn't find if possible | No || pg_basebackup | pg_basebackup -d <connection_string> | Yes || pgbench | Couldn't find if possible | No || pg_dump | pg_dump -d <connection_string> | Yes || pg_dumpall | pg_dumpall -d <connection_string> | Yes || pg_isready | pg_isready -d <connection_string> | Yes || pg_receivewal | pg_receivewal -d <connection_string> | Yes || pg_recvlogical | pg_recvlogical -d <connection_string> | Yes || pg_restore | pg_restore -d <connection_string> | No || psql | psql <connection_string> or psql -d <connection_string> | Yes || reindexdb | reindexdb -d <connection_string> or reindexdb --maintenance-db <connection_string> | No || vacuumdb | vacuumdb -d <connection_string> or vacuumdb --maintenance-db <connection_string> | No |And here are some statistics about connection string usage:| | Number of tool using that syntax ||------------------|----------------------------------|| No switch | 2 || -d | 11 || --maintenance-db | 4 |- Both tools that allow connection strings without strings also allow the -d switch.- From the 4 tools that use the --maintenance-db switch, only 2 won't allow the -d switch. Those don't have a -d switch now.Given that, I think it would be a good thing to generalize the -d switch (and maybe the --maintenance-db switch too).What do you think ?Cheers,LætitiaLe mar. 30 avr. 2019 à 19:10, Lætitia Avrot <laetitia.avrot@gmail.com> a écrit :Hi all,I'm a big fan a service file to connect to PostgreSQL client applications. However I know just a few people use them.I ran into an issue today: I wanted to user pg_restore with my service file and couldn't find a way to do so.Documentation didn't help. It was all about \"basic\" options like providing host, port, user and database... Nothing about how to connect using a connection string.I tried `pg_restore service=my_db <other options> <dumpfile>`, but it didn't work. 
`pg_restore` complaining about too many arguments.I had to ask people or IRC to find out that the `-d` switch accepted connection strings.It's really disturbing because :- It's undocumented- It doesn't work the way it works with the other PostgreSQL client applications (For example, `pg_dump` will accept `pg_dump service=my_db <other_options>`)**I write a quick patch to document that feature**, but maybe we could go further. I suggest :- Creating a \"Connection Options\" section before the other options (as the synopsis is pg_restore [connection-option...] [option...] [filename])- Put all connection parameters here (including the -d switch witch is somehow in the middle of the other options- Change other PostgreSQL client application documentation accordingly- As a bonus, I'd like pg_restore to accept connection strings just as other client accept them (without a switch), but maybe it's too difficultCould you please tell me what you think about it before I make such a huge change ?Cheers,Lætitia-- Paper doesn’t grow on trees. Please print responsibly.\n-- Paper doesn’t grow on trees. Please print responsibly.",
"msg_date": "Fri, 17 May 2019 09:16:04 +0200",
"msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "On Fri, May 17, 2019 at 9:16 AM Lætitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n>\n> Given that, I think it would be a good thing to generalize the -d switch\n> (and maybe the --maintenance-db switch too).\n>\n>\nJust a couple of quick comments:\n\n Some of those tools user --dbname as a long option.\n Most of those tools also use the connection environment variables used\nby libpq: PGDATABASE\n Pgbench is documented [1]: pgbench [option...] [dbname]\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1] https://www.postgresql.org/docs/current/pgbench.html\n\nOn Fri, May 17, 2019 at 9:16 AM Lætitia Avrot <laetitia.avrot@gmail.com> wrote:Given that, I think it would be a good thing to generalize the -d switch (and maybe the --maintenance-db switch too).Just a couple of quick comments: Some of those tools user --dbname as a long option. Most of those tools also use the connection environment variables used by libpq: PGDATABASE Pgbench is documented [1]: pgbench [option...] [dbname]Regards,Juan José Santamaría Flecha[1] https://www.postgresql.org/docs/current/pgbench.html",
"msg_date": "Fri, 17 May 2019 11:26:00 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "Hi Juan,\n\nLe ven. 17 mai 2019 à 11:26, Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> a écrit :\n\n>\n> On Fri, May 17, 2019 at 9:16 AM Lætitia Avrot <laetitia.avrot@gmail.com>\n> wrote:\n>\n>>\n>> Given that, I think it would be a good thing to generalize the -d switch\n>> (and maybe the --maintenance-db switch too).\n>>\n>>\n> Just a couple of quick comments:\n>\n> Some of those tools user --dbname as a long option.\n>\n\nYou're right. I checked and each and every tool that allow the -d switch\nallows the --dbname. So, of course, if -d is implemented for all Postgres\nclient, --dbname should be allowed too.\n\n\n> Most of those tools also use the connection environment variables used\n> by libpq: PGDATABASE\n> Pgbench is documented [1]: pgbench [option...] [dbname]\n>\n\nMaybe I wasn't clear enough. My point was using a connection string is not\ndocumented. Here is PgBench documentation about dbname:\n\n> where *dbname* is the name of the already-created database to test in.\n(You may also need -h, -p, and/or -U options to specify how to connect to\nthe database server.)\n\nCheers,\n\nLætitia\n\nHi Juan,Le ven. 17 mai 2019 à 11:26, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> a écrit :On Fri, May 17, 2019 at 9:16 AM Lætitia Avrot <laetitia.avrot@gmail.com> wrote:Given that, I think it would be a good thing to generalize the -d switch (and maybe the --maintenance-db switch too).Just a couple of quick comments: Some of those tools user --dbname as a long option.You're right. I checked and each and every tool that allow the -d switch allows the --dbname. So, of course, if -d is implemented for all Postgres client, --dbname should be allowed too. Most of those tools also use the connection environment variables used by libpq: PGDATABASE Pgbench is documented [1]: pgbench [option...] [dbname]Maybe I wasn't clear enough. My point was using a connection string is not documented. Here is PgBench documentation about dbname:> where dbname is the name of the already-created database to test in. (You may also need -h, -p, and/or -U options to specify how to connect to the database server.) Cheers,Lætitia",
"msg_date": "Fri, 17 May 2019 11:38:01 +0200",
"msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "On Fri, May 17, 2019 at 11:38 AM Lætitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n\n> Maybe I wasn't clear enough. My point was using a connection string is not\n> documented. Here is PgBench documentation about dbname:\n>\n> > where *dbname* is the name of the already-created database to test in.\n> (You may also need -h, -p, and/or -U options to specify how to connect to\n> the database server.)\n>\n>\nIn the \"Common Options\" section of PgBench you can find the connect options.\n\nI really just wanted to make a couple of comments, I have not intention on\nreviewing your proposal. So as a final note, dbname defaults to the\nusername if no other information is found.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Fri, May 17, 2019 at 11:38 AM Lætitia Avrot <laetitia.avrot@gmail.com> wrote: Maybe I wasn't clear enough. My point was using a connection string is not documented. Here is PgBench documentation about dbname:> where dbname is the name of the already-created database to test in. (You may also need -h, -p, and/or -U options to specify how to connect to the database server.)In the \"Common Options\" section of PgBench you can find the connect options.I really just wanted to make a couple of comments, I have not intention on reviewing your proposal. So as a final note, dbname defaults to the username if no other information is found. Regards,Juan José Santamaría Flecha",
"msg_date": "Fri, 17 May 2019 12:06:39 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": ">\n>\n>> Maybe I wasn't clear enough. My point was using a connection string is\n>> not documented. Here is PgBench documentation about dbname:\n>>\n>> > where *dbname* is the name of the already-created database to test in.\n>> (You may also need -h, -p, and/or -U options to specify how to connect\n>> to the database server.)\n>>\n>>\n> In the \"Common Options\" section of PgBench you can find the connect\n> options.\n>\n>\nStill nothing about how to use a connection string:\n\n>pgbench accepts the following command-line common arguments:\n>\n>-h hostname\n>--host=hostname\n>The database server's host name\n>\n>-p port\n>--port=port\n>The database server's port number\n>\n>-U login\n>--username=login\n>The user name to connect as\n\n\nI really just wanted to make a couple of comments, I have not intention on\n> reviewing your proposal. So as a final note, dbname defaults to the\n> username if no other information is found.\n>\n>\nI do really appreciate that you took the time and your point of view is\nvaluable to me.\n\nRegards,\n\nLætitia\n\nMaybe I wasn't clear enough. My point was using a connection string is not documented. Here is PgBench documentation about dbname:> where dbname is the name of the already-created database to test in. (You may also need -h, -p, and/or -U options to specify how to connect to the database server.)In the \"Common Options\" section of PgBench you can find the connect options.Still nothing about how to use a connection string:>pgbench accepts the following command-line common arguments:>>-h hostname>--host=hostname>The database server's host name>>-p port>--port=port>The database server's port number>>-U login>--username=login>The user name to connect asI really just wanted to make a couple of comments, I have not intention on reviewing your proposal. So as a final note, dbname defaults to the username if no other information is found. I do really appreciate that you took the time and your point of view is valuable to me.Regards,Lætitia",
"msg_date": "Fri, 17 May 2019 12:28:20 +0200",
"msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "On Fri, May 17, 2019 at 12:28 PM Lætitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n> I do really appreciate that you took the time and your point of view is\n> valuable to me.\n>\n>\nI did not see your original mail from the 30th, we were talking about\napples and oranges. Sorry for the noise.\n\nI have gone though that original mail and the undocumented behaviour you\nare seeing is from libpq itself, maybe not intentional at tool level.\n\nSo, if you want to resize your proposal to a more manageable scope breaking\nit down at tool level might take you further, there you want to make sure\nthe behaviour is actually supported.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Fri, May 17, 2019 at 12:28 PM Lætitia Avrot <laetitia.avrot@gmail.com> wrote:I do really appreciate that you took the time and your point of view is valuable to me.I did not see your original mail from the 30th, we were talking about apples and oranges. Sorry for the noise.I have gone though that original mail and the undocumented behaviour you are seeing is from libpq itself, maybe not intentional at tool level.So, if you want to resize your proposal to a more manageable scope breaking it down at tool level might take you further, there you want to make sure the behaviour is actually supported.Regards,Juan José Santamaría Flecha",
"msg_date": "Fri, 17 May 2019 13:39:16 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "Hi all,\n\nSo after some thoughts I did the minimal patch (for now).\nI corrected documentation for the following tools so that now, using\nconnection string for Postgres client applications is documented in\nPostgres:\n- clusterdb\n- pgbench\n- pg_dump\n- pg_restore\n- reindexdb\n- vacuumdb\n\nYou'll find it enclosed.\n\nI just think it's too bad you can't use the same syntax with every Postgres\nclient using connection string. If somebody else feel the same way about\nit, please jump into this thread so we can think together how to achieve\nthis.\n\nHave a nice day,\n\nLætitia\n\nLe ven. 17 mai 2019 à 09:16, Lætitia Avrot <laetitia.avrot@gmail.com> a\nécrit :\n\n> Hi all,\n>\n> It seems my approach was quite candid because, of all postgres client\n> applications, some document usage of connection string whereas other don't.\n> Then, several ways of using connection strings are involved.\n>\n> Here is a little digest:\n>\n> | Postgres Client Application | Connection string syntax\n> | Documented ? |\n>\n> |-----------------------------|------------------------------------------------------------------------------------|--------------|\n> | clusterdb | clusterdb -d <connection_string> or\n> clusterdb <connection_string> | No |\n> | createdb | createdb --maintenance-db\n> <connection_string> | No |\n> | createuser | Couldn't find if possible\n> | No |\n> | dropdb | dropdb --maintenance-db\n> <connection_string> | No |\n> | dropuser | Couldn't find if possible\n> | No |\n> | pg_basebackup | pg_basebackup -d <connection_string>\n> | Yes |\n> | pgbench | Couldn't find if possible\n> | No |\n> | pg_dump | pg_dump -d <connection_string>\n> | Yes |\n> | pg_dumpall | pg_dumpall -d <connection_string>\n> | Yes |\n> | pg_isready | pg_isready -d <connection_string>\n> | Yes |\n> | pg_receivewal | pg_receivewal -d <connection_string>\n> | Yes |\n> | pg_recvlogical | pg_recvlogical -d <connection_string>\n> | Yes |\n> | pg_restore | pg_restore -d <connection_string>\n> | No |\n> | psql | psql <connection_string> or psql -d\n> <connection_string> | Yes |\n> | reindexdb | reindexdb -d <connection_string> or\n> reindexdb --maintenance-db <connection_string> | No |\n> | vacuumdb | vacuumdb -d <connection_string> or\n> vacuumdb --maintenance-db <connection_string> | No |\n>\n> And here are some statistics about connection string usage:\n>\n> | | Number of tool using that syntax |\n> |------------------|----------------------------------|\n> | No switch | 2 |\n> | -d | 11 |\n> | --maintenance-db | 4 |\n>\n> - Both tools that allow connection strings without strings also allow the\n> -d switch.\n> - From the 4 tools that use the --maintenance-db switch, only 2 won't\n> allow the -d switch. Those don't have a -d switch now.\n>\n> Given that, I think it would be a good thing to generalize the -d switch\n> (and maybe the --maintenance-db switch too).\n>\n> What do you think ?\n>\n> Cheers,\n>\n> Lætitia\n>\n> Le mar. 30 avr. 2019 à 19:10, Lætitia Avrot <laetitia.avrot@gmail.com> a\n> écrit :\n>\n>> Hi all,\n>>\n>> I'm a big fan a service file to connect to PostgreSQL client\n>> applications. However I know just a few people use them.\n>>\n>> I ran into an issue today: I wanted to user pg_restore with my service\n>> file and couldn't find a way to do so.\n>>\n>> Documentation didn't help. It was all about \"basic\" options like\n>> providing host, port, user and database... 
Nothing about how to connect\n>> using a connection string.\n>>\n>> I tried `pg_restore service=my_db <other options> <dumpfile>`, but it\n>> didn't work. `pg_restore` complaining about too many arguments.\n>>\n>> I had to ask people or IRC to find out that the `-d` switch accepted\n>> connection strings.\n>>\n>> It's really disturbing because :\n>> - It's undocumented\n>> - It doesn't work the way it works with the other PostgreSQL client\n>> applications (For example, `pg_dump` will accept `pg_dump service=my_db\n>> <other_options>`)\n>>\n>> ***I write a quick patch to document that feature***, but maybe we could\n>> go further. I suggest :\n>>\n>> - Creating a \"Connection Options\" section before the other options (as\n>> the synopsis is pg_restore [*connection-option*...] [*option*...] [\n>> *filename*])\n>> - Put all connection parameters here (including the -d switch witch is\n>> somehow in the middle of the other options\n>> - Change other PostgreSQL client application documentation accordingly\n>> - As a bonus, I'd like pg_restore to accept connection strings just as\n>> other client accept them (without a switch), but maybe it's too difficult\n>>\n>> Could you please tell me what you think about it before I make such a\n>> huge change ?\n>>\n>> Cheers,\n>>\n>> Lætitia\n>> --\n>> *Paper doesn’t grow on trees. Please print responsibly.*\n>>\n>\n>\n> --\n> *Paper doesn’t grow on trees. Please print responsibly.*\n>\n\n\n-- \n*Paper doesn’t grow on trees. Please print responsibly.*",
"msg_date": "Wed, 13 Nov 2019 16:48:57 +0100",
"msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "On Wed, 2019-11-13 at 16:48 +0100, Lætitia Avrot wrote:\n> So after some thoughts I did the minimal patch (for now).\n> I corrected documentation for the following tools so that now, using connection string\n> for Postgres client applications is documented in Postgres:\n> - clusterdb\n> - pgbench\n> - pg_dump\n> - pg_restore\n> - reindexdb\n> - vacuumdb\n\nI think that this patch is a good idea.\nEven if it adds some redundancy, that can hardly be avoided because, as you said,\nthe options to specify the database name are not the same everywhere.\n\nThe patch applies and build fine.\n\nI see some room for improvement:\n\n- I think that \"connection string\" is better than \"conninfo string\".\n At least the chapter to which you link is headed \"Connection Strings\".\n\n This would also be consistent with the use of that term in the\n \"pg_basebackup\" , \"pg_dumpall\" and \"pg_receivewal\" documentation.\n\n You seem to have copied that wording from the \"pg_isready\", \"psql\",\n \"reindexdb\" and \"vacuumdb\" documentation, but I think it would be better\n to reword those too.\n\n- You begin your paragraph with \"if this parameter contains ...\".\n\n First, I think \"argument\" might be more appropriate here, as you\n are talking about\n a) the supplied value and\n b) a command line argument or the argument to an option\n\n Besides, it might be confusing to refer to \"*this* parameter\" if the text\n is not immediately after what you are referring to, like for example\n in \"pgbench\", where it might refer to the -h, -p or -U options.\n\n I think it would be better and less ambiguous to use\n \"If <replaceable class=\"parameter\">dbname</replaceable> contains ...\"\n\n In the cases where there is no ambiguity, it might be better to use\n a wording like in the \"pg_recvlogical\" documentation.\n\n- There are two places you forgot:\n\n createdb --maintenance-db=dbname\n dropdb --maintenance-db=dbname\n\nWhile looking at this patch, I noticed that \"createuser\" and \"dropuser\"\nexplicitly connect to the \"postgres\" database rather than using\n\"connectMaintenanceDatabase()\" like the other scripts, which would try\nthe database \"postgres\" first and fall back to \"template1\".\n\nThis is unrelated to the patch, but low-hanging fruit for unified behavior.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 25 Nov 2019 22:34:18 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
},
{
"msg_contents": "Hi Laurenz,\n\nThank you for taking the time to review that patch.\n\nLe lun. 25 nov. 2019 à 22:34, Laurenz Albe <laurenz.albe@cybertec.at> a\nécrit :\n\n> On Wed, 2019-11-13 at 16:48 +0100, Lætitia Avrot wrote:\n> > So after some thoughts I did the minimal patch (for now).\n> > I corrected documentation for the following tools so that now, using\n> connection string\n> > for Postgres client applications is documented in Postgres:\n> > - clusterdb\n> > - pgbench\n> > - pg_dump\n> > - pg_restore\n> > - reindexdb\n> > - vacuumdb\n>\n> I think that this patch is a good idea.\n> Even if it adds some redundancy, that can hardly be avoided because, as\n> you said,\n> the options to specify the database name are not the same everywhere.\n>\n> The patch applies and build fine.\n>\n> I see some room for improvement:\n>\n> - I think that \"connection string\" is better than \"conninfo string\".\n> At least the chapter to which you link is headed \"Connection Strings\".\n>\n> This would also be consistent with the use of that term in the\n> \"pg_basebackup\" , \"pg_dumpall\" and \"pg_receivewal\" documentation.\n>\n> You seem to have copied that wording from the \"pg_isready\", \"psql\",\n> \"reindexdb\" and \"vacuumdb\" documentation, but I think it would be better\n> to reword those too.\n>\n> I agree.\n\n\n> - You begin your paragraph with \"if this parameter contains ...\".\n>\n> First, I think \"argument\" might be more appropriate here, as you\n> are talking about\n> a) the supplied value and\n> b) a command line argument or the argument to an option\n>\n> Besides, it might be confusing to refer to \"*this* parameter\" if the text\n> is not immediately after what you are referring to, like for example\n> in \"pgbench\", where it might refer to the -h, -p or -U options.\n>\n> I think it would be better and less ambiguous to use\n> \"If <replaceable class=\"parameter\">dbname</replaceable> contains ...\"\n>\n> In the cases where there is no ambiguity, it might be better to use\n> a wording like in the \"pg_recvlogical\" documentation.\n>\n> You're right.\n\n\n> - There are two places you forgot:\n>\n> createdb --maintenance-db=dbname\n> dropdb --maintenance-db=dbname\n>\n> You're perfectly right!\n\n\n> While looking at this patch, I noticed that \"createuser\" and \"dropuser\"\n> explicitly connect to the \"postgres\" database rather than using\n> \"connectMaintenanceDatabase()\" like the other scripts, which would try\n> the database \"postgres\" first and fall back to \"template1\".\n>\n> This is unrelated to the patch, but low-hanging fruit for unified behavior.\n>\n\nI agree and while trying to unify everything, you'r better try and make\nright for all the tools.\n\nI'm not very satisfied with this patch. I think I want to go further with\nunifying connection string usage. I'd like at least each and every client\napp to accept the same syntax and argument. Let me think a little further\non it, so I try to come up with a simple and neat solution.\n\nSeveral ones are possible and I'd like to find them all to be able to pick\nthe best.\n\nHave a nice day,\n\nLætitia\n-- \n*Paper doesn’t grow on trees. Please print responsibly.*\n\nHi Laurenz,Thank you for taking the time to review that patch.Le lun. 25 nov. 
2019 à 22:34, Laurenz Albe <laurenz.albe@cybertec.at> a écrit :On Wed, 2019-11-13 at 16:48 +0100, Lætitia Avrot wrote:\n> So after some thoughts I did the minimal patch (for now).\n> I corrected documentation for the following tools so that now, using connection string\n> for Postgres client applications is documented in Postgres:\n> - clusterdb\n> - pgbench\n> - pg_dump\n> - pg_restore\n> - reindexdb\n> - vacuumdb\n\nI think that this patch is a good idea.\nEven if it adds some redundancy, that can hardly be avoided because, as you said,\nthe options to specify the database name are not the same everywhere.\n\nThe patch applies and build fine.\n\nI see some room for improvement:\n\n- I think that \"connection string\" is better than \"conninfo string\".\n At least the chapter to which you link is headed \"Connection Strings\".\n\n This would also be consistent with the use of that term in the\n \"pg_basebackup\" , \"pg_dumpall\" and \"pg_receivewal\" documentation.\n\n You seem to have copied that wording from the \"pg_isready\", \"psql\",\n \"reindexdb\" and \"vacuumdb\" documentation, but I think it would be better\n to reword those too.\nI agree. \n- You begin your paragraph with \"if this parameter contains ...\".\n\n First, I think \"argument\" might be more appropriate here, as you\n are talking about\n a) the supplied value and\n b) a command line argument or the argument to an option\n\n Besides, it might be confusing to refer to \"*this* parameter\" if the text\n is not immediately after what you are referring to, like for example\n in \"pgbench\", where it might refer to the -h, -p or -U options.\n\n I think it would be better and less ambiguous to use\n \"If <replaceable class=\"parameter\">dbname</replaceable> contains ...\"\n\n In the cases where there is no ambiguity, it might be better to use\n a wording like in the \"pg_recvlogical\" documentation.\nYou're right. \n- There are two places you forgot:\n\n createdb --maintenance-db=dbname\n dropdb --maintenance-db=dbname\nYou're perfectly right! \nWhile looking at this patch, I noticed that \"createuser\" and \"dropuser\"\nexplicitly connect to the \"postgres\" database rather than using\n\"connectMaintenanceDatabase()\" like the other scripts, which would try\nthe database \"postgres\" first and fall back to \"template1\".\n\nThis is unrelated to the patch, but low-hanging fruit for unified behavior.I agree and while trying to unify everything, you'r better try and make right for all the tools. I'm not very satisfied with this patch. I think I want to go further with unifying connection string usage. I'd like at least each and every client app to accept the same syntax and argument. Let me think a little further on it, so I try to come up with a simple and neat solution.Several ones are possible and I'd like to find them all to be able to pick the best.Have a nice day,Lætitia-- Paper doesn’t grow on trees. Please print responsibly.",
"msg_date": "Sat, 18 Jan 2020 09:11:07 +0100",
"msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Doc] pg_restore documentation didn't explain how to use\n connection string"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed that there is (another) oddity in commit aa09cd242f: in the\n!fpinfo->use_remote_estimate mode, when first called for costing an\nunsorted foreign scan, estimate_path_cost_size() computes\nretrieved_rows, which is the estimated number of rows fetched from the\nremote server, by these:\n\n retrieved_rows = clamp_row_est(rows / fpinfo->local_conds_sel)\n retrieved_rows = Min(retrieved_rows, foreignrel->tuples);\n\nwhere rows is the estimated number of output rows emitted from the\nforeign scan, fpinfo->local_conds_sel is the selectivity of local\nconditions, and foreignrel->tuples is the foreign table's reltuples.\nThis is good, BUT when next called for costing the presorted foreign\nscan, that function re-computes retrieved_rows by the former, but\ndoesn't clamp it by the latter, which would produce wrong results.\nHere is such an example:\n\ncreate table t (a int, b int);\ncreate foreign table ft (a int, b int) server loopback options\n(table_name 't');insert into ft values (1, 10);\ninsert into ft values (2, 20);\nanalyze ft;\ncreate function postgres_fdw_abs(int) returns int as $$ begin return\nabs($1); end $$ language plpgsql immutable;\nexplain verbose select * from ft where postgres_fdw_abs(b) > 10 order by a;\n QUERY PLAN\n-------------------------------------------------------------------\n Foreign Scan on public.ft (cost=100.00..101.89 rows=1 width=8)\n Output: a, b\n Filter: (postgres_fdw_abs(ft.b) > 10)\n Remote SQL: SELECT a, b FROM public.t ORDER BY a ASC NULLS LAST\n(4 rows)\n\nFor this query, we have rows=1 and foreignrel->tuples=2.\npostgres_fdw_abs(b) > 10 is a local condition, for which we have\nfpinfo->local_conds_sel=0.333333 (I got this by printf debugging). So\nwhen first called for costing an unsorted foreign scan, by the former\nequation retrieved_rows=3, then by the latter retrieved_rows=2, which\nis correct. BUT when next called for costing the presorted foreign\nscan, we have retrieved_rows=3, as that function doesn't clamp the\nretrieved_rows. This is wrong, leading to incorrect cost estimates.\n(This is an issue for the foreign-scan case, but I think we would have\nthe same issue for the foreign-join case.)\n\nTo fix, I propose to handle retrieved_rows in the same way as cached\ncosts; 1) cache retrieved_rows computed in the first call of\nestimate_path_cost_size() into the foreign table's fpinfo, and 2) use\nit after the first call. Also, I'd like to propose to put this code\nin that function for !use_remote_estimate mode in each of the below\ncode for the cases of foreign scan, foreign join, and foreign grouping\nas needed, and use the rows/width estimates stored in the fpinfo (ie,\nfpinfo->rows and fpinfo->width) after the first call, like the\nattached.\n\n /*\n * Use rows/width estimates made by set_baserel_size_estimates() for\n * base foreign relations and set_joinrel_size_estimates() for join\n * between foreign relations.\n */\n rows = foreignrel->rows;\n width = foreignrel->reltarget->width;\n\nI think that that would make the code more consistent and easier to\nunderstand. Also, there is another two reasons: a) this code seems\nconfusing to me for the foreign-grouping case, as the core code\ndoesn't set foreignrel->rows at all for grouped relations. The change\nproposed above would avoid that confusion. And b) we can remove a\nchange made by commit ffab494a4d, which added support for sorting\ngrouped relations remotely in postgres_fdw. 
In that commit, to extend\nthe logic for re-using cached costs to the foreign-grouping case, I\nmodified add_foreign_grouping_paths() so that it saves the rows\nestimate for a grouped relation made by estimate_path_cost_size() into\nthe grouped relation's foreignrel->rows. But for grouped relations,\nwe already save the row/width estimates into fpinfo->rows and\nfpinfo->width, so the change proposed above would make that change\nunnecessary.\n\nOther change is: I noticed that commit 7012b132d0 incorrectly re-sets\nthe width estimates for grouped relations in the\n!fpinfo->use_remote_estimate mode, so I fixed that as well in the\nattached.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 17 May 2019 20:31:36 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: oddity in costing presorted foreign scans with local\n stats"
},
{
"msg_contents": "On Fri, May 17, 2019 at 8:31 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I noticed that there is (another) oddity in commit aa09cd242f: in the\n> !fpinfo->use_remote_estimate mode, when first called for costing an\n> unsorted foreign scan, estimate_path_cost_size() computes\n> retrieved_rows, which is the estimated number of rows fetched from the\n> remote server, by these:\n>\n> retrieved_rows = clamp_row_est(rows / fpinfo->local_conds_sel)\n> retrieved_rows = Min(retrieved_rows, foreignrel->tuples);\n>\n> where rows is the estimated number of output rows emitted from the\n> foreign scan, fpinfo->local_conds_sel is the selectivity of local\n> conditions, and foreignrel->tuples is the foreign table's reltuples.\n> This is good, BUT when next called for costing the presorted foreign\n> scan, that function re-computes retrieved_rows by the former, but\n> doesn't clamp it by the latter, which would produce wrong results.\n> Here is such an example:\n>\n> create table t (a int, b int);\n> create foreign table ft (a int, b int) server loopback options\n> (table_name 't');insert into ft values (1, 10);\n> insert into ft values (2, 20);\n> analyze ft;\n> create function postgres_fdw_abs(int) returns int as $$ begin return\n> abs($1); end $$ language plpgsql immutable;\n> explain verbose select * from ft where postgres_fdw_abs(b) > 10 order by a;\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Foreign Scan on public.ft (cost=100.00..101.89 rows=1 width=8)\n> Output: a, b\n> Filter: (postgres_fdw_abs(ft.b) > 10)\n> Remote SQL: SELECT a, b FROM public.t ORDER BY a ASC NULLS LAST\n> (4 rows)\n>\n> For this query, we have rows=1 and foreignrel->tuples=2.\n> postgres_fdw_abs(b) > 10 is a local condition, for which we have\n> fpinfo->local_conds_sel=0.333333 (I got this by printf debugging). So\n> when first called for costing an unsorted foreign scan, by the former\n> equation retrieved_rows=3, then by the latter retrieved_rows=2, which\n> is correct. BUT when next called for costing the presorted foreign\n> scan, we have retrieved_rows=3, as that function doesn't clamp the\n> retrieved_rows. This is wrong, leading to incorrect cost estimates.\n> (This is an issue for the foreign-scan case, but I think we would have\n> the same issue for the foreign-join case.)\n>\n> To fix, I propose to handle retrieved_rows in the same way as cached\n> costs; 1) cache retrieved_rows computed in the first call of\n> estimate_path_cost_size() into the foreign table's fpinfo, and 2) use\n> it after the first call. Also, I'd like to propose to put this code\n> in that function for !use_remote_estimate mode in each of the below\n> code for the cases of foreign scan, foreign join, and foreign grouping\n> as needed, and use the rows/width estimates stored in the fpinfo (ie,\n> fpinfo->rows and fpinfo->width) after the first call, like the\n> attached.\n>\n> /*\n> * Use rows/width estimates made by set_baserel_size_estimates() for\n> * base foreign relations and set_joinrel_size_estimates() for join\n> * between foreign relations.\n> */\n> rows = foreignrel->rows;\n> width = foreignrel->reltarget->width;\n>\n> I think that that would make the code more consistent and easier to\n> understand. Also, there is another two reasons: a) this code seems\n> confusing to me for the foreign-grouping case, as the core code\n> doesn't set foreignrel->rows at all for grouped relations. The change\n> proposed above would avoid that confusion. 
And b) we can remove a\n> change made by commit ffab494a4d, which added support for sorting\n> grouped relations remotely in postgres_fdw. In that commit, to extend\n> the logic for re-using cached costs to the foreign-grouping case, I\n> modified add_foreign_grouping_paths() so that it saves the rows\n> estimate for a grouped relation made by estimate_path_cost_size() into\n> the grouped relation's foreignrel->rows. But for grouped relations,\n> we already save the row/width estimates into fpinfo->rows and\n> fpinfo->width, so the change proposed above would make that change\n> unnecessary.\n>\n> Other change is: I noticed that commit 7012b132d0 incorrectly re-sets\n> the width estimates for grouped relations in the\n> !fpinfo->use_remote_estimate mode, so I fixed that as well in the\n> attached.\n\nI made stricter an assertion test I added on retrieved_rows. Also, I\ndid some editorialization further and added the commit message.\nAttached is an updated version of the patch. If there are no\nobjections, I'll commit the patch.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 6 Jun 2019 17:58:00 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: oddity in costing presorted foreign scans with\n local stats"
},
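The fix described in the message above amounts to a clamp-and-cache pattern in estimate_path_cost_size(). A minimal C sketch follows; the fpinfo->retrieved_rows field and the exact caching condition are assumptions modeled on postgres_fdw's conventions, not necessarily the committed code.

    /*
     * Sketch only: compute and clamp retrieved_rows on the first call,
     * then re-use the cached (already clamped) value on later calls, so
     * the presorted path is costed with the same row count as the
     * unsorted one.
     */
    if (fpinfo->rel_startup_cost >= 0 && fpinfo->rel_total_cost >= 0)
    {
        /* Later call: re-use estimates cached by the first call. */
        rows = fpinfo->rows;
        width = fpinfo->width;
        retrieved_rows = fpinfo->retrieved_rows;
    }
    else
    {
        /* First call: compute, clamp by reltuples, and remember. */
        retrieved_rows = clamp_row_est(rows / fpinfo->local_conds_sel);
        retrieved_rows = Min(retrieved_rows, foreignrel->tuples);
        fpinfo->retrieved_rows = retrieved_rows;
    }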
{
"msg_contents": "On Thu, Jun 6, 2019 at 5:58 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I made stricter an assertion test I added on retrieved_rows. Also, I\n> did some editorialization further and added the commit message.\n> Attached is an updated version of the patch. If there are no\n> objections, I'll commit the patch.\n\nI noticed that the previous patch was an old version; it didn't update\nthe assertion test at all. Attached is a new version updating that\ntest. I think I had been under the weather last week due to a long\nflight.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Mon, 10 Jun 2019 17:37:14 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: oddity in costing presorted foreign scans with\n local stats"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 5:37 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Jun 6, 2019 at 5:58 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > I made stricter an assertion test I added on retrieved_rows. Also, I\n> > did some editorialization further and added the commit message.\n> > Attached is an updated version of the patch. If there are no\n> > objections, I'll commit the patch.\n>\n> I noticed that the previous patch was an old version; it didn't update\n> the assertion test at all. Attached is a new version updating that\n> test. I think I had been under the weather last week due to a long\n> flight.\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 14 Jun 2019 20:57:14 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: oddity in costing presorted foreign scans with\n local stats"
}
] |
[
{
"msg_contents": "I'm trying to convert SAP Hana procedures in PG and i'm not able to handle\nbelow scenario in Postgres 11\n\nScenario: I want to pass a table (Multiple rows) to function and use it\ninside as a temp table.\n\nSample Code:\n\ncreate a table tbl_id (id int, name character varying (10));\ninsert few rows to tbl_id;\ncreate a function myfun (in tt_table <How to define a table type here> )\nbegin\nreturn setof table(few columns)\nbegin\nas\nselect id,name into lv_var1,lv_var2;\nfrom tt_table --> Want to use the input table\nwhere id = <some value>;\nreturn query\nselect *\nfrom tbl2 where id in (select id from tt_table); --> Want to use the input\ntable\nend;\nI don't want to go with dynamic sql, is there any other way to declare a\ntable as input argument and use it a normal temp table inside the function\nbody?\n--> Function invocation issue:\nselect * from myfun(tbl_id);\nHow to invoke a function by passing a table as argument?\n\nI'm trying to convert SAP Hana procedures in PG and i'm not able to handle below scenario in Postgres 11Scenario: I want to pass a table (Multiple rows) to function and use it inside as a temp table. Sample Code: create a table tbl_id (id int, name character varying (10)); insert few rows to tbl_id; create a function myfun (in tt_table <How to define a table type here> ) begin return setof table(few columns) begin as select id,name into lv_var1,lv_var2; from tt_table --> Want to use the input table where id = <some value>; return query select * from tbl2 where id in (select id from tt_table); --> Want to use the input table end; I don't want to go with dynamic sql, is there any other way to declare a table as input argument and use it a normal temp table inside the function body? --> Function invocation issue: select * from myfun(tbl_id); How to invoke a function by passing a table as argument?",
"msg_date": "Fri, 17 May 2019 18:58:02 +0530",
"msg_from": "RAJIN RAJ K <rajin89@gmail.com>",
"msg_from_op": true,
"msg_subject": "Table as argument in postgres function"
},
{
"msg_contents": "Hi,\n\nI'm trying to convert SAP Hana procedures in PG and i'm not able to handle\nbelow scenario in Postgres 11\n\nScenario: I want to pass a table (Multiple rows) to function and use it\ninside as a temp table.\n\nSample Code:\n\ncreate a table tbl_id (id int, name character varying (10));\ninsert few rows to tbl_id;\ncreate a function myfun (in tt_table <How to define a table type here> )\nbegin\nreturn setof table(few columns)\nbegin\nas\nselect id,name into lv_var1,lv_var2;\nfrom tt_table --> Want to use the input table\nwhere id = <some value>;\nreturn query\nselect *\nfrom tbl2 where id in (select id from tt_table); --> Want to use the input\ntable\nend;\nI don't want to go with dynamic sql, is there any other way to declare a\ntable as input argument and use it a normal temp table inside the function\nbody?\n--> Function invocation issue:\nselect * from myfun(tbl_id);\nHow to invoke a function by passing a table as argument?\nRegards,\nRajin\n\nHi,I'm trying to convert SAP Hana procedures in PG and i'm not able to handle below scenario in Postgres 11Scenario: I want to pass a table (Multiple rows) to function and use it inside as a temp table. Sample Code: create a table tbl_id (id int, name character varying (10)); insert few rows to tbl_id; create a function myfun (in tt_table <How to define a table type here> ) begin return setof table(few columns) begin as select id,name into lv_var1,lv_var2; from tt_table --> Want to use the input table where id = <some value>; return query select * from tbl2 where id in (select id from tt_table); --> Want to use the input table end; I don't want to go with dynamic sql, is there any other way to declare a table as input argument and use it a normal temp table inside the function body? --> Function invocation issue: select * from myfun(tbl_id); How to invoke a function by passing a table as argument? Regards,Rajin",
"msg_date": "Sun, 19 May 2019 21:30:23 +0530",
"msg_from": "RAJIN RAJ K <rajin89@gmail.com>",
"msg_from_op": true,
"msg_subject": "Table as argument in postgres function"
},
{
"msg_contents": "Hi\n\nne 19. 5. 2019 v 18:00 odesílatel RAJIN RAJ K <rajin89@gmail.com> napsal:\n\n> Hi,\n>\n> I'm trying to convert SAP Hana procedures in PG and i'm not able to handle\n> below scenario in Postgres 11\n>\n> Scenario: I want to pass a table (Multiple rows) to function and use it\n> inside as a temp table.\n>\n> Sample Code:\n>\n> create a table tbl_id (id int, name character varying (10));\n> insert few rows to tbl_id;\n> create a function myfun (in tt_table <How to define a table type here> )\n> begin\n> return setof table(few columns)\n> begin\n> as\n> select id,name into lv_var1,lv_var2;\n> from tt_table --> Want to use the input table\n> where id = <some value>;\n> return query\n> select *\n> from tbl2 where id in (select id from tt_table); --> Want to use the input\n> table\n> end;\n> I don't want to go with dynamic sql, is there any other way to declare a\n> table as input argument and use it a normal temp table inside the function\n> body?\n> --> Function invocation issue:\n> select * from myfun(tbl_id);\n> How to invoke a function by passing a table as argument?\n>\n\nYou can pass table name as text or table object id as regclass type.\n\ninside procedure you should to use dynamic sql - execute statement.\nGenerally you cannot to use a variable as table or column name ever.\n\nDynamic SQL is other mechanism - attention on SQL injection.\n\ncreate or replace function foo(regclass)\nreturns setof record as $$\nbegin\n return query execute format('select * from %s', $1); -- cast from\nregclass to text is safe\nend;\n$$ language plpgsql;\n\nwith text type a escaping is necessary\n\ncreate or replace function foo(text)\nreturns setof record as $$\nbegin\n return query execute format('select * from %I', $1); -- %I ensure\nnecessary escaping against SQL injection\nend;\n$$ language plpgsql;\n\nyou need to call \"setof record\" function with special syntax\n\nselect * from foo('xxx') as (a int, b int);\n\nSometimes you can use polymorphic types, then the function will be different\n\ncreate or replace function foo2(regclass, anyelement)\nreturns setof anyelement as $$\nbegin\n return query execute format('select * from %s', $1); -- cast from\nregclass to text is safe\nend;\n$$ language plpgsql;\n\nselect * from foo2('xxx', null::xxx);\n\nyou can read some more in doc\n\nhttps://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\nhttps://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNING\nhttps://www.postgresql.org/docs/current/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SET\n\nRegards\n\nPavel\n\nRegards,\n> Rajin\n>\n\nHine 19. 5. 2019 v 18:00 odesílatel RAJIN RAJ K <rajin89@gmail.com> napsal:Hi,I'm trying to convert SAP Hana procedures in PG and i'm not able to handle below scenario in Postgres 11Scenario: I want to pass a table (Multiple rows) to function and use it inside as a temp table. Sample Code: create a table tbl_id (id int, name character varying (10)); insert few rows to tbl_id; create a function myfun (in tt_table <How to define a table type here> ) begin return setof table(few columns) begin as select id,name into lv_var1,lv_var2; from tt_table --> Want to use the input table where id = <some value>; return query select * from tbl2 where id in (select id from tt_table); --> Want to use the input table end; I don't want to go with dynamic sql, is there any other way to declare a table as input argument and use it a normal temp table inside the function body? 
--> Function invocation issue: select * from myfun(tbl_id); How to invoke a function by passing a table as argument? You can pass table name as text or table object id as regclass type. inside procedure you should to use dynamic sql - execute statement. Generally you cannot to use a variable as table or column name ever. Dynamic SQL is other mechanism - attention on SQL injection.create or replace function foo(regclass)returns setof record as $$begin return query execute format('select * from %s', $1); -- cast from regclass to text is safeend;$$ language plpgsql;with text type a escaping is necessarycreate or replace function foo(text)returns setof record as $$begin return query execute format('select * from %I', $1); -- %I ensure necessary escaping against SQL injectionend;$$ language plpgsql;you need to call \"setof record\" function with special syntaxselect * from foo('xxx') as (a int, b int);Sometimes you can use polymorphic types, then the function will be differentcreate or replace function foo2(regclass, anyelement)returns setof anyelement as $$begin return query execute format('select * from %s', $1); -- cast from regclass to text is safeend;$$ language plpgsql;select * from foo2('xxx', null::xxx);you can read some more in dochttps://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYNhttps://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-STATEMENTS-RETURNINGhttps://www.postgresql.org/docs/current/xfunc-sql.html#XFUNC-SQL-FUNCTIONS-RETURNING-SETRegardsPavel Regards,Rajin",
"msg_date": "Sun, 19 May 2019 18:20:29 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table as argument in postgres function"
},
{
"msg_contents": ">\n>\n> You can pass table name as text or table object id as regclass type.\n>\n> inside procedure you should to use dynamic sql - execute statement.\n> Generally you cannot to use a variable as table or column name ever.\n>\n> Dynamic SQL is other mechanism - attention on SQL injection.\n>\n\nOn this note, Snowflake has the ability to to parameterize object names\n(see:\nhttps://docs.snowflake.net/manuals/sql-reference/identifier-literal.html )\n\nSo you can do things like\n SELECT col_a, col_b FROM identifier('a_table_name')\nor as a bind variable\n SELECT col_a, col_b FROM identifier($1)\n\nWhich is their way of avoiding SQL injection attacks in *some* circumstances.\nTheir implementation of it is a bit uneven, but it has proven useful for my\nwork.\n\nI can see where this obviously would prevent the planning of a prepared\nstatement when a table name is a parameter, but the request comes up often\nenough, and the benefits to avoiding SQL injection attacks are significant\nenough that maybe we should try to enable it for one-off. I don't\nnecessarily think we need an identifier(string) function, a\n'schema.table'::regclass would be more our style.\n\nIs there anything preventing us from having the planner resolve object\nnames from strings?\n\nYou can pass table name as text or table object id as regclass type. inside procedure you should to use dynamic sql - execute statement. Generally you cannot to use a variable as table or column name ever. Dynamic SQL is other mechanism - attention on SQL injection.On this note, Snowflake has the ability to to parameterize object names (see: https://docs.snowflake.net/manuals/sql-reference/identifier-literal.html )So you can do things like SELECT col_a, col_b FROM identifier('a_table_name')or as a bind variable SELECT col_a, col_b FROM identifier($1)Which is their way of avoiding SQL injection attacks in some circumstances. Their implementation of it is a bit uneven, but it has proven useful for my work.I can see where this obviously would prevent the planning of a prepared statement when a table name is a parameter, but the request comes up often enough, and the benefits to avoiding SQL injection attacks are significant enough that maybe we should try to enable it for one-off. I don't necessarily think we need an identifier(string) function, a 'schema.table'::regclass would be more our style.Is there anything preventing us from having the planner resolve object names from strings?",
"msg_date": "Mon, 20 May 2019 01:56:01 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table as argument in postgres function"
},
{
"msg_contents": "po 20. 5. 2019 v 7:56 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n>\n>> You can pass table name as text or table object id as regclass type.\n>>\n>> inside procedure you should to use dynamic sql - execute statement.\n>> Generally you cannot to use a variable as table or column name ever.\n>>\n>> Dynamic SQL is other mechanism - attention on SQL injection.\n>>\n>\n> On this note, Snowflake has the ability to to parameterize object names\n> (see:\n> https://docs.snowflake.net/manuals/sql-reference/identifier-literal.html )\n>\n> So you can do things like\n> SELECT col_a, col_b FROM identifier('a_table_name')\n> or as a bind variable\n> SELECT col_a, col_b FROM identifier($1)\n>\n> Which is their way of avoiding SQL injection attacks in *some* circumstances.\n> Their implementation of it is a bit uneven, but it has proven useful for my\n> work.\n>\n> I can see where this obviously would prevent the planning of a prepared\n> statement when a table name is a parameter, but the request comes up often\n> enough, and the benefits to avoiding SQL injection attacks are significant\n> enough that maybe we should try to enable it for one-off. I don't\n> necessarily think we need an identifier(string) function, a\n> 'schema.table'::regclass would be more our style.\n>\n> Is there anything preventing us from having the planner resolve object\n> names from strings?\n>\n\nThe basic problem is fact so when you use PREPARE, EXECUTE protocol, you\nhas not parameters in planning time.\n\nRegards\n\nPavel\n\npo 20. 5. 2019 v 7:56 odesílatel Corey Huinker <corey.huinker@gmail.com> napsal:You can pass table name as text or table object id as regclass type. inside procedure you should to use dynamic sql - execute statement. Generally you cannot to use a variable as table or column name ever. Dynamic SQL is other mechanism - attention on SQL injection.On this note, Snowflake has the ability to to parameterize object names (see: https://docs.snowflake.net/manuals/sql-reference/identifier-literal.html )So you can do things like SELECT col_a, col_b FROM identifier('a_table_name')or as a bind variable SELECT col_a, col_b FROM identifier($1)Which is their way of avoiding SQL injection attacks in some circumstances. Their implementation of it is a bit uneven, but it has proven useful for my work.I can see where this obviously would prevent the planning of a prepared statement when a table name is a parameter, but the request comes up often enough, and the benefits to avoiding SQL injection attacks are significant enough that maybe we should try to enable it for one-off. I don't necessarily think we need an identifier(string) function, a 'schema.table'::regclass would be more our style.Is there anything preventing us from having the planner resolve object names from strings?The basic problem is fact so when you use PREPARE, EXECUTE protocol, you has not parameters in planning time. RegardsPavel",
"msg_date": "Mon, 20 May 2019 08:03:46 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table as argument in postgres function"
},
{
"msg_contents": ">\n>\n>> Is there anything preventing us from having the planner resolve object\n>> names from strings?\n>>\n>\n> The basic problem is fact so when you use PREPARE, EXECUTE protocol, you\n> has not parameters in planning time.\n>\n\nI agree that it defeats PREPARE as it is currently implemented with\nPQprepare(), and it would never be meaningful to have a query plan that\nhasn't finalized which objects are involved.\n\nBut could it be made to work with PQexecParams(), where the parameter\nvalues are already provided?\n\nCould we make a version of PQprepare() that takes an extra array of\nparamValues for object names that must be supplied at prepare-time?\n\nIs there anything preventing us from having the planner resolve object names from strings?The basic problem is fact so when you use PREPARE, EXECUTE protocol, you has not parameters in planning time.I agree that it defeats PREPARE as it is currently implemented with PQprepare(), and it would never be meaningful to have a query plan that hasn't finalized which objects are involved.But could it be made to work with PQexecParams(), where the parameter values are already provided?Could we make a version of PQprepare() that takes an extra array of paramValues for object names that must be supplied at prepare-time?",
"msg_date": "Tue, 21 May 2019 03:04:07 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table as argument in postgres function"
},
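As a client-side workaround available today, an application can splice a validated identifier into the query text with libpq's PQescapeIdentifier() and keep the data values as real parameters via PQexecParams(). A minimal sketch, assuming an open connection conn and a caller-supplied table name; the query, column names, and parameter value are illustrative only:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    /* Fetch rows from a runtime-chosen table, keeping values parameterized. */
    static void
    query_dynamic_table(PGconn *conn, const char *table_name)
    {
        char *quoted = PQescapeIdentifier(conn, table_name, strlen(table_name));

        if (quoted == NULL)
        {
            fprintf(stderr, "escape failed: %s", PQerrorMessage(conn));
            return;
        }

        char        query[256];
        const char *values[1] = {"42"};

        snprintf(query, sizeof(query),
                 "SELECT col_a, col_b FROM %s WHERE id = $1", quoted);

        PGresult *res = PQexecParams(conn, query, 1, NULL, values,
                                     NULL, NULL, 0);
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("%d rows\n", PQntuples(res));

        PQclear(res);
        PQfreemem(quoted);
    }

This avoids injection through the identifier while leaving the value parameters to the normal extended-protocol path.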
{
"msg_contents": "út 21. 5. 2019 v 9:04 odesílatel Corey Huinker <corey.huinker@gmail.com>\nnapsal:\n\n>\n>>> Is there anything preventing us from having the planner resolve object\n>>> names from strings?\n>>>\n>>\n>> The basic problem is fact so when you use PREPARE, EXECUTE protocol, you\n>> has not parameters in planning time.\n>>\n>\n> I agree that it defeats PREPARE as it is currently implemented with\n> PQprepare(), and it would never be meaningful to have a query plan that\n> hasn't finalized which objects are involved.\n>\n> But could it be made to work with PQexecParams(), where the parameter\n> values are already provided?\n>\n> Could we make a version of PQprepare() that takes an extra array of\n> paramValues for object names that must be supplied at prepare-time?\n>\n\nI think so it is possible, but there is a question how much this design\nuglify source code. Passing query parameters is maybe too complex already.\n\nSecond question. I am not sure if described feature is some different. ANSI\nSQL 2016 knows Polymorphic table functions - looks like that. For me, I\nwould to see implementation of PTF instead increasing complexity of work\nwith parameters.\n\nhttps://www.doag.org/formes/pubfiles/11270472/2019-SQL-Andrej_Pashchenko-Polymorphic_Table_Functions_in_18c_Einfuehrung_und_Beispiele-Praesentation.pdf\n\n\n\n\n>\n>\n>\n>\n\nút 21. 5. 2019 v 9:04 odesílatel Corey Huinker <corey.huinker@gmail.com> napsal:Is there anything preventing us from having the planner resolve object names from strings?The basic problem is fact so when you use PREPARE, EXECUTE protocol, you has not parameters in planning time.I agree that it defeats PREPARE as it is currently implemented with PQprepare(), and it would never be meaningful to have a query plan that hasn't finalized which objects are involved.But could it be made to work with PQexecParams(), where the parameter values are already provided?Could we make a version of PQprepare() that takes an extra array of paramValues for object names that must be supplied at prepare-time?I think so it is possible, but there is a question how much this design uglify source code. Passing query parameters is maybe too complex already.Second question. I am not sure if described feature is some different. ANSI SQL 2016 knows Polymorphic table functions - looks like that. For me, I would to see implementation of PTF instead increasing complexity of work with parameters.https://www.doag.org/formes/pubfiles/11270472/2019-SQL-Andrej_Pashchenko-Polymorphic_Table_Functions_in_18c_Einfuehrung_und_Beispiele-Praesentation.pdf",
"msg_date": "Tue, 21 May 2019 09:13:46 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Table as argument in postgres function"
}
] |
[
{
"msg_contents": "We should do a pgindent run fairly soon, so that people with patches\nawaiting the next CF will have plenty of time to rebase them as necessary.\nI don't want to do it right this minute, to avoid making trouble for the\nseveral urgent patches we're trying to get done before Monday's beta1\nwrap. But after the beta is tagged seems like it'd be a good time.\n\nAlso, how do people feel about adopting the function prototype\nindenting change discussed in \n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug%40mail.gmail.com\n\n? The required change in pg_bsd_indent isn't quite done, but it\ncould be done by next week.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 10:29:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent run next week?"
},
{
"msg_contents": "On Fri, May 17, 2019 at 10:29:46AM -0400, Tom Lane wrote:\n> We should do a pgindent run fairly soon, so that people with patches\n> awaiting the next CF will have plenty of time to rebase them as necessary.\n> I don't want to do it right this minute, to avoid making trouble for the\n> several urgent patches we're trying to get done before Monday's beta1\n> wrap. But after the beta is tagged seems like it'd be a good time.\n> \n> Also, how do people feel about adopting the function prototype\n> indenting change discussed in \n> \n> https://www.postgresql.org/message-id/flat/CAEepm%3D0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug%40mail.gmail.com\n> \n> ? The required change in pg_bsd_indent isn't quite done, but it\n> could be done by next week.\n\nYes, I think we are good with everything above. I am thinking you\nshould do the run since you did the pg_indent modifications.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 13:23:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-17 10:29:46 -0400, Tom Lane wrote:\n> We should do a pgindent run fairly soon, so that people with patches\n> awaiting the next CF will have plenty of time to rebase them as\n> necessary.\n\n+1\n\n> I don't want to do it right this minute, to avoid making trouble for the\n> several urgent patches we're trying to get done before Monday's beta1\n> wrap. But after the beta is tagged seems like it'd be a good time.\n\n+1\n\n> Also, how do people feel about adopting the function prototype\n> indenting change discussed in\n\n> https://www.postgresql.org/message-id/flat/CAEepm%3D0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug%40mail.gmail.com\n\nI think it'd be a huge improvement. I find it pretty annoying having to\nfigure out the indentations to avoid unnecessary pgindent changes (after\nThomas' explanation as to why it happens, I usually just add a linebreak\nafter the return type, indent everything, and remove it).\n\nWould we want to also apply this to the back branches to avoid spurious\nconflicts?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 10:27:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-17 10:29:46 -0400, Tom Lane wrote:\n>> Also, how do people feel about adopting the function prototype\n>> indenting change discussed in\n>> https://www.postgresql.org/message-id/flat/CAEepm%3D0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug%40mail.gmail.com\n\n> I think it'd be a huge improvement.\n\nYeah, that's probably the biggest remaining bug/issue in pgindent.\n\n> Would we want to also apply this to the back branches to avoid spurious\n> conflicts?\n\nI dunno, how far back are you thinking? I've occasionally wished we\ncould reindent all the back branches to match HEAD, but realistically,\npeople carrying out-of-tree patches would scream.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 13:47:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "On Fri, May 17, 2019 at 01:47:02PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-17 10:29:46 -0400, Tom Lane wrote:\n> >> Also, how do people feel about adopting the function prototype\n> >> indenting change discussed in\n> >> https://www.postgresql.org/message-id/flat/CAEepm%3D0P3FeTXRcU5B2W3jv3PgRVZ-kGUXLGfd42FFhUROO3ug%40mail.gmail.com\n> \n> > I think it'd be a huge improvement.\n> \n> Yeah, that's probably the biggest remaining bug/issue in pgindent.\n> \n> > Would we want to also apply this to the back branches to avoid spurious\n> > conflicts?\n> \n> I dunno, how far back are you thinking? I've occasionally wished we\n> could reindent all the back branches to match HEAD, but realistically,\n> people carrying out-of-tree patches would scream.\n\nMy regular backpatch pain is SGML files. :-(\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 May 2019 13:49:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-17 13:47:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n > > Would we want to also apply this to the back branches to avoid spurious\n> > conflicts?\n> \n> I dunno, how far back are you thinking? I've occasionally wished we\n> could reindent all the back branches to match HEAD, but realistically,\n> people carrying out-of-tree patches would scream.\n\nI somehow thought we'd backpatched pgindent changes before, around when\nmoving to the newer version of indent. But I think we might just have\ndiscussed that, and then didn't go for it...\n\nNot sure if a three-way merge wouldn't take care of many, but not all,\nthe out-of-tree patch concerns.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 11:11:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-17 13:47:02 -0400, Tom Lane wrote:\n>> I dunno, how far back are you thinking? I've occasionally wished we\n>> could reindent all the back branches to match HEAD, but realistically,\n>> people carrying out-of-tree patches would scream.\n\n> I somehow thought we'd backpatched pgindent changes before, around when\n> moving to the newer version of indent. But I think we might just have\n> discussed that, and then didn't go for it...\n\nYeah, we talked about it but never actually did it.\n\n> Not sure if a three-way merge wouldn't take care of many, but not all,\n> the out-of-tree patch concerns.\n\nI was wondering about \"patch --ignore-whitespace\" myself. In theory,\nto the extent that our recent rounds of pgindent fixes just change\nindentation, that would be able to cope (most of the time anyway).\nBut I don't think I'd want to just assume that without testing.\n\nAnybody around here got large patches they're carrying against\nback branches, that they could try reapplying after running\na newer version of pgindent?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 15:10:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "\n\n> On May 17, 2019, at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andres Freund <andres@anarazel.de> writes:\n>> On 2019-05-17 13:47:02 -0400, Tom Lane wrote:\n>>> I dunno, how far back are you thinking? I've occasionally wished we\n>>> could reindent all the back branches to match HEAD, but realistically,\n>>> people carrying out-of-tree patches would scream.\n> \n>> I somehow thought we'd backpatched pgindent changes before, around when\n>> moving to the newer version of indent. But I think we might just have\n>> discussed that, and then didn't go for it...\n> \n> Yeah, we talked about it but never actually did it.\n> \n>> Not sure if a three-way merge wouldn't take care of many, but not all,\n>> the out-of-tree patch concerns.\n> \n> I was wondering about \"patch --ignore-whitespace\" myself. In theory,\n> to the extent that our recent rounds of pgindent fixes just change\n> indentation, that would be able to cope (most of the time anyway).\n> But I don't think I'd want to just assume that without testing.\n> \n> Anybody around here got large patches they're carrying against\n> back branches, that they could try reapplying after running\n> a newer version of pgindent?\n\nI have forks of 9.1 and 9.5 that each amount to large changes\nagainst the public sources, though I consider those forks to be\ndefunct. If you want me to run some particular version of pg_indent\nagainst the public sources of 9.1 and 9.5 and then try to merge the\nchanged sources into my forks, I could give it a try. I'm not\nsure if this is the sort of thing you have in mind....\n\nmark\n\n\n\n",
"msg_date": "Fri, 17 May 2019 14:39:50 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On May 17, 2019, at 12:10 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anybody around here got large patches they're carrying against\n>> back branches, that they could try reapplying after running\n>> a newer version of pgindent?\n\n> I have forks of 9.1 and 9.5 that each amount to large changes\n> against the public sources, though I consider those forks to be\n> defunct. If you want me to run some particular version of pg_indent\n> against the public sources of 9.1 and 9.5 and then try to merge the\n> changed sources into my forks, I could give it a try. I'm not\n> sure if this is the sort of thing you have in mind....\n\n9.1 is probably too far back to be interesting, but it'd be good\nto try the experiment with your 9.5 fork.\n\nAssuming you only want to do this once, I'd suggest waiting till\nI push the function-prototype changes to the pg_bsd_indent repo,\nand then use that along with the latest pgindent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 17:49:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-17 10:29:46 -0400, Tom Lane wrote:\n>> We should do a pgindent run fairly soon, so that people with patches\n>> awaiting the next CF will have plenty of time to rebase them as\n>> necessary.\n>> I don't want to do it right this minute, to avoid making trouble for the\n>> several urgent patches we're trying to get done before Monday's beta1\n>> wrap. But after the beta is tagged seems like it'd be a good time.\n\n> +1\n\nHearing no objections, I'll plan on running pgindent tomorrow sometime.\n\nThe new underlying pg_bsd_indent (2.1) is available now from\n\nhttps://git.postgresql.org/git/pg_bsd_indent.git\n\nif anyone wants to do further testing on it. (To use it with current\npgindent, adjust the INDENT_VERSION value in that script. You don't\nreally need to do anything else; the code rendered unnecessary by this\nchange won't do anything.)\n\n\n> Would we want to also apply this to the back branches to avoid spurious\n> conflicts?\n\nI think we should hold off on any talk of that until we get some results\nfrom Mark Dilger (or anyone else) on how much pain it would cause for\npeople carrying private patches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 17:46:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "I wrote:\n> Hearing no objections, I'll plan on running pgindent tomorrow sometime.\n\nAnd done.\n\n> The new underlying pg_bsd_indent (2.1) is available now from\n> https://git.postgresql.org/git/pg_bsd_indent.git\n\nPlease update your local copy if you have one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 13:07:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "On Wed, May 22, 2019 at 10:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Hearing no objections, I'll plan on running pgindent tomorrow sometime.\n>\n> And done.\n>\n> > The new underlying pg_bsd_indent (2.1) is available now from\n> > https://git.postgresql.org/git/pg_bsd_indent.git\n>\n> Please update your local copy if you have one.\n>\n> regards, tom lane\n>\n\nI cloned, built and used the new pg_bsd_indent to format my fork of\nPostgreSQL 11 (not the 9.1 or 9.5 forks I previously mentioned) and\nit caused me no problems whatsoever. I don't have a strong preference,\nbut I would vote in favor of running pgindent on the back branches\nrather than against, since to the extent that I might need to move\npatches between forks of different versions, it will be easier to do if\nthey have the same indentation. (In practice, this probably won't\ncome up for me, since the older forks are defunct and unlikely to\nbe patched by me.)\n\nmark\n\n\n",
"msg_date": "Wed, 22 May 2019 11:54:29 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "On 2019-05-21 23:46, Tom Lane wrote:\n>> Would we want to also apply this to the back branches to avoid spurious\n>> conflicts?\n> I think we should hold off on any talk of that until we get some results\n> from Mark Dilger (or anyone else) on how much pain it would cause for\n> people carrying private patches.\n\nIn my experience, changes to function declarations in header files\nhappen a lot in forks. So applying the pgindent change to backbranches\nwould cause some trouble.\n\nOn the other hand, it seems to me that patches that we backpatch between\nPostgreSQL branches should normally not touch function declarations in\nheader files, since that would be an ABI break. So by not applying the\npgindent change in backbranches we don't lose anything. And so it would\nbe better to just leave things as they are.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 21:13:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> In my experience, changes to function declarations in header files\n> happen a lot in forks. So applying the pgindent change to backbranches\n> would cause some trouble.\n\n> On the other hand, it seems to me that patches that we backpatch between\n> PostgreSQL branches should normally not touch function declarations in\n> header files, since that would be an ABI break. So by not applying the\n> pgindent change in backbranches we don't lose anything. And so it would\n> be better to just leave things as they are.\n\nMaybe we could wait awhile and see how much pain we find in back-patching\nacross this change. I have to admit that the v10 pgindent changes have\nnot been as painful as I expected them to be, so maybe this round will\nalso prove to be just an annoyance not a major PITA for that.\n\nAnother thought is that, at least in principle, we could re-indent only\n.c files not .h files in the back branches. But I'm not sure I believe\nyour argument that forks are more likely to touch exposed extern\ndeclarations than local static declarations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 15:27:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run next week?"
},
{
"msg_contents": "Em qua, 22 de mai de 2019 às 14:08, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>\n> I wrote:\n> > Hearing no objections, I'll plan on running pgindent tomorrow sometime.\n>\n> And done.\n>\n> > The new underlying pg_bsd_indent (2.1) is available now from\n> > https://git.postgresql.org/git/pg_bsd_indent.git\n>\n> Please update your local copy if you have one.\n>\nI give it a try in a fork of PostgreSQL 10. The difference between v10\nand my fork is not huge. The stats are 56 files changed, 2240\ninsertions(+), 203 deletions(-) and patch size is 139 Kb. I have\nconflicts in 3 of 19 .h files and 1 of 25 .c files. Like Mark, I don't\nhave a strong preference, however, re-indent files would reduce\ndeveloper time while preparing patches to back branches.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Wed, 22 May 2019 18:16:42 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: pgindent run next week?"
}
] |
[
{
"msg_contents": "Hi,\n\ntable_update and table_delete both have comments that document all of\ntheir input parameters *except* for snapshot. This seems like an\noversight, especially because:\n\n * crosscheck - if not InvalidSnapshot, also check tuple against this\n\nWithout a comment about snapshot, what's the \"also\" about?\n\nSuspiciously, the heap implementations of these functions completely\nignore the snapshot parameter and have no comments explaining the\nreasons why they do so. In fact, the only comment in\nheapam_tuple_delete is this one, and it seems both misplaced (since it\nseems to be a general comment about table AMs, not something\nheap-specific) and in need of editing:\n\n /*\n * Currently Deleting of index tuples are handled at vacuum, in case if\n * the storage itself is cleaning the dead tuples by itself, it is the\n * time to call the index tuple deletion also.\n */\n\nOne particular thing I'm curious whether it's ever OK to pass the\nsnapshot as InvalidSnapshot, or whether it's expected a valid snapshot\nshould always be supplied. If the latter, I think it would be a good\nidea to add an Assert() to table_update and table_delete() to avoid\ncoding mistakes.\n\nThanks,\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 May 2019 11:34:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "table_delete and table_update don't document snapshot parameter"
},
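A minimal sketch of the Assert suggested above, placed in the tableam.h inline wrapper; the signature shown follows the v12-era table_delete() discussed in this thread and is an approximation, not the committed code:

    static inline TM_Result
    table_delete(Relation rel, ItemPointer tid, CommandId cid,
                 Snapshot snapshot, Snapshot crosscheck, bool wait,
                 TM_FailureData *tmfd, bool changingPart)
    {
        /* Catch callers that forget to supply a snapshot. */
        Assert(snapshot != InvalidSnapshot);

        return rel->rd_tableam->tuple_delete(rel, tid, cid, snapshot,
                                             crosscheck, wait, tmfd,
                                             changingPart);
    }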
{
"msg_contents": "Hi,\n\nOn 2019-05-17 11:34:25 -0400, Robert Haas wrote:\n> table_update and table_delete both have comments that document all of\n> their input parameters *except* for snapshot. This seems like an\n> oversight\n\nHm, yea, it ought to be documented.\n\n\n> , especially because:\n> \n> * crosscheck - if not InvalidSnapshot, also check tuple against this\n> \n> Without a comment about snapshot, what's the \"also\" about?\n\nI don't think I've materially changed anything around that. It's just\nthe < 12 comment. The also refers to cid.\n\n\n> Suspiciously, the heap implementations of these functions completely\n> ignore the snapshot parameter and have no comments explaining the\n> reasons why they do so.\n\nI don't think there's a case where heap needs them - but it's different\nfor e.g. zheap (c.f. ZHeapTupleSatisfiesUpdate needing it the version\nI'm looking at rn). IMO it's reasonable for an AM needing it to\ndisambiguate versions (although there's the complication that in the EPQ\ncase versions *newer* than the snapshot might need to be deleted).\n\n\n> One particular thing I'm curious whether it's ever OK to pass the\n> snapshot as InvalidSnapshot, or whether it's expected a valid snapshot\n> should always be supplied.\n\nI can't see any case where it would be OK to not supply it.\n\n\n> If the latter, I think it would be a good\n> idea to add an Assert() to table_update and table_delete() to avoid\n> coding mistakes.\n\nYea, probably a good idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 08:50:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: table_delete and table_update don't document snapshot parameter"
}
] |
[
{
"msg_contents": "Hello,\n\nThis is in context with the previous mail I created dated April 30, 2019 in\nregard to GSoD'19.\nI am really interested to work on building a \"Mumbo-Jumbo PostgreSQL\ndictionary\".\nI am looking forward to a reply.\n\nThanking You\n\nManish Devgan\nhttps://github.com/gabru-md\nmanish.nsit8@gmail.com\n3rd-Year Information Technology\nNetaji Subhas University of Technology\n(formerly NSIT)\nDelhi, India\n\nHello,This is in context with the previous mail I created dated April 30, 2019 in regard to GSoD'19.I am really interested to work on building a \"Mumbo-Jumbo PostgreSQL dictionary\".I am looking forward to a reply.Thanking YouManish Devganhttps://github.com/gabru-mdmanish.nsit8@gmail.com3rd-Year Information TechnologyNetaji Subhas University of Technology(formerly NSIT)Delhi, India",
"msg_date": "Fri, 17 May 2019 22:44:46 +0530",
"msg_from": "Manish Devgan <manish.nsit8@gmail.com>",
"msg_from_op": true,
"msg_subject": "Google Season of Docs 2019"
},
{
"msg_contents": "Greetings,\n\n* Manish Devgan (manish.nsit8@gmail.com) wrote:\n> This is in context with the previous mail I created dated April 30, 2019 in\n> regard to GSoD'19.\n\nI might be missing it, but I don't see a prior email from you, and our\narchives only show this email when I search across all lists.\n\n> I am really interested to work on building a \"Mumbo-Jumbo PostgreSQL\n> dictionary\".\n\nGlad to hear it.\n\n[...]\n\n> Manish Devgan\n> https://github.com/gabru-md\n> manish.nsit8@gmail.com\n> 3rd-Year Information Technology\n\nNote that, as I understand it, GSoD is not intended as an internship but\nis for experienced technical writers. I'd suggest you discuss with\nGoogle what your experience is and if it is a good match for you.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 May 2019 09:42:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Google Season of Docs 2019"
}
] |
[
{
"msg_contents": "Hackers,\n\nmost places that use SPI_connect ... SPI_finish check the\nreturn value of SPI_finish and elog if it failed. There\nare a few places that do not, and it is unclear to me\nwhy this is safe. SPI_finish appears to be needed to\nclean up memory contexts.\n\nExamples can be found in:\n src/backend/utils/adt/xml.c\n src/backend/utils/adt/tsvector_op.c\n src/backend/utils/adt/tsquery_rewrite.c\n src/test/regress/regress.c\n contrib/spi/refint.c\n\nThe return value of SPI_execute is ignored in one spot:\n src/backend/utils/adt/xml.c circa line 2465.\n\nI checked the archives and did not see any discussion\nabout this in the past. Please excuse me if this has\nbeen asked before.\n\n\n\n\n",
"msg_date": "Fri, 17 May 2019 11:00:52 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is it safe to ignore the return value of SPI_finish and SPI_execute?"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> most places that use SPI_connect ... SPI_finish check the\n> return value of SPI_finish and elog if it failed. There\n> are a few places that do not, and it is unclear to me\n> why this is safe. SPI_finish appears to be needed to\n> clean up memory contexts.\n\nWell, looking through spi.c, the only failure return that SPI_finish\nactually has right now is from _SPI_begin_call:\n\n\tif (_SPI_current == NULL)\n\t\treturn SPI_ERROR_UNCONNECTED;\n\nand if you're willing to posit that those callers did call SPI_connect,\nthat's unreachable for them. The more interesting cases such as\nfailure within memory context cleanup would throw elog/ereport, so\nthey're not at issue here.\n\nBut I agree that not checking is crap coding practice, because there is\ncertainly no reason why SPI_finish could not have other error-return\ncases in future.\n\nOne reasonable solution would be to change the callers that got this\nwrong. Another one would be to reconsider whether the error-return-code\nconvention makes any sense at all here. If we changed the above-quoted\nbit to be an ereport(ERROR), then we could say that SPI_finish either\nreturns 0 or throws error, making it moot whether callers check, and\nallowing removal of now-useless checks from all the in-core callers.\n\nI don't think that actually doing that would be a great idea unless\nwe went through all of the SPI functions and did it for every \"unexpected\"\nerror case. Is it worth the trouble? Maybe, but I don't wanna do\nthe legwork.\n\n> The return value of SPI_execute is ignored in one spot:\n> src/backend/utils/adt/xml.c circa line 2465.\n\nThat seems like possibly a real bug. It's certainly poor practice\nas things stand.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 21:12:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is it safe to ignore the return value of SPI_finish and\n SPI_execute?"
},
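A sketch of the convention change floated above, applied to the quoted test in SPI_finish(); the error code and message text chosen here are assumptions for illustration, not committed code:

    /* Instead of returning SPI_ERROR_UNCONNECTED ... */
    if (_SPI_current == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                 errmsg("SPI_finish called while not connected")));

With that in place, SPI_finish either returns 0 or throws, and callers no longer need their own error checks.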
{
"msg_contents": "On Fri, May 17, 2019 at 6:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mark Dilger <hornschnorter@gmail.com> writes:\n> > most places that use SPI_connect ... SPI_finish check the\n> > return value of SPI_finish and elog if it failed. There\n> > are a few places that do not, and it is unclear to me\n> > why this is safe. SPI_finish appears to be needed to\n> > clean up memory contexts.\n>\n> Well, looking through spi.c, the only failure return that SPI_finish\n> actually has right now is from _SPI_begin_call:\n>\n> if (_SPI_current == NULL)\n> return SPI_ERROR_UNCONNECTED;\n>\n> and if you're willing to posit that those callers did call SPI_connect,\n> that's unreachable for them. The more interesting cases such as\n> failure within memory context cleanup would throw elog/ereport, so\n> they're not at issue here.\n>\n> But I agree that not checking is crap coding practice, because there is\n> certainly no reason why SPI_finish could not have other error-return\n> cases in future.\n\nAgreed.\n\n> One reasonable solution would be to change the callers that got this\n> wrong. Another one would be to reconsider whether the error-return-code\n> convention makes any sense at all here. If we changed the above-quoted\n> bit to be an ereport(ERROR), then we could say that SPI_finish either\n> returns 0 or throws error, making it moot whether callers check, and\n> allowing removal of now-useless checks from all the in-core callers.\n\nDoes this proposal of yours seem good enough for me to make a patch\nbased on this design?\n\n> I don't think that actually doing that would be a great idea unless\n> we went through all of the SPI functions and did it for every \"unexpected\"\n> error case. Is it worth the trouble? Maybe, but I don't wanna do\n> the legwork.\n\nI would like to clean this up and submit a patch, so long as the general\nsolution seems acceptable to the pgsql-hackers list.\n\nJust as background information:\n\nI only hit this issue because I have been auditing the version 12 code\nand adding __attribute__((warn_unused_result)) on non-void functions in\nthe tree and then checking each one that gets compiler warnings to see\nif there is a bug inherent in the way it is being used. These SPI_* functions\nare the first ones I found where it seemed clearly wrong to me that the\nreturn values were being ignored. There have been many others where\nignoring the return value seemed acceptable given the way the function\nis designed to work, and though I am not always happy with the design,\nI'm not trying to go so far as redesigning large sections of the code.\n\n> > The return value of SPI_execute is ignored in one spot:\n> > src/backend/utils/adt/xml.c circa line 2465.\n>\n> That seems like possibly a real bug. It's certainly poor practice\n> as things stand.\n\nmark\n\n\n",
"msg_date": "Wed, 22 May 2019 12:12:39 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it safe to ignore the return value of SPI_finish and\n SPI_execute?"
},
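For reference, the audit technique Mark describes relies on a GCC/Clang attribute; a minimal sketch of how such an annotation surfaces ignored results (the declaration here is just an example, not a proposed header change):

    __attribute__((warn_unused_result))
    extern int SPI_finish(void);

    void
    example_caller(void)
    {
        SPI_finish();           /* compiler warns: ignoring return value */
    }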
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On Fri, May 17, 2019 at 6:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> One reasonable solution would be to change the callers that got this\n>> wrong. Another one would be to reconsider whether the error-return-code\n>> convention makes any sense at all here. If we changed the above-quoted\n>> bit to be an ereport(ERROR), then we could say that SPI_finish either\n>> returns 0 or throws error, making it moot whether callers check, and\n>> allowing removal of now-useless checks from all the in-core callers.\n\n> Does this proposal of yours seem good enough for me to make a patch\n> based on this design?\n\nJust to clarify --- I think what's being discussed here is \"change some\nlarge fraction of the SPI functions that can return SPI_ERROR_xxx error\ncodes to throw elog/ereport(ERROR) instead\". Figuring out what fraction\nthat should be is part of the work --- but just in a quick scan through\nspi.c, it seems like there might be a case for deprecating practically\nall the SPI_ERROR_xxx codes except for SPI_ERROR_NOATTRIBUTE.\nI'd definitely argue that SPI_ERROR_UNCONNECTED and SPI_ERROR_ARGUMENT\ndeserve that treatment.\n\nI'm for it, if you want to do the work, but I don't speak for everybody.\n\nIt's not entirely clear to me whether we ought to change the return\nconvention to be \"returns void\" or make it \"always returns SPI_OK\"\nfor those functions where the return code becomes trivial. The\nlatter would avoid churn for external modules, but it seems not to\nhave much other attractiveness.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 16:52:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is it safe to ignore the return value of SPI_finish and\n SPI_execute?"
},
{
"msg_contents": "On Wed, May 22, 2019 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mark Dilger <hornschnorter@gmail.com> writes:\n> > On Fri, May 17, 2019 at 6:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> One reasonable solution would be to change the callers that got this\n> >> wrong. Another one would be to reconsider whether the error-return-code\n> >> convention makes any sense at all here. If we changed the above-quoted\n> >> bit to be an ereport(ERROR), then we could say that SPI_finish either\n> >> returns 0 or throws error, making it moot whether callers check, and\n> >> allowing removal of now-useless checks from all the in-core callers.\n>\n> > Does this proposal of yours seem good enough for me to make a patch\n> > based on this design?\n>\n> Just to clarify --- I think what's being discussed here is \"change some\n> large fraction of the SPI functions that can return SPI_ERROR_xxx error\n> codes to throw elog/ereport(ERROR) instead\".\n\nYes, I was talking about that, but was ambiguous in how I phrased my\nquestion.\n\n> Figuring out what fraction\n> that should be is part of the work --- but just in a quick scan through\n> spi.c, it seems like there might be a case for deprecating practically\n> all the SPI_ERROR_xxx codes except for SPI_ERROR_NOATTRIBUTE.\n> I'd definitely argue that SPI_ERROR_UNCONNECTED and SPI_ERROR_ARGUMENT\n> deserve that treatment.\n>\n> I'm for it, if you want to do the work, but I don't speak for everybody.\n\nI do want to write the patch, but I'll wait for other opinions.\n\nmark\n\n\n",
"msg_date": "Wed, 22 May 2019 14:07:06 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is it safe to ignore the return value of SPI_finish and\n SPI_execute?"
},
{
"msg_contents": "I wrote:\n> It's not entirely clear to me whether we ought to change the return\n> convention to be \"returns void\" or make it \"always returns SPI_OK\"\n> for those functions where the return code becomes trivial. The\n> latter would avoid churn for external modules, but it seems not to\n> have much other attractiveness.\n\nFurther thought about that --- it's clearly better in the long run\nif we switch to \"returns void\" where possible. We don't want to\nencourage people to waste code space on dead error checks. However,\ndoing that could be quite a PITA for people who are trying to maintain\nextension code that works across multiple backend versions.\n\nWe could address that by providing compatibility macros, say per\nthis sketch:\n\nextern void SPI_finish(void);\n...\n\n#ifdef BACKWARDS_COMPATIBLE_SPI_CALLS\n\n#define SPI_finish() (SPI_finish(), SPI_OK_FINISH)\n...\n\n#endif\n\n(This relies on non-recursive macro expansion, but that's been\nstandard since C89.)\n\nThe #ifdef stanza could be ripped out someday when all older branches\nare out of support, but there wouldn't be any hurry.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 12:48:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is it safe to ignore the return value of SPI_finish and\n SPI_execute?"
},
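Under the macro sketched above, pre-existing extension code that checks the return value keeps compiling unchanged; the check simply becomes dead code, e.g.:

    /* SPI_finish() expands to (SPI_finish(), SPI_OK_FINISH), so this
     * comparison is always false and the elog() is unreachable - but
     * the old code still compiles against both old and new servers. */
    if (SPI_finish() != SPI_OK_FINISH)
        elog(ERROR, "SPI_finish failed");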
{
"msg_contents": "On 2019-May-22, Mark Dilger wrote:\n\n> On Wed, May 22, 2019 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Figuring out what fraction\n> > that should be is part of the work --- but just in a quick scan through\n> > spi.c, it seems like there might be a case for deprecating practically\n> > all the SPI_ERROR_xxx codes except for SPI_ERROR_NOATTRIBUTE.\n> > I'd definitely argue that SPI_ERROR_UNCONNECTED and SPI_ERROR_ARGUMENT\n> > deserve that treatment.\n> >\n> > I'm for it, if you want to do the work, but I don't speak for everybody.\n> \n> I do want to write the patch, but I'll wait for other opinions.\n\nIn my perusal, the SPI API is unnecessarily baroque and could stand some\nsimplification, so +1 for the proposed approach.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 15:03:56 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Is it safe to ignore the return value of SPI_finish and\n SPI_execute?"
}
] |
[
{
"msg_contents": "Currently TOAST table is always created (if needed based on data type\nproperties) independent of table AM. How toasting is handled seems\nshould be AM responsibility. Generic code shouldn't force the use of\nthe separate table for the same. Like for Zedstore we store toasted\nchunks in separate blocks but within the table file itself and don't\nneed separate toast table. Some other AM may implement the\nfunctionality differently. So, similar to relation forks, usage of\ntoast table should be optional and left to AM to handle.\n\nWish to discuss ways on how best to achieve it. Attaching patch just\nto showcase a way could be done. The patch adds property to\nTableAmRoutine to convey if AM uses separate Toast table or not.\n\nOther possibility could be with some refactoring move toast table\ncreation inside relation_set_new_filenode callback or provide separate\ncallback for Toast Table creation to AM.",
"msg_date": "Fri, 17 May 2019 11:26:29 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Create TOAST table only if AM needs"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-17 11:26:29 -0700, Ashwin Agrawal wrote:\n> Currently TOAST table is always created (if needed based on data type\n> properties) independent of table AM. How toasting is handled seems\n> should be AM responsibility. Generic code shouldn't force the use of\n> the separate table for the same. Like for Zedstore we store toasted\n> chunks in separate blocks but within the table file itself and don't\n> need separate toast table. Some other AM may implement the\n> functionality differently. So, similar to relation forks, usage of\n> toast table should be optional and left to AM to handle.\n\nYea, Robert is also working on this. In fact, we were literally chatting\nabout it a few minutes ago. He'll probably chime in too.\n\n\n> +static inline bool\n> +table_uses_toast_table(Relation relation)\n> +{\n> +\treturn relation->rd_tableam->uses_toast_table;\n> +}\n\nDon't think this is sufficient - imo it needs to be a callback to look\nat the columns etc.\n\n\nMy inclination is that it's too late for 12 to do anything about\nthis. There are many known limitations, and we'll discover many more, of\nthe current tableam interface. If we try to fix them for 12, we'll never\nget anywhere. It'll take a while to iron out all those wrinkles...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 11:34:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On Fri, May 17, 2019 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea, Robert is also working on this. In fact, we were literally chatting\n> about it a few minutes ago. He'll probably chime in too.\n\nYeah, I'm aiming to post a patch set very soon that does a bunch of\nrefactoring of the TOAST stuff to make life easier for new AMs - maybe\ntoday, or else Monday. I think your use case of wanting to suppress\nTOAST table creation altogether is a valid one, but I want to do go a\nlittle further and make it easier for non-heap AMs to implement\nheap-like toasting based on their own page format.\n\nGenerally, I would say that the state of play with respect to table\nAMs and toasting is annoyingly bad right now:\n\n- If your AM uses some system other than TOAST to store large values,\nyou are out of luck. You will get TOAST tables whether you want them\nor not.\n\n- If your AM uses some page or tuple format that results in a\ndifferent maximum tuple size, you are out of luck. You will get TOAST\ntables based on whether a heap table with the same set of columns\nwould need one.\n\n- If your AM would like to use the heap for TOAST data, you are out of\nluck. The AM used for TOAST data must be the same as the AM used for\nthe main table.\n\n- If your AM would like to use itself to store TOAST data, you are\nalso out of luck, because all of the existing TOAST code works with\nheap tuples.\n\n- Even if you copy all of tuptoaster.c/h - which is a lot of code -\nand change everything that is different for your AM than for the\nregular heap, you are still out of luck, because code that knows\nnothing about tableam is going to call heap_tuple_untoast_attr() to\ndetoast stuff, and that code is only going to be happy if you've used\nthe same chunk size that we use for the regular heap, and that chunk\nsize has a good chance of being mildly to severely suboptimal if your\nheap has made any sort of page format changes.\n\nSo I think this basically just doesn't work right now. I am\nsympathetic to Andres's position that we shouldn't go whacking the\ncode around too much at this late date, and he's probably right that\nwe're going to find lots of other problems with tableam as well and\nyou have to draw the line someplace, but on the other hand given your\nexperience and mine, it's probably pretty likely that anybody who\ntries to use tableam for anything is going to run into this problem,\nso maybe it's not crazy to think about a few last-minute changes.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 May 2019 15:13:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
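As context for the heap_tuple_untoast_attr() point above, a hedged sketch of how callers typically reach it; the wrapper name detoast_if_needed is invented for illustration, and this mirrors the usual pg_detoast_datum() pattern rather than quoting it:

/*
 * Return a fully detoasted version of a varlena datum; assumes the
 * usual PostgreSQL backend headers are included.
 */
static struct varlena *
detoast_if_needed(Datum value)
{
    struct varlena *attr = (struct varlena *) DatumGetPointer(value);

    /*
     * If the value is compressed, stored out of line in a TOAST table,
     * or a short varlena, expand it; otherwise return it unchanged.
     */
    if (VARATT_IS_EXTENDED(attr))
        attr = heap_tuple_untoast_attr(attr);

    return attr;
}

Any AM whose chunk layout differs from the heap's breaks exactly here, because this path knows nothing about which AM the data came from.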
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> So I think this basically just doesn't work right now. I am\n> sympathetic to Andres's position that we shouldn't go whacking the\n> code around too much at this late date, and he's probably right that\n> we're going to find lots of other problems with tableam as well and\n> you have to draw the line someplace, but on the other hand given your\n> experience and mine, it's probably pretty likely that anybody who\n> tries to use tableam for anything is going to run into this problem,\n> so maybe it's not crazy to think about a few last-minute changes.\n\nIt seems to me that the entire tableam project is still very much WIP,\nand if anybody is able to do anything actually useful with a different\nAM right at the moment, that's just mighty good fortune for them.\nIt's way too late to be making destabilizing changes in v12 in order\nto move the frontier of what can be done in a new AM. I'm all for\nthe sorts of changes you're describing here --- but as v13 material.\nWe should be looking at v12 as something we're trying to get out\nthe door soon, with as few bugs as possible. \"I can't do X in an\nexternal AM\" is not a bug, not for v12 anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 15:26:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-17 15:13:50 -0400, Robert Haas wrote:\n> On Fri, May 17, 2019 at 2:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea, Robert is also working on this. In fact, we were literally chatting\n> > about it a few minutes ago. He'll probably chime in too.\n> \n> Yeah, I'm aiming to post a patch set very soon that does a bunch of\n> refactoring of the TOAST stuff to make life easier for new AMs - maybe\n> today, or else Monday. I think your use case of wanting to suppress\n> TOAST table creation altogether is a valid one, but I want to do go a\n> little further and make it easier for non-heap AMs to implement\n> heap-like toasting based on their own page format.\n> \n> Generally, I would say that the state of play with respect to table\n> AMs and toasting is annoyingly bad right now:\n> \n> - If your AM uses some system other than TOAST to store large values,\n> you are out of luck. You will get TOAST tables whether you want them\n> or not.\n\nWhich is aesthetically and indode usage wise annoying, but not\n*terrible*. You get a a bunch of useless pg_class/pg_index entries and a\nfew close-to-empty relfilenodes.\n\n\n> - If your AM uses some page or tuple format that results in a\n> different maximum tuple size, you are out of luck. You will get TOAST\n> tables based on whether a heap table with the same set of columns\n> would need one.\n\n> - If your AM would like to use the heap for TOAST data, you are out of\n> luck. The AM used for TOAST data must be the same as the AM used for\n> the main table.\n> \n> - If your AM would like to use itself to store TOAST data, you are\n> also out of luck, because all of the existing TOAST code works with\n> heap tuples.\n> \n> - Even if you copy all of tuptoaster.c/h - which is a lot of code -\n> and change everything that is different for your AM than for the\n> regular heap, you are still out of luck, because code that knows\n> nothing about tableam is going to call heap_tuple_untoast_attr() to\n> detoast stuff, and that code is only going to be happy if you've used\n> the same chunk size that we use for the regular heap, and that chunk\n> size has a good chance of being mildly to severely suboptimal if your\n> heap has made any sort of page format changes.\n\nWell, I don't *quite* buy the suboptimal. As far as I can tell, the\ncurrent chunking size doesn't have much going for it for heap either -\nand while a few people (me including) have complained about it, it's not\nthat many people either. My impression is that the current chunking is\nessentially a randomly chosen choice without much to go for it, and so\nit's not going to be much different for other AMs.\n\n\n> So I think this basically just doesn't work right now.\n\nI mean, the zheap on tableam code copies more toast code than I'm happy\nabout, and it's chunking is somewhat suboptimal, but that's not\n*terrible*. There's no if(zheap) branches outside of zheap related to\ntoasting.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 12:28:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On 2019-05-17 15:26:38 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > So I think this basically just doesn't work right now. I am\n> > sympathetic to Andres's position that we shouldn't go whacking the\n> > code around too much at this late date, and he's probably right that\n> > we're going to find lots of other problems with tableam as well and\n> > you have to draw the line someplace, but on the other hand given your\n> > experience and mine, it's probably pretty likely that anybody who\n> > tries to use tableam for anything is going to run into this problem,\n> > so maybe it's not crazy to think about a few last-minute changes.\n> \n> It seems to me that the entire tableam project is still very much WIP,\n\nAgreed on that front.\n\n\n> and if anybody is able to do anything actually useful with a different\n> AM right at the moment, that's just mighty good fortune for them.\n\nI think this is too negative. Yes, there's a warts, but you can write\nsomething like zheap without tableam related code modifications (undo\nhowever...). You can write something like zedstore, and it will works,\nwith a few warts. Yes, a bit of code duplication, and a few efficiency\nlosses are to be expected. But that's different from it being impossible\nto write an AM.\n\n\n> \"I can't do X in an external AM\" is not a bug, not for v12 anyway.\n\nIndeed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 12:31:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On Fri, May 17, 2019 at 3:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It seems to me that the entire tableam project is still very much WIP,\n> and if anybody is able to do anything actually useful with a different\n> AM right at the moment, that's just mighty good fortune for them.\n> It's way too late to be making destabilizing changes in v12 in order\n> to move the frontier of what can be done in a new AM.\n\nWhat about non-destabilizing changes? It seems to me that we could do\nsome good with a pretty simple patch that just moves most of the logic\nfrom needs_toast_table() below tableam, as in the attached. Then we\ncould leave the broader refactoring for v13.\n\nMaybe this is still too much, but it seems pretty simple so I thought I'd ask.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 17 May 2019 16:47:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "Hi,\n\n> +\n> +/*\n> + * Check to see whether the table needs a TOAST table. It does only if\n> + * (1) there are any toastable attributes, and (2) the maximum length\n> + * of a tuple could exceed TOAST_TUPLE_THRESHOLD. (We don't want to\n> + * create a toast table for something like \"f1 varchar(20)\".)\n> + */\n> +static bool\n> +heapam_needs_toast_table(Relation rel)\n> +{\n> +\tint32\t\tdata_length = 0;\n> +\tbool\t\tmaxlength_unknown = false;\n> +\tbool\t\thas_toastable_attrs = false;\n> +\tTupleDesc\ttupdesc = rel->rd_att;\n> +\tint32\t\ttuple_length;\n> +\tint\t\t\ti;\n> +\n> +\tfor (i = 0; i < tupdesc->natts; i++)\n> +\t{\n> +\t\tForm_pg_attribute att = TupleDescAttr(tupdesc, i);\n> +\n> +\t\tif (att->attisdropped)\n> +\t\t\tcontinue;\n> +\t\tdata_length = att_align_nominal(data_length, att->attalign);\n> +\t\tif (att->attlen > 0)\n> +\t\t{\n> +\t\t\t/* Fixed-length types are never toastable */\n> +\t\t\tdata_length += att->attlen;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tint32\t\tmaxlen = type_maximum_size(att->atttypid,\n> +\t\t\t\t\t\t\t\t\t\t\t\t att->atttypmod);\n> +\n> +\t\t\tif (maxlen < 0)\n> +\t\t\t\tmaxlength_unknown = true;\n> +\t\t\telse\n> +\t\t\t\tdata_length += maxlen;\n> +\t\t\tif (att->attstorage != 'p')\n> +\t\t\t\thas_toastable_attrs = true;\n> +\t\t}\n> +\t}\n> +\tif (!has_toastable_attrs)\n> +\t\treturn false;\t\t\t/* nothing to toast? */\n> +\tif (maxlength_unknown)\n> +\t\treturn true;\t\t\t/* any unlimited-length attrs? */\n> +\ttuple_length = MAXALIGN(SizeofHeapTupleHeader +\n> +\t\t\t\t\t\t\tBITMAPLEN(tupdesc->natts)) +\n> +\t\tMAXALIGN(data_length);\n> +\treturn (tuple_length > TOAST_TUPLE_THRESHOLD);\n> +}\n\n\nI'm ok with adding something roughly like this.\n\n\n> /* ------------------------------------------------------------------------\n> * Planner related callbacks for the heap AM\n> @@ -2558,6 +2615,8 @@ static const TableAmRoutine heapam_methods = {\n> \n> \t.relation_estimate_size = heapam_estimate_rel_size,\n> \n> +\t.needs_toast_table = heapam_needs_toast_table,\n> +\n\nI'd rather see this have a relation_ prefix.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 May 2019 13:51:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
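For contrast with the heap implementation just quoted, an AM of the kind Ashwin describes -- toast chunks kept inside the table's own file -- could presumably satisfy the same callback trivially. A hedged sketch, with the myam_* names invented and relation_needs_toast_table being the prefixed spelling Andres asks for:

/*
 * Hypothetical AM that stores oversized values in its own files and
 * therefore never wants a separate TOAST relation created for it.
 */
static bool
myam_relation_needs_toast_table(Relation rel)
{
    return false;
}

static const TableAmRoutine myam_methods = {
    .type = T_TableAmRoutine,
    /* ... all the other required callbacks ... */
    .relation_needs_toast_table = myam_relation_needs_toast_table,
};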
{
"msg_contents": "On Fri, May 17, 2019 at 1:51 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > /*\n> ------------------------------------------------------------------------\n> > * Planner related callbacks for the heap AM\n> > @@ -2558,6 +2615,8 @@ static const TableAmRoutine heapam_methods = {\n> >\n> > .relation_estimate_size = heapam_estimate_rel_size,\n> >\n> > + .needs_toast_table = heapam_needs_toast_table,\n> > +\n>\n> I'd rather see this have a relation_ prefix.\n>\n\n+1 to overall patch with that comment incorporated. This seems simple\nenough to incorporate for v12. Though stating that blind-folded with what\nelse is remaining to be must done for v12.\n\nOn Fri, May 17, 2019 at 1:51 PM Andres Freund <andres@anarazel.de> wrote:\n> /* ------------------------------------------------------------------------\n> * Planner related callbacks for the heap AM\n> @@ -2558,6 +2615,8 @@ static const TableAmRoutine heapam_methods = {\n> \n> .relation_estimate_size = heapam_estimate_rel_size,\n> \n> + .needs_toast_table = heapam_needs_toast_table,\n> +\n\nI'd rather see this have a relation_ prefix.+1 to overall patch with that comment incorporated. This seems simple enough to incorporate for v12. Though stating that blind-folded with what else is remaining to be must done for v12.",
"msg_date": "Fri, 17 May 2019 14:12:12 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On Fri, May 17, 2019 at 11:34 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-05-17 11:26:29 -0700, Ashwin Agrawal wrote:\n> > Currently TOAST table is always created (if needed based on data type\n> > properties) independent of table AM. How toasting is handled seems\n> > should be AM responsibility. Generic code shouldn't force the use of\n> > the separate table for the same. Like for Zedstore we store toasted\n> > chunks in separate blocks but within the table file itself and don't\n> > need separate toast table. Some other AM may implement the\n> > functionality differently. So, similar to relation forks, usage of\n> > toast table should be optional and left to AM to handle.\n>\n> Yea, Robert is also working on this. In fact, we were literally chatting\n> about it a few minutes ago. He'll probably chime in too.\n>\n\nThank You.\n\n\n> My inclination is that it's too late for 12 to do anything about\n> this. There are many known limitations, and we'll discover many more, of\n> the current tableam interface. If we try to fix them for 12, we'll never\n> get anywhere. It'll take a while to iron out all those wrinkles...\n>\n\nAgree on that, most of the stuff would be enhancements. And enhancements\ncan and will be made as we find them. Plus, will get added to the version\nactive in development that time. Intent is to start the discussion, and not\nto convey a bug or has to be fixed in v12.\n\nOn Fri, May 17, 2019 at 11:34 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2019-05-17 11:26:29 -0700, Ashwin Agrawal wrote:\n> Currently TOAST table is always created (if needed based on data type\n> properties) independent of table AM. How toasting is handled seems\n> should be AM responsibility. Generic code shouldn't force the use of\n> the separate table for the same. Like for Zedstore we store toasted\n> chunks in separate blocks but within the table file itself and don't\n> need separate toast table. Some other AM may implement the\n> functionality differently. So, similar to relation forks, usage of\n> toast table should be optional and left to AM to handle.\n\nYea, Robert is also working on this. In fact, we were literally chatting\nabout it a few minutes ago. He'll probably chime in too.Thank You. \nMy inclination is that it's too late for 12 to do anything about\nthis. There are many known limitations, and we'll discover many more, of\nthe current tableam interface. If we try to fix them for 12, we'll never\nget anywhere. It'll take a while to iron out all those wrinkles...Agree on that, most of the stuff would be enhancements. And enhancements can and will be made as we find them. Plus, will get added to the version active in development that time. Intent is to start the discussion, and not to convey a bug or has to be fixed in v12.",
"msg_date": "Fri, 17 May 2019 14:20:44 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On Fri, May 17, 2019 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > - If your AM uses some system other than TOAST to store large values,\n> > you are out of luck. You will get TOAST tables whether you want them\n> > or not.\n>\n> Which is aesthetically and indode usage wise annoying, but not\n> *terrible*. You get a a bunch of useless pg_class/pg_index entries and a\n> few close-to-empty relfilenodes.\n\nOK, that's fair.\n\n> Well, I don't *quite* buy the suboptimal. As far as I can tell, the\n> current chunking size doesn't have much going for it for heap either -\n> and while a few people (me including) have complained about it, it's not\n> that many people either. My impression is that the current chunking is\n> essentially a randomly chosen choice without much to go for it, and so\n> it's not going to be much different for other AMs.\n\nI don't think that's really quite fair. The size is carefully chosen\nso that you can fit 4 rows on a page with no free space left over.\nThe wisdom of that particular choice is debatable, but think how sad\nyou'd be if your AM had 4 bytes less free space available on every\npage (because, idk, you stored the epoch in the special space, or\nwhatever). If you could somehow get the system to store your TOAST\nchunks in your side table, you'd end up only being able to fit 3 toast\nchunks per page, because the remaining space after you put in 3 chunks\nwould be 4 bytes too small for another chunk. That is really the\npits, because now your toast table is going to be 33% larger than it\nwould have been otherwise.\n\n> > So I think this basically just doesn't work right now.\n>\n> I mean, the zheap on tableam code copies more toast code than I'm happy\n> about, and it's chunking is somewhat suboptimal, but that's not\n> *terrible*. There's no if(zheap) branches outside of zheap related to\n> toasting.\n\nI admit to not having studied that terribly closely, so maybe the\nsituation is not as bad as I think. In any case, it bears saying that\ntableam is a remarkable accomplishment regardless of whatever\nshortcomings it has in this area or elsewhere. And it's not really\nany skin off my neck whether we do anything to improve this for v12 or\nnot, because no table AM written by me is likely to get deployed\nagainst PostgreSQL 12, so why am I even arguing about this? Am I just\na naturally argumentative person?\n\nDon't answer that...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 May 2019 17:41:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
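To put numbers on Robert's "4 rows on a page" point, a hedged back-of-the-envelope for the default 8 kB block size -- the constants below are my reading of the stock heap layout, not quoted from the tree:

#include <stdio.h>

#define MAXALIGN_DOWN(x) ((x) & ~((size_t) 7))  /* assumes 8-byte alignment */

int main(void)
{
    size_t blcksz = 8192;
    size_t page_hdr = 24;           /* page header */
    size_t line_ptrs = 4 * 4;       /* four item pointers */
    size_t per_tuple = MAXALIGN_DOWN((blcksz - page_hdr - line_ptrs) / 4);
    size_t payload = per_tuple
        - 24    /* MAXALIGN'd heap tuple header */
        - 4     /* chunk_id (Oid) */
        - 4     /* chunk_seq (int32) */
        - 4;    /* varlena header */

    /* prints 2032 and 1996 under these assumptions */
    printf("per-chunk tuple budget %zu, chunk payload %zu\n",
           per_tuple, payload);
    return 0;
}

An AM whose per-page or per-tuple overhead differs no longer divides its page evenly by that payload size, which is how you end up in Robert's three-chunks-per-page scenario.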
{
"msg_contents": "On Fri, May 17, 2019 at 2:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> In any case, it bears saying that\n> tableam is a remarkable accomplishment regardless of whatever\n> shortcomings it has in this area or elsewhere.\n>\n\nBig +1 to this.\n\nOn Fri, May 17, 2019 at 2:42 PM Robert Haas <robertmhaas@gmail.com> wrote:In any case, it bears saying that\ntableam is a remarkable accomplishment regardless of whatever\nshortcomings it has in this area or elsewhere.Big +1 to this.",
"msg_date": "Fri, 17 May 2019 14:54:50 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "Just throwing this out there.... Perhaps we should just disable toasting\nfor non-heap tables entirely for now?\n\nThat way at least people can use it and storage plugins just have to be\nable to deal with large datums in their own (or throw errors).\n\nOn Fri., May 17, 2019, 5:56 p.m. Ashwin Agrawal, <aagrawal@pivotal.io>\nwrote:\n\n> On Fri, May 17, 2019 at 2:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> In any case, it bears saying that\n>> tableam is a remarkable accomplishment regardless of whatever\n>> shortcomings it has in this area or elsewhere.\n>>\n>\n> Big +1 to this.\n>\n\nJust throwing this out there.... Perhaps we should just disable toasting for non-heap tables entirely for now? That way at least people can use it and storage plugins just have to be able to deal with large datums in their own (or throw errors). On Fri., May 17, 2019, 5:56 p.m. Ashwin Agrawal, <aagrawal@pivotal.io> wrote:On Fri, May 17, 2019 at 2:42 PM Robert Haas <robertmhaas@gmail.com> wrote:In any case, it bears saying that\ntableam is a remarkable accomplishment regardless of whatever\nshortcomings it has in this area or elsewhere.Big +1 to this.",
"msg_date": "Sun, 19 May 2019 10:03:23 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On Fri, May 17, 2019 at 5:12 PM Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n>> I'd rather see this have a relation_ prefix.\n>\n> +1 to overall patch with that comment incorporated. This seems simple enough to incorporate for v12. Though stating that blind-folded with what else is remaining to be must done for v12.\n\nRebased and updated patch attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 20 May 2019 10:29:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-20 10:29:29 -0400, Robert Haas wrote:\n> From 4e361bfe51810d7c637bf57968da2dfea4197701 Mon Sep 17 00:00:00 2001\n> From: Robert Haas <rhaas@postgresql.org>\n> Date: Fri, 17 May 2019 16:01:47 -0400\n> Subject: [PATCH v2] tableam: Move heap-specific logic from needs_toast_table\n> below tableam.\n\nAssuming you didn't sneakily change the content of\nheapam_relation_needs_toast_table from its previous behaviour, this\nlooks good to me ;)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 08:53:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:53 AM Andres Freund <andres@anarazel.de> wrote:\n> Assuming you didn't sneakily change the content of\n> heapam_relation_needs_toast_table from its previous behaviour, this\n> looks good to me ;)\n\ngit diff --color-moved=zebra says I didn't.\n\nCommitted; thanks for the review.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 May 2019 12:04:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Create TOAST table only if AM needs"
}
] |
[
{
"msg_contents": "In a nearby thread[1], Ashwin Agrawal complained that there is no way\nfor a table AM to get rid the TOAST table that the core system thinks\nshould be created. To that I added a litany of complaints of my own,\nincluding...\n\n- the core system decides whether or not a TOAST table is needed based\non criteria that are very much heap-specific,\n- the code for reading and writing values stored in a TOAST table is\nheap-specific, and\n- the core system assumes that you want to use the same table AM for\nthe main table and the toast table, but you might not (e.g. you might\nwant to use the regular old heap for the latter).\n\nAttached as a series of patches which try to improve things in this\narea. Except possibly for 0001, this is v13 material; see discussion\non the other thread. These likely need some additional work, but I've\ndone enough with them that I thought it would be worth publishing them\nat this stage, because it seems that I'm not the only one thinking\nabout the problems that exist in this general area. Here is an\noverview:\n\n0001 moves the needs_toast_table() calculation below the table AM\nlayer. That allows a table AM to decide for itself whether it wants a\nTOAST table. The most obvious way in which a table AM might want to\nbe different from what core expects is to decide that the answer is\nalways \"no,\" which it can do if it has some other method of storing\nlarge values or doesn't wish to support them. Another possibility is\nthat it wants logic that is basically similar to the heap, but with a\ndifferent size threshold because its tuple format is different. There\nare probably other possibilities.\n\n0002 breaks tuptoaster.c into three separate files. It just does code\nmovement; no functional changes. The three pieces are detoast.c,\nwhich handles detoasting of toast values and inspection of the sizes\nof toasted datums; heaptoast.c, which keeps all the functions that are\nintrinsically heap-specific; and toast_internals.c, which is intended\nto have a very limited audience. A nice fringe benefit of this stuff\nis that a lot of other files that current have to include tuptoaster.h\nand thus htup_details.h no longer do.\n\n0003 creates a new file toast_helper.c which is intended to help table\nAMs implement insertion and deletion of toast table rows. Most of the\nAM-independent logic from the functions remaining in heaptoast.c is\nmoved to this file. This leaves about ~600 of the original ~2400\nlines from tuptoaster.c as heap-specific logic, but a new heap AM\nactually wouldn't need all of that stuff, because some of the logic\nhere is in support of stuff like record types, which use HeapTuple\ninternally and will continue to do so even if those record types are\nstored in some other kind of table.\n\n0004 allows TOAST tables to be implemented using a table AM other than\nheap. In a certain sense this is the opposite of 0003. 0003 is\nintended to help people who are implementing a new kind of main table,\nwhereas 0004 is intended to help people implementing a new kind of\nTOAST table. It teaches the code that inserts, deletes, and retrieves\nTOAST row to use slots, and it makes some efficiency improvements in\nthe hopes of offsetting any performance loss from so doing. 
See\ncommit message and/or patch for full details.\n\nI believe that with all of these changes it should be pretty\nstraightforward for a table AM that wants to use itself to store TOAST\ndata to do so, or to delegate that task back to say the regular heap.\nI haven't really validated that yet, but plan to do so.\n\nIn addition to what's in this patch set, I believe that we should\nprobably rename some of these functions and macros, so that the\nheap-specific ones have heap-specific names and the generic ones\ndon't, but I haven't gone through all of that yet. The existing\npatches try to choose good names for the new things they add, but they\ndon't rename any of the existing stuff. I also think we should\nconsider removing TOAST_MAX_CHUNK_SIZE from the control file, both\nbecause I'm not sure anybody's really using the ability to vary that\nfor anything and because that solution doesn't seem entirely sensible\nin a world of multiple AMs. However, that is a debatable change, so\nmaybe others will disagree.\n\n[1] http://postgr.es/m/CALfoeitE+P8UGii8=BsGQLpHch2EZWJhq4M+D-jfaj8YCa_FSw@mail.gmail.com\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 17 May 2019 17:21:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "tableam vs. TOAST"
},
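To make the 0003 piece concrete, a hedged sketch of how an AM's toaster might drive the helper API; the toast_tuple_* names match the toast_helper.h declarations quoted later in this thread, while tuple_too_large(), options, and max_chunk_size are placeholders for the AM's own logic, and the real heap code runs more passes than these two:

ToastTupleContext ttc;
int     attno;

toast_tuple_init(&ttc);     /* context setup elided in this sketch */

/* First try compressing inline attributes until the tuple fits. */
while (tuple_too_large(&ttc))
{
    attno = toast_tuple_find_biggest_attribute(&ttc, true, false);
    if (attno < 0)
        break;              /* nothing left worth compressing */
    toast_tuple_try_compression(&ttc, attno);
}

/* Still too big?  Push the largest attributes out of line. */
while (tuple_too_large(&ttc))
{
    attno = toast_tuple_find_biggest_attribute(&ttc, false, false);
    if (attno < 0)
        break;
    toast_tuple_externalize(&ttc, attno, options, max_chunk_size);
}

toast_tuple_cleanup(&ttc, true);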
{
"msg_contents": "Updated and rebased patches attached.\n\nOn Fri, May 17, 2019 at 5:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 0001 moves the needs_toast_table() calculation below the table AM\n> layer. That allows a table AM to decide for itself whether it wants a\n> TOAST table. The most obvious way in which a table AM might want to\n> be different from what core expects is to decide that the answer is\n> always \"no,\" which it can do if it has some other method of storing\n> large values or doesn't wish to support them. Another possibility is\n> that it wants logic that is basically similar to the heap, but with a\n> different size threshold because its tuple format is different. There\n> are probably other possibilities.\n\nThis was committed as 1171d7d58545f26a402f76a05936d572bf29d53b per\ndiscussion on another thread.\n\n> 0002 breaks tuptoaster.c into three separate files. It just does code\n> movement; no functional changes. The three pieces are detoast.c,\n> which handles detoasting of toast values and inspection of the sizes\n> of toasted datums; heaptoast.c, which keeps all the functions that are\n> intrinsically heap-specific; and toast_internals.c, which is intended\n> to have a very limited audience. A nice fringe benefit of this stuff\n> is that a lot of other files that current have to include tuptoaster.h\n> and thus htup_details.h no longer do.\n\nNow 0001. No changes.\n\n> 0003 creates a new file toast_helper.c which is intended to help table\n> AMs implement insertion and deletion of toast table rows. Most of the\n> AM-independent logic from the functions remaining in heaptoast.c is\n> moved to this file. This leaves about ~600 of the original ~2400\n> lines from tuptoaster.c as heap-specific logic, but a new heap AM\n> actually wouldn't need all of that stuff, because some of the logic\n> here is in support of stuff like record types, which use HeapTuple\n> internally and will continue to do so even if those record types are\n> stored in some other kind of table.\n\nNow 0002. No changes.\n\n> 0004 allows TOAST tables to be implemented using a table AM other than\n> heap. In a certain sense this is the opposite of 0003. 0003 is\n> intended to help people who are implementing a new kind of main table,\n> whereas 0004 is intended to help people implementing a new kind of\n> TOAST table. It teaches the code that inserts, deletes, and retrieves\n> TOAST row to use slots, and it makes some efficiency improvements in\n> the hopes of offsetting any performance loss from so doing. See\n> commit message and/or patch for full details.\n\nNow 0003. Some brain fade repaired.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 21 May 2019 14:10:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Tue, May 21, 2019 at 2:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Updated and rebased patches attached.\n\nAnd again.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 11 Jun 2019 12:17:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 9:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, May 21, 2019 at 2:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Updated and rebased patches attached.\n>\n> And again.\n>\n\nHi Robert,\n\nI have tested the TOAST patches(v3) with different storage options\nlike(MAIN, EXTERNAL, EXTENDED, etc.), and\ncombinations of compression and out-of-line storage options.\nI have used a few dummy tables with various tuple count say 10k, 20k, 40k,\netc. with different column lengths.\nUsed manual CHECKPOINT option with (checkpoint_timeout = 1d, max_wal_size =\n10GB) before the test to avoid performance fluctuations,\nand calculated the results as a median value of a few consecutive test\nexecutions.\n\nPlease find the SQL script attached herewith, which I have used to perform\nthe observation.\n\nBelow are the test scenarios, how I have checked the behavior and\nperformance of TOAST patches against PG master.\n1. where a single column is compressed(SCC)\n2. where multiple columns are compressed(MCC)\n -- ALTER the table column/s for storage as \"MAIN\" to make sure that\nthe column values are COMPRESSED.\n\n3. where a single column is pushed to the TOAST table but not\ncompressed(SCTNC)\n4. where multiple columns are pushed to the TOAST table but not\ncompressed(MCTNC)\n -- ALTER the table column/s for storage as \"EXTERNAL\" to make sure\nthat the column values are pushed to the TOAST table but not COMPRESSED.\n\n5. where a single column is pushed to the TOAST table and also\ncompressed(SCTC)\n6. where multiple columns are pushed to the TOAST table and also\ncompressed(MCTC)\n -- ALTER the table column/s for storage as \"EXTENDED\" to make sure\nthat the column values are pushed to the TOAST table and also COMPRESSED.\n\n7. updating the tuples with similar data shouldn't affect the behavior of\nstorage options.\n\nPlease find my observation as below:\nSystem Used: (VCPUs: 8, RAM: 16GB, Size: 640GB)\n10000 Tuples 20000 Tuples 40000 Tuples 80000 Tuples\nWithout Patch With Patch Without Patch With Patch Without Patch With\nPatch Without\nPatch With Patch\n1. SCC INSERT 125921.737 ms (02:05.922) 125992.563 ms (02:05.993) 234263.295\nms (03:54.263) 235952.336 ms (03:55.952) 497290.442 ms (08:17.290) 502820.139\nms (08:22.820) 948470.603 ms (15:48.471) 941778.952 ms (15:41.779)\n1. SCC UPDATE 263017.814 ms (04:23.018) 270893.910 ms (04:30.894) 488393.748\nms (08:08.394) 507937.377 ms (08:27.937) 1078862.613 ms (17:58.863) 1053029.428\nms (17:33.029) 2037119.576 ms (33:57.120) 2023633.862 ms (33:43.634)\n2. MCC INSERT 35415.089 ms (00:35.415) 35910.552 ms (00:35.911) 70899.737\nms (01:10.900) 70800.964 ms (01:10.801) 142185.996 ms (02:22.186) 142241.913\nms (02:22.242)\n2. MCC UPDATE 72043.757 ms (01:12.044) 73848.732 ms (01:13.849) 137717.696\nms (02:17.718) 137577.606 ms (02:17.578) 276358.752 ms (04:36.359) 276520.727\nms (04:36.521)\n3. SCTNC INSERT 26377.274 ms (00:26.377) 25600.189 ms (00:25.600) 45702.630\nms (00:45.703) 45163.510 ms (00:45.164) 99903.299 ms (01:39.903) 100013.004\nms (01:40.013)\n3. SCTNC UPDATE 78385.225 ms (01:18.385) 76680.325 ms (01:16.680) 151823.250\nms (02:31.823) 153503.971 ms (02:33.504) 308197.734 ms (05:08.198) 308474.937\nms (05:08.475)\n4. MCTNC INSERT 26214.069 ms (00:26.214) 25383.522 ms (00:25.384) 50826.522\nms (00:50.827) 50221.669 ms (00:50.222) 106034.338 ms (01:46.034) 106122.827\nms (01:46.123)\n4. 
MCTNC UPDATE 78423.817 ms (01:18.424) 75154.593 ms (01:15.155) 158885.787\nms (02:38.886) 156530.964 ms (02:36.531) 319721.266 ms (05:19.721) 322385.709\nms (05:22.386)\n5. SCTC INSERT 38451.022 ms (00:38.451) 38652.520 ms (00:38.653) 71590.748\nms (01:11.591) 71048.975 ms (01:11.049) 143327.913 ms (02:23.328) 142593.207\nms (02:22.593)\n5. SCTC UPDATE 82069.311 ms (01:22.069) 81678.131 ms (01:21.678) 138763.508\nms (02:18.764) 138625.473 ms (02:18.625) 277534.080 ms (04:37.534) 277091.611\nms (04:37.092)\n6. MCTC INSERT 36325.730 ms (00:36.326) 35803.368 ms (00:35.803) 73285.204\nms (01:13.285) 72728.371 ms (01:12.728) 142324.859 ms (02:22.325) 144368.335\nms (02:24.368)\n6. MCTC UPDATE 73740.729 ms (01:13.741) 73002.511 ms (01:13.003) 141309.859\nms (02:21.310) 139676.173 ms (02:19.676) 278906.647 ms (04:38.907) 279522.408\nms (04:39.522)\n\nAll the observation looks good to me,\nexcept for the \"Test1\" for SCC UPDATE with tuple count(10K/20K), for SCC\nINSERT with tuple count(40K) there was a slightly increse in time taken\nincase of \"with patch\" result. For a better observation, I also have ran\nthe same \"Test 1\" for higher tuple count(i.e. 80K), and it also looks fine.\n\nI also have performed the below test with TOAST table objects.\n8. pg_dump/restore, pg_upgrade with these\n9. Streaming Replication setup\n10. Concurrent Transactions\n\nWhile testing few concurrent transactions I have below query:\n-- Concurrent transactions acquire a lock for TOAST option(ALTER TABLE ..\nSET STORAGE .. MAIN/EXTERNAL/EXTENDED/ etc)\n\n-- Session 1:\nCREATE TABLE a (a_id text PRIMARY KEY);\nCREATE TABLE b (b_id text);\nINSERT INTO a VALUES ('a'), ('b');\nINSERT INTO b VALUES ('a'), ('b'), ('b');\n\nBEGIN;\nALTER TABLE b ADD CONSTRAINT bfk FOREIGN KEY (b_id) REFERENCES a (a_id);\n -- Not Acquiring any lock\n\n-- Session 2:\nSELECT * FROM b WHERE b_id = 'a'; -- Shows result\n\n-- Session 1:\nALTER TABLE b ALTER COLUMN b_id SET STORAGE EXTERNAL; -- Acquire a\nlock\n\n-- Session 2:\nSELECT * FROM b WHERE b_id = 'a'; -- Hang/Waiting for lock in\nsession 1\n\nIs this an expected behavior?\n\n\n-- \n\nWith Regards,\nPrabhat Kumar Sahu",
"msg_date": "Tue, 25 Jun 2019 11:49:09 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 4:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, May 21, 2019 at 2:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Updated and rebased patches attached.\n>\n> And again.\n\nHi Robert,\n\nThus spake GCC:\n\ndetoast.c: In function ‘toast_fetch_datum’:\ndetoast.c:308:12: error: variable ‘toasttupDesc’ set but not used\n[-Werror=unused-but-set-variable]\nTupleDesc toasttupDesc;\n^\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:08:10 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
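The usual shapes of a fix for that set-but-not-used warning, hedged -- the variable use below is reconstructed for illustration, and the actual commit may simply have deleted the variable:

/* Option 1: stop declaring the variable if nothing reads it. */

/* Option 2: keep it for assertions only, via the marker from c.h. */
TupleDesc   toasttupDesc PG_USED_FOR_ASSERTS_ONLY;

toasttupDesc = toastrel->rd_att;
Assert(toasttupDesc->natts == 3);   /* chunk_id, chunk_seq, chunk_data */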
{
"msg_contents": "On Tue, Jun 25, 2019 at 2:19 AM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n> I have tested the TOAST patches(v3) with different storage options like(MAIN, EXTERNAL, EXTENDED, etc.), and\n> combinations of compression and out-of-line storage options.\n> I have used a few dummy tables with various tuple count say 10k, 20k, 40k, etc. with different column lengths.\n> Used manual CHECKPOINT option with (checkpoint_timeout = 1d, max_wal_size = 10GB) before the test to avoid performance fluctuations,\n> and calculated the results as a median value of a few consecutive test executions.\n\nThanks for testing.\n\n> All the observation looks good to me,\n> except for the \"Test1\" for SCC UPDATE with tuple count(10K/20K), for SCC INSERT with tuple count(40K) there was a slightly increse in time taken\n> incase of \"with patch\" result. For a better observation, I also have ran the same \"Test 1\" for higher tuple count(i.e. 80K), and it also looks fine.\n\nDid you run each test just once? How stable are the results?\n\n> While testing few concurrent transactions I have below query:\n> -- Concurrent transactions acquire a lock for TOAST option(ALTER TABLE .. SET STORAGE .. MAIN/EXTERNAL/EXTENDED/ etc)\n>\n> -- Session 1:\n> CREATE TABLE a (a_id text PRIMARY KEY);\n> CREATE TABLE b (b_id text);\n> INSERT INTO a VALUES ('a'), ('b');\n> INSERT INTO b VALUES ('a'), ('b'), ('b');\n>\n> BEGIN;\n> ALTER TABLE b ADD CONSTRAINT bfk FOREIGN KEY (b_id) REFERENCES a (a_id); -- Not Acquiring any lock\n\nFor me, this acquires AccessShareLock and ShareRowExclusiveLock on the\ntarget table.\n\nrhaas=# select locktype, database, relation, pid, mode, granted from\npg_locks where relation = 'b'::regclass;\n locktype | database | relation | pid | mode | granted\n----------+----------+----------+-------+-----------------------+---------\n relation | 16384 | 16872 | 93197 | AccessShareLock | t\n relation | 16384 | 16872 | 93197 | ShareRowExclusiveLock | t\n(2 rows)\n\nI don't see what that has to do with the topic at hand, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 8 Jul 2019 11:36:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Sun, Jul 7, 2019 at 11:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thus spake GCC:\n>\n> detoast.c: In function ‘toast_fetch_datum’:\n> detoast.c:308:12: error: variable ‘toasttupDesc’ set but not used\n> [-Werror=unused-but-set-variable]\n> TupleDesc toasttupDesc;\n> ^\n\nHmm, fixed, I hope.\n\nHere's an updated patch set. In addition to the above fix, I fixed\nthings up for the new pgindent rules and added a fourth patch that\nrenames the detoasting functions to something that doesn't include\n'heap.'\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 8 Jul 2019 12:52:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
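A plausible shape of the renames in that fourth patch, hedged -- the right-hand names are my recollection of what eventually landed, not quoted from the patch itself:

/*
 *   heap_tuple_untoast_attr(attr)        ->  detoast_attr(attr)
 *   heap_tuple_untoast_attr_slice(...)   ->  detoast_attr_slice(...)
 *   heap_tuple_fetch_attr(attr)          ->  detoast_external_attr(attr)
 */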
{
"msg_contents": "On Mon, Jul 8, 2019 at 9:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jun 25, 2019 at 2:19 AM Prabhat Sahu\n> <prabhat.sahu@enterprisedb.com> wrote:\n> > I have tested the TOAST patches(v3) with different storage options\n> like(MAIN, EXTERNAL, EXTENDED, etc.), and\n> > combinations of compression and out-of-line storage options.\n> > I have used a few dummy tables with various tuple count say 10k, 20k,\n> 40k, etc. with different column lengths.\n> > Used manual CHECKPOINT option with (checkpoint_timeout = 1d,\n> max_wal_size = 10GB) before the test to avoid performance fluctuations,\n> > and calculated the results as a median value of a few consecutive test\n> executions.\n>\n> Thanks for testing.\n>\n> > All the observation looks good to me,\n> > except for the \"Test1\" for SCC UPDATE with tuple count(10K/20K), for SCC\n> INSERT with tuple count(40K) there was a slightly increse in time taken\n> > incase of \"with patch\" result. For a better observation, I also have ran\n> the same \"Test 1\" for higher tuple count(i.e. 80K), and it also looks fine.\n>\n> Did you run each test just once? How stable are the results?\n>\nNo, I have executed the test multiple times(7times each) and calculated the\nresult as the median among those,\nand the result looks stable(with v3 patches).\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Software India Pvt. Ltd.\n\nThe Postgres Database Company\n\nOn Mon, Jul 8, 2019 at 9:06 PM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jun 25, 2019 at 2:19 AM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n> I have tested the TOAST patches(v3) with different storage options like(MAIN, EXTERNAL, EXTENDED, etc.), and\n> combinations of compression and out-of-line storage options.\n> I have used a few dummy tables with various tuple count say 10k, 20k, 40k, etc. with different column lengths.\n> Used manual CHECKPOINT option with (checkpoint_timeout = 1d, max_wal_size = 10GB) before the test to avoid performance fluctuations,\n> and calculated the results as a median value of a few consecutive test executions.\n\nThanks for testing.\n\n> All the observation looks good to me,\n> except for the \"Test1\" for SCC UPDATE with tuple count(10K/20K), for SCC INSERT with tuple count(40K) there was a slightly increse in time taken\n> incase of \"with patch\" result. For a better observation, I also have ran the same \"Test 1\" for higher tuple count(i.e. 80K), and it also looks fine.\n\nDid you run each test just once? How stable are the results?No, I have executed the test multiple times(7times each) and calculated the result as the median among those,and the result looks stable(with v3 patches).-- \nWith Regards,Prabhat Kumar SahuSkype ID: prabhat.sahu1984EnterpriseDB Software India Pvt. Ltd.The Postgres Database Company",
"msg_date": "Tue, 9 Jul 2019 10:10:11 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 12:40 AM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n>> Did you run each test just once? How stable are the results?\n>\n> No, I have executed the test multiple times(7times each) and calculated the result as the median among those,\n> and the result looks stable(with v3 patches).\n\nI spent some time looking at your SCC test today. I think this isn't\nreally testing the code that actually got changed in the patch: a\nquick CPU profile shows that your SCC test is bottlenecked on\npg_lzcompress, which spends a huge amount of time compressing the\ngigantic string of 'a's you've constructed, and that code is exactly\nthe same with the patch as it in master. So, I think that any\nfluctuations between the patched and unpatched results are just random\nvariation. There's no reason the patch should be slower with one row\ncount and faster with a different row count, anyway.\n\nI tried to come up with a better test case that uses a more modest\namount of data, and ended up with this:\n\n-- Setup.\nCREATE OR REPLACE FUNCTION randomish_string(integer) RETURNS text AS $$\nSELECT string_agg(random()::text, '') FROM generate_series(1, $1);\n$$ LANGUAGE sql;\n\nCREATE TABLE source_compressed (a int, b text);\nINSERT INTO source_compressed\nSELECT g, repeat('a', 2000) FROM generate_series(1, 10000) g;\nCREATE TABLE sink_compressed (LIKE source_compressed);\n\nCREATE TABLE source_external (a int, b text);\nINSERT INTO source_external\nSELECT g, randomish_string(400) FROM generate_series(1, 10000) g;\n\nCREATE TABLE sink_external (LIKE source_external);\nCREATE TABLE source_external_uncompressed (a int, b text);\nALTER TABLE source_external_uncompressed ALTER COLUMN b SET STORAGE EXTERNAL;\nINSERT INTO source_external_uncompressed\nSELECT g, randomish_string(400) FROM generate_series(1, 10000) g;\nCREATE TABLE sink_external_uncompressed (LIKE source_external_uncompressed);\nALTER TABLE sink_external_uncompressed ALTER COLUMN b SET STORAGE EXTERNAL;\n\n-- Test.\n\\timing\nTRUNCATE sink_compressed, sink_external, sink_external_uncompressed;\nCHECKPOINT;\nINSERT INTO sink_compressed SELECT * FROM source_compressed;\nINSERT INTO sink_external SELECT * FROM source_external;\nINSERT INTO sink_external_uncompressed SELECT * FROM\nsource_external_uncompressed;\n\nRoughly, on both master and with the patches, the first one takes\nabout 4.2 seconds, the second 7.5, and the third 1.2. The third one\nis the fastest because it doesn't do any compression. Since it does\nless irrelevant work than the other two cases, it has the best chance\nof showing up any performance regression that the patch has caused --\nif any regression existed, I suppose that it would be an increased\nper-toast-fetch or per-toast-chunk overhead. However, I can't\nreproduce any such regression. My first attempt at testing that case\nshowed the patch about 1% slower, but that wasn't reliably\nreproducible when I did it a bunch more times. So as far as I can\nfigure, this patch does not regress performance in any\neasily-measurable way.\n\nBarring objections, I plan to commit the whole series of patches here\n(latest rebase attached). They are not perfect and could likely be\nimproved in various ways, but I think they are an improvement over\nwhat we have now, and it's not like it's set in stone once it's\ncommitted. We can change it more if we come up with a better idea.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 1 Aug 2019 12:23:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 12:23:42 -0400, Robert Haas wrote:\n> Barring objections, I plan to commit the whole series of patches here\n> (latest rebase attached). They are not perfect and could likely be\n> improved in various ways, but I think they are an improvement over\n> what we have now, and it's not like it's set in stone once it's\n> committed. We can change it more if we come up with a better idea.\n\nCould you wait until I either had a chance to look again, or until, say,\nMonday if I don't get around to it? I'll try to get to it earlier than\nthat.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Aug 2019 10:53:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Aug 1, 2019 at 1:53 PM Andres Freund <andres@anarazel.de> wrote:\n> Could you wait until I either had a chance to look again, or until, say,\n> Monday if I don't get around to it? I'll try to get to it earlier than\n> that.\n\nSure, no problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 1 Aug 2019 13:55:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-01 12:23:42 -0400, Robert Haas wrote:\n> Roughly, on both master and with the patches, the first one takes\n> about 4.2 seconds, the second 7.5, and the third 1.2. The third one\n> is the fastest because it doesn't do any compression. Since it does\n> less irrelevant work than the other two cases, it has the best chance\n> of showing up any performance regression that the patch has caused --\n> if any regression existed, I suppose that it would be an increased\n> per-toast-fetch or per-toast-chunk overhead. However, I can't\n> reproduce any such regression. My first attempt at testing that case\n> showed the patch about 1% slower, but that wasn't reliably\n> reproducible when I did it a bunch more times. So as far as I can\n> figure, this patch does not regress performance in any\n> easily-measurable way.\n\nHm, those all include writing, right? And for read-only we don't expect\nany additional overhead, correct? The write overhead is probably too\nlarge show a bit of function call overhead - but if so, it'd probably be\non unlogged tables? And with COPY, because that utilizes multi_insert,\nwhich means more toasting in a shorter amount of time?\n\n\n.oO(why does everyone attach attachements out of order? Is that\na gmail thing?)\n\n\n> From a4c858c75793f0f8aff7914c572a6615ea5babf8 Mon Sep 17 00:00:00 2001\n> From: Robert Haas <rhaas@postgresql.org>\n> Date: Mon, 8 Jul 2019 11:58:05 -0400\n> Subject: [PATCH 1/4] Split tuptoaster.c into three separate files.\n>\n> detoast.c/h contain functions required to detoast a datum, partially\n> or completely, plus a few other utility functions for examining the\n> size of toasted datums.\n>\n> toast_internals.c/h contain functions that are used internally to the\n> TOAST subsystem but which (mostly) do not need to be accessed from\n> outside.\n>\n> heaptoast.c/h contains code that is intrinsically specific to the\n> heap AM, either because it operates on HeapTuples or is based on the\n> layout of a heap page.\n>\n> detoast.c and toast_internals.c are placed in\n> src/backend/access/common rather than src/backend/access/heap. At\n> present, both files still have dependencies on the heap, but that will\n> be improved in a future commit.\n\nI wonder if toasting.c should be moved too?\n\nIf anybody doesn't know git's --color-moved, it makes patches like this\na lot easier to review.\n\n> index 00000000000..582af147ea1\n> --- /dev/null\n> +++ b/src/include/access/detoast.h\n> @@ -0,0 +1,92 @@\n> +/*-------------------------------------------------------------------------\n> + *\n> + * detoast.h\n> + * Access to compressed and external varlena values.\n>\n> Hm. Does it matter that that also includes stuff like expanded datums?\n>\n> + * Copyright (c) 2000-2019, PostgreSQL Global Development Group\n> + *\n> + * src/include/access/detoast.h\n> + *\n> + *-------------------------------------------------------------------------\n> + */\n> +#ifndef DETOAST_H\n> +#define DETOAST_H\n\ntrailing whitespace after \"#ifndef DETOAST_H \".\n\n\n> commit 60d51e6510c66f79c51e43fe22730fe050d87854\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: 2019-07-08 12:02:16 -0400\n>\n> Create an API for inserting and deleting rows in TOAST tables.\n>\n> This moves much of the non-heap-specific logic from toast_delete and\n> toast_insert_or_update into a helper functions accessible via a new\n> header, toast_helper.h. 
Using the functions in this module, a table\n> AM can implement creation and deletion of TOAST table rows with\n> much less code duplication than was possible heretofore. Some\n> table AMs won't want to use the TOAST logic at all, but for those\n> that do this will make that easier.\n>\n> Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com\n\nHm. This leaves toast_insert_or_update() as a name. That makes it sound\nlike it's generic toast code, rather than heap specific?\n\nIt's definitely nice how a lot of repetitive code has been deduplicated.\nAlso makes it easier to see how algorithmically expensive\ntoast_insert_or_update() is :(.\n\n\n>\t/*\n>\t * Second we look for attributes of attstorage 'x' or 'e' that are still\n>\t * inline, and make them external. But skip this if there's no toast\n>\t * table to push them to.\n>\t */\n>\twhile (heap_compute_data_size(tupleDesc,\n>\t\t\t\t\t\t\t\t toast_values, toast_isnull) > maxDataLen &&\n>\t\t rel->rd_rel->reltoastrelid != InvalidOid)\n\nShouldn't this condition be the other way round?\n\n\n> \tif (for_compression)\n> \t\tskip_colflags |= TOASTCOL_INCOMPRESSIBLE;\n>\n> \tfor (i = 0; i < numAttrs; i++)\n> \t{\n> \t\tForm_pg_attribute att = TupleDescAttr(tupleDesc, i);\n>\n> \t\tif ((ttc->ttc_attr[i].tai_colflags & skip_colflags) != 0)\n> \t\t\tcontinue;\n> \t\tif (VARATT_IS_EXTERNAL(DatumGetPointer(ttc->ttc_values[i])))\n> \t\t\tcontinue;\t\t\t/* can't happen, toast_action would be 'p' */\n> \t\tif (for_compression &&\n> \t\t\tVARATT_IS_COMPRESSED(DatumGetPointer(ttc->ttc_values[i])))\n> \t\t\tcontinue;\n> \t\tif (check_main && att->attstorage != 'm')\n> \t\t\tcontinue;\n> \t\tif (!check_main && att->attstorage != 'x' && att->attstorage != 'e')\n> \t\t\tcontinue;\n>\n> \t\tif (ttc->ttc_attr[i].tai_size > biggest_size)\n> \t\t{\n> \t\t\tbiggest_attno = i;\n> \t\t\tbiggest_size = ttc->ttc_attr[i].tai_size;\n> \t\t}\n> \t}\n\nCouldn't most of these be handled via colflags, instead of having\nnumerous individual checks? I.e. 
if you had TOASTCOL_COMPRESSED,\nTOASTCOL_IGNORED, TOASTCOL_MAIN, TOASTFLAG_EXTERNAL, etc, all but the\nsize check ought to boil down to a single mask test?\n\n\n> extern void toast_tuple_init(ToastTupleContext *ttc);\n> extern int\ttoast_tuple_find_biggest_attribute(ToastTupleContext *ttc,\n> \t\t\t\t\t\t\t\t\t\t\t bool for_compression,\n> \t\t\t\t\t\t\t\t\t\t\t bool check_main);\n> extern void toast_tuple_try_compression(ToastTupleContext *ttc, int attribute);\n> extern void toast_tuple_externalize(ToastTupleContext *ttc, int attribute,\n> \t\t\t\t\t\t\t\t\tint options, int max_chunk_size);\n> extern void toast_tuple_cleanup(ToastTupleContext *ttc, bool cleanup_toastrel);\n\nI wonder if a better prefix wouldn't be toasting_...\n\n\n> +/*\n> + * Information about one column of a tuple being toasted.\n> + *\n> + * NOTE: toast_action[i] can have these values:\n> + * ' ' default handling\n> + * 'p' already processed --- don't touch it\n> + * 'x' incompressible, but OK to move off\n> + *\n> + * NOTE: toast_attr[i].tai_size is only made valid for varlena attributes with\n> + * toast_action[i] different from 'p'.\n> + */\n> +typedef struct\n> +{\n> + struct varlena *tai_oldexternal;\n> + int32 tai_size;\n> + uint8 tai_colflags;\n> +} ToastAttrInfo;\n\nI think that comment is outdated?\n\n> +/*\n> + * Flags indicating the overall state of a TOAST operation.\n> + *\n> + * TOAST_NEEDS_DELETE_OLD indicates that one or more old TOAST datums need\n> + * to be deleted.\n> + *\n> + * TOAST_NEEDS_FREE indicates that one or more TOAST values need to be freed.\n> + *\n> + * TOAST_HAS_NULLS indicates that nulls were found in the tuple being toasted.\n> + *\n> + * TOAST_NEEDS_CHANGE indicates that a new tuple needs to built; in other\n> + * words, the toaster did something.\n> + */\n> +#define TOAST_NEEDS_DELETE_OLD 0x0001\n> +#define TOAST_NEEDS_FREE 0x0002\n> +#define TOAST_HAS_NULLS 0x0004\n> +#define TOAST_NEEDS_CHANGE 0x0008\n\nI'd make these enums. They're more often accessible in a debugger...\n\n\n\n> commit 9e4bd383a00e6bb96088666e57673b343049345c\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: 2019-08-01 10:37:02 -0400\n>\n> Allow TOAST tables to be implemented using table AMs other than heap.\n>\n> toast_fetch_datum, toast_save_datum, and toast_delete_datum are\n> adjusted to use tableam rather than heap-specific functions. This\n> might have some performance impact, but this patch attempts to\n> mitigate that by restructuring things so that we don't open and close\n> the toast table and indexes multiple times per tuple.\n>\n> tableam now exposes an integer value (not a callback) for the\n> maximum TOAST chunk size, and has a new callback allowing table\n> AMs to specify the AM that should be used to implement the TOAST\n> table. Previously, the toast AM was always the same as the table AM.\n>\n> Patch by me, tested by Prabhat Sabu.\n>\n> Discussion: http://postgr.es/m/CA+TgmoZv-=2iWM4jcw5ZhJeL18HF96+W1yJeYrnGMYdkFFnEpQ@mail.gmail.com\n\nI'm quite unconvinced that making the chunk size specified by the AM is\na good idea to do in isolation. We have TOAST_MAX_CHUNK_SIZE in\npg_control etc. It seems a bit dangerous to let AMs provide the size,\nwithout being very clear that any change of the value will make data\ninaccessible. 
It'd be different if the max were only used during\ntoasting.\n\nI think the *size* checks should be weakened so we check:\n1) After each chunk, whether the already assembled chunks exceed the\n expected size.\n2) After all chunks have been collected, check that the size is exactly\n what we expect.\n\nI don't think that removes a meaningful amount of error\nchecking. Missing tuples etc get detected by the chunk_ids not being\nconsecutive. The overall size is still verified.\n\n\nThe obvious problem with this is the slice fetching logic. For slices\nwith an offset of 0, it's obviously trivial to implement. For the higher\nslice logic, I'd assume we'd have to fetch the first slice by estimating\nwhere the start chunk is based on the currently suggested chunk size, and\nrestarting the scan earlier/later if not accurate. In most cases it'll\nbe accurate, so we'd not lose efficiency.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Aug 2019 15:42:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
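
To make the single-mask suggestion in the review above concrete, here is a minimal sketch of how the per-column checks could collapse into one flag test. Of these names, only TOASTCOL_INCOMPRESSIBLE appears in the patch text quoted above; the other flag names and the simplified ToastAttrInfo are invented for illustration.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-column flag bits (illustrative only). */
enum
{
    TOASTCOL_INCOMPRESSIBLE = 1 << 0,
    TOASTCOL_PROCESSED      = 1 << 1,   /* toast_action 'p' */
    TOASTCOL_COMPRESSED     = 1 << 2,   /* value already compressed */
    TOASTCOL_MAIN           = 1 << 3,   /* attstorage == 'm' */
    TOASTCOL_EXTENDED       = 1 << 4    /* attstorage == 'x' or 'e' */
};

typedef struct
{
    int32_t tai_size;
    uint8_t tai_colflags;
} ToastAttrInfo;

static int
find_biggest_attribute(ToastAttrInfo *attrs, int natts,
                       bool for_compression, bool check_main)
{
    uint8_t skip = TOASTCOL_PROCESSED;
    int     biggest_attno = -1;
    int32_t biggest_size = 0;   /* the real code starts from a
                                 * MAXALIGN'd pointer-size threshold */

    /* Pay the storage-kind and state tests once per call... */
    if (for_compression)
        skip |= TOASTCOL_INCOMPRESSIBLE | TOASTCOL_COMPRESSED;
    skip |= check_main ? TOASTCOL_EXTENDED : TOASTCOL_MAIN;

    /* ...leaving a single mask test per column inside the loop. */
    for (int i = 0; i < natts; i++)
    {
        if ((attrs[i].tai_colflags & skip) != 0)
            continue;
        if (attrs[i].tai_size > biggest_size)
        {
            biggest_attno = i;
            biggest_size = attrs[i].tai_size;
        }
    }
    return biggest_attno;
}

The design point is that the chain of ifs is paid once per tuple when the flags are populated, not once per column per scan of the loop.
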
{
"msg_contents": "On Fri, Aug 2, 2019 at 6:42 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm, those all include writing, right? And for read-only we don't expect\n> any additional overhead, correct? The write overhead is probably too\n> large show a bit of function call overhead - but if so, it'd probably be\n> on unlogged tables? And with COPY, because that utilizes multi_insert,\n> which means more toasting in a shorter amount of time?\n\nYes and yes. I guess we could test the unlogged case and with COPY,\nbut in any realistic case you're still looking for a tiny CPU overhead\nin a sea of I/O costs. Even if an extremely short COPY on an unlogged\ntable regresses slightly, we do not normally reject patches that\nimprove code quality on the grounds that they add function call\noverhead in a few places. Code like this hard to optimize and\nmaintain; as you remarked yourself, there are multiple opportunities\nto do this stuff better that are hard to see in the current structure.\n\n> .oO(why does everyone attach attachements out of order? Is that\n> a gmail thing?)\n\nMust be.\n\n> I wonder if toasting.c should be moved too?\n\nI mean, we could, but I don't really see a reason. It'd just be\nmoving it from one fairly-generic place to another, and I'd rather\nminimize churn.\n\n> trailing whitespace after \"#ifndef DETOAST_H \".\n\nWill fix.\n\n> Hm. This leaves toast_insert_or_update() as a name. That makes it sound\n> like it's generic toast code, rather than heap specific?\n\nI'll rename it to heap_toast_insert_or_update(). But I think I'll put\nthat in 0004 with the other renames.\n\n> It's definitely nice how a lot of repetitive code has been deduplicated.\n> Also makes it easier to see how algorithmically expensive\n> toast_insert_or_update() is :(.\n\nYep.\n\n> Shouldn't this condition be the other way round?\n\nI had to fight pretty hard to stop myself from tinkering with the\nalgorithm -- this can clearly be done better, but I wanted to make it\nmatch the existing structure as far as possible. It also only needs to\nbe tested once, not on every loop iteration, so if we're going to\nstart changing things, we should go further than just swapping the\norder of the tests. For now I prefer to draw a line in the sand and\nchange nothing.\n\n> Couldn't most of these be handled via colflags, instead of having\n> numerous individual checks? I.e. if you had TOASTCOL_COMPRESSED,\n> TOASTCOL_IGNORED, TOASTCOL_MAIN, TOASTFLAG_EXTERNAL, etc, all but the\n> size check ought to boil down to a single mask test?\n\nI'm not really seeing how more flags would significantly simplify this\nlogic, but I might be missing something.\n\n> I wonder if a better prefix wouldn't be toasting_...\n\nI'm open to other votes, but I think it's toast_tuple is more specific\nthan toasting_ and thus better.\n\n> > +/*\n> > + * Information about one column of a tuple being toasted.\n> > + *\n> > + * NOTE: toast_action[i] can have these values:\n> > + * ' ' default handling\n> > + * 'p' already processed --- don't touch it\n> > + * 'x' incompressible, but OK to move off\n> > + *\n> > + * NOTE: toast_attr[i].tai_size is only made valid for varlena attributes with\n> > + * toast_action[i] different from 'p'.\n> > + */\n> > +typedef struct\n> > +{\n> > + struct varlena *tai_oldexternal;\n> > + int32 tai_size;\n> > + uint8 tai_colflags;\n> > +} ToastAttrInfo;\n>\n> I think that comment is outdated?\n\nOops. 
Will fix.\n\n> > +/*\n> > + * Flags indicating the overall state of a TOAST operation.\n> > + *\n> > + * TOAST_NEEDS_DELETE_OLD indicates that one or more old TOAST datums need\n> > + * to be deleted.\n> > + *\n> > + * TOAST_NEEDS_FREE indicates that one or more TOAST values need to be freed.\n> > + *\n> > + * TOAST_HAS_NULLS indicates that nulls were found in the tuple being toasted.\n> > + *\n> > + * TOAST_NEEDS_CHANGE indicates that a new tuple needs to built; in other\n> > + * words, the toaster did something.\n> > + */\n> > +#define TOAST_NEEDS_DELETE_OLD 0x0001\n> > +#define TOAST_NEEDS_FREE 0x0002\n> > +#define TOAST_HAS_NULLS 0x0004\n> > +#define TOAST_NEEDS_CHANGE 0x0008\n>\n> I'd make these enums. They're more often accessible in a debugger...\n\nUgh, I hate that style. Abusing enums to make flag bits makes my skin\ncrawl. I always wondered what the appeal was -- I guess now I know.\nBlech.\n\n> I'm quite unconvinced that making the chunk size specified by the AM is\n> a good idea to do in isolation. We have TOAST_MAX_CHUNK_SIZE in\n> pg_control etc. It seems a bit dangerous to let AMs provide the size,\n> without being very clear that any change of the value will make data\n> inaccessible. It'd be different if the max were only used during\n> toasting.\n\nI was actually thinking about proposing that we rip\nTOAST_MAX_CHUNK_SIZE out of pg_control. No real effort has been made\nhere to make this something that users could configure, and I don't\nknow of a good reason to configure it. It also seems pretty out of\nplace in a world where there are multiple AMs floating around -- why\nshould heap, and only heap, be checked there? Granted it does have\nsome pride of place, but still.\n\n> I think the *size* checks should be weakened so we check:\n> 1) After each chunk, whether the already assembled chunks exceed the\n> expected size.\n> 2) After all chunks have been collected, check that the size is exactly\n> what we expect.\n>\n> I don't think that removes a meaningful amount of error\n> checking. Missing tuples etc get detected by the chunk_ids not being\n> consecutive. The overall size is still verified.\n>\n> The obvious problem with this is the slice fetching logic. For slices\n> with an offset of 0, it's obviously trivial to implement. For the higher\n> slice logic, I'd assume we'd have to fetch the first slice by estimating\n> where the start chunk is based on the current suggest chunk size, and\n> restarting the scan earlier/later if not accurate. In most cases it'll\n> be accurate, so we'd not loose efficiency.\n\nI don't feel entirely convinced that there's any rush to do all of\nthis right now, and the more I change the harder it is to make sure\nthat I haven't broken anything. How strongly do you feel about this\nstuff?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 5 Aug 2019 15:36:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
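
An aside for readers following the #define-versus-enum exchange above: the two spellings produce identical bits, and the point Andres raises is only about what a debugger can display. A sketch using the same values as the patch; the typedef name is invented:

/* The flag bits from the patch, spelled as an enum. The values and
 * semantics are unchanged; the benefit is that a variable declared
 * with this type lets gdb print symbolic names, e.g.
 * "(TOAST_NEEDS_FREE | TOAST_HAS_NULLS)", instead of a bare "6". */
typedef enum ToastOperationFlags
{
    TOAST_NEEDS_DELETE_OLD = 0x0001,
    TOAST_NEEDS_FREE       = 0x0002,
    TOAST_HAS_NULLS        = 0x0004,
    TOAST_NEEDS_CHANGE     = 0x0008
} ToastOperationFlags;
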
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't feel entirely convinced that there's any rush to do all of\n> this right now, and the more I change the harder it is to make sure\n> that I haven't broken anything. How strongly do you feel about this\n> stuff?\n\nFWIW, I agree with your comment further up that this patch ought to\njust refactor, not change any algorithms. The latter can be done\nin separate patches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Aug 2019 15:41:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-08-05 15:36:59 -0400, Robert Haas wrote:\n> On Fri, Aug 2, 2019 at 6:42 PM Andres Freund <andres@anarazel.de> wrote:\n> > Hm. This leaves toast_insert_or_update() as a name. That makes it sound\n> > like it's generic toast code, rather than heap specific?\n> \n> I'll rename it to heap_toast_insert_or_update(). But I think I'll put\n> that in 0004 with the other renames.\n\nMakes sense.\n\n\n> > It's definitely nice how a lot of repetitive code has been deduplicated.\n> > Also makes it easier to see how algorithmically expensive\n> > toast_insert_or_update() is :(.\n> \n> Yep.\n> \n> > Shouldn't this condition be the other way round?\n> \n> I had to fight pretty hard to stop myself from tinkering with the\n> algorithm -- this can clearly be done better, but I wanted to make it\n> match the existing structure as far as possible. It also only needs to\n> be tested once, not on every loop iteration, so if we're going to\n> start changing things, we should go further than just swapping the\n> order of the tests. For now I prefer to draw a line in the sand and\n> change nothing.\n\nMakes sense.\n\n\n> > Couldn't most of these be handled via colflags, instead of having\n> > numerous individual checks? I.e. if you had TOASTCOL_COMPRESSED,\n> > TOASTCOL_IGNORED, TOASTCOL_MAIN, TOASTFLAG_EXTERNAL, etc, all but the\n> > size check ought to boil down to a single mask test?\n> \n> I'm not really seeing how more flags would significantly simplify this\n> logic, but I might be missing something.\n\nWell, right now you have a number of ifs for each attribute. If you\nencoded all the parameters into flags, you could change that to a single\nflag test - as far as I can tell, all the parameters could be\nrepresented as that, if you moved MAIN etc into flags. A single if for\nflags (and then the size check) is cheaper than several separate checks.\n\n\n> > I'm quite unconvinced that making the chunk size specified by the AM is\n> > a good idea to do in isolation. We have TOAST_MAX_CHUNK_SIZE in\n> > pg_control etc. It seems a bit dangerous to let AMs provide the size,\n> > without being very clear that any change of the value will make data\n> > inaccessible. It'd be different if the max were only used during\n> > toasting.\n> \n> I was actually thinking about proposing that we rip\n> TOAST_MAX_CHUNK_SIZE out of pg_control. No real effort has been made\n> here to make this something that users could configure, and I don't\n> know of a good reason to configure it. It also seems pretty out of\n> place in a world where there are multiple AMs floating around -- why\n> should heap, and only heap, be checked there? Granted it does have\n> some pride of place, but still.\n\n> > I think the *size* checks should be weakened so we check:\n> > 1) After each chunk, whether the already assembled chunks exceed the\n> > expected size.\n> > 2) After all chunks have been collected, check that the size is exactly\n> > what we expect.\n> >\n> > I don't think that removes a meaningful amount of error\n> > checking. Missing tuples etc get detected by the chunk_ids not being\n> > consecutive. The overall size is still verified.\n> >\n> > The obvious problem with this is the slice fetching logic. For slices\n> > with an offset of 0, it's obviously trivial to implement. For the higher\n> > slice logic, I'd assume we'd have to fetch the first slice by estimating\n> > where the start chunk is based on the current suggest chunk size, and\n> > restarting the scan earlier/later if not accurate. 
In most cases it'll\n> > be accurate, so we'd not loose efficiency.\n> \n> I don't feel entirely convinced that there's any rush to do all of\n> this right now, and the more I change the harder it is to make sure\n> that I haven't broken anything. How strongly do you feel about this\n> stuff?\n\nI think we either should leave the hardcoded limit in place, or make it\nactually not fixed. Ripping-it-out-but-not-actually just seems like a\ntrap for the unwary, without much point.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Aug 2019 00:11:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On 2019-Aug-05, Robert Haas wrote:\n\n> > Shouldn't this condition be the other way round?\n> \n> I had to fight pretty hard to stop myself from tinkering with the\n> algorithm -- this can clearly be done better, but I wanted to make it\n> match the existing structure as far as possible. It also only needs to\n> be tested once, not on every loop iteration, so if we're going to\n> start changing things, we should go further than just swapping the\n> order of the tests. For now I prefer to draw a line in the sand and\n> change nothing.\n\nI agree, and can we move forward with this 0001? The idea here is to\nchange no code (as also suggested by Tom elsewhere), and it's the\nlargest patch in this series by a mile. I checked --color-moved=zebra\nand I think the patch looks fine, and also it compiles fine. I ran\nsrc/tools/pginclude/headerscheck on it and found no complaints.\n\nSo here's a rebased version, where the DETOAST_H whitespace has been\nremoved. No other changes from your original. Will you please push\nsoon?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 5 Sep 2019 10:52:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 10:52 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I agree, and can we move forward with this 0001? The idea here is to\n> change no code (as also suggested by Tom elsewhere), and it's the\n> largest patch in this series by a mile. I checked --color-moved=zebra\n> and I think the patch looks fine, and also it compiles fine. I ran\n> src/tools/pginclude/headerscheck on it and found no complaints.\n>\n> So here's a rebased version, where the DETOAST_H whitespace has been\n> removed. No other changes from your original. Will you please push\n> soon?\n\nDone, thanks. Here's the rest again with the additional rename added\nto 0003 (formerly 0004). I think it's probably OK to go ahead with\nthat stuff, too, but I'll wait a bit to see if anyone wants to raise\nmore objections.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 5 Sep 2019 13:42:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-09-05 13:42:40 -0400, Robert Haas wrote:\n> Done, thanks. Here's the rest again with the additional rename added\n> to 0003 (formerly 0004). I think it's probably OK to go ahead with\n> that stuff, too, but I'll wait a bit to see if anyone wants to raise\n> more objections.\n\nWell, I still dislike making the toast chunk size configurable in a\nhalfhearted manner.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 5 Sep 2019 12:10:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 3:10 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-09-05 13:42:40 -0400, Robert Haas wrote:\n> > Done, thanks. Here's the rest again with the additional rename added\n> > to 0003 (formerly 0004). I think it's probably OK to go ahead with\n> > that stuff, too, but I'll wait a bit to see if anyone wants to raise\n> > more objections.\n>\n> Well, I still dislike making the toast chunk size configurable in a\n> halfhearted manner.\n\nSo, I'd be willing to look into that some more. But how about if I\ncommit the next patch in the series first? I think this comment is\nreally about the second patch in the series, \"Allow TOAST tables to be\nimplemented using table AMs other than heap,\" and it's fair to point\nout that, since that patch extends table AM, we're somewhat committed\nto it once we put it in. But \"Create an API for inserting and\ndeleting rows in TOAST tables.\" is just refactoring, and I don't see\nwhat we gain from waiting to commit that part.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Sep 2019 15:27:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Well, I still dislike making the toast chunk size configurable in a\n> halfhearted manner.\n\nIt's hard to make it fully configurable without breaking our on-disk\nstorage, because of the lack of any explicit representation of the chunk\nsize in TOAST data. You have to \"just know\" how big the chunks are\nsupposed to be.\n\nHowever, it's reasonable to ask why we should treat it as an AM property,\nespecially a fixed AM property as this has it. If somebody does\nreimplement toast logic in some other AM, they might well decide it's\nworth the storage cost to be more flexible about the chunk size ... but\ntoo bad, this design won't let them do it.\n\nI don't entirely understand why relation_toast_am is a callback\nwhile toast_max_chunk_size isn't, either. Why would they not both\nbe callbacks? That would at least let an AM set a per-relation\nmax chunk size, if it wanted.\n\nIt seems like this design throws away most of the benefit of a fixed\nchunk size (mostly, being able to do relevant modulo arithmetic with\nshifts and masks rather than full-fledged integer division) without\ngetting much of anything in return.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Sep 2019 15:36:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 3:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Well, I still dislike making the toast chunk size configurable in a\n> > halfhearted manner.\n>\n> It's hard to make it fully configurable without breaking our on-disk\n> storage, because of the lack of any explicit representation of the chunk\n> size in TOAST data. You have to \"just know\" how big the chunks are\n> supposed to be.\n\nThere was a concrete proposal about this from Andres here, down at the\nbottom of the email:\n\nhttp://postgr.es/m/20190802224251.lsxw4o5ebn2ng5ey@alap3.anarazel.de\n\nBasically, detoasting would tolerate whatever chunk size it finds, and\nthe slice-fetching logic would get complicated.\n\n> However, it's reasonable to ask why we should treat it as an AM property,\n> especially a fixed AM property as this has it. If somebody does\n> reimplement toast logic in some other AM, they might well decide it's\n> worth the storage cost to be more flexible about the chunk size ... but\n> too bad, this design won't let them do it.\n\nFair complaint. The main reason I want to treat it as an AM property\nis that TOAST_TUPLE_THRESHOLD is defined in terms of heap-specific\nconstants, and having other AMs include heap-specific header files\nseems like a thing we should try hard to avoid. Once you're indirectly\nincluding htup_details.h in every AM in existence, it's going to be\nhard to be sure that you've got no other dependencies on the current\nheap AM. But I agree that making it not a fixed value could be useful.\nOne benefit of it would be that you could just change the value, even\nfor the current heap, without breaking access to already-toasted data.\n\n> It seems like this design throws away most of the benefit of a fixed\n> chunk size (mostly, being able to do relevant modulo arithmetic with\n> shifts and masks rather than full-fledged integer division) without\n> getting much of anything in return.\n\nI don't think you're really getting that particular benefit, because\nTOAST_TUPLE_THRESHOLD and TOAST_TUPLE_TARGET are not going to end up\nas powers of two. But you do get the benefit of working with\nconstants instead of a value determined at runtime.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 5 Sep 2019 15:51:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
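
A small illustration of the constant-versus-runtime divisor point in the two messages above. Tom's shift-and-mask benefit strictly needs a power of two, but a compile-time constant typically keeps most of the win even for awkward values, because optimizing compilers turn division by a constant into a multiply-and-shift sequence. Nothing below is from the patch; 1996 is the commonly cited TOAST_MAX_CHUNK_SIZE for 8 kB pages:

#include <stdint.h>

#define CHUNK_CONST 1996    /* typical TOAST_MAX_CHUNK_SIZE, 8 kB pages */

/* Division by a compile-time constant: usually compiled to a
 * multiply-high plus shift, with no div instruction, even though
 * 1996 is not a power of two. */
static inline uint32_t
chunk_no_const(uint32_t byte_offset)
{
    return byte_offset / CHUNK_CONST;
}

/* Division by a value known only at run time (e.g. a per-AM chunk
 * size): a genuine integer division on every call. */
static inline uint32_t
chunk_no_runtime(uint32_t byte_offset, uint32_t chunk_size)
{
    return byte_offset / chunk_size;
}
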
{
"msg_contents": "On 2019-09-05 15:27:28 -0400, Robert Haas wrote:\n> On Thu, Sep 5, 2019 at 3:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2019-09-05 13:42:40 -0400, Robert Haas wrote:\n> > > Done, thanks. Here's the rest again with the additional rename added\n> > > to 0003 (formerly 0004). I think it's probably OK to go ahead with\n> > > that stuff, too, but I'll wait a bit to see if anyone wants to raise\n> > > more objections.\n> >\n> > Well, I still dislike making the toast chunk size configurable in a\n> > halfhearted manner.\n> \n> So, I'd be willing to look into that some more. But how about if I\n> commit the next patch in the series first? I think this comment is\n> really about the second patch in the series, \"Allow TOAST tables to be\n> implemented using table AMs other than heap,\" and it's fair to point\n> out that, since that patch extends table AM, we're somewhat committed\n> to it once we put it in. But \"Create an API for inserting and\n> deleting rows in TOAST tables.\" is just refactoring, and I don't see\n> what we gain from waiting to commit that part.\n\nYea, makes sense to me.\n\n\n",
"msg_date": "Thu, 5 Sep 2019 13:07:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Sep 5, 2019 at 4:07 PM Andres Freund <andres@anarazel.de> wrote:\n> Yea, makes sense to me.\n\nOK, done. Here's the remaining patches again, with a slight update to\nthe renaming patch (now 0002). In the last version, I renamed\ntoast_insert_or_update to heap_toast_insert_or_update but did not\nrename toast_delete to heap_toast_delete. Actually, I'm not seeing\nany particular reason not to go ahead and push the renaming patch at\nthis point also. I guess there's a question as to whether I should\nmore aggressively add \"heap\" to the names of the other functions in\nheaptoast.h, but I'm kinda \"meh\" about that. It seems likely that\nother AMs will need their own versions of toast_insert_or_update() and\ntoast_delete(), but they shouldn't really need their own version of,\nsay, toast_flatten_tuple_to_datum(), and the point there is that we're\nbuilding a DatumTuple, so calling it\nheap_toast_flatten_tuple_to_datum() seems almost misleading. I'm\ninclined to leave all that stuff alone for now.\n\n0001 needs more thought, as discussed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 6 Sep 2019 10:59:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Fri, Aug 2, 2019 at 6:42 PM Andres Freund <andres@anarazel.de> wrote:\n> The obvious problem with this is the slice fetching logic. For slices\n> with an offset of 0, it's obviously trivial to implement. For the higher\n> slice logic, I'd assume we'd have to fetch the first slice by estimating\n> where the start chunk is based on the current suggest chunk size, and\n> restarting the scan earlier/later if not accurate. In most cases it'll\n> be accurate, so we'd not loose efficiency.\n\nDilip and I were discussing this proposal this morning, and after\ntalking to him, I don't quite see how to make this work without\nsignificant loss of efficiency. Suppose that, based on the estimated\nchunk size, you decide that there are probably 10 chunks and that the\nvalue that you need is probably located in chunk 6. So you fetch chunk\n6. Happily, chunk 6 is the size that you expect, so you extract the\nbytes that you need and go on your way.\n\nBut ... what if there are actually 6 chunks, not 10, and the first\nfive are bigger than you expected, and the reason why the size of\nchunk 6 matched your expectation because it was the last chunk and\nthus smaller than the rest? Now you've just returned the wrong answer.\n\nAFAICS, there's no way to detect that except to always read at least\ntwo chunks, which seems like a pretty heavy price to pay. It doesn't\ncost if you were going to need to read at least 2 chunks anyway, but\nif you were only going to need to read 1, having to fetch 2 stinks.\n\nActually, when I initially read your proposal, I thought you were\nproposing to relax things such that the chunks didn't even have to all\nbe the same size. That seems like it would be something cool that\ncould potentially be leveraged not only by AMs but perhaps also by\ndata types that want to break up their data strategically to cater to\nfuture access patterns. But then there's really no way to make slicing\nwork without always reading from the beginning.\n\nThere would be no big problem here - in either scenario - if each\nchunk contained the byte-offset of that chunk relative to the start of\nthe datum. You could guess wildly and always know whether or not you\nhad got it right without reading any extra data. But that's not the\ncase.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Sep 2019 09:03:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
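
The failure mode described above, reduced to arithmetic. All the numbers below are invented to mirror the scenario (an assumed chunk size that undershoots the real one); nothing here corresponds to actual code in the patch:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t assumed_chunk = 2000;   /* chunk size the reader assumes */
    uint32_t actual_chunk  = 3334;   /* chunk size the writer used */
    uint32_t datum_size    = 20000;  /* total size, known and verified */
    uint32_t wanted_off    = 11000;  /* first byte of the wanted slice */

    /* Reader's estimate: ~10 chunks, target in chunk 5 (0-based). */
    unsigned est_chunks = (datum_size + assumed_chunk - 1) / assumed_chunk;
    unsigned guess      = wanted_off / assumed_chunk;

    /* Reality: 6 chunks, target in chunk 3 (0-based). */
    unsigned real_chunks = (datum_size + actual_chunk - 1) / actual_chunk;
    unsigned truth       = wanted_off / actual_chunk;

    printf("estimate: chunk %u of %u; reality: chunk %u of %u\n",
           guess, est_chunks, truth, real_chunks);

    /* The trap: if the guessed chunk happens to be the final, short
     * chunk, its length can match the expected size by accident, so
     * the wrong bytes are returned and no size check fires. */
    return 0;
}

The closing comment is the crux of the message above: a chunk that recorded its own starting byte offset would let the reader validate any guess with a single fetch.
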
{
"msg_contents": "On Fri, Sep 6, 2019 at 10:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Sep 5, 2019 at 4:07 PM Andres Freund <andres@anarazel.de> wrote:\n> > Yea, makes sense to me.\n>\n> OK, done. Here's the remaining patches again, with a slight update to\n> the renaming patch (now 0002). In the last version, I renamed\n> toast_insert_or_update to heap_toast_insert_or_update but did not\n> rename toast_delete to heap_toast_delete. Actually, I'm not seeing\n> any particular reason not to go ahead and push the renaming patch at\n> this point also.\n\nAnd, hearing no objections, done.\n\nHere's the last patch back, rebased over that renaming. Although I\nthink that Andres (and Tom) are probably right that there's room for\nimprovement here, I currently don't see a way around the issues I\nwrote about in http://postgr.es/m/CA+Tgmoa0zFcaCpOJCsSpOLLGpzTVfSyvcVB-USS8YoKzMO51Yw@mail.gmail.com\n-- so not quite sure where to go next. Hopefully Andres or someone\nelse will give me a quick whack with the cluebat if I'm missing\nsomething obvious.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 4 Oct 2019 14:32:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi All,\n\nWhile testing the Toast patch(PG+v7 patch) I found below server crash.\nSystem configuration:\nVCPUs: 4, RAM: 8GB, Storage: 320GB\n\nThis issue is not frequently reproducible, we need to repeat the same\ntestcase multiple times.\n\nCREATE OR REPLACE FUNCTION toast_chunks_cnt_func(p1 IN text)\n RETURNS int AS $$\nDECLARE\n chunks_cnt int;\n v_tbl text;\nBEGIN\n SELECT reltoastrelid::regclass INTO v_tbl FROM pg_class WHERE RELNAME =\np1;\n EXECUTE 'SELECT count(*) FROM ' || v_tbl::regclass INTO chunks_cnt;\n RETURN chunks_cnt;\nEND; $$ LANGUAGE PLPGSQL;\n\n-- Server crash after multiple run of below testcase\n-- ------------------------------------------------------------------------\nCHECKPOINT;\nCREATE TABLE toast_tab (c1 text);\n\\d+ toast_tab\n-- ALTER table column c1 for storage as \"EXTERNAL\" to make sure that the\ncolumn value is pushed to the TOAST table but not COMPRESSED.\nALTER TABLE toast_tab ALTER COLUMN c1 SET STORAGE EXTERNAL;\n\\d+ toast_tab\n\\timing\nINSERT INTO toast_tab\n( select repeat('a', 200000)\n from generate_series(1,40000) x);\n\\timing\nSELECT reltoastrelid::regclass FROM pg_class WHERE RELNAME = 'toast_tab';\nSELECT toast_chunks_cnt_func('toast_tab') \"Number of chunks\";\nSELECT pg_column_size(t1.*) FROM toast_tab t1 limit 1;\nSELECT DISTINCT SUBSTR(c1, 90000,10) FROM toast_tab;\n\nCHECKPOINT;\n\\timing\nUPDATE toast_tab SET c1 = UPPER(c1);\n\\timing\nSELECT toast_chunks_cnt_func('toast_tab') \"Number of chunks\";\nSELECT pg_column_size(t1.*) FROM toast_tab t1 limit 1;\nSELECT DISTINCT SUBSTR(c1, 90000,10) FROM toast_tab;\n\nDROP TABLE toast_tab;\n-- ------------------------------------------------------------------------\n\n-- Stacktrace as below:\n[centos@host-192-168-1-249 bin]$ gdb -q -c data2/core.3151 postgres\nReading symbols from\n/home/centos/PG/PGsrc/postgresql/inst/bin/postgres...done.\n[New LWP 3151]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: checkpointer\n '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f2267d33207 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-260.el7_6.5.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-37.el7_6.x86_64 libcom_err-1.42.9-13.el7.x86_64\nlibselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-16.el7_6.1.x86_64\npcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f2267d33207 in raise () from /lib64/libc.so.6\n#1 0x00007f2267d348f8 in abort () from /lib64/libc.so.6\n#2 0x0000000000eb3a80 in errfinish (dummy=0) at elog.c:552\n#3 0x0000000000c26530 in ProcessSyncRequests () at sync.c:393\n#4 0x0000000000bbbc57 in CheckPointBuffers (flags=256) at bufmgr.c:2589\n#5 0x0000000000604634 in CheckPointGuts (checkPointRedo=51448358328,\nflags=256) at xlog.c:8992\n#6 0x0000000000603b5e in CreateCheckPoint (flags=256) at xlog.c:8781\n#7 0x0000000000aed8fa in CheckpointerMain () at checkpointer.c:481\n#8 0x00000000006240de in AuxiliaryProcessMain (argc=2,\nargv=0x7ffe887c0880) at bootstrap.c:461\n#9 0x0000000000b0e834 in StartChildProcess (type=CheckpointerProcess) at\npostmaster.c:5414\n#10 0x0000000000b09283 in reaper (postgres_signal_arg=17) at\npostmaster.c:2995\n#11 <signal handler called>\n#12 0x00007f2267df1f53 in __select_nocancel () from /lib64/libc.so.6\n#13 0x0000000000b05000 in ServerLoop () at postmaster.c:1682\n#14 0x0000000000b0457b in PostmasterMain (argc=5, argv=0x349bce0) at\npostmaster.c:1391\n#15 
0x0000000000971c9f in main (argc=5, argv=0x349bce0) at main.c:210\n(gdb)\n\n\n\nOn Sat, Oct 5, 2019 at 12:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Sep 6, 2019 at 10:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Thu, Sep 5, 2019 at 4:07 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Yea, makes sense to me.\n> >\n> > OK, done. Here's the remaining patches again, with a slight update to\n> > the renaming patch (now 0002). In the last version, I renamed\n> > toast_insert_or_update to heap_toast_insert_or_update but did not\n> > rename toast_delete to heap_toast_delete. Actually, I'm not seeing\n> > any particular reason not to go ahead and push the renaming patch at\n> > this point also.\n>\n> And, hearing no objections, done.\n>\n> Here's the last patch back, rebased over that renaming. Although I\n> think that Andres (and Tom) are probably right that there's room for\n> improvement here, I currently don't see a way around the issues I\n> wrote about in\n> http://postgr.es/m/CA+Tgmoa0zFcaCpOJCsSpOLLGpzTVfSyvcVB-USS8YoKzMO51Yw@mail.gmail.com\n> -- so not quite sure where to go next. Hopefully Andres or someone\n> else will give me a quick whack with the cluebat if I'm missing\n> something obvious.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Software India Pvt. Ltd.\n\nThe Postgres Database Company",
"msg_date": "Wed, 30 Oct 2019 13:19:06 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n\n> While testing the Toast patch(PG+v7 patch) I found below server crash.\n> System configuration:\n> VCPUs: 4, RAM: 8GB, Storage: 320GB\n>\n> This issue is not frequently reproducible, we need to repeat the same\n> testcase multiple times.\n>\n\nI wonder if this is an independent bug, because the backtrace doesn't look\nlike it's related to the stuff this is changing. Your report doesn't\nspecify whether you can also reproduce the problem without the patch, which\nis something that you should always check before reporting a bug in a\nparticular patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:While testing the Toast patch(PG+v7 patch) I found below server crash.System configuration:VCPUs: 4, RAM: 8GB, Storage: 320GBThis issue is not frequently reproducible, we need to repeat the same testcase multiple times.I wonder if this is an independent bug, because the backtrace doesn't look like it's related to the stuff this is changing. Your report doesn't specify whether you can also reproduce the problem without the patch, which is something that you should always check before reporting a bug in a particular patch. -- Robert HaasEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 30 Oct 2019 12:16:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Oct 30, 2019 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <\n> prabhat.sahu@enterprisedb.com> wrote:\n>\n>> While testing the Toast patch(PG+v7 patch) I found below server crash.\n>> System configuration:\n>> VCPUs: 4, RAM: 8GB, Storage: 320GB\n>>\n>> This issue is not frequently reproducible, we need to repeat the same\n>> testcase multiple times.\n>>\n>\n> I wonder if this is an independent bug, because the backtrace doesn't look\n> like it's related to the stuff this is changing. Your report doesn't\n> specify whether you can also reproduce the problem without the patch, which\n> is something that you should always check before reporting a bug in a\n> particular patch.\n>\n\nHi Robert,\n\nMy sincere apologize that I have not mentioned the issue in more detail.\nI have ran the same case against both PG HEAD and HEAD+Patch multiple\ntimes(7, 10, 20nos), and\nas I found this issue was not failing in HEAD and same case is reproducible\nin HEAD+Patch (again I was not sure about the backtrace whether its related\nto patch or not).\n\n\n\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Software India Pvt. Ltd.\n\nThe Postgres Database Company\n\nOn Wed, Oct 30, 2019 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:While testing the Toast patch(PG+v7 patch) I found below server crash.System configuration:VCPUs: 4, RAM: 8GB, Storage: 320GBThis issue is not frequently reproducible, we need to repeat the same testcase multiple times.I wonder if this is an independent bug, because the backtrace doesn't look like it's related to the stuff this is changing. Your report doesn't specify whether you can also reproduce the problem without the patch, which is something that you should always check before reporting a bug in a particular patch. Hi Robert,My sincere apologize that I have not mentioned the issue in more detail.I have ran the same case against both PG HEAD and HEAD+Patch multiple times(7, 10, 20nos), and as I found this issue was not failing in HEAD and same case is reproducible in HEAD+Patch (again I was not sure about the backtrace whether its related to patch or not). -- Robert HaasEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company\n-- \nWith Regards,Prabhat Kumar SahuSkype ID: prabhat.sahu1984EnterpriseDB Software India Pvt. Ltd.The Postgres Database Company",
"msg_date": "Thu, 31 Oct 2019 10:26:27 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "From the stack trace shared by Prabhat, I understand that the checkpointer\nprocess panicked due to one of the following two reasons:\n\n1) The fsync() failed in the first attempt itself and the reason for the\nfailure was not due to file being dropped or truncated i.e. fsync failed\nwith the error other than ENOENT. Refer to ProcessSyncRequests() for\ndetails esp. the code inside for (failures = 0; !entry->canceled;\nfailures++) loop.\n\n2) The first attempt to fsync() failed with ENOENT error because just\nbefore the fsync function was called, the file being synced either got\ndropped or truncated. When this happened, the checkpointer process called\nAbsorbSyncRequests() to update the entry for deleted file in the hash table\nbut it seems like AbsorbSyncRequests() failed to do so and that's why the\n\"entry->canceled\" couldn't be set to true. Due to this, fsync() was\nperformed on the same file twice and that failed too. As checkpointer\nprocess doesn't expect the fsync on the same file to fail twice, it\npanicked. Again, please check ProcessSyncRequests() for details esp. the\ncode inside for (failures = 0; !entry->canceled; failures++) loop.\n\nNow, the point of discussion here is, which one of the above two reasons\ncould the cause for panic? According to me, point #2 doesn't look like the\npossible reason for panic. The reason being just before a file is unlinked,\nbackend first sends a SYNC_FORGET_REQUEST to the checkpointer process which\nmarks the entry for this file in the hash table as cancelled and then\nremoves the file. So, with this understanding it is hard to believe that\nonce the first fsync() for a file has failed with error ENOENT, a call to\nAbsorbSyncRequests() made immediately after that wouldn't update the entry\nfor this file in the hash table because the backend only removes the file\nonce it has successfully sent the SYNC_FORGET_REQUEST for that file to the\ncheckpointer process. See mdunlinkfork()->register_forget_request() for\ndetails on this.\n\nSo, I think the first point that I mentioned above could be the probable\nreason for the checkpointer process getting panicked. But, having said all\nthat, it would be good to have some evidence for it which can be confirmed\nby inspecting the server logfile.\n\nPrabhat, is it possible for you to re-run the test-case with\nlog_min_messages set to DEBUG1 and save the logfile for the test-case that\ncrashes. This would be helpful in knowing if the fsync was performed just\nonce or twice i.e. whether point #1 is the reason for the panic or point\n#2.\n\nThanks,\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Oct 31, 2019 at 10:26 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n\n>\n>\n> On Wed, Oct 30, 2019 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <\n>> prabhat.sahu@enterprisedb.com> wrote:\n>>\n>>> While testing the Toast patch(PG+v7 patch) I found below server crash.\n>>> System configuration:\n>>> VCPUs: 4, RAM: 8GB, Storage: 320GB\n>>>\n>>> This issue is not frequently reproducible, we need to repeat the same\n>>> testcase multiple times.\n>>>\n>>\n>> I wonder if this is an independent bug, because the backtrace doesn't\n>> look like it's related to the stuff this is changing. 
Your report doesn't\n>> specify whether you can also reproduce the problem without the patch, which\n>> is something that you should always check before reporting a bug in a\n>> particular patch.\n>>\n>\n> Hi Robert,\n>\n> My sincere apologize that I have not mentioned the issue in more detail.\n> I have ran the same case against both PG HEAD and HEAD+Patch multiple\n> times(7, 10, 20nos), and\n> as I found this issue was not failing in HEAD and same case is\n> reproducible in HEAD+Patch (again I was not sure about the backtrace\n> whether its related to patch or not).\n>\n>\n>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n> --\n>\n> With Regards,\n>\n> Prabhat Kumar Sahu\n> Skype ID: prabhat.sahu1984\n> EnterpriseDB Software India Pvt. Ltd.\n>\n> The Postgres Database Company\n>",
"msg_date": "Tue, 5 Nov 2019 16:48:12 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
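
The control flow Ashutosh walks through above, compressed into a sketch. This is a reconstruction from the mail text only, not the real sync.c code; the entry type and all helper names are invented stand-ins:

#include <errno.h>
#include <stdbool.h>

typedef struct SyncEntry
{
    bool canceled;      /* set once a SYNC_FORGET_REQUEST is absorbed */
} SyncEntry;

/* Invented stand-ins for the real machinery; sync_one_file returns 0
 * on success, -1 with errno set on failure. */
extern int  sync_one_file(SyncEntry *entry);
extern void absorb_sync_requests(void);
extern void panic_could_not_fsync(void);

static void
process_one_entry(SyncEntry *entry)
{
    for (int failures = 0; !entry->canceled; failures++)
    {
        if (sync_one_file(entry) == 0)
            return;             /* fsync succeeded */

        /* Reason 1 above (a hard error), or reason 2 (a second miss
         * on the same file): either way the checkpointer panics. */
        if (errno != ENOENT || failures > 0)
            panic_could_not_fsync();

        /* ENOENT: the file may simply have been dropped. Absorb any
         * pending SYNC_FORGET_REQUESTs; if the file was unlinked,
         * this should set entry->canceled and end the loop. */
        absorb_sync_requests();
    }
}
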
{
"msg_contents": "On 2019-10-04 20:32, Robert Haas wrote:\n> Here's the last patch back, rebased over that renaming. Although I\n> think that Andres (and Tom) are probably right that there's room for\n> improvement here, I currently don't see a way around the issues I\n> wrote about inhttp://postgr.es/m/CA+Tgmoa0zFcaCpOJCsSpOLLGpzTVfSyvcVB-USS8YoKzMO51Yw@mail.gmail.com\n> -- so not quite sure where to go next. Hopefully Andres or someone\n> else will give me a quick whack with the cluebat if I'm missing\n> something obvious.\n\nThis patch seems sound as far as the API restructuring goes.\n\nIf I may summarize the remaining discussion: This patch adds a field \ntoast_max_chunk_size to TableAmRoutine, to take the place of the \nhardcoded TOAST_MAX_CHUNK_SIZE. The heapam_methods implementation then \nsets this to TOAST_MAX_CHUNK_SIZE, thus preserving existing behavior. \nOther table AMs can set this to some other value that they find \nsuitable. Currently, TOAST_MAX_CHUNK_SIZE is computed based on \nheap-specific values and assumptions, so it's likely that other AMs \nwon't want to use that value. (Side note: Maybe rename \nTOAST_MAX_CHUNK_SIZE then.) The concern was raised that while \nTOAST_MAX_CHUNK_SIZE is stored in pg_control, values chosen by other \ntable AMs won't be, and so they won't have any safe-guards against \nstarting a server with incompatible disk layout. Then, various ways to \ndetect or check the TOAST chunk size at run time were discussed, but \nnone seemed satisfactory.\n\nI think AMs are probably going to need a general mechanism to store \npg_control-like data somewhere. There are going to be chunk sizes, \nblock sizes, segment sizes, and so on. This one is just a particular \ncase of that.\n\nThis particular patch doesn't need to be held up by that, though. \nProviding that mechanism can be a separate subproject of pluggable storage.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 6 Nov 2019 10:01:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 4:01 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> This patch seems sound as far as the API restructuring goes.\n\nThanks. And thanks for weighing in.\n\n> If I may summarize the remaining discussion: This patch adds a field\n> toast_max_chunk_size to TableAmRoutine, to take the place of the\n> hardcoded TOAST_MAX_CHUNK_SIZE. The heapam_methods implementation then\n> sets this to TOAST_MAX_CHUNK_SIZE, thus preserving existing behavior.\n> Other table AMs can set this to some other value that they find\n> suitable. Currently, TOAST_MAX_CHUNK_SIZE is computed based on\n> heap-specific values and assumptions, so it's likely that other AMs\n> won't want to use that value. (Side note: Maybe rename\n> TOAST_MAX_CHUNK_SIZE then.)\n\nYeah.\n\n> The concern was raised that while\n> TOAST_MAX_CHUNK_SIZE is stored in pg_control, values chosen by other\n> table AMs won't be, and so they won't have any safe-guards against\n> starting a server with incompatible disk layout. Then, various ways to\n> detect or check the TOAST chunk size at run time were discussed, but\n> none seemed satisfactory.\n\nYeah. I've been nervous about trying to proceed with this patch\nbecause Andres seemed confident there was a better approach than what\nI did here, but as I wrote about back on September 12th, it doesn't\nseem like his idea will work. I'm not clear whether I'm being stupid\nand there's a way to salvage his idea, or whether he just made a\nmistake.\n\nOne possible approach would be to move more of the logic below the\ntableam layer. For example, toast_fetch_datum() could do this:\n\ntoastrel = table_open(toast_pointer.va_toastrelid, AccessShareLock);\ncall_a_new_tableam_method_here(toast_rel, &toast_pointer);\ntable_close(toastrel, AccessShareLock);\n\n...and then it becomes the tableam's job to handle everything that\nneeds to be done in the middle. That might be better than what I've\ngot now; it's certainly more flexible. It does mean that an AM that\njust wants to reuse the existing logic with a different chunk size has\ngot to repeat some code, but it's probably <~150 lines, so that's\nperhaps not a catastrophe.\n\nAlternatively, we could (a) stick with the current approach, (b) use\nthe current approach but make the table AM member a callback rather\nthan a constant, or (c) something else entirely. I don't want to give\nup on making the TOAST infrastructure pluggable; requiring every AM to\nuse the heap as its TOAST implementation seems too constraining.\n\n> I think AMs are probably going to need a general mechanism to store\n> pg_control-like data somewhere. There are going to be chunk sizes,\n> block sizes, segment sizes, and so on. This one is just a particular\n> case of that.\n\nThat's an interesting point. I don't know for sure to what extent we\nneed that; I think that the toast chunk size is actually not very\ninteresting to vary, and the fact that we technically allow it to be\nvaried seems like it isn't buying us much. I think as much as possible\nwe should allow settings that actually need to be varied to differ\ntable-by-table, not require a recompile or re-initdb. 
But if we are\ngoing to have some that do require that, then what you're talking\nabout here would certainly make that easier to secure.\n\n> This particular patch doesn't need to be held up by that, though.\n> Providing that mechanism can be a separate subproject of pluggable storage.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
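As a rough illustration of the direction sketched in the quoted pseudocode, detoasting could collapse to something like the following; the callback name here is made up for the sketch and is not a real PostgreSQL function:

    static struct varlena *
    toast_fetch_datum(struct varlena *attr)
    {
        struct varatt_external toast_pointer;
        struct varlena *result;
        Relation    toastrel;

        VARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);

        /* the chunk-reassembly loop moves below the table AM boundary */
        toastrel = table_open(toast_pointer.va_toastrelid, AccessShareLock);
        result = table_fetch_toast_datum(toastrel, &toast_pointer);    /* hypothetical */
        table_close(toastrel, AccessShareLock);

        return result;
    }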
"msg_date": "Wed, 6 Nov 2019 10:38:58 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 10:01:40 +0100, Peter Eisentraut wrote:\n> On 2019-10-04 20:32, Robert Haas wrote:\n> > Here's the last patch back, rebased over that renaming. Although I\n> > think that Andres (and Tom) are probably right that there's room for\n> > improvement here, I currently don't see a way around the issues I\n> > wrote about inhttp://postgr.es/m/CA+Tgmoa0zFcaCpOJCsSpOLLGpzTVfSyvcVB-USS8YoKzMO51Yw@mail.gmail.com\n> > -- so not quite sure where to go next. Hopefully Andres or someone\n> > else will give me a quick whack with the cluebat if I'm missing\n> > something obvious.\n> \n> This patch seems sound as far as the API restructuring goes.\n> \n> If I may summarize the remaining discussion: This patch adds a field\n> toast_max_chunk_size to TableAmRoutine, to take the place of the hardcoded\n> TOAST_MAX_CHUNK_SIZE. The heapam_methods implementation then sets this to\n> TOAST_MAX_CHUNK_SIZE, thus preserving existing behavior. Other table AMs can\n> set this to some other value that they find suitable. Currently,\n> TOAST_MAX_CHUNK_SIZE is computed based on heap-specific values and\n> assumptions, so it's likely that other AMs won't want to use that value.\n> (Side note: Maybe rename TOAST_MAX_CHUNK_SIZE then.) The concern was raised\n> that while TOAST_MAX_CHUNK_SIZE is stored in pg_control, values chosen by\n> other table AMs won't be, and so they won't have any safe-guards against\n> starting a server with incompatible disk layout. Then, various ways to\n> detect or check the TOAST chunk size at run time were discussed, but none\n> seemed satisfactory.\n\nI think it's more than just that. It's also that I think presenting a\nhardcoded value to the outside of / above an AM is architecturally\nwrong. If anything this is an implementation detail of the AM, that the\nAM ought to be concerned with internally, not something it should\npresent to the outside.\n\nI also, and separately from that architectural concern, think that\nhardcoding values like this in the control file is a bad practice, and\nwe shouldn't expand it. It basically makes it practically impossible to\never change their default value.\n\n\n> I think AMs are probably going to need a general mechanism to store\n> pg_control-like data somewhere. There are going to be chunk sizes, block\n> sizes, segment sizes, and so on. This one is just a particular case of\n> that.\n\nThat's imo best done as a meta page within the table.\n\n\n> This particular patch doesn't need to be held up by that, though. Providing\n> that mechanism can be a separate subproject of pluggable storage.\n\nAgain seems like something that the AM ought to handle below it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 08:25:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it's more than just that. It's also that I think presenting a\n> hardcoded value to the outside of / above an AM is architecturally\n> wrong. If anything this is an implementation detail of the AM, that the\n> AM ought to be concerned with internally, not something it should\n> present to the outside.\n\nI mean, it depends on your vision of how things ought to be\nabstracted. If you want the TOAST stuff to be logically \"below\" the\ntable AM layer, then this is an abstraction violation. But if you\nthink of TOAST as being a parallel system to table AM, then it's fine.\nIt also depends on your goals. If you want to give the table AM\nmaximum freedom to do what it likes, the design I proposed is not very\ngood. If you want to make it easy for someone to plug in a new AM that\ndoes toasting like the current heap but with a different chunk size,\nthat design lets you do so with a very minimal amount of code.\n\nI don't really care very much about the details here, but I don't want\nto just keep kicking the can down the road. If we can agree on *some*\ndesign that lets a new table AM have a TOAST table that uses an AM\nother than the heap, and that I can understand and implement with some\nhalfway-reasonable amount of work, I'll do it. It doesn't have to be\nthe thing I proposed. But I think it would be better to do that thing\nthan nothing. We're not engraving anything we do here on stone\ntablets.\n\n> I also, and separately from that architectural concern, think that\n> hardcoding values like this in the control file is a bad practice, and\n> we shouldn't expand it. It basically makes it practically impossible to\n> ever change their default value.\n\nI generally agree, although I think there might be exceptions.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Nov 2019 11:49:10 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 10:38:58 -0500, Robert Haas wrote:\n> Yeah. I've been nervous about trying to proceed with this patch\n> because Andres seemed confident there was a better approach than what\n> I did here, but as I wrote about back on September 12th, it doesn't\n> seem like his idea will work. I'm not clear whether I'm being stupid\n> and there's a way to salvage his idea, or whether he just made a\n> mistake.\n\n(still trying to get back to this)\n\n\n> One possible approach would be to move more of the logic below the\n> tableam layer. For example, toast_fetch_datum() could do this:\n> \n> toastrel = table_open(toast_pointer.va_toastrelid, AccessShareLock);\n> call_a_new_tableam_method_here(toast_rel, &toast_pointer);\n> table_close(toastrel, AccessShareLock);\n\n> ...and then it becomes the tableam's job to handle everything that\n> needs to be done in the middle.\n\nI think that's a good direction to go in, for more than just the the\ndiscussion we're having here about the fixed chunking size.\n\nI'm fairly sure that plenty AMs e.g. wouldn't want to actually store\ntoasted datums in a separate relation. And this would at least go more\nin the direction of making that possible. And I think the above ought to\nnot even increase the overhead compared to the patch, as the number of\nindirect function calls ought to stay the same or even be lower. The\nindirection would otherwise have to happen within toast_fetch_datum(),\nwhereas with the above, it ought to be possible to avoid needing to do\nso processing toast chunks.\n\nIt seems, unfortunately, that atm an AM not wanting to store toast\ndatums in a separate file, would still need to create a toast relation,\njust to get a distinct toast oid, to make sure that toasted datums from\nthe old and new relation are distinct. Which seems like it could be\nimportant e.g. for table rewrites. There's probably some massaging\nof table rewrites and cluster needed.\n\nI suspect the new callback ought to allow sliced and non-sliced access,\nperhaps by just allowing to specify slice offset / length to 0 and\nINT32_MAX respectively (or maybe just -1, -1?). That'd also allow an AM\nto make slicing possible in cases that's not possible for heap. And\nthere seems little point in having two callbacks.\n\n\n> That might be better than what I've got now; it's certainly more\n> flexible. It does mean that an AM that just wants to reuse the\n> existing logic with a different chunk size has got to repeat some\n> code, but it's probably <~150 lines, so that's perhaps not a\n> catastrophe.\n\nAlso seems that the relevant code can be made reusable (opting in into\nthe current logic), in line with where you've been going with this code\nalready.\n\n\n> Alternatively, we could (a) stick with the current approach, (b) use\n> the current approach but make the table AM member a callback rather\n> than a constant, or (c) something else entirely.\n\nA callback returning the chunk size does not seem like an improvement to\nme.\n\n\n> I don't want to give up on making the TOAST infrastructure pluggable;\n> requiring every AM to use the heap as its TOAST implementation seems\n> too constraining.\n\n+1\n\n\n> > I think AMs are probably going to need a general mechanism to store\n> > pg_control-like data somewhere. There are going to be chunk sizes,\n> > block sizes, segment sizes, and so on. This one is just a particular\n> > case of that.\n> \n> That's an interesting point. 
I don't know for sure to what extent we\n> need that; I think that the toast chunk size is actually not very\n> interesting to vary, and the fact that we technically allow it to be\n> varied seems like it isn't buying us much.\n\nWhether I agree with that statement depends a bit on what you mean by\nvarying the chunk size. If you mean that there's not much need for a\nvalue other than an adjusted computation of what's currently used, then I\ndon't agree:\n\nWe currently make toast a lot more expensive by quadrupling the number\nof separate heap fetches.\n\nAnd e.g. compressing chunks separately, to allow for sliced access even\nwhen compressed, would also be hard to do with the current sizes.\n\n\n\nAdditionally, I *very* strongly suspect that, while it makes sense to\nuse chunk sizes where multiple chunks fit a page for a heavily updated\ntoast table full of small-ish values, it makes no sense whatsoever\nto do so when toasting a 10MB value that's going to be appended to the\ntoast relation, because there's no space available for reuse anyway. And\nsmaller chunking isn't going to help with space reuse either, because\nthe whole toast datum is going to be deleted together.\n\nI think the right way for toast creation to behave really would be to\ncheck whether there's free space available that'd benefit from using\nsmaller chunks and do so if available, and otherwise use all the space\nin a page for each chunk.\n\nThat'd obviously make sliced access harder, so it surely isn't a\npanacea.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
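One possible shape for the single sliced/non-sliced callback suggested above; the name and exact signature are assumptions for illustration only:

    /* Hypothetical callback: sliceoffset = 0 with slicelength = -1 (or
     * INT32_MAX) would request the whole datum, so sliced and non-sliced
     * access share one entry point. */
    typedef void (*relation_fetch_toast_slice_function) (Relation toastrel,
                                                         Oid valueid,
                                                         int32 attrsize,
                                                         int32 sliceoffset,
                                                         int32 slicelength,
                                                         struct varlena *result);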
"msg_date": "Wed, 6 Nov 2019 08:55:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi,\n\nOn 2019-11-06 11:49:10 -0500, Robert Haas wrote:\n> On Wed, Nov 6, 2019 at 11:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think it's more than just that. It's also that I think presenting a\n> > hardcoded value to the outside of / above an AM is architecturally\n> > wrong. If anything this is an implementation detail of the AM, that the\n> > AM ought to be concerned with internally, not something it should\n> > present to the outside.\n> \n> I mean, it depends on your vision of how things ought to be\n> abstracted. If you want the TOAST stuff to be logically \"below\" the\n> table AM layer, then this is an abstraction violation. But if you\n> think of TOAST as being a parallel system to table AM, then it's fine.\n> It also depends on your goals. If you want to give the table AM\n> maximum freedom to do what it likes, the design I proposed is not very\n> good. If you want to make it easy for someone to plug in a new AM that\n> does toasting like the current heap but with a different chunk size,\n> that design lets you do so with a very minimal amount of code.\n\nI'd like an AM to have the *option* of implementing something better, or\nat least go in the direction of making that possible.\n\nIt seems perfectly possible to have a helper function implementing the\ncurrent logic that you just can call with the fixed chunk size as an\nadditional parameter. Which'd basically mean there's no meaningful\ndifference in complexity compared to providing the chunk size as an\nexternal AM property. In one case you have a callback that just calls a\nhelper function with one parameter, in the other you fill in a member of\nthe struct.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Nov 2019 09:00:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Tue, Nov 5, 2019 at 4:48 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> From the stack trace shared by Prabhat, I understand that the checkpointer\n> process panicked due to one of the following two reasons:\n>\n> 1) The fsync() failed in the first attempt itself and the reason for the\n> failure was not due to file being dropped or truncated i.e. fsync failed\n> with the error other than ENOENT. Refer to ProcessSyncRequests() for\n> details esp. the code inside for (failures = 0; !entry->canceled;\n> failures++) loop.\n>\n> 2) The first attempt to fsync() failed with ENOENT error because just\n> before the fsync function was called, the file being synced either got\n> dropped or truncated. When this happened, the checkpointer process called\n> AbsorbSyncRequests() to update the entry for deleted file in the hash table\n> but it seems like AbsorbSyncRequests() failed to do so and that's why the\n> \"entry->canceled\" couldn't be set to true. Due to this, fsync() was\n> performed on the same file twice and that failed too. As checkpointer\n> process doesn't expect the fsync on the same file to fail twice, it\n> panicked. Again, please check ProcessSyncRequests() for details esp. the\n> code inside for (failures = 0; !entry->canceled; failures++) loop.\n>\n> Now, the point of discussion here is, which one of the above two reasons\n> could the cause for panic? According to me, point #2 doesn't look like the\n> possible reason for panic. The reason being just before a file is unlinked,\n> backend first sends a SYNC_FORGET_REQUEST to the checkpointer process which\n> marks the entry for this file in the hash table as cancelled and then\n> removes the file. So, with this understanding it is hard to believe that\n> once the first fsync() for a file has failed with error ENOENT, a call to\n> AbsorbSyncRequests() made immediately after that wouldn't update the entry\n> for this file in the hash table because the backend only removes the file\n> once it has successfully sent the SYNC_FORGET_REQUEST for that file to the\n> checkpointer process. See mdunlinkfork()->register_forget_request() for\n> details on this.\n>\n> So, I think the first point that I mentioned above could be the probable\n> reason for the checkpointer process getting panicked. But, having said all\n> that, it would be good to have some evidence for it which can be confirmed\n> by inspecting the server logfile.\n>\n> Prabhat, is it possible for you to re-run the test-case with\n> log_min_messages set to DEBUG1 and save the logfile for the test-case that\n> crashes. This would be helpful in knowing if the fsync was performed just\n> once or twice i.e. 
whether point #1 is the reason for the panic or point\n> #2.\n>\n\nI have run the same testcases with and without the patch multiple times with\ndebug option (log_min_messages = DEBUG1), but this time I am not able to\nreproduce the crash.\n\n>\n> Thanks,\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n>\n> On Thu, Oct 31, 2019 at 10:26 AM Prabhat Sahu <\n> prabhat.sahu@enterprisedb.com> wrote:\n>\n>>\n>>\n>> On Wed, Oct 30, 2019 at 9:46 PM Robert Haas <robertmhaas@gmail.com>\n>> wrote:\n>>\n>>> On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <\n>>> prabhat.sahu@enterprisedb.com> wrote:\n>>>\n>>>> While testing the Toast patch(PG+v7 patch) I found below server crash.\n>>>> System configuration:\n>>>> VCPUs: 4, RAM: 8GB, Storage: 320GB\n>>>>\n>>>> This issue is not frequently reproducible, we need to repeat the same\n>>>> testcase multiple times.\n>>>>\n>>>\n>>> I wonder if this is an independent bug, because the backtrace doesn't\n>>> look like it's related to the stuff this is changing. Your report doesn't\n>>> specify whether you can also reproduce the problem without the patch, which\n>>> is something that you should always check before reporting a bug in a\n>>> particular patch.\n>>>\n>>\n>> Hi Robert,\n>>\n>> My sincere apologize that I have not mentioned the issue in more detail.\n>> I have ran the same case against both PG HEAD and HEAD+Patch multiple\n>> times(7, 10, 20nos), and\n>> as I found this issue was not failing in HEAD and same case is\n>> reproducible in HEAD+Patch (again I was not sure about the backtrace\n>> whether its related to patch or not).\n>>\n>>\n>>\n>>> --\n>>> Robert Haas\n>>> EnterpriseDB: http://www.enterprisedb.com\n>>> The Enterprise PostgreSQL Company\n>>>\n>>\n>>\n>> --\n>>\n>> With Regards,\n>>\n>> Prabhat Kumar Sahu\n>> Skype ID: prabhat.sahu1984\n>> EnterpriseDB Software India Pvt. Ltd.\n>>\n>> The Postgres Database Company\n>>\n>\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Software India Pvt. Ltd.\n\nThe Postgres Database Company",
"msg_date": "Thu, 7 Nov 2019 10:57:20 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 10:57 AM Prabhat Sahu\n<prabhat.sahu@enterprisedb.com> wrote:\n>\n>\n>\n> On Tue, Nov 5, 2019 at 4:48 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>>\n>> From the stack trace shared by Prabhat, I understand that the checkpointer process panicked due to one of the following two reasons:\n>>\n>> 1) The fsync() failed in the first attempt itself and the reason for the failure was not due to file being dropped or truncated i.e. fsync failed with the error other than ENOENT. Refer to ProcessSyncRequests() for details esp. the code inside for (failures = 0; !entry->canceled; failures++) loop.\n>>\n>> 2) The first attempt to fsync() failed with ENOENT error because just before the fsync function was called, the file being synced either got dropped or truncated. When this happened, the checkpointer process called AbsorbSyncRequests() to update the entry for deleted file in the hash table but it seems like AbsorbSyncRequests() failed to do so and that's why the \"entry->canceled\" couldn't be set to true. Due to this, fsync() was performed on the same file twice and that failed too. As checkpointer process doesn't expect the fsync on the same file to fail twice, it panicked. Again, please check ProcessSyncRequests() for details esp. the code inside for (failures = 0; !entry->canceled; failures++) loop.\n>>\n>> Now, the point of discussion here is, which one of the above two reasons could the cause for panic? According to me, point #2 doesn't look like the possible reason for panic. The reason being just before a file is unlinked, backend first sends a SYNC_FORGET_REQUEST to the checkpointer process which marks the entry for this file in the hash table as cancelled and then removes the file. So, with this understanding it is hard to believe that once the first fsync() for a file has failed with error ENOENT, a call to AbsorbSyncRequests() made immediately after that wouldn't update the entry for this file in the hash table because the backend only removes the file once it has successfully sent the SYNC_FORGET_REQUEST for that file to the checkpointer process. See mdunlinkfork()->register_forget_request() for details on this.\n>>\n>> So, I think the first point that I mentioned above could be the probable reason for the checkpointer process getting panicked. But, having said all that, it would be good to have some evidence for it which can be confirmed by inspecting the server logfile.\n>>\n>> Prabhat, is it possible for you to re-run the test-case with log_min_messages set to DEBUG1 and save the logfile for the test-case that crashes. This would be helpful in knowing if the fsync was performed just once or twice i.e. whether point #1 is the reason for the panic or point #2.\n>\n>\n> I have ran the same testcases with and without patch multiple times with debug option (log_min_messages = DEBUG1), but this time I am not able to reproduce the crash.\n\nOkay, no problem. Thanks for re-running the test-cases.\n\n@Robert, Myself and Prabhat have tried running the test-cases that\ncaused the checkpointer process to crash earlier multiple times but we\nare not able to reproduce it both with and without the patch. However,\nfrom the stack trace shared earlier by Prabhat, it is clear that the\ncheckpointer process panicked due to fsync failure. But, there is no\nfurther data to know the exact reason for the fsync failure. 
From the\ncode of checkpointer process (basically the function to process fsync\nrequests) it is understood that, the checkpointer process can PANIC\ndue to one of the following two reasons.\n\n1) The fsync call made by checkpointer process has failed with error\nother than ENOENT.\n\n2) The fsync call made by checkpointer process failed with ENOENT\nerror which caused the checkpointer process to invoke\nAbsorbSyncRequests() to update the entry for deleted file in the hash\ntable (basically to mark the entry as cancelled). But, seems like it\ncouldn't do so either because - a) possibly there was no\nSYNC_FORGET_REQUEST sent by the backend to the checkpointer process or\nb) the request was sent but due to some reason the checkpointer\nprocess couldn't absorb the request. This caused the checkpointer\nprocess to perform fsync on the same file once again which is bound to\nfail, resulting in a panic.\n\nNow, if the checkpointer process panicked due to reason #1 then I don't\nthink it has anything to do with postgres because postgres only cares\nwhen fsync fails with ENOENT error. If the checkpointer process\npanicked due to reason #2 then possibly there is some bug in postgres\ncode which I assume has to be some problem with the way backend is\nsending fsync request to the checkpointer for deleted files and the\nway checkpointer is handling the requests. At least for me, it is hard\nto believe that reason #2 could be the cause for the checkpointer\nprocess getting panicked here - for the reason that before a file is\nunlinked by backend, it first sends a SYNC_FORGET_REQUEST to the\ncheckpointer process, when this is done successfully then only backend\nremoves the file. So, with this understanding it is hard to believe\nthat once the first fsync() for a file has failed with error ENOENT, a\ncall to AbsorbSyncRequests() made immediately after that wouldn't\nupdate the entry for this file in the hash table. And even if reason\n#2 is the cause for this failure, I don't think it has anything to do\nwith your changes, although I haven't studied your patches in detail\nbut considering the purpose of the patch and from a quick look it\ndoesn't seem to change anything in the area of the code that might be\ncausing this crash.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n>>\n>>\n>> Thanks,\n>>\n>> --\n>> With Regards,\n>> Ashutosh Sharma\n>> EnterpriseDB:http://www.enterprisedb.com\n>>\n>> On Thu, Oct 31, 2019 at 10:26 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:\n>>>\n>>>\n>>>\n>>> On Wed, Oct 30, 2019 at 9:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>>>\n>>>> On Wed, Oct 30, 2019 at 3:49 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com> wrote:\n>>>>>\n>>>>> While testing the Toast patch(PG+v7 patch) I found below server crash.\n>>>>> System configuration:\n>>>>> VCPUs: 4, RAM: 8GB, Storage: 320GB\n>>>>>\n>>>>> This issue is not frequently reproducible, we need to repeat the same testcase multiple times.\n>>>>\n>>>>\n>>>> I wonder if this is an independent bug, because the backtrace doesn't look like it's related to the stuff this is changing. 
Your report doesn't specify whether you can also reproduce the problem without the patch, which is something that you should always check before reporting a bug in a particular patch.\n>>>\n>>>\n>>> Hi Robert,\n>>>\n>>> My sincere apologize that I have not mentioned the issue in more detail.\n>>> I have ran the same case against both PG HEAD and HEAD+Patch multiple times(7, 10, 20nos), and\n>>> as I found this issue was not failing in HEAD and same case is reproducible in HEAD+Patch (again I was not sure about the backtrace whether its related to patch or not).\n>>>\n>>>\n>>>>\n>>>> --\n>>>> Robert Haas\n>>>> EnterpriseDB: http://www.enterprisedb.com\n>>>> The Enterprise PostgreSQL Company\n>>>\n>>>\n>>>\n>>> --\n>>>\n>>> With Regards,\n>>>\n>>> Prabhat Kumar Sahu\n>>> Skype ID: prabhat.sahu1984\n>>> EnterpriseDB Software India Pvt. Ltd.\n>>>\n>>> The Postgres Database Company\n>\n>\n>\n> --\n>\n> With Regards,\n>\n> Prabhat Kumar Sahu\n> Skype ID: prabhat.sahu1984\n> EnterpriseDB Software India Pvt. Ltd.\n>\n> The Postgres Database Company\n\n\n",
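For readers following the two scenarios, here is a heavily simplified paraphrase of the retry loop in ProcessSyncRequests() (src/backend/storage/sync/sync.c) that both analyses refer to; it is an illustration, not the verbatim source:

    for (failures = 0; !entry->canceled; failures++)
    {
        if (file_sync(entry) == 0)      /* stand-in for the sync handler call */
            break;                      /* success */

        /* Scenario #1 (non-ENOENT error) or scenario #2 (second failure
         * on the same file): data_sync_elevel() escalates to PANIC unless
         * data_sync_retry is enabled. */
        if (!FILE_POSSIBLY_DELETED(errno) || failures > 0)
            ereport(data_sync_elevel(ERROR),
                    (errmsg("could not fsync file \"%s\": %m", path)));

        /* ENOENT on the first try: the file may have just been dropped;
         * absorb pending SYNC_FORGET_REQUESTs, which can set
         * entry->canceled and end the loop. */
        AbsorbSyncRequests();
    }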
"msg_date": "Thu, 7 Nov 2019 11:45:35 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On 2019-11-06 18:00, Andres Freund wrote:\n> I'd like an AM to have the *option* of implementing something better, or\n> at least go in the direction of making that possible.\n\nI don't think the presented design prevents that. An AM can just return \nfalse from relation_needs_toast_table in all cases and implement \nsomething internally.\n\n> It seems perfectly possible to have a helper function implementing the\n> current logic that you just can call with the fixed chunk size as an\n> additional parameter. Which'd basically mean there's no meaningful\n> difference in complexity compared to providing the chunk size as an\n> external AM property. In one case you have a callback that just calls a\n> helper function with one parameter, in the other you fill in a member of\n> the struct.\n\nI can see a \"moral\" concern about having TOAST be part of the table AM \nAPI. It should be an implementation concern of the AM. How much more \nwork would it be to refactor TOAST into a separate API that an AM \nimplementation could use or not? How much more complicated would the \nresult be? I guess you would like to at least have it explored.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 7 Nov 2019 11:10:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 1:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> @Robert, Myself and Prabhat have tried running the test-cases that\n> caused the checkpointer process to crash earlier multiple times but we\n> are not able to reproduce it both with and without the patch. However,\n> from the stack trace shared earlier by Prabhat, it is clear that the\n> checkpointer process panicked due to fsync failure. But, there is no\n> further data to know the exact reason for the fsync failure. From the\n> code of checkpointer process (basically the function to process fsync\n> requests) it is understood that, the checkpointer process can PANIC\n> due to one of the following two reasons.\n\nOh, I didn't realize this was a panic due to an fsync() failure when I\nlooked at the stack trace before. I think it's concerning that\nfsync() failed on Prabhat's machine, and it would be interesting to\nknow why that happened, but I don't see how this patch could possibly\n*cause* fsync() to fail, so I think we can say that whatever is\nhappening on his machine is unrelated to this patch -- and probably\nalso unrelated to PostgreSQL.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 7 Nov 2019 09:05:13 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Nov 7, 2019 at 7:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Nov 7, 2019 at 1:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > @Robert, Myself and Prabhat have tried running the test-cases that\n> > caused the checkpointer process to crash earlier multiple times but we\n> > are not able to reproduce it both with and without the patch. However,\n> > from the stack trace shared earlier by Prabhat, it is clear that the\n> > checkpointer process panicked due to fsync failure. But, there is no\n> > further data to know the exact reason for the fsync failure. From the\n> > code of checkpointer process (basically the function to process fsync\n> > requests) it is understood that, the checkpointer process can PANIC\n> > due to one of the following two reasons.\n>\n> Oh, I didn't realize this was a panic due to an fsync() failure when I\n> looked at the stack trace before. I think it's concerning that\n> fsync() failed on Prabhat's machine, and it would be interesting to\n> know why that happened, but I don't see how this patch could possibly\n> *cause* fsync() to fail, so I think we can say that whatever is\n> happening on his machine is unrelated to this patch -- and probably\n> also unrelated to PostgreSQL.\n>\n\nThat's right and that's exactly what I mentioned in my conclusion too.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Nov 2019 20:15:37 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Nov 6, 2019 at 12:00 PM Andres Freund <andres@anarazel.de> wrote:\n> I'd like an AM to have the *option* of implementing something better, or\n> at least go in the direction of making that possible.\n\nOK. Could you see what you think of the attached patches? 0001 does\nsome refactoring of toast_fetch_datum() and toast_fetch_datum_slice()\nto make them look more like each other and clean up a bunch of stuff\nthat I thought was annoying, and 0002 then pulls out the common logic\ninto a heap-specific function. If you like this direction, we could\nthen push the heap-specific function below tableam, but I haven't done\nthat yet.\n\n> It seems perfectly possible to have a helper function implementing the\n> current logic that you just can call with the fixed chunk size as an\n> additional parameter. Which'd basically mean there's no meaningful\n> difference in complexity compared to providing the chunk size as an\n> external AM property. In one case you have a callback that just calls a\n> helper function with one parameter, in the other you fill in a member of\n> the struct.\n\nI haven't tried to do this yet. I think that to make it work, the\nhelper function would have to operate in terms of slots instead of\nusing fastgetattr() as this logic does now. I don't know whether that\nwould be faster (because the current code might have a little less in\nterms of indirect function calls) or slower (because the current code\nmakes two calls to fastgetattr and if we used slots here we could just\ndeform once). I suspect it might be a small enough difference not to\nworry too much about it either way.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 8 Nov 2019 11:59:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, 7 Nov 2019 at 22:45, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n\n> On Thu, Nov 7, 2019 at 7:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, Nov 7, 2019 at 1:15 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> > > @Robert, Myself and Prabhat have tried running the test-cases that\n> > > caused the checkpointer process to crash earlier multiple times but we\n> > > are not able to reproduce it both with and without the patch. However,\n> > > from the stack trace shared earlier by Prabhat, it is clear that the\n> > > checkpointer process panicked due to fsync failure. But, there is no\n> > > further data to know the exact reason for the fsync failure. From the\n> > > code of checkpointer process (basically the function to process fsync\n> > > requests) it is understood that, the checkpointer process can PANIC\n> > > due to one of the following two reasons.\n> >\n> > Oh, I didn't realize this was a panic due to an fsync() failure when I\n> > looked at the stack trace before. I think it's concerning that\n> > fsync() failed on Prabhat's machine, and it would be interesting to\n> > know why that happened, but I don't see how this patch could possibly\n> > *cause* fsync() to fail, so I think we can say that whatever is\n> > happening on his machine is unrelated to this patch -- and probably\n> > also unrelated to PostgreSQL.\n> >\n>\n> That's right and that's exactly what I mentioned in my conclusion too.\n>\n>\nIn fact, I suspect this is PostgreSQL successfully protecting itself from\nan unsafe situation.\n\nDoes the host have thin-provisioned storage? lvmthin, thin-provisioned SAN,\netc?\n\nIs the DB on NFS?\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Thu, 7 Nov 2019 at 22:45, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:On Thu, Nov 7, 2019 at 7:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Nov 7, 2019 at 1:15 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> > @Robert, Myself and Prabhat have tried running the test-cases that\n> > caused the checkpointer process to crash earlier multiple times but we\n> > are not able to reproduce it both with and without the patch. However,\n> > from the stack trace shared earlier by Prabhat, it is clear that the\n> > checkpointer process panicked due to fsync failure. But, there is no\n> > further data to know the exact reason for the fsync failure. From the\n> > code of checkpointer process (basically the function to process fsync\n> > requests) it is understood that, the checkpointer process can PANIC\n> > due to one of the following two reasons.\n>\n> Oh, I didn't realize this was a panic due to an fsync() failure when I\n> looked at the stack trace before. I think it's concerning that\n> fsync() failed on Prabhat's machine, and it would be interesting to\n> know why that happened, but I don't see how this patch could possibly\n> *cause* fsync() to fail, so I think we can say that whatever is\n> happening on his machine is unrelated to this patch -- and probably\n> also unrelated to PostgreSQL.\n>\n\nThat's right and that's exactly what I mentioned in my conclusion too.In fact, I suspect this is PostgreSQL successfully protecting itself from an unsafe situation.Does the host have thin-provisioned storage? lvmthin, thin-provisioned SAN, etc?Is the DB on NFS?-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Sun, 10 Nov 2019 17:09:04 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "Hi Craig,\n\nPlease find my response inline below.\n\nOn Sun, Nov 10, 2019 at 2:39 PM Craig Ringer <craig@2ndquadrant.com> wrote:\n>\n> On Thu, 7 Nov 2019 at 22:45, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>>\n>\n> In fact, I suspect this is PostgreSQL successfully protecting itself from an unsafe situation.\n>\n> Does the host have thin-provisioned storage? lvmthin, thin-provisioned SAN, etc?\n>\n\nNo, It doesn't. Infact the machine on which the issue was reproduced\nonce/twice doesn't have any LVMs. The other machine on which the issue\nnever got reproduced have some LVMs but they are thick-provisioned not\nthin-provisioned.\n\n> Is the DB on NFS?\n>\n\nNo.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Nov 2019 13:00:00 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On 2019-11-08 17:59, Robert Haas wrote:\n> On Wed, Nov 6, 2019 at 12:00 PM Andres Freund <andres@anarazel.de> wrote:\n>> I'd like an AM to have the *option* of implementing something better, or\n>> at least go in the direction of making that possible.\n> \n> OK. Could you see what you think of the attached patches? 0001 does\n> some refactoring of toast_fetch_datum() and toast_fetch_datum_slice()\n> to make them look more like each other and clean up a bunch of stuff\n> that I thought was annoying, and 0002 then pulls out the common logic\n> into a heap-specific function. If you like this direction, we could\n> then push the heap-specific function below tableam, but I haven't done\n> that yet.\n\nCompared to the previous patch (v7) where the API just had a \"use this \nAM for TOAST\" field and the other extreme of pushing TOAST entirely \ninside the heap AM, this seems like the worst of both worlds, with the \nmaximum additional complexity.\n\nI don't think we need to nail down this API for eternity, so I'd be \nhappy to err on the side of practicality here. However, it seems it's \nnot quite clear what for example the requirements and wishes from zheap \nwould be. What's the simplest way to move this forward?\n\nThe refactorings you proposed seem reasonable on their own, and I have \nsome additional comments on that if we decide to go forward in this \ndirection. One thing that's confusing is that the TOAST tables have \nfields chunk_id and chunk_seq, but when an error message talks about \n\"chunk %d\" or \"chunk number %d\", they usually mean the \"seq\" and not the \n\"id\".\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 11 Nov 2019 14:51:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Mon, Nov 11, 2019 at 8:51 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Compared to the previous patch (v7) where the API just had a \"use this\n> AM for TOAST\" field and the other extreme of pushing TOAST entirely\n> inside the heap AM, this seems like the worst of both worlds, with the\n> maximum additional complexity.\n\nThere might be a misunderstanding here. These patches would still have\na \"use this AM for TOAST\" callback, just as the previous set did, but\nI didn't include that here, because this is talking about a different\npart of the problem. The purpose of that callback is to determine\nwhich AM will be used to create the toast table. The purpose of these\npatches is to be able to detoast a value given nothing but the TOAST\npointer extracted from the heap tuple, while removing the present\nassumption that the TOAST table is a heap table.\n\n(The current coding is actually seriously inconsistent, because right\nnow, the code that creates TOAST tables always uses the same AM as the\nmain heap; but the detoasting code only works with heap tables, which\nmeans that no non-heap AM can use the TOAST system at all. If\nnecessary, we could commit the patch to allow the TOAST table AM to be\nchanged first, and then handle allowing the detoasting logic to cope\nwith a non-heap AM as a separate matter.)\n\n> I don't think we need to nail down this API for eternity, so I'd be\n> happy to err on the side of practicality here. However, it seems it's\n> not quite clear what for example the requirements and wishes from zheap\n> would be. What's the simplest way to move this forward?\n\nThe only thing zheap needs - in the current design, anyway - is the\nability to change the chunk size. However, I think that's mostly\nbecause we haven't spent a lot of time thinking about how to do TOAST\nbetter than the heap does TOAST today. I think it is desirable to\nallow for more options than that. That's why I like this approach more\nthan the previous one. The previous approach allowed the chunk size to\nbe variable, but permitted no other AM-specific variation; this one\nallows the AM to detoast in any way that it likes. The downside of\nthat is that if you really do only want to vary the chunk size, you'll\nhave to repeat somewhat more code. That's sad, but we're never likely\nto have enough AMs for that to be a really serious problem, and if we\ndo, the AM-specific callbacks for those AMs that just want a different\nchunk size could call a common helper function.\n\n> The refactorings you proposed seem reasonable on their own, and I have\n> some additional comments on that if we decide to go forward in this\n> direction. One thing that's confusing is that the TOAST tables have\n> fields chunk_id and chunk_seq, but when an error message talks about\n> \"chunk %d\" or \"chunk number %d\", they usually mean the \"seq\" and not the\n> \"id\".\n\nWell, we've got errors like this right now:\n\nunexpected chunk number %d (expected %d) for toast value %u in %s\n\nSo at least in this case, and I think in many cases, we're referring\nto the chunk_id as \"toast value %u\" and the chunk_seq as \"chunk number\n%d\". I think that's pretty good terminology. It's unfortunate that the\nTOAST table columns are called chunk_id and chunk_seq rather than,\nsay, value_id and chunk_number, and I guess we could possibly change\nthat without breaking too many things, but I'm not sure that changing\nthe error messages would help anybody. 
We could try to rephrase the\nerror message to mention the two values in the opposite order, which to\nme would be more clear, but I'm not exactly sure how to do that\nwithout writing rather awkward English.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 11 Nov 2019 09:31:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On 2019-11-08 17:59, Robert Haas wrote:\n> OK. Could you see what you think of the attached patches? 0001 does\n> some refactoring of toast_fetch_datum() and toast_fetch_datum_slice()\n> to make them look more like each other and clean up a bunch of stuff\n> that I thought was annoying, and 0002 then pulls out the common logic\n> into a heap-specific function. If you like this direction, we could\n> then push the heap-specific function below tableam, but I haven't done\n> that yet.\n\nPartial review: The 0001 patch seems very sensible. Some minor comments \non that:\n\nPerhaps rename the residx variable (in both functions). You have gotten \nrid of all the res* variables except that one. That name as it is right \nnow isn't very helpful at all.\n\nYou have collapsed the error messages for \"chunk %d of %d\" and \"final \nchunk %d\" and replaced it with just \"chunk %d\". I think it might be \nbetter to keep the \"chunk %d of %d\" wording, for more context, or was \nthere a reason why you wanted to remove the total count from the message?\n\nI believe this assertion\n\n+ Assert(endchunk <= totalchunks);\n\nshould be < (strictly less).\n\nIn the commit message you state that this assertion replaces a run-time \ncheck, but I couldn't quite make out which one you are referring to \nbecause all the existing run-time checks are kept, with slightly \nrefactored conditions.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 21 Nov 2019 11:37:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Thu, Nov 21, 2019 at 5:37 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Partial review: The 0001 patch seems very sensible. Some minor comments\n> on that:\n\nThanks for the review. Updated patches attached. This version is more\ncomplete than the last set of patches I posted. It looks like this:\n\n0001 - Lets a table AM that needs a toast table choose the AM that\nwill be used to implement the toast table.\n0002 - Refactoring and code cleanup for TOAST code.\n0003 - Move heap-specific portion of logic refactored by previous\npatch to a separate function.\n0004 - Lets a table AM arrange to call a different function when\ndetoasting, instead of the one created by 0003.\n\n> Perhaps rename the residx variable (in both functions). You have gotten\n> rid of all the res* variables except that one. That name as it is right\n> now isn't very helpful at all.\n\nOK, I renamed residx to curchunk and nextidx to expectedchunk.\n\n> You have collapsed the error messages for \"chunk %d of %d\" and \"final\n> chunk %d\" and replaced it with just \"chunk %d\". I think it might be\n> better to keep the \"chunk %d of %d\" wording, for more context, or was\n> there a reason why you wanted to remove the total count from the message?\n\nNo, not really. Adjusted.\n\n> I believe this assertion\n>\n> + Assert(endchunk <= totalchunks);\n>\n> should be < (strictly less).\n\nI think you're right. Fixed.\n\n> In the commit message you state that this assertion replaces a run-time\n> check, but I couldn't quite make out which one you are referring to\n> because all the existing run-time checks are kept, with slightly\n> refactored conditions.\n\nPre-patch, there is this condition:\n\n- if ((residx != nextidx) || (residx > endchunk) || (residx < startchunk)\n\nThis checks that the expected chunk number is equal to the one we\nwant, but not greater than the last one we were expecting nor less\nthan the first one we were expecting. The check is redundant if you\nsuppose that we never compute nextidx so that it is outside the bounds\nof startchunk..endchunk. Since nextidx (or expectedchunk as these\npatches now call it) is initialized to startchunk and then only\nincremented, it seems impossible for the first condition to ever fail,\nso it is no longer tested there. The latter chunk is also no longer\ntested there; instead, we do this:\n\n+ Assert(endchunk < totalchunks);\n\nThat's what the commit message is on about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 22 Nov 2019 10:41:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Fri, Nov 22, 2019 at 10:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Thanks for the review. Updated patches attached. This version is more\n> complete than the last set of patches I posted. It looks like this:\n>\n> 0001 - Lets a table AM that needs a toast table choose the AM that\n> will be used to implement the toast table.\n> 0002 - Refactoring and code cleanup for TOAST code.\n> 0003 - Move heap-specific portion of logic refactored by previous\n> patch to a separate function.\n> 0004 - Lets a table AM arrange to call a different function when\n> detoasting, instead of the one created by 0003.\n\nHearing no further comments, I went ahead and pushed 0002 today. That\nturned out to have a bug, so I pushed a fix for that. Hopefully the\nbuildfarm will agree that it's fixed.\n\nMeanwhile, here are the remaining patches again, rebased over the bug fix.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 17 Dec 2019 16:12:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 4:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hearing no further comments, I went ahead and pushed 0002 today. That\n> turned out to have a bug, so I pushed a fix for that. Hopefully the\n> buildfarm will agree that it's fixed.\n>\n> Meanwhile, here are the remaining patches again, rebased over the bug fix.\n\nOK, I've now pushed the last of the refactoring patches. Here are the\ntwo main patches back, which are actually quite small, though the\nsecond one looks bigger than it is because it moves a function from\ndetoast.c into heaptoast.c. This is slightly rebased again because the\nother refactoring patch I just pushed had a couple of typos which I\nfixed.\n\nIf nobody has further comments or objections, I plan to commit these\nin early January.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 18 Dec 2019 11:37:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 11:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> If nobody has further comments or objections, I plan to commit these\n> in early January.\n\nDone.\n\nWhich, I think, wraps up the work I felt needed to be done here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jan 2020 14:38:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: tableam vs. TOAST"
}
] |
[
{
"msg_contents": "So, I noticed that if I make a table in one schema and then a table with the\nsame name in another schema that describe only shows me one of them.\nDemonstrating with temp table and regular table just for simplicity:\nIf I make a temp table t1 and a normal table t1 (it doesn't\nmatter which one I create first), describe only shows the temp table.\n\ntest=# create table t1();\nCREATE TABLE\ntest=# \\d\n List of relations\n Schema | Name | Type | Owner\n--------+------+-------+-----------\n public | t1 | table | mplageman\n(1 row)\n\ntest=# create temp table t1();\nCREATE TABLE\ntest=# \\d\n List of relations\n Schema | Name | Type | Owner\n-----------+------+-------+-----------\n pg_temp_4 | t1 | table | mplageman\n(1 row)\n\nI'm not sure if this is the intended behavior or if it is a bug.\n\nI looked briefly at the describe code and ran the query in\ndescribeTableDetails\nwhich it constructs at the beginning and this, of course, returns the\nresults I\nwould expect.\n\ntest=# select c.oid, n.nspname, c.relname from pg_catalog.pg_class c left\njoin\npg_catalog.pg_namespace n on n.oid = c.relnamespace where c.relname = 't1';\n oid | nspname | relname\n-------+-----------+---------\n 23609 | public | t1\n 23612 | pg_temp_4 | t1\n(2 rows)\n\nSo, without much more digging, is the current behavior of describe intended?\nI couldn't find an email thread discussing this with the search terms I\ntried.\n\n(I noticed it on master and checked 11 as well and got the same behavior.)\n\n-- \nMelanie Plageman\n\nSo, I noticed that if I make a table in one schema and then a table with thesame name in another schema that describe only shows me one of them.Demonstrating with temp table and regular table just for simplicity:If I make a temp table t1 and a normal table t1 (it doesn'tmatter which one I create first), describe only shows the temp table.test=# create table t1();CREATE TABLEtest=# \\d List of relations Schema | Name | Type | Owner--------+------+-------+----------- public | t1 | table | mplageman(1 row)test=# create temp table t1();CREATE TABLEtest=# \\d List of relations Schema | Name | Type | Owner-----------+------+-------+----------- pg_temp_4 | t1 | table | mplageman(1 row)I'm not sure if this is the intended behavior or if it is a bug.I looked briefly at the describe code and ran the query in describeTableDetailswhich it constructs at the beginning and this, of course, returns the results Iwould expect.test=# select c.oid, n.nspname, c.relname from pg_catalog.pg_class c left joinpg_catalog.pg_namespace n on n.oid = c.relnamespace where c.relname = 't1'; oid | nspname | relname-------+-----------+--------- 23609 | public | t1 23612 | pg_temp_4 | t1(2 rows)So, without much more digging, is the current behavior of describe intended?I couldn't find an email thread discussing this with the search terms I tried.(I noticed it on master and checked 11 as well and got the same behavior.)-- Melanie Plageman",
"msg_date": "Fri, 17 May 2019 17:58:06 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "describe working as intended?"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> So, I noticed that if I make a table in one schema and then a table with the\n> same name in another schema that describe only shows me one of them.\n\nYes, that's intended, psql's \\d will only show you tables that are\nvisible in the search path, unless you give it a qualified pattern.\nYou can do something like \"\\d *.t1\" if you want to see all the\ninstances of t1.\n\nThis is documented I believe ... ah yes, here:\n\n Whenever the pattern parameter is omitted completely, the \\d commands\n display all objects that are visible in the current schema search path\n — this is equivalent to using * as the pattern. (An object is said to\n be visible if its containing schema is in the search path and no\n object of the same kind and name appears earlier in the search\n path. This is equivalent to the statement that the object can be\n referenced by name without explicit schema qualification.) To see all\n objects in the database regardless of visibility, use *.* as the\n pattern.\n ...\n A pattern that contains a dot (.) is interpreted as a schema name\n pattern followed by an object name pattern. For example, \\dt\n foo*.*bar* displays all tables whose table name includes bar that are\n in schemas whose schema name starts with foo. When no dot appears,\n then the pattern matches only objects that are visible in the current\n schema search path. Again, a dot within double quotes loses its\n special meaning and is matched literally.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 May 2019 21:27:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: describe working as intended?"
},
{
"msg_contents": "Hello\n\nNo, this is not bug. This is expected beharior of search_path setting: https://www.postgresql.org/docs/current/runtime-config-client.html\n\n> Likewise, the current session's temporary-table schema, pg_temp_nnn, is always searched if it exists. It can be explicitly listed in the path by using the alias pg_temp. If it is not listed in the path then it is searched first\n\npsql \\d command checks current search_path (by pg_table_is_visible call). You can use \\d *.t1 syntax to display tables with such name in all schemas.\n\nregards, Sergei\n\n\n",
"msg_date": "Sat, 18 May 2019 11:17:46 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: describe working as intended?"
},
{
"msg_contents": "On Sat, May 18, 2019 at 1:17 AM Sergei Kornilov <sk@zsrv.org> wrote:\n\n> Hello\n>\n> No, this is not bug. This is expected beharior of search_path setting:\n> https://www.postgresql.org/docs/current/runtime-config-client.html\n>\n> > Likewise, the current session's temporary-table schema, pg_temp_nnn, is\n> always searched if it exists. It can be explicitly listed in the path by\n> using the alias pg_temp. If it is not listed in the path then it is\n> searched first\n>\n> psql \\d command checks current search_path (by pg_table_is_visible call).\n> You can use \\d *.t1 syntax to display tables with such name in all schemas.\n>\n> regards, Sergei\n>\n\n\nThanks! I suppose it would behoove me to check the documentation\nbefore resorting to looking at the source code :)\n\n-- \nMelanie Plageman\n\nOn Sat, May 18, 2019 at 1:17 AM Sergei Kornilov <sk@zsrv.org> wrote:Hello\n\nNo, this is not bug. This is expected beharior of search_path setting: https://www.postgresql.org/docs/current/runtime-config-client.html\n\n> Likewise, the current session's temporary-table schema, pg_temp_nnn, is always searched if it exists. It can be explicitly listed in the path by using the alias pg_temp. If it is not listed in the path then it is searched first\n\npsql \\d command checks current search_path (by pg_table_is_visible call). You can use \\d *.t1 syntax to display tables with such name in all schemas.\n\nregards, Sergei\nThanks! I suppose it would behoove me to check the documentationbefore resorting to looking at the source code :)-- Melanie Plageman",
"msg_date": "Tue, 21 May 2019 11:19:10 -0700",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: describe working as intended?"
}
] |
[
{
"msg_contents": "Hackers,\n\nHead of master is giving me a segfault on running ANALYZE when isolation mode is SERIALIZABLE.\n\nMy configure:\n\nexport CFLAGS=\"-g\"\nexport LDFLAGS=\"-L/usr/local/opt/readline/lib\"\nexport CPPFLAGS=\"-I/usr/local/opt/readline/include\"\n\n./configure \\\n --prefix=/Users/joe/Development/tmp/pg \\\n --enable-cassert \\\n --enable-debug \\\n --with-readline\n\nTo reproduce:\n\n[joe@oberon pg]$ ./bin/initdb -D $(pwd)/data\nThe files belonging to this database system will be owned by user \"joe\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale \"en_GB.UTF-8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\n\ncreating directory /Users/joe/Development/tmp/pg/data ... ok\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... posix\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default timezone ... Europe/London\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... ok\nsyncing data to disk ... ok\n\ninitdb: warning: enabling \"trust\" authentication for local connections\nYou can change this by editing pg_hba.conf or using the option -A, or\n--auth-local and --auth-host, the next time you run initdb.\n\nSuccess. You can now start the database server using:\n\n ./bin/pg_ctl -D /Users/joe/Development/tmp/pg/data -l logfile start\n\n[joe@oberon pg]$ ./bin/pg_ctl -D /Users/joe/Development/tmp/pg/data -l logfile start && ./bin/psql -d postgres\nwaiting for server to start.... done\nserver started\npsql (12devel)\nType \"help\" for help.\n\n[local] joe@postgres=# ALTER SYSTEM SET DEFAULT_TRANSACTION_ISOLATION TO 'serializable';\nALTER SYSTEM\n[local] joe@postgres=# \\q\n[joe@oberon pg]$ ./bin/pg_ctl -D /Users/joe/Development/tmp/pg/data -l logfile restart && ./bin/psql -d postgres\nwaiting for server to shut down.... done\nserver stopped\nwaiting for server to start.... done\nserver started\npsql (12devel)\nType \"help\" for help.\n\n[local] joe@postgres=# ANALYZE;\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n @!> \n\n\nLogline: \n2019-05-18 15:10:06.831 BST [45177] LOG: server process (PID 45186) was terminated by signal 11: Segmentation fault: 11\n\n\nCheers,\n-Joe\n\n\n\n\n",
"msg_date": "Sat, 18 May 2019 15:15:48 +0100",
"msg_from": "Joe Wildish <joe-postgresql.org@elusive.cx>",
"msg_from_op": true,
"msg_subject": "Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Hi\n\nI can reproduce with:\n\nset default_transaction_isolation TO serializable ;\nanalyze ;\n\nHere is backtrace:\n\n#0 SerializationNeededForRead (snapshot=0x0, relation=0x7f53e9a525f8) at predicate.c:530\n#1 PredicateLockRelation (relation=relation@entry=0x7f53e9a525f8, snapshot=snapshot@entry=0x0) at predicate.c:2507\n#2 0x0000562395b78a14 in heap_beginscan (relation=0x7f53e9a525f8, snapshot=0x0, nkeys=0, key=0x0, parallel_scan=0x0, allow_strat=<optimized out>, \n allow_sync=false, allow_pagemode=true, is_bitmapscan=false, is_samplescan=true, temp_snap=false) at heapam.c:1180\n#3 0x0000562395c782d7 in table_beginscan_analyze (rel=0x7f53e9a525f8) at ../../../src/include/access/tableam.h:786\n#4 acquire_sample_rows (onerel=onerel@entry=0x7f53e9a525f8, elevel=elevel@entry=13, rows=rows@entry=0x562396f01dd0, targrows=targrows@entry=30000, \n totalrows=totalrows@entry=0x7ffd0603e498, totaldeadrows=totaldeadrows@entry=0x7ffd0603e490) at analyze.c:1032\n#5 0x0000562395c790f2 in do_analyze_rel (onerel=onerel@entry=0x7f53e9a525f8, params=params@entry=0x7ffd0603e6a0, va_cols=va_cols@entry=0x0, \n acquirefunc=0x562395c781fa <acquire_sample_rows>, relpages=0, inh=inh@entry=false, in_outer_xact=false, elevel=13) at analyze.c:502\n#6 0x0000562395c79930 in analyze_rel (relid=<optimized out>, relation=0x0, params=params@entry=0x7ffd0603e6a0, va_cols=0x0, \n in_outer_xact=<optimized out>, bstrategy=<optimized out>) at analyze.c:260\n#7 0x0000562395cf6f90 in vacuum (relations=0x562396ecbf80, params=params@entry=0x7ffd0603e6a0, bstrategy=<optimized out>, bstrategy@entry=0x0, \n isTopLevel=isTopLevel@entry=true) at vacuum.c:413\n#8 0x0000562395cf759d in ExecVacuum (pstate=pstate@entry=0x562396df69f8, vacstmt=vacstmt@entry=0x562396dd54c0, isTopLevel=isTopLevel@entry=true)\n at vacuum.c:199\n#9 0x0000562395e84863 in standard_ProcessUtility (pstmt=0x562396dd5820, queryString=0x562396dd4ad8 \"analyze ;\", context=PROCESS_UTILITY_TOPLEVEL, \n params=0x0, queryEnv=0x0, dest=0x562396dd5918, completionTag=0x7ffd0603ea10 \"\") at utility.c:670\n#10 0x0000562395e84dba in ProcessUtility (pstmt=pstmt@entry=0x562396dd5820, queryString=<optimized out>, context=<optimized out>, \n params=<optimized out>, queryEnv=<optimized out>, dest=dest@entry=0x562396dd5918, completionTag=0x7ffd0603ea10 \"\") at utility.c:360\n#11 0x0000562395e811a1 in PortalRunUtility (portal=portal@entry=0x562396e3a178, pstmt=pstmt@entry=0x562396dd5820, isTopLevel=isTopLevel@entry=true, \n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x562396dd5918, completionTag=completionTag@entry=0x7ffd0603ea10 \"\") at pquery.c:1175\n#12 0x0000562395e81e0e in PortalRunMulti (portal=portal@entry=0x562396e3a178, isTopLevel=isTopLevel@entry=true, \n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x562396dd5918, altdest=altdest@entry=0x562396dd5918, \n completionTag=completionTag@entry=0x7ffd0603ea10 \"\") at pquery.c:1321\n#13 0x0000562395e82b99 in PortalRun (portal=portal@entry=0x562396e3a178, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, \n run_once=run_once@entry=true, dest=dest@entry=0x562396dd5918, altdest=altdest@entry=0x562396dd5918, completionTag=0x7ffd0603ea10 \"\")\n at pquery.c:796\n#14 0x0000562395e7ee14 in exec_simple_query (query_string=query_string@entry=0x562396dd4ad8 \"analyze ;\") at postgres.c:1215\n#15 0x0000562395e80cfc in PostgresMain (argc=<optimized out>, argv=argv@entry=0x562396e00320, dbname=<optimized out>, username=<optimized out>)\n at postgres.c:4249\n#16 
0x0000562395df6358 in BackendRun (port=port@entry=0x562396df7d30) at postmaster.c:4431\n#17 0x0000562395df9477 in BackendStartup (port=port@entry=0x562396df7d30) at postmaster.c:4122\n#18 0x0000562395df969a in ServerLoop () at postmaster.c:1704\n#19 0x0000562395dfabdb in PostmasterMain (argc=3, argv=<optimized out>) at postmaster.c:1377\n#20 0x0000562395d59083 in main (argc=3, argv=0x562396dcf200) at main.c:228\n\nregards, Sergei\n\n\n",
"msg_date": "Sat, 18 May 2019 17:31:25 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Hello\n\nSeems table_beginscan_analyze (src/include/access/tableam.h) should not pass second argument as NULL.\nCC'ing Andres Freund\n\nPS: also I noticed in src/include/utils/snapshot.h exactly same comment for both SNAPSHOT_SELF and SNAPSHOT_DIRTY - they have no difference?\n\nregards, Sergei\n\n\n",
"msg_date": "Sat, 18 May 2019 18:12:31 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n> I can reproduce with:\n\n> set default_transaction_isolation TO serializable ;\n> analyze ;\n\n> Here is backtrace:\n\n> #0 SerializationNeededForRead (snapshot=0x0, relation=0x7f53e9a525f8) at predicate.c:530\n> #1 PredicateLockRelation (relation=relation@entry=0x7f53e9a525f8, snapshot=snapshot@entry=0x0) at predicate.c:2507\n> #2 0x0000562395b78a14 in heap_beginscan (relation=0x7f53e9a525f8, snapshot=0x0, nkeys=0, key=0x0, parallel_scan=0x0, allow_strat=<optimized out>, \n> allow_sync=false, allow_pagemode=true, is_bitmapscan=false, is_samplescan=true, temp_snap=false) at heapam.c:1180\n> #3 0x0000562395c782d7 in table_beginscan_analyze (rel=0x7f53e9a525f8) at ../../../src/include/access/tableam.h:786\n> #4 acquire_sample_rows (onerel=onerel@entry=0x7f53e9a525f8, elevel=elevel@entry=13, rows=rows@entry=0x562396f01dd0, targrows=targrows@entry=30000, \n> totalrows=totalrows@entry=0x7ffd0603e498, totaldeadrows=totaldeadrows@entry=0x7ffd0603e490) at analyze.c:1032\n\nSo the problem is that something is passing a null snapshot to something\nthat isn't expecting that. This seems closely related to the tableam\nAPI issue that was being debated a day or two back about whether it's\never valid to hand a null snapshot to an AM. Andres, which layer do\nyou think is at fault here?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 May 2019 11:19:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "I wrote:\n> Sergei Kornilov <sk@zsrv.org> writes:\n>> I can reproduce with:\n>> set default_transaction_isolation TO serializable ;\n>> analyze ;\n\n> So the problem is that something is passing a null snapshot to something\n> that isn't expecting that. This seems closely related to the tableam\n> API issue that was being debated a day or two back about whether it's\n> ever valid to hand a null snapshot to an AM. Andres, which layer do\n> you think is at fault here?\n\nBisecting confirms that this broke at\n\ncommit 737a292b5de296615a715ddce2b2d83d1ee245c5\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Sat Mar 30 16:21:09 2019 -0700\n\n tableam: VACUUM and ANALYZE support.\n\nI'd thought possibly this had something to do with bb16aba50 (Enable\nparallel query with SERIALIZABLE isolation) but the bisection result\nmakes it pretty clear that it's just a tableam API screwup.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 May 2019 14:55:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Hi,\n\nOn May 18, 2019 11:55:01 AM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>I wrote:\n>> Sergei Kornilov <sk@zsrv.org> writes:\n>>> I can reproduce with:\n>>> set default_transaction_isolation TO serializable ;\n>>> analyze ;\n>\n>> So the problem is that something is passing a null snapshot to\n>something\n>> that isn't expecting that. This seems closely related to the tableam\n>> API issue that was being debated a day or two back about whether it's\n>> ever valid to hand a null snapshot to an AM. Andres, which layer do\n>> you think is at fault here?\n\nNot quite - that was about the DML callbacks, this is about the scan itself. And while we have a snapshot allocated, the analyze version of the beginscan intentionally doesn't take a snapshot.\n\n>Bisecting confirms that this broke at\n>\n>commit 737a292b5de296615a715ddce2b2d83d1ee245c5\n>Author: Andres Freund <andres@anarazel.de>\n>Date: Sat Mar 30 16:21:09 2019 -0700\n>\n> tableam: VACUUM and ANALYZE support.\n>\n>I'd thought possibly this had something to do with bb16aba50 (Enable\n>parallel query with SERIALIZABLE isolation) but the bisection result\n>makes it pretty clear that it's just a tableam API screwup.\n\nI'm not yet at my computer, but I think all that's needed is to expand the check that prevents the predicate lock to be acquired for heap type scans to the analyze case. I'll check it in a few.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 18 May 2019 12:00:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Not quite - that was about the DML callbacks, this is about the scan itself. And while we have a snapshot allocated, the analyze version of the beginscan intentionally doesn't take a snapshot.\n\nUh, what? That's a *huge* regression. See, eg, 7170268ef. We\nreally want ANALYZE to act as though it's reading a normal MVCC\nsnapshot.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 May 2019 15:48:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-18 15:48:47 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Not quite - that was about the DML callbacks, this is about the scan itself. And while we have a snapshot allocated, the analyze version of the beginscan intentionally doesn't take a snapshot.\n> \n> Uh, what? That's a *huge* regression. See, eg, 7170268ef. We\n> really want ANALYZE to act as though it's reading a normal MVCC\n> snapshot.\n\nHm? That's not new at all? In 11 we just do:\n\n\t\t\tswitch (HeapTupleSatisfiesVacuum(&targtuple,\n\t\t\t\t\t\t\t\t\t\t\t OldestXmin,\n\t\t\t\t\t\t\t\t\t\t\t targbuffer))\n\nI.e. unrelated to the tableam changes there's no mvcc snapshot in for\nvisibility determinations. And that's not changed, heap's implementation\nstill uses HTSV. We do *hold* a snapshot, but that's all outside of\nanalyze.c afaik.\n\nI've complained before that we're using a snapshot for analyze's\nsampling scan in a lot of cases where it's not necessary, and that's a\nvery significant problem in production (where autvacuum doesn't cause\nslowdowns, but autoanalyze does quite substantially). But I'd not\nchange it just as an aside.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 May 2019 13:12:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Hi,\n\nThanks for the report Joe!\n\nI've pushed a fix for this.\n\nI ended up going down the path of making scan_begin's arguments a\nbitmask. Given that several people expressed desire for that, and that\nrecognizing analyze scans would have required a new argument, that\nseemed the most reasonable course.\n\nI think the code handling sync/strat in heapam's initscan() could be\nsimplified so we don't set/unset the flags (and previously the booleans)\nmultiple times. But that seems like it ought to be done separately.\n\nI'd normally have asked for a round of feedback for the changes, but it\nseems more urgent to get something out for beta1. As the changes are all\nbelow tableam, we can adjust this later without causing much trouble.\n\nRegards,\n\nAndres\n\nOn 2019-05-18 13:12:41 -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-18 15:48:47 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > Not quite - that was about the DML callbacks, this is about the scan itself. And while we have a snapshot allocated, the analyze version of the beginscan intentionally doesn't take a snapshot.\n> > \n> > Uh, what? That's a *huge* regression. See, eg, 7170268ef. We\n> > really want ANALYZE to act as though it's reading a normal MVCC\n> > snapshot.\n> \n> Hm? That's not new at all? In 11 we just do:\n> \n> \t\t\tswitch (HeapTupleSatisfiesVacuum(&targtuple,\n> \t\t\t\t\t\t\t\t\t\t\t OldestXmin,\n> \t\t\t\t\t\t\t\t\t\t\t targbuffer))\n> \n> I.e. unrelated to the tableam changes there's no mvcc snapshot in for\n> visibility determinations. And that's not changed, heap's implementation\n> still uses HTSV. We do *hold* a snapshot, but that's all outside of\n> analyze.c afaik.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 15:17:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-18 18:12:31 +0300, Sergei Kornilov wrote:\n> Seems table_beginscan_analyze (src/include/access/tableam.h) should not pass second argument as NULL.\n\nAs hopefully explained downthread, and in the commit message, that's not\nreally the concern. We shouldn't use the snapshot in the first place.\n\n> PS: also I noticed in src/include/utils/snapshot.h exactly same\n> comment for both SNAPSHOT_SELF and SNAPSHOT_DIRTY - they have no\n> difference?\n\nThat was copy & paste mistake. Fixed. Also expanded the comments a\nbit. Thanks for noticing!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 16:26:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Segfault on ANALYZE in SERIALIZABLE isolation"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI've stumbled upon a misspelled HAVE_ZLIB in a comment and decided to\ncheck all the unique identifiers/entities in the source tree. Using the\nballeyeing technique I've processed questionable A* and HAVE_* unicums\n(for now). The patches for every one are attached.\n\n1. AExprConst -> AexprConst (an inconsistent case)\n2. AlterExtensionOwner_oid - remove (orphaned after 994c36e0)\n3. AlterTableDropColumn -> ATExecDropColumn (renamed in 077db40f)\n4. ApplySortComparatorFull -> ApplySortAbbrevFullComparator (an internal\ninconsistency)\n5. arracontjoinsel -> arraycontjoinsel (just a typo)\n6. ArrayNItems -> ArrayGetNItems (an internal inconsistency)\n7. ArrayRef & ArrayRefState -> SubscriptingRef & SubscriptingRefState\n(renamed by 558d77f2)\n8. AT_AddOids - remove (orphaned after 578b2297)\n10. AtPrepare_Inval - remove (orphaned after efc16ea52)\n11. AttachIndexInfo -> IndexAttachInfo (an internal inconsistency)\n12. AttributeOffsetGetAttributeNumber - > AttrOffsetGetAttrNumber (an\ninternal inconsistency)\n13. AttInMetaData -> AttInMetadata (an inconsistent case)\n14. AUTH_REQ_GSSAPI -> AUTH_REQ_GSS (an internal inconsistency)\n15. authenticaion -> authentication (a typo)\n16. HAVE__BUILTIN_CLZ -> HAVE__BUILTIN_CLZ (a typo)\n17. HAVE_BUILTIN_CLZ -> HAVE__BUILTIN_CLZ (a typo)\n18. HAVE_BUILTIN_CTZ -> HAVE__BUILTIN_CLZ (a typo)\n18. HAVE_FCVT - remove (survived after ff4628f3)\n19. HAVE_FINITE - remove (orphaned after cac2d912)\n20. HAVE_RAND_OPENSSL - remove (orphaned after fe0a0b59)\n21. HAVE_STRUCT_SOCKADDR_UN - remove (survived after ff4628f3)\n22. HAVE_SYSCONF - remove (survived after ff4628f3)\n23. HAVE_ZLIB -> HAVE_LIBZ (a typo)\n\nI hope you will find it useful. If so, I can continue this work.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 18 May 2019 18:40:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix some unique identifiers/entities"
}
] |
[
{
"msg_contents": "When bitmap-only heap scans were introduced in v11 (7c70996ebf0949b142a99)\nno changes were made to \"EXPLAIN\". This makes the feature rather opaque.\nYou can sometimes figure out what is going by the output of EXPLAIN\n(ANALYZE, BUFFERS), but that is unintuitive and fragile.\n\nLooking at the discussion where the feature was added, I think changing the\nEXPLAIN just wasn't considered.\n\nThe attached patch adds \"avoided\" to \"exact\" and \"lossy\" as a category\nunder\n\"Heap Blocks\". Also attached is the example output, as the below will\nprobably wrap to the point of illegibility:\n\nexplain analyze select count(*) from foo where a=35 and d between 67 and\n70;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=21451.36..21451.37 rows=1 width=8) (actual\ntime=103.955..103.955 rows=1 loops=1)\n -> Bitmap Heap Scan on foo (cost=9920.73..21442.44 rows=3570 width=0)\n(actual time=100.239..103.204 rows=3950 loops=1)\n Recheck Cond: ((a = 35) AND (d >= 67) AND (d <= 70))\n Heap Blocks: avoided=3718 exact=73\n -> BitmapAnd (cost=9920.73..9920.73 rows=3570 width=0) (actual\ntime=98.666..98.666 rows=0 loops=1)\n -> Bitmap Index Scan on foo_a_c_idx (cost=0.00..1682.93\nrows=91000 width=0) (actual time=28.541..28.541 rows=99776 loops=1)\n Index Cond: (a = 35)\n -> Bitmap Index Scan on foo_d_idx (cost=0.00..8235.76\nrows=392333 width=0) (actual time=66.946..66.946 rows=399003 loops=1)\n Index Cond: ((d >= 67) AND (d <= 70))\n Planning Time: 0.458 ms\n Execution Time: 104.487 ms\n\n\nI think the name of the node should also be changed to \"Bitmap Only Heap\nScan\", but I didn't implement that as adding another NodeTag looks like a\nlot of tedious error prone work to do before getting feedback on whether\nthe change is desirable in the first place, or the correct approach.\n\n Cheers,\n\nJeff",
"msg_date": "Sat, 18 May 2019 15:28:59 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "> Looking at the discussion where the feature was added, I think changing the\n> EXPLAIN just wasn't considered.\n\nI think this is an oversight. It is very useful to have this on\nEXPLAIN.\n\n> The attached patch adds \"avoided\" to \"exact\" and \"lossy\" as a category\n> under \"Heap Blocks\".\n\nIt took me a while to figure out what those names mean. \"unfetched\",\nas you call it on the code, may be more descriptive than \"avoided\" for\nthe new label. However I think the other two are more confusing. It\nmay be a good idea to change them together with this.\n\n> I think the name of the node should also be changed to \"Bitmap Only Heap\n> Scan\", but I didn't implement that as adding another NodeTag looks like a\n> lot of tedious error prone work to do before getting feedback on whether\n> the change is desirable in the first place, or the correct approach.\n\nI am not sure about this part. In my opinion it may have been easier\nto explain to users if \"Index Only Scan\" had not been separate but\n\"Index Scan\" optimization.\n\n\n",
"msg_date": "Thu, 20 Jun 2019 15:55:36 +0100",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "Hello,\n\n> It took me a while to figure out what those names mean. \"unfetched\",\n> as you call it on the code, may be more descriptive than \"avoided\" for\n> the new label. However I think the other two are more confusing. It\n> may be a good idea to change them together with this.\nIt'll be sad if this patch is forgotten only because of the words choice.\nI've changed it all to \"unfetched\" for at least not to call the same \nthing differently\nin the code and in the output, and also rebased it and fit in 80 lines \nwidth limit.\n\nBest, Alex",
"msg_date": "Fri, 7 Feb 2020 15:22:12 +0000",
"msg_from": "Alexey Bashtanov <bashtanov@imap.cc>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Fri, Feb 07, 2020 at 03:22:12PM +0000, Alexey Bashtanov wrote:\n>Hello,\n>\n>>It took me a while to figure out what those names mean. \"unfetched\",\n>>as you call it on the code, may be more descriptive than \"avoided\" for\n>>the new label. However I think the other two are more confusing. It\n>>may be a good idea to change them together with this.\n>It'll be sad if this patch is forgotten only because of the words choice.\n>I've changed it all to \"unfetched\" for at least not to call the same \n>thing differently\n>in the code and in the output, and also rebased it and fit in 80 lines \n>width limit.\n>\n\nI kinda suspect one of the ressons why this got so little attention is\nthat it was never added to any CF.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 7 Feb 2020 22:35:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "\n> I kinda suspect one of the ressons why this got so little attention is\n> that it was never added to any CF.\n\nThanks Tomas, I've created a CF entry \nhttps://commitfest.postgresql.org/27/2443/\n\nBest, Alex\n\n\n\n",
"msg_date": "Sun, 9 Feb 2020 22:59:10 +0000",
"msg_from": "Alexey Bashtanov <bashtanov@imap.cc>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "Hi Jeff,\n\nOn 2/7/20 10:22 AM, Alexey Bashtanov wrote:\n> I've changed it all to \"unfetched\" for at least not to call the same \n> thing differently\n> in the code and in the output, and also rebased it and fit in 80 lines \n> width limit.\n\nWhat do you think of Alexey's updates?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 10 Mar 2020 12:15:35 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Tue, Mar 10, 2020 at 12:15 PM David Steele <david@pgmasters.net> wrote:\n>\n> Hi Jeff,\n>\n> On 2/7/20 10:22 AM, Alexey Bashtanov wrote:\n> > I've changed it all to \"unfetched\" for at least not to call the same\n> > thing differently\n> > in the code and in the output, and also rebased it and fit in 80 lines\n> > width limit.\n>\n> What do you think of Alexey's updates?\n>\n> Regards,\n> --\n> -David\n> david@pgmasters.net\n\nI've added myself as a reviewer.\n\nThe patch looks good to me. It doesn't seem to have much risk either;\nthere are not spec concerns applicable (since it's EXPLAIN), and the\nsurface area for impact quite small. Both make check and check-world\npass.\n\nHere's a test query setup I worked up:\n\ncreate table exp(a int, d int);\ninsert into exp(a, d) select random() * 100, t.i % 50 from\ngenerate_series(0,10000000) t(i);\ncreate index index_exp_a on exp(a);\ncreate index index_exp_d on exp(d);\nanalyze exp;\n\nThen:\nexplain analyze select count(*) from exp where a = 25 and d between 5 and 10;\nshows: Heap Blocks: exact=10518\n\nbut if I:\nvacuum freeze exp;\nthen it shows: Heap Blocks: unfetched=10518\nas I'd expect.\n\nOne question though: if I change the query to:\nexplain (analyze, buffers) select count(*) from exp where a between 50\nand 100 and d between 5 and 10;\nthen I get a parallel bitmap heap scan, and I only see exact heap\nblocks (see attached explain output).\n\nDoes the original optimization cover parallel bitmap heap scans like\nthis? If not, I think this patch is likely ready for committer. If so,\nthen we still need support for stats tracking and explain output for\nparallel nodes.\n\nI've taken the liberty of:\n- Reformatting slightly for a cleaner diff.\n- Running pgindent against the changes\n- Added a basic commit message.\n- Add unfetched_pages initialization to ExecInitBitmapHeapScan.\n\nSee attached.\n\nThanks,\nJames",
"msg_date": "Mon, 16 Mar 2020 09:08:36 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 9:08 AM James Coleman <jtc331@gmail.com> wrote:\n> ...\n> One question though: if I change the query to:\n> explain (analyze, buffers) select count(*) from exp where a between 50\n> and 100 and d between 5 and 10;\n> then I get a parallel bitmap heap scan, and I only see exact heap\n> blocks (see attached explain output).\n>\n> Does the original optimization cover parallel bitmap heap scans like\n> this? If not, I think this patch is likely ready for committer. If so,\n> then we still need support for stats tracking and explain output for\n> parallel nodes.\n\nI've looked at the code a bit more deeply, and the implementation\nmeans the optimization applies to parallel scans also. I've also\nconvinced myself that the change in explain.c will cover both\nnon-parallel and parallel plans.\n\nSince that's the only question I saw, and the patch seems pretty\nuncontroversial/not requiring any real design choices, I've gone ahead\nand marked this patch as ready for committer.\n\nThanks for working on this!\n\nJames\n\n\n",
"msg_date": "Thu, 19 Mar 2020 20:04:43 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 09:08:36AM -0400, James Coleman wrote:\n> Does the original optimization cover parallel bitmap heap scans like this?\n\nIt works for parallel bitmap only scans.\n\ntemplate1=# explain analyze select count(*) from exp where a between 25 and 35 and d between 5 and 10;\n Finalize Aggregate (cost=78391.68..78391.69 rows=1 width=8) (actual time=525.972..525.972 rows=1 loops=1)\n -> Gather (cost=78391.47..78391.68 rows=2 width=8) (actual time=525.416..533.133 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial Aggregate (cost=77391.47..77391.48 rows=1 width=8) (actual time=518.406..518.406 rows=1 loops=3)\n -> Parallel Bitmap Heap Scan on exp (cost=31825.37..77245.01 rows=58582 width=0) (actual time=296.309..508.440 rows=43887 loops=3)\n Recheck Cond: ((a >= 25) AND (a <= 35) AND (d >= 5) AND (d <= 10))\n Heap Blocks: unfetched=4701 exact=9650\n -> BitmapAnd (cost=31825.37..31825.37 rows=140597 width=0) (actual time=282.590..282.590 rows=0 loops=1)\n -> Bitmap Index Scan on index_exp_a (cost=0.00..15616.99 rows=1166456 width=0) (actual time=147.036..147.036 rows=1099872 loops=1)\n Index Cond: ((a >= 25) AND (a <= 35))\n -> Bitmap Index Scan on index_exp_d (cost=0.00..16137.82 rows=1205339 width=0) (actual time=130.366..130.366 rows=1200000 loops=1)\n Index Cond: ((d >= 5) AND (d <= 10))\n\n\n> +++ b/src/backend/commands/explain.c\n> @@ -2777,6 +2777,8 @@ show_tidbitmap_info(BitmapHeapScanState *planstate, ExplainState *es)\n> {\n> \tif (es->format != EXPLAIN_FORMAT_TEXT)\n> \t{\n> +\t\tExplainPropertyInteger(\"Unfetched Heap Blocks\", NULL,\n> +\t\t\t\t\t\t\t planstate->unfetched_pages, es);\n> \t\tExplainPropertyInteger(\"Exact Heap Blocks\", NULL,\n> \t\t\t\t\t\t\t planstate->exact_pages, es);\n> \t\tExplainPropertyInteger(\"Lossy Heap Blocks\", NULL,\n> @@ -2784,10 +2786,14 @@ show_tidbitmap_info(BitmapHeapScanState *planstate, ExplainState *es)\n> \t}\n> \telse\n> \t{\n> -\t\tif (planstate->exact_pages > 0 || planstate->lossy_pages > 0)\n> +\t\tif (planstate->exact_pages > 0 || planstate->lossy_pages > 0\n> +\t\t\t|| planstate->unfetched_pages > 0)\n> \t\t{\n> \t\t\tExplainIndentText(es);\n> \t\t\tappendStringInfoString(es->str, \"Heap Blocks:\");\n> +\t\t\tif (planstate->unfetched_pages > 0)\n> +\t\t\t\tappendStringInfo(es->str, \" unfetched=%ld\",\n> +\t\t\t\t\t\t\t\t planstate->unfetched_pages);\n> \t\t\tif (planstate->exact_pages > 0)\n> \t\t\t\tappendStringInfo(es->str, \" exact=%ld\", planstate->exact_pages);\n> \t\t\tif (planstate->lossy_pages > 0)\n\n\nI don't think it matters in nontext mode, but at least in text mode, I think\nmaybe the Unfetched blocks should be output after the exact and lossy blocks,\nin case someone is parsing it, and because bitmap-only is a relatively new\nfeature. Its output is probably less common than exact/lossy.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 19 Mar 2020 20:26:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 9:26 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Mar 16, 2020 at 09:08:36AM -0400, James Coleman wrote:\n> > Does the original optimization cover parallel bitmap heap scans like this?\n>\n> It works for parallel bitmap only scans.\n>\n> template1=# explain analyze select count(*) from exp where a between 25 and 35 and d between 5 and 10;\n> Finalize Aggregate (cost=78391.68..78391.69 rows=1 width=8) (actual time=525.972..525.972 rows=1 loops=1)\n> -> Gather (cost=78391.47..78391.68 rows=2 width=8) (actual time=525.416..533.133 rows=3 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> -> Partial Aggregate (cost=77391.47..77391.48 rows=1 width=8) (actual time=518.406..518.406 rows=1 loops=3)\n> -> Parallel Bitmap Heap Scan on exp (cost=31825.37..77245.01 rows=58582 width=0) (actual time=296.309..508.440 rows=43887 loops=3)\n> Recheck Cond: ((a >= 25) AND (a <= 35) AND (d >= 5) AND (d <= 10))\n> Heap Blocks: unfetched=4701 exact=9650\n> -> BitmapAnd (cost=31825.37..31825.37 rows=140597 width=0) (actual time=282.590..282.590 rows=0 loops=1)\n> -> Bitmap Index Scan on index_exp_a (cost=0.00..15616.99 rows=1166456 width=0) (actual time=147.036..147.036 rows=1099872 loops=1)\n> Index Cond: ((a >= 25) AND (a <= 35))\n> -> Bitmap Index Scan on index_exp_d (cost=0.00..16137.82 rows=1205339 width=0) (actual time=130.366..130.366 rows=1200000 loops=1)\n> Index Cond: ((d >= 5) AND (d <= 10))\n>\n>\n> > +++ b/src/backend/commands/explain.c\n> > @@ -2777,6 +2777,8 @@ show_tidbitmap_info(BitmapHeapScanState *planstate, ExplainState *es)\n> > {\n> > if (es->format != EXPLAIN_FORMAT_TEXT)\n> > {\n> > + ExplainPropertyInteger(\"Unfetched Heap Blocks\", NULL,\n> > + planstate->unfetched_pages, es);\n> > ExplainPropertyInteger(\"Exact Heap Blocks\", NULL,\n> > planstate->exact_pages, es);\n> > ExplainPropertyInteger(\"Lossy Heap Blocks\", NULL,\n> > @@ -2784,10 +2786,14 @@ show_tidbitmap_info(BitmapHeapScanState *planstate, ExplainState *es)\n> > }\n> > else\n> > {\n> > - if (planstate->exact_pages > 0 || planstate->lossy_pages > 0)\n> > + if (planstate->exact_pages > 0 || planstate->lossy_pages > 0\n> > + || planstate->unfetched_pages > 0)\n> > {\n> > ExplainIndentText(es);\n> > appendStringInfoString(es->str, \"Heap Blocks:\");\n> > + if (planstate->unfetched_pages > 0)\n> > + appendStringInfo(es->str, \" unfetched=%ld\",\n> > + planstate->unfetched_pages);\n> > if (planstate->exact_pages > 0)\n> > appendStringInfo(es->str, \" exact=%ld\", planstate->exact_pages);\n> > if (planstate->lossy_pages > 0)\n\nAwesome, thanks for confirming with an actual plan.\n\n> I don't think it matters in nontext mode, but at least in text mode, I think\n> maybe the Unfetched blocks should be output after the exact and lossy blocks,\n> in case someone is parsing it, and because bitmap-only is a relatively new\n> feature. Its output is probably less common than exact/lossy.\n\nI tweaked that (and a comment that didn't reference the change); see attached.\n\nJames",
"msg_date": "Thu, 19 Mar 2020 21:38:46 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 7:09 AM James Coleman <jtc331@gmail.com> wrote:\n>\n> Awesome, thanks for confirming with an actual plan.\n>\n> > I don't think it matters in nontext mode, but at least in text mode, I think\n> > maybe the Unfetched blocks should be output after the exact and lossy blocks,\n> > in case someone is parsing it, and because bitmap-only is a relatively new\n> > feature. Its output is probably less common than exact/lossy.\n>\n> I tweaked that (and a comment that didn't reference the change); see attached.\n>\n\nFew comments:\n1.\n-\n- if (tbmres->ntuples >= 0)\n+ else if (tbmres->ntuples >= 0)\n node->exact_pages++;\n\nHow is this change related to this patch?\n\n2.\n+ * unfetched_pages total number of pages not retrieved due to vm\n * prefetch_iterator iterator for prefetching ahead of current page\n * prefetch_pages # pages prefetch iterator is ahead of current\n * prefetch_target current target prefetch distance\n@@ -1591,6 +1592,7 @@ typedef struct BitmapHeapScanState\n Buffer pvmbuffer;\n long exact_pages;\n long lossy_pages;\n+ long unfetched_pages;\n\nCan we name it as skipped_pages?\n\n3. Can we add a test or two for this functionality?\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 10:54:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 10:54:05AM +0530, Amit Kapila wrote:\n> On Fri, Mar 20, 2020 at 7:09 AM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > Awesome, thanks for confirming with an actual plan.\n> >\n> > > I don't think it matters in nontext mode, but at least in text mode, I think\n> > > maybe the Unfetched blocks should be output after the exact and lossy blocks,\n> > > in case someone is parsing it, and because bitmap-only is a relatively new\n> > > feature. Its output is probably less common than exact/lossy.\n> >\n> > I tweaked that (and a comment that didn't reference the change); see attached.\n> >\n> \n> Few comments:\n> 1.\n> -\n> - if (tbmres->ntuples >= 0)\n> + else if (tbmres->ntuples >= 0)\n> node->exact_pages++;\n> \n> How is this change related to this patch?\n\nPreviously, a page was either \"exact\" or \"lossy\".\nNow it's one of exact/lossy/skipped.\n(But not exact/lossy but in either case might be skipped).\n\n if (skip_fetch)\n {\n /* can't be lossy in the skip_fetch case */\n Assert(tbmres->ntuples >= 0);\n\n /*\n * The number of tuples on this page is put into\n * node->return_empty_tuples.\n */\n node->return_empty_tuples = tbmres->ntuples;\n+ node->unfetched_pages++; \n } \n else if (!table_scan_bitmap_next_block(scan, tbmres)) \n { \n /* AM doesn't think this block is valid, skip */ \n continue; \n } \n- \n- if (tbmres->ntuples >= 0) \n+ else if (tbmres->ntuples >= 0) \n node->exact_pages++; \n else \n node->lossy_pages++; \n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 Mar 2020 01:06:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 11:36 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 10:54:05AM +0530, Amit Kapila wrote:\n> > On Fri, Mar 20, 2020 at 7:09 AM James Coleman <jtc331@gmail.com> wrote:\n> > >\n> > > Awesome, thanks for confirming with an actual plan.\n> > >\n> > > > I don't think it matters in nontext mode, but at least in text mode, I think\n> > > > maybe the Unfetched blocks should be output after the exact and lossy blocks,\n> > > > in case someone is parsing it, and because bitmap-only is a relatively new\n> > > > feature. Its output is probably less common than exact/lossy.\n> > >\n> > > I tweaked that (and a comment that didn't reference the change); see attached.\n> > >\n> >\n> > Few comments:\n> > 1.\n> > -\n> > - if (tbmres->ntuples >= 0)\n> > + else if (tbmres->ntuples >= 0)\n> > node->exact_pages++;\n> >\n> > How is this change related to this patch?\n>\n> Previously, a page was either \"exact\" or \"lossy\".\n> Now it's one of exact/lossy/skipped.\n>\n\nOkay, that makes sense.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Mar 2020 11:43:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 1:24 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 20, 2020 at 7:09 AM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > Awesome, thanks for confirming with an actual plan.\n> >\n> > > I don't think it matters in nontext mode, but at least in text mode, I think\n> > > maybe the Unfetched blocks should be output after the exact and lossy blocks,\n> > > in case someone is parsing it, and because bitmap-only is a relatively new\n> > > feature. Its output is probably less common than exact/lossy.\n> >\n> > I tweaked that (and a comment that didn't reference the change); see attached.\n> >\n>\n> Few comments:\n> 1.\n> -\n> - if (tbmres->ntuples >= 0)\n> + else if (tbmres->ntuples >= 0)\n> node->exact_pages++;\n>\n> How is this change related to this patch?\n\n<already answered by Justin>\n\n> 2.\n> + * unfetched_pages total number of pages not retrieved due to vm\n> * prefetch_iterator iterator for prefetching ahead of current page\n> * prefetch_pages # pages prefetch iterator is ahead of current\n> * prefetch_target current target prefetch distance\n> @@ -1591,6 +1592,7 @@ typedef struct BitmapHeapScanState\n> Buffer pvmbuffer;\n> long exact_pages;\n> long lossy_pages;\n> + long unfetched_pages;\n>\n> Can we name it as skipped_pages?\n\nThat seems easy enough to do.\n\n> 3. Can we add a test or two for this functionality?\n\n From what I can tell the current lossy page count isn't tested either;\nwould we expect the explain output from such a test to be stable\nacross different architectures etc.?\n\nJames\n\n\n",
"msg_date": "Tue, 24 Mar 2020 10:01:43 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "I took a quick look through this patch. While I see nothing to complain\nabout implementation-wise, I'm a bit befuddled as to why we need this\nreporting when there is no comparable data provided for regular index-only\nscans. Over there, you just get \"Heap Fetches: n\", and the existing\ncounts for bitmap scans seem to cover the same territory.\n\nI agree with the original comment that it's pretty strange that\nEXPLAIN doesn't identify an index-only BMS at all; but fixing that\nis a different patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Mar 2020 15:14:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I took a quick look through this patch. While I see nothing to complain\n> about implementation-wise, I'm a bit befuddled as to why we need this\n> reporting when there is no comparable data provided for regular index-only\n> scans. Over there, you just get \"Heap Fetches: n\", and the existing\n> counts for bitmap scans seem to cover the same territory.\n>\n\nIsn't deducing similar information (\"Skipped Heap Fetches: n\") there\nis a bit easier than it is here?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Mar 2020 08:32:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Tue, Mar 24, 2020 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I took a quick look through this patch. While I see nothing to complain\n> > about implementation-wise, I'm a bit befuddled as to why we need this\n> > reporting when there is no comparable data provided for regular index-only\n> > scans. Over there, you just get \"Heap Fetches: n\", and the existing\n> > counts for bitmap scans seem to cover the same territory.\n> >\n>\n> Isn't deducing similar information (\"Skipped Heap Fetches: n\") there\n> is a bit easier than it is here?\n\nWhile I'm not the patch author so can't speak to the original thought\nprocess, I do think it makes sense to show it. I could imagine a world\nin which index only scans were printed in explain as purely an\noptimization to index scans that shows exactly this (how many pages we\nwere able to skip fetching). That approach actually can make things\nmore helpful than the approach current in explain for index only\nscans, since the optimization isn't all or nothing (i.e., it can still\nfetch heap pages), so it's interesting to see exactly how much it\ngained you.\n\nJames\n\n\n",
"msg_date": "Wed, 25 Mar 2020 08:14:41 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Wed, Mar 25, 2020 at 5:44 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Tue, Mar 24, 2020 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 25, 2020 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > I took a quick look through this patch. While I see nothing to complain\n> > > about implementation-wise, I'm a bit befuddled as to why we need this\n> > > reporting when there is no comparable data provided for regular index-only\n> > > scans. Over there, you just get \"Heap Fetches: n\", and the existing\n> > > counts for bitmap scans seem to cover the same territory.\n> > >\n> >\n> > Isn't deducing similar information (\"Skipped Heap Fetches: n\") there\n> > is a bit easier than it is here?\n>\n> While I'm not the patch author so can't speak to the original thought\n> process, I do think it makes sense to show it.\n>\n\nYeah, I also see this information could be useful. It seems Tom Lane\nis not entirely convinced of this. I am not sure if this is the right\ntime to seek more opinions as we are already near the end of CF. So,\nwe should either decide to move this to the next CF if we think of\ngetting the opinion of others or simply reject it and see a better way\nfor EXPLAIN to identify an index-only BMS.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 28 Mar 2020 06:53:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Fri, Mar 27, 2020 at 9:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 25, 2020 at 5:44 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > On Tue, Mar 24, 2020 at 11:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 25, 2020 at 12:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > I took a quick look through this patch. While I see nothing to complain\n> > > > about implementation-wise, I'm a bit befuddled as to why we need this\n> > > > reporting when there is no comparable data provided for regular index-only\n> > > > scans. Over there, you just get \"Heap Fetches: n\", and the existing\n> > > > counts for bitmap scans seem to cover the same territory.\n> > > >\n> > >\n> > > Isn't deducing similar information (\"Skipped Heap Fetches: n\") there\n> > > is a bit easier than it is here?\n> >\n> > While I'm not the patch author so can't speak to the original thought\n> > process, I do think it makes sense to show it.\n> >\n>\n> Yeah, I also see this information could be useful. It seems Tom Lane\n> is not entirely convinced of this. I am not sure if this is the right\n> time to seek more opinions as we are already near the end of CF. So,\n> we should either decide to move this to the next CF if we think of\n> getting the opinion of others or simply reject it and see a better way\n> for EXPLAIN to identify an index-only BMS.\n\nI'm curious if Tom's objection is mostly on the grounds that we should\nbe consistent in what's displayed, or that he thinks the information\nis likely to be useless.\n\nIf consistency is the goal you might e.g., do something that just\nchanges the node type output, but in favor of changing that, it seems\nto me that showing \"how well did the optimization\" is actually more\nvaluable than \"did we do the optimization at all\". Additionally I\nthink showing it as an optimization of an existing node is actually\nlikely less confusing anyway.\n\nOne other thing: my understanding is that this actually matches the\nunderlying code split too. For the index only scan case, we actually\nhave a separate node (it's not just an optimization of the standard\nindex scan). There are discussions about whether that's a good thing,\nbut it's what we have. In contrast, the bitmap scan actually has it as\na pure optimization of the existing bitmap heap scan node.\n\nJames\n\n\n",
"msg_date": "Sat, 28 Mar 2020 10:31:51 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Sat, Mar 28, 2020 at 8:02 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> On Fri, Mar 27, 2020 at 9:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Yeah, I also see this information could be useful. It seems Tom Lane\n> > is not entirely convinced of this. I am not sure if this is the right\n> > time to seek more opinions as we are already near the end of CF. So,\n> > we should either decide to move this to the next CF if we think of\n> > getting the opinion of others or simply reject it and see a better way\n> > for EXPLAIN to identify an index-only BMS.\n>\n> I'm curious if Tom's objection is mostly on the grounds that we should\n> be consistent in what's displayed, or that he thinks the information\n> is likely to be useless.\n>\n\nYeah, it would be good if he clarifies his position.\n\n> If consistency is the goal you might e.g., do something that just\n> changes the node type output, but in favor of changing that, it seems\n> to me that showing \"how well did the optimization\" is actually more\n> valuable than \"did we do the optimization at all\". Additionally I\n> think showing it as an optimization of an existing node is actually\n> likely less confusing anyway.\n>\n> One other thing: my understanding is that this actually matches the\n> underlying code split too. For the index only scan case, we actually\n> have a separate node (it's not just an optimization of the standard\n> index scan). There are discussions about whether that's a good thing,\n> but it's what we have. In contrast, the bitmap scan actually has it as\n> a pure optimization of the existing bitmap heap scan node.\n>\n\nI personally see those as valid points. Does anybody else want to\nweigh in here, so that we can reach to some conclusion and move ahead\nwith this CF entry?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Mar 2020 09:53:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sat, Mar 28, 2020 at 8:02 PM James Coleman <jtc331@gmail.com> wrote:\n>> I'm curious if Tom's objection is mostly on the grounds that we should\n>> be consistent in what's displayed, or that he thinks the information\n>> is likely to be useless.\n\n> Yeah, it would be good if he clarifies his position.\n\nSome of both: it seems like these ought to be consistent, and the\nlack of complaints so far about regular index-only scans suggests\nthat people don't need the info. But perhaps we ought to add\nsimilar info in both places.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Mar 2020 00:29:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
},
{
"msg_contents": "On Mon, Mar 30, 2020 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sat, Mar 28, 2020 at 8:02 PM James Coleman <jtc331@gmail.com> wrote:\n> >> I'm curious if Tom's objection is mostly on the grounds that we should\n> >> be consistent in what's displayed, or that he thinks the information\n> >> is likely to be useless.\n>\n> > Yeah, it would be good if he clarifies his position.\n>\n> Some of both: it seems like these ought to be consistent, and the\n> lack of complaints so far about regular index-only scans suggests\n> that people don't need the info. But perhaps we ought to add\n> similar info in both places.\n>\n\nFair enough. I have marked this CF entry as RWF.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 Mar 2020 08:16:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: improve transparency of bitmap-only heap scans"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at fixing [1] on master, I noticed the following\ncodeblock:\n\nstatic HeapScanDesc\nheap_beginscan_internal(Relation relation, Snapshot snapshot,\n\t\t\t\t\t\tint nkeys, ScanKey key,\n\t\t\t\t\t\tParallelHeapScanDesc parallel_scan,\n\t\t\t\t\t\tbool allow_strat,\n\t\t\t\t\t\tbool allow_sync,\n\t\t\t\t\t\tbool allow_pagemode,\n\t\t\t\t\t\tbool is_bitmapscan,\n\t\t\t\t\t\tbool is_samplescan,\n\t\t\t\t\t\tbool temp_snap)\n...\n\t/*\n\t * For a seqscan in a serializable transaction, acquire a predicate lock\n\t * on the entire relation. This is required not only to lock all the\n\t * matching tuples, but also to conflict with new insertions into the\n\t * table. In an indexscan, we take page locks on the index pages covering\n\t * the range specified in the scan qual, but in a heap scan there is\n\t * nothing more fine-grained to lock. A bitmap scan is a different story,\n\t * there we have already scanned the index and locked the index pages\n\t * covering the predicate. But in that case we still have to lock any\n\t * matching heap tuples.\n\t */\n\tif (!is_bitmapscan)\n\t\tPredicateLockRelation(relation, snapshot);\n\nAs you can see this only tests for is_bitmapscan, *not* for\nis_samplescan. Which means we afaict currently every sample scan\npredicate locks the entire relation.\n\nI think there's two possibilities here:\n\n1) It's just the comment that's inaccurate, and it should really talk\n about both seqscans and sample scans. It should not be necessary to\n lock the whole relation, but I'm not sure the code otherwise takes\n enough care.\n\n2) We should really not predicate lock the entire relation. In which\n case I think there might be missing PredicateLockTuple/Page calls.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/4EA80A20-E9BF-49F1-9F01-5B66CAB21453%40elusive.cx\n\n\n",
"msg_date": "Sat, 18 May 2019 13:31:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "sample scans and predicate locking"
},
{
"msg_contents": "On Sun, May 19, 2019 at 8:31 AM Andres Freund <andres@anarazel.de> wrote:\n> While looking at fixing [1] on master, I noticed the following\n> codeblock:\n>\n> static HeapScanDesc\n> heap_beginscan_internal(Relation relation, Snapshot snapshot,\n> int nkeys, ScanKey key,\n> ParallelHeapScanDesc parallel_scan,\n> bool allow_strat,\n> bool allow_sync,\n> bool allow_pagemode,\n> bool is_bitmapscan,\n> bool is_samplescan,\n> bool temp_snap)\n> ...\n> /*\n> * For a seqscan in a serializable transaction, acquire a predicate lock\n> * on the entire relation. This is required not only to lock all the\n> * matching tuples, but also to conflict with new insertions into the\n> * table. In an indexscan, we take page locks on the index pages covering\n> * the range specified in the scan qual, but in a heap scan there is\n> * nothing more fine-grained to lock. A bitmap scan is a different story,\n> * there we have already scanned the index and locked the index pages\n> * covering the predicate. But in that case we still have to lock any\n> * matching heap tuples.\n> */\n> if (!is_bitmapscan)\n> PredicateLockRelation(relation, snapshot);\n>\n> As you can see this only tests for is_bitmapscan, *not* for\n> is_samplescan. Which means we afaict currently every sample scan\n> predicate locks the entire relation.\n\nRight, I just tested that. That's not wrong though, is it? It's just\noverly pessimistic.\n\n> I think there's two possibilities here:\n>\n> 1) It's just the comment that's inaccurate, and it should really talk\n> about both seqscans and sample scans. It should not be necessary to\n> lock the whole relation, but I'm not sure the code otherwise takes\n> enough care.\n>\n> 2) We should really not predicate lock the entire relation. In which\n> case I think there might be missing PredicateLockTuple/Page calls.\n\nYeah, we could probably predicate-lock pages in\nheapam_scan_sample_next_block() and tuples in\nheapam_scan_sample_next_tuple(), instead of doing this. Seems like a\nreasonable improvement for 13. But... hmm.... There *might* be a\ntheoretical argument about TABLESAMPLE(100) behaving differently if\ndone per page rather than if done at relation level, wrt new pages\nadded to the end later and therefore missed. And then by logical\nextension, TABLESAMPLE of any percentage. I'm not sure.\n\nI made a little list of small SERIALIZABLE projects for v13 and added\nthis, over here:\n\nhttps://wiki.postgresql.org/wiki/SerializableToDo\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Sun, 19 May 2019 13:57:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: sample scans and predicate locking"
},
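A minimal sketch of the per-page alternative Thomas describes above. PredicateLockPage() and the rs_rd/rs_snapshot fields of the common scan descriptor are real v12-era APIs, but the helper itself is hypothetical and untested, not code from any posted patch:

#include "postgres.h"
#include "access/relscan.h"
#include "storage/predicate.h"

/*
 * Hypothetical helper: instead of PredicateLockRelation() at scan start,
 * heapam_scan_sample_next_block() could take an SSI lock on just the
 * block the sampler decided to visit, so serialization conflicts arise
 * only for writes to pages the sample scan actually read.
 */
static void
sample_scan_predicate_lock_page(TableScanDesc scan, BlockNumber blockno)
{
	/* rs_rd and rs_snapshot live in the shared TableScanDescData header */
	PredicateLockPage(scan->rs_rd, blockno, scan->rs_snapshot);
}

Tuple-level locking in heapam_scan_sample_next_tuple() would use PredicateLockTuple() the same way; the open question in the thread is whether dropping the relation-level lock loses the conflict with insertions into pages the scan never visits.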
{
"msg_contents": "Hi,\n\nOn 2019-05-19 13:57:42 +1200, Thomas Munro wrote:\n> On Sun, May 19, 2019 at 8:31 AM Andres Freund <andres@anarazel.de> wrote:\n> > While looking at fixing [1] on master, I noticed the following\n> > codeblock:\n> >\n> > static HeapScanDesc\n> > heap_beginscan_internal(Relation relation, Snapshot snapshot,\n> > int nkeys, ScanKey key,\n> > ParallelHeapScanDesc parallel_scan,\n> > bool allow_strat,\n> > bool allow_sync,\n> > bool allow_pagemode,\n> > bool is_bitmapscan,\n> > bool is_samplescan,\n> > bool temp_snap)\n> > ...\n> > /*\n> > * For a seqscan in a serializable transaction, acquire a predicate lock\n> > * on the entire relation. This is required not only to lock all the\n> > * matching tuples, but also to conflict with new insertions into the\n> > * table. In an indexscan, we take page locks on the index pages covering\n> > * the range specified in the scan qual, but in a heap scan there is\n> > * nothing more fine-grained to lock. A bitmap scan is a different story,\n> > * there we have already scanned the index and locked the index pages\n> > * covering the predicate. But in that case we still have to lock any\n> > * matching heap tuples.\n> > */\n> > if (!is_bitmapscan)\n> > PredicateLockRelation(relation, snapshot);\n> >\n> > As you can see this only tests for is_bitmapscan, *not* for\n> > is_samplescan. Which means we afaict currently every sample scan\n> > predicate locks the entire relation.\n> \n> Right, I just tested that. That's not wrong though, is it? It's just\n> overly pessimistic.\n\nYea, I was mostly commenting on the fact that the comment doesn't\nmention sample scans, so it looks a bit accidental.\n\nI added a comment to master (as part of a fix, where this codepath was\nentered inadvertently)\n\n\n> > I think there's two possibilities here:\n> >\n> > 1) It's just the comment that's inaccurate, and it should really talk\n> > about both seqscans and sample scans. It should not be necessary to\n> > lock the whole relation, but I'm not sure the code otherwise takes\n> > enough care.\n> >\n> > 2) We should really not predicate lock the entire relation. In which\n> > case I think there might be missing PredicateLockTuple/Page calls.\n> \n> Yeah, we could probably predicate-lock pages in\n> heapam_scan_sample_next_block() and tuples in\n> heapam_scan_sample_next_tuple(), instead of doing this. Seems like a\n> reasonable improvement for 13. But... hmm.... There *might* be a\n> theoretical argument about TABLESAMPLE(100) behaving differently if\n> done per page rather than if done at relation level, wrt new pages\n> added to the end later and therefore missed. And then by logical\n> extension, TABLESAMPLE of any percentage. I'm not sure.\n\nI don't think that's actually a problem, tablesample doesn't return\ninvisible rows. And the equivalent issue is true just as well for index\nand bitmap heap scans?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 15:22:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: sample scans and predicate locking"
},
{
"msg_contents": "On Mon, May 20, 2019 at 10:23 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2019-05-19 13:57:42 +1200, Thomas Munro wrote:\n> > Yeah, we could probably predicate-lock pages in\n> > heapam_scan_sample_next_block() and tuples in\n> > heapam_scan_sample_next_tuple(), instead of doing this. Seems like a\n> > reasonable improvement for 13. But... hmm.... There *might* be a\n> > theoretical argument about TABLESAMPLE(100) behaving differently if\n> > done per page rather than if done at relation level, wrt new pages\n> > added to the end later and therefore missed. And then by logical\n> > extension, TABLESAMPLE of any percentage. I'm not sure.\n>\n> I don't think that's actually a problem, tablesample doesn't return\n> invisible rows. And the equivalent issue is true just as well for index\n> and bitmap heap scans?\n\nIt affects whether this transaction could be considered to have run\nafter the other transaction though. The equivalent issue is handled\nfor index scans, because we arrange to predicate lock pages that\nanyone else will have to touch if they insert index tuples that would\nmatch your WHERE clause (ie your predicate), as the comment says. (I\nwondered if there'd be a finer grained way to do it by\npredicate-locking the page-after-last to detect extension, but I\nsuspect you might really need to lock all-the-pages-after-last... I\ndon't know.)\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2019 11:24:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: sample scans and predicate locking"
}
] |
[
{
"msg_contents": "Hi,\n\nCan I obtain a document with the organisational structure of this mailing\nlist. I would like to foresee what happens with my email.\n\nConcretely I would like to know if there is a filtering and/or relaying\nperson or algorithm involved\n\nThanks\n\nSascha\n\nHi,Can I obtain a document with the organisational structure of this mailing list. I would like to foresee what happens with my email.Concretely I would like to know if there is a filtering and/or relaying person or algorithm involvedThanksSascha",
"msg_date": "Sun, 19 May 2019 04:46:34 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Organisational structure"
},
{
"msg_contents": "Greetings,\n\n* Sascha Kuhl (yogidabanli@gmail.com) wrote:\n> Can I obtain a document with the organisational structure of this mailing\n> list. I would like to foresee what happens with my email.\n\nThere is no formal document and we don't have any control over what\nhappens down-stream of us (there are multiple public archives of these\nlists). The official archives for this list are here:\n\nhttps://www.postgresql.org/list/pgsql-hackers/\n\n> Concretely I would like to know if there is a filtering and/or relaying\n> person or algorithm involved\n\nWe do have spam filtering in place for all inbound email, and there is\nmoderation where an individual may be involved.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 20 May 2019 09:37:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Organisational structure"
}
] |
[
{
"msg_contents": "Hi all,\n\nMany of the grammars could be clarified. For instance there's a number of useless associativity and/or precedence declarations. Maybe the point is to leave some form of a documentation, but actually, since it's not used at all by the tool, that documentation is not checked.\n\nIn the following two proposed patches, I remove directives that are completely useless. In other places, some associativity is declared (e.g., with %left) although the associativity is useless, only the precedence matters. I have not changed this, because I don't know what is the version of Bison that is required. Given that it's a maintainer-side tool, I would suggest targeting recent versions of Bison, but opinions might differ here.\n\nCheers!\n\ncommit 75e597aa239d8ebc332d3a29630ecad0133d3d6f\nAuthor: Akim Demaille <akim.demaille@gmail.com>\nDate: Sun May 19 14:24:33 2019 +0200\n\n json_path: remove useless precedence directives\n \n These directives are useless: the generated parser is exactly the\n same (except for line number changes).\n\ndiff --git a/src/backend/utils/adt/jsonpath_gram.y b/src/backend/utils/adt/jsonpath_gram.y\nindex 22c2089f78..82b6529414 100644\n--- a/src/backend/utils/adt/jsonpath_gram.y\n+++ b/src/backend/utils/adt/jsonpath_gram.y\n@@ -115,11 +115,9 @@ static JsonPathParseItem *makeItemLikeRegex(JsonPathParseItem *expr,\n \n %left\tOR_P\n %left\tAND_P\n-%right\tNOT_P\n %left\t'+' '-'\n %left\t'*' '/' '%'\n %left\tUMINUS\n-%nonassoc '(' ')'\n \n /* Grammar follows */\n %%\n\n\n\nThis second patch could be made simpler: just remove the %token declarations I provided, but then the generated files are different (but, of course, both parsers are equivalent).\n\ncommit 5322f7303a1a9dfa7cd959d68caeced847ae0466\nAuthor: Akim Demaille <akim.demaille@gmail.com>\nDate: Sun May 19 14:32:15 2019 +0200\n\n parser: remove useless associativity/precedence\n \n Use %token instead to guarantee that the token numbers are the same\n before and after this patch. As a consequence, the generated files\n are equal.\n\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex 3dc0e8a4fb..3d4c552cfa 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -766,10 +766,10 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);\n %left\t\tAT\t\t\t\t/* sets precedence for AT TIME ZONE */\n %left\t\tCOLLATE\n %right\t\tUMINUS\n-%left\t\t'[' ']'\n+%token\t\t'[' ']'\n %left\t\t'(' ')'\n %left\t\tTYPECAST\n-%left\t\t'.'\n+%token\t\t'.'\n /*\n * These might seem to be low-precedence, but actually they are not part\n * of the arithmetic hierarchy at all in their use as JOIN operators.\n\n\n\n\n\n",
"msg_date": "Sun, 19 May 2019 14:47:32 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Akim Demaille <akim@lrde.epita.fr> writes:\n> In the following two proposed patches, I remove directives that are\n> completely useless.\n\nI'm far from convinced that the proposed changes in gram.y are a good\nidea. Both [] and . (field selection) *are* left-associative in a\nmeaningful sense, so even if this change happens not to affect what\nBison does, I think the declarations are good documentation. Would\nyou have us also change the user documentation at\nhttps://www.postgresql.org/docs/devel/sql-syntax-lexical.html#SQL-PRECEDENCE\n?\n\nI haven't looked at the jsonpath grammar closely enough to have an\nopinion about that one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 May 2019 14:27:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Hi Tom,\n\n> Le 19 mai 2019 à 20:27, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> Akim Demaille <akim@lrde.epita.fr> writes:\n>> In the following two proposed patches, I remove directives that are\n>> completely useless.\n> \n> I'm far from convinced that the proposed changes in gram.y are a good\n> idea. Both [] and . (field selection) *are* left-associative in a\n> meaningful sense, so even if this change happens not to affect what\n> Bison does, I think the declarations are good documentation.\n\nI don't dispute the overall behavior of the grammar as a whole, I'm only referring to these directives. In my experience, leaving useless associativity and precedence directives can be misleading (since these directives have no impact, you could put them anywhere: their contribution is not checked in any way) or even dangerous (some day, some change introduces unexpected shift-reduce conflicts that someone should have studied, but because of \"stray\" directives, they are \"fixed\" in some uncontrolled way).\n\n> Would\n> you have us also change the user documentation at\n> https://www.postgresql.org/docs/devel/sql-syntax-lexical.html#SQL-PRECEDENCE\n> ?\n\nNo, of course not! That you define the arithmetics with an unambiguous grammar (expr/term/fact and no associativity/precedence directive) or with an ambiguous grammar (expr and associativity/precedence directives) still results in the same behavior: the usual behavior of these operators. And the documentation should document that, of course.\n\n\nIt is for the same reasons that I would recommend not using associativity directives (%left, %right, %nonassoc) where associativity plays no role: %precedence is made for this. But it was introduced in Bison 2.7.1 (2013-04-15), and I don't know if requiring it is acceptable to PostgreSQL.\n\nCheers!\n\n",
"msg_date": "Mon, 20 May 2019 06:45:56 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
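For readers less familiar with Bison, a sketch of the two styles contrasted above; both toy grammars accept the same arithmetic language with the same associativity and precedence (the token name NUM is illustrative):

/* Style 1: ambiguous rules, disambiguated by directives. */
%token NUM
%left '+' '-'
%left '*' '/'
%%
expr: expr '+' expr
    | expr '-' expr
    | expr '*' expr
    | expr '/' expr
    | NUM
    ;

/* Style 2, as a separate grammar file: an unambiguous expr/term/fact
 * layering that needs no associativity or precedence directives. */
%token NUM
%%
expr: expr '+' term | expr '-' term | term ;
term: term '*' fact | term '/' fact | fact ;
fact: NUM ;

The second form encodes precedence structurally, which is what makes it verbose but trustworthy; in the first, the directives silently resolve any conflict that happens to involve those tokens.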
{
"msg_contents": "Akim Demaille <akim@lrde.epita.fr> writes:\n> It is for the same reasons that I would recommend not using associativity directives (%left, %right, %nonassoc) where associativity plays no role: %precedence is made for this. But it was introduced in Bison 2.7.1 (2013-04-15), and I don't know if requiring it is acceptable to PostgreSQL.\n\n2013? Certainly not. We have a lot of buildfarm critters running\nolder platforms than that. I believe our (documented and tested)\nminimum version of Bison is still 1.875. While we'd be willing\nto move that goalpost if there were clear benefits from doing so,\nI'm not even convinced that %precedence as you describe it here\nis any improvement at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 09:54:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Hi Tom!\n\n> Le 20 mai 2019 à 15:54, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> Akim Demaille <akim@lrde.epita.fr> writes:\n>> It is for the same reasons that I would recommend not using associativity directives (%left, %right, %nonassoc) where associativity plays no role: %precedence is made for this. But it was introduced in Bison 2.7.1 (2013-04-15), and I don't know if requiring it is acceptable to PostgreSQL.\n> \n> 2013? Certainly not. We have a lot of buildfarm critters running\n> older platforms than that.\n\nThis I fully understand. However, Bison is a source generator,\nand it's quite customary to use modern tools on the maintainer\nside, and then deploy the result them on possibly much older\narchitectures.\n\nUsually users of Bison build tarballs with the generated parsers\nin them, and ship/test from that.\n\n> I believe our (documented and tested)\n> minimum version of Bison is still 1.875. While we'd be willing\n> to move that goalpost if there were clear benefits from doing so,\n> I'm not even convinced that %precedence as you describe it here\n> is any improvement at all.\n\nOk. I find this really surprising: you are leaving dormant directives\nthat may fire some day without anyone knowing.\n\nYou could comment out the useless associativity/precedence directives,\nthat would just as well document them, without this risk.\n\nBut, Ok, that's only my opinion.\n\n\nSo Bison, and your use of it today, is exactly what you need?\nThere's no limitation of that tool that you'd like to see\naddress that would make it a better tool for PostgreSQL?\n\n",
"msg_date": "Tue, 21 May 2019 17:49:12 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Akim Demaille <akim@lrde.epita.fr> writes:\n>> Le 20 mai 2019 à 15:54, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>> 2013? Certainly not. We have a lot of buildfarm critters running\n>> older platforms than that.\n\n> This I fully understand. However, Bison is a source generator,\n> and it's quite customary to use modern tools on the maintainer\n> side, and then deploy the result them on possibly much older\n> architectures.\n> Usually users of Bison build tarballs with the generated parsers\n> in them, and ship/test from that.\n\nAs do we, but at the same time we don't want to make our tool\nrequirements too onerous. I think that really the practical limit\nright now is Bison 2.3 --- Apple is still shipping that as their system\nversion, so requiring something newer than 2.3 would put extra burden\non people doing PG development on Macs (of which there are a lot).\nThe fact that we still test 1.875 is mostly just an \"if it ain't broke\ndon't break it\" thing, ie don't move the goalposts without a reason.\n\n> So Bison, and your use of it today, is exactly what you need?\n> There's no limitation of that tool that you'd like to see\n> address that would make it a better tool for PostgreSQL?\n\nWell, there are a couple of pain points, but they're not going to be\naddressed by marginal hacking on declarations ;-). The things that\nwe find really painful, IMV, are:\n\n* Speed of the generated parser could be better. I suspect this has\na lot to do with the fact that our grammar is huge, and so are the\ntables, and that causes lots of cache misses. Maybe this could be\naddressed by trying to make the tables smaller and/or laid out in\na way with better cache locality?\n\n* Lack of run-time extensibility of the parser. There are many PG\nextensions that wish they could add things into the grammar, and can't.\nThis is pretty pie-in-the-sky, I know. One of the main reasons we stick\nto Bison is the compile-time grammar sanity checks it provides, and\nit's not apparent how to have that and extensibility too. But it's\nstill a pain point.\n\n* LALR(1) parsing can only barely cope with SQL, and the standards\ncommittee keeps making it harder. We've got some hacks that fake\nan additional token of lookahead in some places, but that's just an\nugly (and performance-eating) hack. Maybe Bison's GLR mode would\nalready solve that, but no one here has really looked into whether\nit could improve matters or whether it'd come at a performance cost.\nThe Bison manual's description of GLR doesn't give me a warm feeling\neither about the performance impact or whether we'd still get\ncompile-time warnings about bogus grammars.\n\nOther PG hackers might have a different laundry list, but that's mine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 15:06:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Hi Tom,\n\n> Le 21 mai 2019 à 21:06, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> Akim Demaille <akim@lrde.epita.fr> writes:\n>>> Le 20 mai 2019 à 15:54, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>>> 2013? Certainly not. We have a lot of buildfarm critters running\n>>> older platforms than that.\n> \n>> This I fully understand. However, Bison is a source generator,\n>> and it's quite customary to use modern tools on the maintainer\n>> side, and then deploy the result them on possibly much older\n>> architectures.\n>> Usually users of Bison build tarballs with the generated parsers\n>> in them, and ship/test from that.\n> \n> As do we, but at the same time we don't want to make our tool\n> requirements too onerous. I think that really the practical limit\n> right now is Bison 2.3 --- Apple is still shipping that as their system\n> version,\n\nAnd do not expect Apple to update it at all. Apple refuses the\nGPLv3, and stopped updating Bison to the last GPLv2 release,\nas it did for every GPL'd program.\n\n> so requiring something newer than 2.3 would put extra burden\n> on people doing PG development on Macs (of which there are a lot).\n\nHonestly, I seriously doubt that you have contributors that don't\nhave MacPorts or Brew installed, and both are pretty up to date on\nBison.\n\n>> So Bison, and your use of it today, is exactly what you need?\n>> There's no limitation of that tool that you'd like to see\n>> address that would make it a better tool for PostgreSQL?\n> \n> Well, there are a couple of pain points, but they're not going to be\n> addressed by marginal hacking on declarations ;-). The things that\n> we find really painful, IMV, are:\n> \n> * Speed of the generated parser could be better.\n\nExpect news this year about that. I precisely came to look at\nPostgreSQL for this. Is there an easy way to bench pg and the various\ncosts? To be explicit: is there a way to see how long the parsing\nphase takes? And some mighty inputs to bench against?\n\n> I suspect this has\n> a lot to do with the fact that our grammar is huge, and so are the\n> tables, and that causes lots of cache misses. Maybe this could be\n> addressed by trying to make the tables smaller and/or laid out in\n> a way with better cache locality?\n\nThe improvement I have in mind is about LR techniques, not about\nthis. But you are right that it might be interesting.\n\nIt's unlikely that the table can be made smaller though. Bison\nwas designed when space was really scarce, and a lot of efforts\nwere invested to make the tables as small as possible. The\ncurrent trend is actually to optionally consume more space in\nexchange for better services (such as more accurate error messages).\n\n\n> * Lack of run-time extensibility of the parser. There are many PG\n> extensions that wish they could add things into the grammar, and can't.\n\nMaking the grammars extensible definitely makes sense, and it's\nin the wishlist. But making this doable at runtime is a much bigger\nproblem...\n\nHowever, maybe this can be achieved by calling the plugin parser\nfrom the outer parser. Provided, of course, that the grammar of\nthe plugin is really in a \"separate world\"; if it also wants to\nget bits of the host grammar, it's certainly not so easy.\n\nAre there documented examples of this? 
What would that look like?\n\n\n> * LALR(1) parsing can only barely cope with SQL, and the standards\n> committee keeps making it harder.\n\nBut Bison does more: it also provides support for LR(1) and IELR(1),\nwhich accept more (deterministic) grammars, and are not subject\nto \"mysterious s/r conflicts\" as in LALR. But maybe you refer to\neven beyond LR(1):\n\n> We've got some hacks that fake\n> an additional token of lookahead in some places, but that's just an\n> ugly (and performance-eating) hack.\n\nMore than k=1 is unlikely to happen. Given that we have GLR, which\nprovides us with k=∞ :)\n\n> Maybe Bison's GLR mode would already solve that,\n\nNo doubt about that.\n\nBison's grammar is not LR(1) either, because the rules are not mandated\nto end with ';', so when reading a grammar, in a sequence such as\n\"<ID> <:> <ID>\" the parser cannot know whether the second <ID> is the\nRHS of the rule introduced by the first <ID>, or the beginning of another\nrule if the sequence is actually \"<ID> <:> <ID> <:>\". Because of that,\nBison is also playing dirty tricks to turn this LR(2) into LR(1).\n\nBut as a consequence the grammar is much harder to evolve, the locations\nare less accurate (because two tokens are merged together into a mega\ntoken), etc.\n\nSo it is considered to turn Bison's own parser to GLR.\n\n> but no one here has really looked into whether\n> it could improve matters or whether it'd come at a performance cost.\n\nThat should be very easy to check: just adding %glr to the grammar\nshould not change the API, should not change the visible behavior,\nbut will give you a hint of the intrinsic cost of the GLR backend.\n\n> The Bison manual's description of GLR doesn't give me a warm feeling\n> either about the performance impact\n\nThe GLR backend is efficient... for a GLR backend. I have not\nbenched it against the deterministic backend, but now that you\nmention it, it's an obvious need...\n\n> or whether we'd still get\n> compile-time warnings about bogus grammars.\n\nWell, you can't fight nature here. Ambiguity of a grammar is\nundecidable.\n\nThat being said, there's a modified Bison which implements heuristics\nto detect ambiguities in grammar [1]. Would that make you feel more\ncomfortable?\n\n[1] http://www.lsv.fr/~schmitz/pub/expamb.pdf\n\n",
"msg_date": "Wed, 22 May 2019 21:20:31 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
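The switch alluded to above is spelled %glr-parser in the grammar file (the "%glr" in the message is shorthand). A contrived sketch using standard Bison directives; the decl/expr ambiguity below is invented purely to show %dprec at work:

%glr-parser          /* use the GLR skeleton instead of deterministic LR */
%expect-rr 1         /* one known, intentional reduce/reduce conflict */
%token ID
%%
stmt: decl %dprec 2  /* "ID ( ID ) ;" parses both ways; prefer the decl */
    | expr %dprec 1
    ;
decl: ID '(' ID ')' ';' ;
expr: ID '(' ID ')' ';' ;

On a conflict-free grammar, %glr-parser alone changes nothing visible, which is why trying it on gram.y is a cheap way to measure the intrinsic backend cost; %dprec and %merge only matter for sentences that genuinely parse more than one way.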
{
"msg_contents": "Akim Demaille <akim@lrde.epita.fr> writes:\n> Honestly, I seriously doubt that you have contributors that don't\n> have MacPorts or Brew installed, and both are pretty up to date on\n> Bison.\n\nHm, well, I'm a counterexample ;-). Right now you can develop PG\non a Mac just fine without any additional stuff, excepting maybe\nOpenSSL if you want that. If we have a strong reason to require\na newer Bison, I'd be willing to do so, but it needs to be a\nstrong reason.\n\n>> * Speed of the generated parser could be better.\n\n> Expect news this year about that. I precisely came to look at\n> PostgreSQL for this.\n\nThat's very cool news.\n\n> Is there an easy way to bench pg and the various\n> costs? To be explicit: is there a way to see how long the parsing\n> phase takes? And some mighty inputs to bench against?\n\nThe easiest method is to fire up some client code that repeatedly\ndoes whatever you want to test, and then look at perf or oprofile\nor local equivalent to see where the time is going in the backend\nprocess.\n\nFor the particular case of stressing the parser, probably the\nbest thing to look at is test cases that do a lot of low-overhead\nDDL, such as creating views. You could do worse than just repeatedly\nsourcing our standard view files, like\n\tsrc/backend/catalog/system_views.sql\n\tsrc/backend/catalog/information_schema.sql\n(In either case, I'd suggest adapting the file to create all\nits objects in some transient schema that you can just drop.\nRepointing information_schema.sql to some other schema is\ntrivial, just change a couple of commands at the top; and\nyou could tweak system_views.sql similarly. Also consider\nwrapping the whole thing in BEGIN; ... ROLLBACK; instead of\nspending time on an explicit DROP.)\n\nSomebody else might know of a better test case but I'd try\nthat first.\n\nThere would still be a fair amount of I/O and catalog lookup\noverhead in a test run that way, but it would be an honest\napproximation of useful real-world cases. If you're willing to\nput some blinders on and just micro-optimize the flex/bison\ncode, you could set up a custom function that just calls that\nstuff. I actually did that not too long ago; C code attached\nfor amusement's sake.\n\n>> * Lack of run-time extensibility of the parser. There are many PG\n>> extensions that wish they could add things into the grammar, and can't.\n\n> Are there documented examples of this? What would that look like?\n\nI'm just vaguely recalling occasional how-could-I-do-this complaints\non the pgsql-hackers mailing list. Perhaps somebody else could say\nsomething more concrete.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 22 May 2019 17:25:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
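Tom's attachment is not preserved in this archive. Below is a hypothetical reconstruction of that kind of micro-benchmark function, against the one-argument raw_parser() of the v12-era tree; the function name parse_only is invented:

#include "postgres.h"

#include "fmgr.h"
#include "parser/parser.h"
#include "utils/builtins.h"
#include "utils/memutils.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(parse_only);

/*
 * parse_only(query text, iterations int): run only the raw grammar
 * (flex + bison) on the given string, with no parse analysis, planning,
 * or execution, so a profiler sees mostly scanner/parser time.
 */
Datum
parse_only(PG_FUNCTION_ARGS)
{
	char	   *query = text_to_cstring(PG_GETARG_TEXT_PP(0));
	int32		iterations = PG_GETARG_INT32(1);
	MemoryContext parsecxt;
	int32		i;

	parsecxt = AllocSetContextCreate(CurrentMemoryContext,
									 "parse_only scratch",
									 ALLOCSET_DEFAULT_SIZES);
	for (i = 0; i < iterations; i++)
	{
		MemoryContext oldcxt = MemoryContextSwitchTo(parsecxt);

		(void) raw_parser(query);		/* parse tree is discarded */
		MemoryContextSwitchTo(oldcxt);
		MemoryContextReset(parsecxt);	/* reclaim the tree cheaply */
	}
	MemoryContextDelete(parsecxt);
	PG_RETURN_VOID();
}

Declared with CREATE FUNCTION parse_only(text, int) RETURNS void AS 'MODULE_PATHNAME' LANGUAGE C STRICT, a call such as SELECT parse_only('SELECT 1 + 2', 1000000) under perf shows almost nothing but flex/bison frames.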
{
"msg_contents": "> On 22 May 2019, at 23:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Akim Demaille <akim@lrde.epita.fr> writes:\n>> Honestly, I seriously doubt that you have contributors that don't\n>> have MacPorts or Brew installed, and both are pretty up to date on\n>> Bison.\n> \n> Hm, well, I'm a counterexample ;-)\n\nAnd one more. While I do have brew installed, I prefer to use it as little as\npossible.\n\ncheers ./daniel\n\n",
"msg_date": "Wed, 22 May 2019 23:44:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "\nOn 5/21/19 11:49 AM, Akim Demaille wrote:\n> Hi Tom!\n>\n>> Le 20 mai 2019 à 15:54, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n>>\n>> Akim Demaille <akim@lrde.epita.fr> writes:\n>>> It is for the same reasons that I would recommend not using associativity directives (%left, %right, %nonassoc) where associativity plays no role: %precedence is made for this. But it was introduced in Bison 2.7.1 (2013-04-15), and I don't know if requiring it is acceptable to PostgreSQL.\n>> 2013? Certainly not. We have a lot of buildfarm critters running\n>> older platforms than that.\n> This I fully understand. However, Bison is a source generator,\n> and it's quite customary to use modern tools on the maintainer\n> side, and then deploy the result them on possibly much older\n> architectures.\n>\n> Usually users of Bison build tarballs with the generated parsers\n> in them, and ship/test from that.\n>\n\n\nThe buildfarm client does not build from tarballs, it builds from git,\nmeaning it has to run bison. Thus Tom's objection is quite valid, and\nyour dismissal of it is not.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Wed, 22 May 2019 18:11:26 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 5/21/19 11:49 AM, Akim Demaille wrote:\n>> Usually users of Bison build tarballs with the generated parsers\n>> in them, and ship/test from that.\n\n> The buildfarm client does not build from tarballs, it builds from git,\n> meaning it has to run bison. Thus Tom's objection is quite valid, and\n> your dismissal of it is not.\n\nRight, but that's a much narrower set of people who need to update\nthan \"all PG users\" or even \"all PG developers\".\n\nI checked the buildfarm's configure results not too long ago, and noted\nthat the oldest bison versions are\n\n gaur | configure: using bison (GNU Bison) 1.875\n prairiedog | configure: using bison (GNU Bison) 1.875\n dromedary | configure: using bison (GNU Bison) 2.3\n locust | configure: using bison (GNU Bison) 2.3\n longfin | configure: using bison (GNU Bison) 2.3\n nudibranch | configure: using bison (GNU Bison) 2.3\n anole | configure: using bison (GNU Bison) 2.4.1\n fulmar | configure: using bison (GNU Bison) 2.4.1\n gharial | configure: using bison (GNU Bison) 2.4.1\n grouse | configure: using bison (GNU Bison) 2.4.1\n koreaceratops | configure: using bison (GNU Bison) 2.4.1\n leech | configure: using bison (GNU Bison) 2.4.1\n magpie | configure: using bison (GNU Bison) 2.4.1\n treepie | configure: using bison (GNU Bison) 2.4.1\n coypu | configure: using bison (GNU Bison) 2.4.3\n friarbird | configure: using bison (GNU Bison) 2.4.3\n nightjar | configure: using bison (GNU Bison) 2.4.3\n (then 2.5 and later)\n\n(This doesn't cover the Windows members, unfortunately.)\n\ngaur and prairiedog are my own pet dinosaurs, and updating them would\nnot be very painful. (Neither of them are using the original vendor\nBison to begin with ... as I said, they're dinosaurs.) Meanwhile,\nthree of the 2.3 members are Mac systems; nudibranch is SUSE 11.\nRequiring anything newer than 2.4.1 would start to cause problems\nfor a fair number of people, I think.\n\nStill, the bottom line here is that we could require a new(ish) Bison\nif we could point to clear benefits that outweigh the pain. Right\nnow there's not much argument for it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 18:29:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "On Tue, May 21, 2019 at 3:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Other PG hackers might have a different laundry list, but that's mine.\n\nGood list.\n\nAnother thing is that it would be nice to have a better way of\nresolving conflicts than attaching precedence declarations. Some\nproblems can't be solved that way at all, and others can only be\nsolved that way at the risk of unforeseen side effects. One possible\nidea is a way to mark a rule %weak, meaning that it should only be\nused if no non-%weak rule could apply. I'm not sure if that would\nreally be the best way, but it's one idea. A more general version\nwould be some kind of ability to give rules different strengths; in\nthe case of a grammar conflict, the \"stronger\" rule would win.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 May 2019 23:34:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Another thing is that it would be nice to have a better way of\n> resolving conflicts than attaching precedence declarations. Some\n> problems can't be solved that way at all, and others can only be\n> solved that way at the risk of unforeseen side effects.\n\nYeah, we've definitely found that resolving shift/reduce conflicts via\nprecedence declarations has more potential for surprising side-effects\nthan one would think. It feels to me that there's something basically\nwrong with that concept, or at least wrong with the way we've used it.\nSome relevant commits: 670a6c7a2, 12b716457, 6fe27ca2f, and the\n\"x NOT-something y\" hacks in commit c6b3c939b (that one has a whole bunch\nof other cruft in it, so it might be hard to spot what I'm talking about).\n\n> One possible\n> idea is a way to mark a rule %weak, meaning that it should only be\n> used if no non-%weak rule could apply. I'm not sure if that would\n> really be the best way, but it's one idea. A more general version\n> would be some kind of ability to give rules different strengths; in\n> the case of a grammar conflict, the \"stronger\" rule would win.\n\nHmmm ... not apparent to me that that's really going to help.\nMaybe it will, but it sounds like more likely it's just another\nmechanism with not-as-obvious-as-you-thought side effects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 00:00:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "On Thu, May 23, 2019 at 12:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > One possible\n> > idea is a way to mark a rule %weak, meaning that it should only be\n> > used if no non-%weak rule could apply. I'm not sure if that would\n> > really be the best way, but it's one idea. A more general version\n> > would be some kind of ability to give rules different strengths; in\n> > the case of a grammar conflict, the \"stronger\" rule would win.\n>\n> Hmmm ... not apparent to me that that's really going to help.\n> Maybe it will, but it sounds like more likely it's just another\n> mechanism with not-as-obvious-as-you-thought side effects.\n\nThat's possible; I'm open to other ideas. If you wanted to be really\nexplicit about it, you could have a way to stick an optional name on a\ngrammar rule, and a way to say that the current rule should lose to a\nlist of named other rules.\n\nIt seems pretty clear, though, that our use of %prec proves that we\ncan't just write a grammar that is intrinsically conflict-free; we\nsometimes need to have conflicts and then tell the parser generator\nwhich option to prefer. And I think it's also pretty clear that %prec\nis, for anything other than operator precedence, a horrible way of\ndoing that. A method that was merely mediocre could still be a big\nimprovement over what we have available today.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 May 2019 09:10:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 17:25:31 -0400, Tom Lane wrote:\n> The easiest method is to fire up some client code that repeatedly\n> does whatever you want to test, and then look at perf or oprofile\n> or local equivalent to see where the time is going in the backend\n> process.\n> \n> For the particular case of stressing the parser, probably the\n> best thing to look at is test cases that do a lot of low-overhead\n> DDL, such as creating views. You could do worse than just repeatedly\n> sourcing our standard view files, like\n> \tsrc/backend/catalog/system_views.sql\n> \tsrc/backend/catalog/information_schema.sql\n> (In either case, I'd suggest adapting the file to create all\n> its objects in some transient schema that you can just drop.\n> Repointing information_schema.sql to some other schema is\n> trivial, just change a couple of commands at the top; and\n> you could tweak system_views.sql similarly. Also consider\n> wrapping the whole thing in BEGIN; ... ROLLBACK; instead of\n> spending time on an explicit DROP.)\n> \n> Somebody else might know of a better test case but I'd try\n> that first.\n\n> There would still be a fair amount of I/O and catalog lookup\n> overhead in a test run that way, but it would be an honest\n> approximation of useful real-world cases. If you're willing to\n> put some blinders on and just micro-optimize the flex/bison\n> code, you could set up a custom function that just calls that\n> stuff. I actually did that not too long ago; C code attached\n> for amusement's sake.\n\nFWIW, this is why I'd suggested the hack of EXPLAIN (PARSE_ANALYZE OFF,\nOPTIMIZE OFF) a few years back. Right now it's hard to measure the\nparser in isolation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 15:16:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "hi Tom!\n\n> Le 23 mai 2019 à 00:29, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 5/21/19 11:49 AM, Akim Demaille wrote:\n>>> Usually users of Bison build tarballs with the generated parsers\n>>> in them, and ship/test from that.\n> \n>> The buildfarm client does not build from tarballs, it builds from git,\n>> meaning it has to run bison. Thus Tom's objection is quite valid, and\n>> your dismissal of it is not.\n\nI had not realized I had been rude to anybody. I apologize to\nTom, I did not mean to dismiss anything.\n\n> Right, but that's a much narrower set of people who need to update\n> than \"all PG users\" or even \"all PG developers\".\n> \n> I checked the buildfarm's configure results not too long ago, and noted\n> that the oldest bison versions are\n> \n> gaur | configure: using bison (GNU Bison) 1.875\n> prairiedog | configure: using bison (GNU Bison) 1.875\n> dromedary | configure: using bison (GNU Bison) 2.3\n> locust | configure: using bison (GNU Bison) 2.3\n> longfin | configure: using bison (GNU Bison) 2.3\n> nudibranch | configure: using bison (GNU Bison) 2.3\n> anole | configure: using bison (GNU Bison) 2.4.1\n> fulmar | configure: using bison (GNU Bison) 2.4.1\n> gharial | configure: using bison (GNU Bison) 2.4.1\n> grouse | configure: using bison (GNU Bison) 2.4.1\n> koreaceratops | configure: using bison (GNU Bison) 2.4.1\n> leech | configure: using bison (GNU Bison) 2.4.1\n> magpie | configure: using bison (GNU Bison) 2.4.1\n> treepie | configure: using bison (GNU Bison) 2.4.1\n> coypu | configure: using bison (GNU Bison) 2.4.3\n> friarbird | configure: using bison (GNU Bison) 2.4.3\n> nightjar | configure: using bison (GNU Bison) 2.4.3\n> (then 2.5 and later)\n> \n> (This doesn't cover the Windows members, unfortunately.)\n> \n> gaur and prairiedog are my own pet dinosaurs, and updating them would\n> not be very painful.\n\nI don't want to be painful, but the fact that the buildfarm\nstarts from git is also a design choice. In order to be\nfully reproducible, I know projects that have a first step\nthat builds the end-user type of tarball, and run it on all\nsorts of architectures. Of course, it's more initial set up,\nbut you gain your independence on many tools which in turn\nmakes it easier to check on more architectures.\n\n> Still, the bottom line here is that we could require a new(ish) Bison\n> if we could point to clear benefits that outweigh the pain. Right\n> now there's not much argument for it.\n\nI get that, thanks.\n\nCheers!\n\n",
"msg_date": "Mon, 27 May 2019 18:49:20 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Hey Tom,\n\n> Le 22 mai 2019 à 23:25, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> Akim Demaille <akim@lrde.epita.fr> writes:\n>> Honestly, I seriously doubt that you have contributors that don't\n>> have MacPorts or Brew installed, and both are pretty up to date on\n>> Bison.\n> \n> Hm, well, I'm a counterexample ;-).\n\nWow :) I have even more respect now :) I'm soooo happy to use\nthe bleeding edge compilers to get all the possible warnings and\nsanitizers...\n\n\n\nThanks for the tips on how to bench. I'll see what I can do (I'm\nnot a (direct) user of pg myself, nor am I used to write SQL by\nhand).\n\nCheers!\n\n",
"msg_date": "Mon, 27 May 2019 18:53:44 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "\n> Le 22 mai 2019 à 23:44, Daniel Gustafsson <daniel@yesql.se> a écrit :\n> \n>> On 22 May 2019, at 23:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Akim Demaille <akim@lrde.epita.fr> writes:\n>>> Honestly, I seriously doubt that you have contributors that don't\n>>> have MacPorts or Brew installed, and both are pretty up to date on\n>>> Bison.\n>> \n>> Hm, well, I'm a counterexample ;-)\n> \n> And one more. While I do have brew installed, I prefer to use it as little as\n> possible.\u0003\n\nErr... You're not exactly what I call a counterexample.\n\n\n\n",
"msg_date": "Mon, 27 May 2019 18:55:08 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "Tom,\n\n> Le 23 mai 2019 à 06:00, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> Another thing is that it would be nice to have a better way of\n>> resolving conflicts than attaching precedence declarations. Some\n>> problems can't be solved that way at all, and others can only be\n>> solved that way at the risk of unforeseen side effects.\n> \n> Yeah, we've definitely found that resolving shift/reduce conflicts via\n> precedence declarations has more potential for surprising side-effects\n> than one would think.\n\nThat's why in recent versions of Bison we also provide a means\nto pure %expect directives on the rules themselves, to be more\nprecise about what happens.\n\n> It feels to me that there's something basically\n> wrong with that concept, or at least wrong with the way we've used it.\n\nI'm trying to find means to scope the prec/assoc directives, because\nthey are too powerful, and that's dangerous. This is also why I try\nto remove the useless ones.\n\nSome people don't trust assoc/prec directives at all and use only\nunambiguous grammars. But this can be very verbose...\n\nI agree something is not so cool about these directives. GLR parsers\nhave a clear concept of in-between-rules precedence (%dprec). Something\nsimilar for LR (hence fully static) would be nice, but it remains to\nbe invented.\n\n",
"msg_date": "Mon, 27 May 2019 19:01:27 +0200",
"msg_from": "Akim Demaille <akim@lrde.epita.fr>",
"msg_from_op": true,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": ">>>>> \"Akim\" == Akim Demaille <akim@lrde.epita.fr> writes:\n\n >> Yeah, we've definitely found that resolving shift/reduce conflicts\n >> via precedence declarations has more potential for surprising\n >> side-effects than one would think.\n\n Akim> That's why in recent versions of Bison we also provide a means to\n Akim> pure %expect directives on the rules themselves, to be more\n Akim> precise about what happens.\n\nIt's possibly worth looking at the details of each case where we've run\ninto problems to see whether there is a better solution.\n\nThe main cases I know of are:\n\n1. RANGE UNBOUNDED PRECEDING - this one is actually a defect in the\nstandard SQL grammar, since UNBOUNDED is a non-reserved keyword and so\nit can also appear as a legal <identifier>, and the construct\nRANGE <unsigned value specification> PRECEDING allows <identifier> to\nappear as a <SQL parameter reference>.\n\nWe solve this by giving UNBOUNDED a precedence below PRECEDING.\n\n2. CUBE() - in the SQL spec, GROUP BY does not allow expressions, only\ncolumn references, but we allow expressions as an extension. The syntax\nGROUP BY CUBE(a,b) is a shorthand for grouping sets, but this is\nambiguous with a function cube(...). (CUBE is also a reserved word in the\nspec, but it's an unreserved keyword for us.)\n\nWe solve this by giving CUBE (and ROLLUP) precedence below '('.\n\n3. General handling of postfix operator conflicts\n\nThe fact that we allow postfix operators means that any sequence which\nlooks like <expression> <identifier> is ambiguous. This affects the use\nof aliases in the SELECT list, and also PRECEDING, FOLLOWING, GENERATED,\nand NULL can all follow expressions.\n\n4. Not reserving words that the spec says should be reserved\n\nWe avoid reserving PARTITION, RANGE, ROWS, GROUPS by using precedence\nhacks.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Tue, 28 May 2019 10:48:08 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
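The shape of the fix Andrew describes in case 1, abridged from the precedence block in gram.y (the real token list is longer, including GENERATED and NULL_P among others):

/*
 * UNBOUNDED gets a precedence just below the keywords it must lose to,
 * so in "RANGE UNBOUNDED PRECEDING" the parser shifts PRECEDING instead
 * of reducing UNBOUNDED to a plain identifier.  CUBE and ROLLUP likewise
 * sort below '(' so that GROUP BY CUBE(a, b) reads as grouping-set
 * syntax rather than a call of a function named cube.
 */
%nonassoc	UNBOUNDED		/* ideally would have the same precedence as IDENT */
%nonassoc	IDENT PARTITION RANGE ROWS GROUPS PRECEDING FOLLOWING CUBE ROLLUP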
{
"msg_contents": "On 2019-May-28, Andrew Gierth wrote:\n\n> The main cases I know of are:\n> \n> 1. RANGE UNBOUNDED PRECEDING - this one is actually a defect in the\n> standard SQL grammar, since UNBOUNDED is a non-reserved keyword and so\n> it can also appear as a legal <identifier>, and the construct\n> RANGE <unsigned value specification> PRECEDING allows <identifier> to\n> appear as a <SQL parameter reference>.\n\nShould we report this to the SQL committee?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 17:54:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
},
{
"msg_contents": "On Tue, May 21, 2019 at 03:06:43PM -0400, Tom Lane wrote:\n> * Speed of the generated parser could be better. I suspect this has\n> a lot to do with the fact that our grammar is huge, and so are the\n> tables, and that causes lots of cache misses. Maybe this could be\n> addressed by trying to make the tables smaller and/or laid out in\n> a way with better cache locality?\n\nAgreed. This was brought up in January, with a little more specificity:\n\n\thttps://www.postgresql.org/message-id/20190125223859.GD13803@momjian.us\n\n\tWith our scanner keywords list now more cache-aware, and with us\n\tplanning to use Bison for years to come, has anyone ever looked at\n\treordering the bison state machine array to be more cache aware, e.g.,\n\thaving common states next to each other rather than scattered around the\n\tarray?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 20:40:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove useless associativity/precedence from parsers"
}
] |
[
{
"msg_contents": "Hello,\n\nNowadays, PostgreSQL is often used behind proxies. Some are PostgreSQL\nprotocol aware (Pgpool, PgBouncer), some are pure TCP (HAProxy). From\nthe database instance point of view, all clients come from the proxy.\n\nThere are two major problems with this topology:\n\n* It neutralizes the host based authentication. Every client shares\nthe same source. Either we allow this source or not but we cannot allow\nclients on a more fine-grained basis, or not by the IP address.\n\n* It makes debugging harder. If we have a DDL or a slow query logged, we\ncannot use the source to identify who is responsible.\n\nOn one hand, we can move the authentication and logging mechanisms to\nPostgreSQL based proxies but they will never be as complete as\nPostgreSQL itself. And they don't have features like HTTP health checks\nto redirect trafic to nodes (health, role, whatever behind the URL). On\nthe other hand, those features are not implemented at all because they\ndon't know the PostgreSQL protocol, they simply forward requests.\n\nIn the HTTP reverse proxies world, there's a \"dirty hack\" to identify\nthe source IP address: add an HTTP header \"X-Forwared-For\" to the\nrequest. It's the destination duty to do whatever they want with this\ninformation. With this feature in mind, someone from HAProxy has\nimplemented this mechanism at the protocol level. It's called the PROXY\nprotocol.\n\nWith this piece of logic at the beginning of the protocol, we could\nimplement a totally transparent proxy and benefit from the great\nfeatures of PostgreSQL regarding clients. Note that MariaDB support the\nPROXY protocol in MaxScale (proxy) and MariaDB Server in recent\nversions.\n\nMy question is, what do you think of this feature? Is it worth to spend\ntime implementing it in PostgreSQL or not?\n\nLinks:\n - http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n - https://mariadb.com/kb/en/library/proxy-protocol-support/\n\nThanks,\nJulien\n\nPS: I've already sent this message to a wrong mailing list. Stephen\nFrost said it's implemented in pgbouncer but all I can find is an open\nissue: https://github.com/pgbouncer/pgbouncer/issues/241.\n\n\n",
"msg_date": "Sun, 19 May 2019 17:36:23 +0200",
"msg_from": "Julien Riou <julien@riou.xyz>",
"msg_from_op": true,
"msg_subject": "PROXY protocol support"
},
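For concreteness, the v1 variant of the protocol linked above is a single human-readable line sent before any application bytes. A standalone toy parser, not a proposed server patch; names and buffer sizes are illustrative:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Parse a PROXY protocol v1 header such as
 *     "PROXY TCP4 192.0.2.10 192.0.2.1 4242 5432\r\n"
 * A PROXY-aware server reads this one line before the normal PostgreSQL
 * startup packet and treats the embedded source address as the client's
 * identity for host-based authentication and logging.
 */
static bool
parse_proxy_v1(const char *line, char *src, char *dst, int *sport, int *dport)
{
	char		fam[8];

	if (sscanf(line, "PROXY %7s %45s %45s %d %d",
			   fam, src, dst, sport, dport) != 5)
		return false;			/* also rejects "PROXY UNKNOWN\r\n" */
	return strcmp(fam, "TCP4") == 0 || strcmp(fam, "TCP6") == 0;
}

int
main(void)
{
	char		src[46], dst[46];	/* INET6_ADDRSTRLEN-sized buffers */
	int			sport, dport;

	if (parse_proxy_v1("PROXY TCP4 192.0.2.10 192.0.2.1 4242 5432\r\n",
					   src, dst, &sport, &dport))
		printf("real client: %s:%d\n", src, sport);
	return 0;
}

The v2 variant in the same specification is binary and can also carry TLS details; either way, the header costs one extra read at connection start and nothing afterwards.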
{
"msg_contents": "Greetings,\n\n* Julien Riou (julien@riou.xyz) wrote:\n> Nowadays, PostgreSQL is often used behind proxies. Some are PostgreSQL\n> protocol aware (Pgpool, PgBouncer), some are pure TCP (HAProxy). From\n> the database instance point of view, all clients come from the proxy.\n> \n> There are two major problems with this topology:\n> \n> * It neutralizes the host based authentication. Every client shares\n> the same source. Either we allow this source or not but we cannot allow\n> clients on a more fine-grained basis, or not by the IP address.\n\nYou can instead have the IP-based checking done at the pooler.\n\n> * It makes debugging harder. If we have a DDL or a slow query logged, we\n> cannot use the source to identify who is responsible.\n\nProtocol-level poolers are able to do this, and pgbouncer does (see\napplication_name_add_host).\n\n> On one hand, we can move the authentication and logging mechanisms to\n> PostgreSQL based proxies but they will never be as complete as\n> PostgreSQL itself. And they don't have features like HTTP health checks\n> to redirect trafic to nodes (health, role, whatever behind the URL). On\n> the other hand, those features are not implemented at all because they\n> don't know the PostgreSQL protocol, they simply forward requests.\n> \n> In the HTTP reverse proxies world, there's a \"dirty hack\" to identify\n> the source IP address: add an HTTP header \"X-Forwared-For\" to the\n> request. It's the destination duty to do whatever they want with this\n> information. With this feature in mind, someone from HAProxy has\n> implemented this mechanism at the protocol level. It's called the PROXY\n> protocol.\n\nSomeone from HAProxy could certainly implement something similar by\nhaving HAProxy understand PostgreSQL's protocol.\n\n> With this piece of logic at the beginning of the protocol, we could\n> implement a totally transparent proxy and benefit from the great\n> features of PostgreSQL regarding clients. Note that MariaDB support the\n> PROXY protocol in MaxScale (proxy) and MariaDB Server in recent\n> versions.\n\npgbouncer is already a transparent proxy that understands the PG\nprotocol, and, even better, it has support for transaction-level pooling\n(as well as connection-level), which is really critical for larger PG\ndeployments as PG backend startup is (relatively) expensive.\n\n> PS: I've already sent this message to a wrong mailing list. Stephen\n> Frost said it's implemented in pgbouncer but all I can find is an open\n> issue: https://github.com/pgbouncer/pgbouncer/issues/241.\n\nThat would be some *other* proxy system (Amazon's ELB) that apparently\nalso doesn't understand the PG protocol and therefore doesn't have a\nfeature similar to pgbouncer's application_name_add_host.\n\nI haven't looked very closely at if it'd be possible to interpret the\nPROXY protocol thing that Amazon's ELB can do without confusing it with\na regular PG authentication startup and I'm not sure if we'd really want\nto wed ourselves to something like that. Certainly, what pgbouncer does\nworks quite well and is about as transparent to clients as possible.\n\nYou'd almost certainly want something like pgbouncer after the ELB\nanyway to avoid having tons of connections to PG and avoid spinning up\nnew backends constantly.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 19 May 2019 11:59:04 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
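The pgbouncer knob Stephen names is a one-line configuration change; a sketch of the relevant pgbouncer.ini fragment (pool name, addresses and the surrounding settings are placeholders):

[pgbouncer]
pool_mode = transaction
; append the client host:port to application_name, so the true source
; shows up in pg_stat_activity and via %a in log_line_prefix
application_name_add_host = 1

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb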
{
"msg_contents": "On May 19, 2019 5:59:04 PM GMT+02:00, Stephen Frost <sfrost@snowman.net> wrote:\n>Greetings,\n>\n>* Julien Riou (julien@riou.xyz) wrote:\n>> Nowadays, PostgreSQL is often used behind proxies. Some are\n>PostgreSQL\n>> protocol aware (Pgpool, PgBouncer), some are pure TCP (HAProxy). From\n>> the database instance point of view, all clients come from the proxy.\n>> \n>> There are two major problems with this topology:\n>> \n>> * It neutralizes the host based authentication. Every client shares\n>> the same source. Either we allow this source or not but we cannot\n>allow\n>> clients on a more fine-grained basis, or not by the IP address.\n>\n>You can instead have the IP-based checking done at the pooler.\n>\n>> * It makes debugging harder. If we have a DDL or a slow query logged,\n>we\n>> cannot use the source to identify who is responsible.\n>\n>Protocol-level poolers are able to do this, and pgbouncer does (see\n>application_name_add_host).\n>\n>> On one hand, we can move the authentication and logging mechanisms to\n>> PostgreSQL based proxies but they will never be as complete as\n>> PostgreSQL itself. And they don't have features like HTTP health\n>checks\n>> to redirect trafic to nodes (health, role, whatever behind the URL).\n>On\n>> the other hand, those features are not implemented at all because\n>they\n>> don't know the PostgreSQL protocol, they simply forward requests.\n>> \n>> In the HTTP reverse proxies world, there's a \"dirty hack\" to identify\n>> the source IP address: add an HTTP header \"X-Forwared-For\" to the\n>> request. It's the destination duty to do whatever they want with this\n>> information. With this feature in mind, someone from HAProxy has\n>> implemented this mechanism at the protocol level. It's called the\n>PROXY\n>> protocol.\n>\n>Someone from HAProxy could certainly implement something similar by\n>having HAProxy understand PostgreSQL's protocol.\n>\n>> With this piece of logic at the beginning of the protocol, we could\n>> implement a totally transparent proxy and benefit from the great\n>> features of PostgreSQL regarding clients. Note that MariaDB support\n>the\n>> PROXY protocol in MaxScale (proxy) and MariaDB Server in recent\n>> versions.\n>\n>pgbouncer is already a transparent proxy that understands the PG\n>protocol, and, even better, it has support for transaction-level\n>pooling\n>(as well as connection-level), which is really critical for larger PG\n>deployments as PG backend startup is (relatively) expensive.\n>\n>> PS: I've already sent this message to a wrong mailing list. Stephen\n>> Frost said it's implemented in pgbouncer but all I can find is an\n>open\n>> issue: https://github.com/pgbouncer/pgbouncer/issues/241.\n>\n>That would be some *other* proxy system (Amazon's ELB) that apparently\n>also doesn't understand the PG protocol and therefore doesn't have a\n>feature similar to pgbouncer's application_name_add_host.\n>\n>I haven't looked very closely at if it'd be possible to interpret the\n>PROXY protocol thing that Amazon's ELB can do without confusing it with\n>a regular PG authentication startup and I'm not sure if we'd really\n>want\n>to wed ourselves to something like that. 
Certainly, what pgbouncer\n>does\n>works quite well and is about as transparent to clients as possible.\n>\n>You'd almost certainly want something like pgbouncer after the ELB\n>anyway to avoid having tons of connections to PG and avoid spinning up\n>new backends constantly.\n>\n>Thanks,\n>\n>Stephen\n\nIt could be proprietary Amazon load balancers I don't have experience with, or simple HAProxy coupled with a Patroni HTTP API to tell if a backend is healthy or not.\n\nThe PgBouncer approach is interesting. I'm already using the application name as a workaround to identify containerized applications but didn't used it for setting the source IP.\n\nIf we take a look at the MariaDB implementation, they check for errors in the startup packet then run the PROXY protocol decoding then return a real error if it doesn't work. As our bouncers are all behind a pool of HAProxy, and if we consider PgBouncer as a trusted extension of PostgreSQL, maybe implementing it in PgBouncer first will be easier.\n\nThanks for your insightful comments.\nJulien\n\n\n",
"msg_date": "Sun, 19 May 2019 22:53:26 +0200",
"msg_from": "Julien Riou <julien@riou.xyz>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "\n\nOn 19.05.2019 18:36, Julien Riou wrote:\n> Hello,\n>\n> Nowadays, PostgreSQL is often used behind proxies. Some are PostgreSQL\n> protocol aware (Pgpool, PgBouncer), some are pure TCP (HAProxy). From\n> the database instance point of view, all clients come from the proxy.\n>\n> There are two major problems with this topology:\n>\n> * It neutralizes the host based authentication. Every client shares\n> the same source. Either we allow this source or not but we cannot allow\n> clients on a more fine-grained basis, or not by the IP address.\n>\n> * It makes debugging harder. If we have a DDL or a slow query logged, we\n> cannot use the source to identify who is responsible.\n>\n> On one hand, we can move the authentication and logging mechanisms to\n> PostgreSQL based proxies but they will never be as complete as\n> PostgreSQL itself. And they don't have features like HTTP health checks\n> to redirect trafic to nodes (health, role, whatever behind the URL). On\n> the other hand, those features are not implemented at all because they\n> don't know the PostgreSQL protocol, they simply forward requests.\n>\n> In the HTTP reverse proxies world, there's a \"dirty hack\" to identify\n> the source IP address: add an HTTP header \"X-Forwared-For\" to the\n> request. It's the destination duty to do whatever they want with this\n> information. With this feature in mind, someone from HAProxy has\n> implemented this mechanism at the protocol level. It's called the PROXY\n> protocol.\n>\n> With this piece of logic at the beginning of the protocol, we could\n> implement a totally transparent proxy and benefit from the great\n> features of PostgreSQL regarding clients. Note that MariaDB support the\n> PROXY protocol in MaxScale (proxy) and MariaDB Server in recent\n> versions.\n>\n> My question is, what do you think of this feature? Is it worth to spend\n> time implementing it in PostgreSQL or not?\n>\n> Links:\n> - http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n> - https://mariadb.com/kb/en/library/proxy-protocol-support/\n>\n> Thanks,\n> Julien\n>\n> PS: I've already sent this message to a wrong mailing list. Stephen\n> Frost said it's implemented in pgbouncer but all I can find is an open\n> issue: https://github.com/pgbouncer/pgbouncer/issues/241.\n>\n>\n\nHi,\n From my point of view it will be better to support embedded connection \npooler in Postgres.\nIn this case all mentioned problems can be more or less \nstraightforwardly solved without inventing new protocol.\nThere is my prototype implementation of built-in connection pooler on \ncommit-fest:\nhttps://commitfest.postgresql.org/23/2067/\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 20 May 2019 18:28:43 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "+1 on this one...\n\nMySQL and derivatives support it very well.. it is a standard that can be\nused with either haproxy or better, ProxySQL.\n\nWould be nice to have it in core.\n\nIt is a show stopper for us to use proxying because of compliance and\ntracability reasons.\n\n\n\nLe dim. 19 mai 2019 11:36 AM, Julien Riou <julien@riou.xyz> a écrit :\n\n> Hello,\n>\n> Nowadays, PostgreSQL is often used behind proxies. Some are PostgreSQL\n> protocol aware (Pgpool, PgBouncer), some are pure TCP (HAProxy). From\n> the database instance point of view, all clients come from the proxy.\n>\n> There are two major problems with this topology:\n>\n> * It neutralizes the host based authentication. Every client shares\n> the same source. Either we allow this source or not but we cannot allow\n> clients on a more fine-grained basis, or not by the IP address.\n>\n> * It makes debugging harder. If we have a DDL or a slow query logged, we\n> cannot use the source to identify who is responsible.\n>\n> On one hand, we can move the authentication and logging mechanisms to\n> PostgreSQL based proxies but they will never be as complete as\n> PostgreSQL itself. And they don't have features like HTTP health checks\n> to redirect trafic to nodes (health, role, whatever behind the URL). On\n> the other hand, those features are not implemented at all because they\n> don't know the PostgreSQL protocol, they simply forward requests.\n>\n> In the HTTP reverse proxies world, there's a \"dirty hack\" to identify\n> the source IP address: add an HTTP header \"X-Forwared-For\" to the\n> request. It's the destination duty to do whatever they want with this\n> information. With this feature in mind, someone from HAProxy has\n> implemented this mechanism at the protocol level. It's called the PROXY\n> protocol.\n>\n> With this piece of logic at the beginning of the protocol, we could\n> implement a totally transparent proxy and benefit from the great\n> features of PostgreSQL regarding clients. Note that MariaDB support the\n> PROXY protocol in MaxScale (proxy) and MariaDB Server in recent\n> versions.\n>\n> My question is, what do you think of this feature? Is it worth to spend\n> time implementing it in PostgreSQL or not?\n>\n> Links:\n> - http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n> - https://mariadb.com/kb/en/library/proxy-protocol-support/\n>\n> Thanks,\n> Julien\n>\n> PS: I've already sent this message to a wrong mailing list. Stephen\n> Frost said it's implemented in pgbouncer but all I can find is an open\n> issue: https://github.com/pgbouncer/pgbouncer/issues/241.\n>\n>\n>\n\n+1 on this one...MySQL and derivatives support it very well.. it is a standard that can be used with either haproxy or better, ProxySQL.Would be nice to have it in core. It is a show stopper for us to use proxying because of compliance and tracability reasons.Le dim. 19 mai 2019 11:36 AM, Julien Riou <julien@riou.xyz> a écrit :Hello,\n\nNowadays, PostgreSQL is often used behind proxies. Some are PostgreSQL\nprotocol aware (Pgpool, PgBouncer), some are pure TCP (HAProxy). From\nthe database instance point of view, all clients come from the proxy.\n\nThere are two major problems with this topology:\n\n* It neutralizes the host based authentication. Every client shares\nthe same source. Either we allow this source or not but we cannot allow\nclients on a more fine-grained basis, or not by the IP address.\n\n* It makes debugging harder. 
If we have a DDL or a slow query logged, we\ncannot use the source to identify who is responsible.\n\nOn one hand, we can move the authentication and logging mechanisms to\nPostgreSQL based proxies but they will never be as complete as\nPostgreSQL itself. And they don't have features like HTTP health checks\nto redirect trafic to nodes (health, role, whatever behind the URL). On\nthe other hand, those features are not implemented at all because they\ndon't know the PostgreSQL protocol, they simply forward requests.\n\nIn the HTTP reverse proxies world, there's a \"dirty hack\" to identify\nthe source IP address: add an HTTP header \"X-Forwared-For\" to the\nrequest. It's the destination duty to do whatever they want with this\ninformation. With this feature in mind, someone from HAProxy has\nimplemented this mechanism at the protocol level. It's called the PROXY\nprotocol.\n\nWith this piece of logic at the beginning of the protocol, we could\nimplement a totally transparent proxy and benefit from the great\nfeatures of PostgreSQL regarding clients. Note that MariaDB support the\nPROXY protocol in MaxScale (proxy) and MariaDB Server in recent\nversions.\n\nMy question is, what do you think of this feature? Is it worth to spend\ntime implementing it in PostgreSQL or not?\n\nLinks:\n - http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n - https://mariadb.com/kb/en/library/proxy-protocol-support/\n\nThanks,\nJulien\n\nPS: I've already sent this message to a wrong mailing list. Stephen\nFrost said it's implemented in pgbouncer but all I can find is an open\nissue: https://github.com/pgbouncer/pgbouncer/issues/241.",
"msg_date": "Mon, 20 May 2019 13:05:31 -0400",
"msg_from": "Bruno Lavoie <bl@brunol.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
}
] |
[
{
"msg_contents": "Hi,\n\nI seem to recall that we expect tests to either work with\ndefault_transaction_isolation=serializable, or to set it to a different\nlevel where needed.\n\nCurrently that's not the case. When running check-world with PGOPTIONS\nset to -c default_transaction_isolation=serializable I get easy to fix\nfailures (isolation, plpgsql) but also some apparently hanging tests\n(003_recovery_targets.pl, 003_standby_2.pl).\n\nDo we expect this to work? If it's desirable I'll set up an animal that\nforces it to on.\n\n- Andres\n\n\ndiff -du10 /home/andres/src/postgresql/src/test/isolation/expected/fk-partitioned-2.out /home/andres/build/postgres/dev-assert/vpath/src/test/isolation/output_iso/results/fk-partitioned-2.out\n--- /home/andres/src/postgresql/src/test/isolation/expected/fk-partitioned-2.out 2019-04-16 14:35:39.854303055 -0700\n+++ /home/andres/build/postgres/dev-assert/vpath/src/test/isolation/output_iso/results/fk-partitioned-2.out 2019-05-19 15:47:05.767861172 -0700\n@@ -1,20 +1,20 @@\n Parsed test spec with 2 sessions\n\n starting permutation: s1b s1d s2b s2i s1c s2c\n step s1b: begin;\n step s1d: delete from ppk where a = 1;\n step s2b: begin;\n step s2i: insert into pfk values (1); <waiting ...>\n step s1c: commit;\n step s2i: <... completed>\n-error in steps s1c s2i: ERROR: insert or update on table \"pfk1\" violates foreign key constraint \"pfk_a_fkey\"\n+error in steps s1c s2i: ERROR: could not serialize access due to concurrent update\n step s2c: commit;\n\n starting permutation: s1b s1d s2bs s2i s1c s2c\n step s1b: begin;\n step s1d: delete from ppk where a = 1;\n step s2bs: begin isolation level serializable; select 1;\n ?column?\n\n 1\n step s2i: insert into pfk values (1); <waiting ...>\n@@ -23,21 +23,21 @@\n error in steps s1c s2i: ERROR: could not serialize access due to concurrent update\n step s2c: commit;\n\n starting permutation: s1b s2b s1d s2i s1c s2c\n step s1b: begin;\n step s2b: begin;\n step s1d: delete from ppk where a = 1;\n step s2i: insert into pfk values (1); <waiting ...>\n step s1c: commit;\n step s2i: <... completed>\n-error in steps s1c s2i: ERROR: insert or update on table \"pfk1\" violates foreign key constraint \"pfk_a_fkey\"\n+error in steps s1c s2i: ERROR: could not serialize access due to concurrent update\n step s2c: commit;\n\n starting permutation: s1b s2bs s1d s2i s1c s2c\n step s1b: begin;\n step s2bs: begin isolation level serializable; select 1;\n ?column?\n\n 1\n step s1d: delete from ppk where a = 1;\n step s2i: insert into pfk values (1); <waiting ...>\ndiff -du10 /home/andres/src/postgresql/src/test/isolation/expected/lock-update-delete_1.out /home/andres/build/postgres/dev-assert/vpath/src/test/isolation/output_iso/results/lock-update-delete.out\n--- /home/andres/src/postgresql/src/test/isolation/expected/lock-update-delete_1.out 2015-01-30 07:41:22.542718055 -0800\n+++ /home/andres/build/postgres/dev-assert/vpath/src/test/isolation/output_iso/results/lock-update-delete.out 2019-05-19 15:47:09.242873925 -0700\n@@ -143,21 +143,23 @@\n step s2b: BEGIN;\n step s1l: SELECT * FROM foo WHERE pg_advisory_xact_lock(0) IS NOT NULL AND key = 1 FOR KEY SHARE; <waiting ...>\n step s2u: UPDATE foo SET value = 2 WHERE key = 1;\n step s2_blocker3: UPDATE foo SET value = 2 WHERE key = 1;\n step s2c: COMMIT;\n step s2_unlock: SELECT pg_advisory_unlock(0);\n pg_advisory_unlock\n\n t\n step s1l: <... 
completed>\n-error in steps s2_unlock s1l: ERROR: could not serialize access due to concurrent update\n+key value\n+\n+1 1\n\n starting permutation: s2b s1l s2u s2_blocker1 s2r s2_unlock\n pg_advisory_lock\n\n\n step s2b: BEGIN;\n step s1l: SELECT * FROM foo WHERE pg_advisory_xact_lock(0) IS NOT NULL AND key = 1 FOR KEY SHARE; <waiting ...>\n step s2u: UPDATE foo SET value = 2 WHERE key = 1;\n step s2_blocker1: DELETE FROM foo;\n step s2r: ROLLBACK;\ndiff -du10 /home/andres/src/postgresql/src/test/isolation/expected/tuplelock-update.out /home/andres/build/postgres/dev-assert/vpath/src/test/isolation/output_iso/results/tuplelock-update.out\n--- /home/andres/src/postgresql/src/test/isolation/expected/tuplelock-update.out 2018-07-07 13:06:55.644442913 -0700\n+++ /home/andres/build/postgres/dev-assert/vpath/src/test/isolation/output_iso/results/tuplelock-update.out 2019-05-19 15:47:26.132936176 -0700\n@@ -16,21 +16,24 @@\n step s1_begin: BEGIN;\n step s1_grablock: SELECT * FROM pktab FOR KEY SHARE;\n id data\n\n 1 2\n step s1_advunlock1: SELECT pg_advisory_unlock(142857);\n pg_advisory_unlock\n\n t\n step s2_update: <... completed>\n+error in steps s1_advunlock1 s2_update: ERROR: could not serialize access due to concurrent update\n step s1_advunlock2: SELECT pg_sleep(5), pg_advisory_unlock(285714);\n pg_sleep pg_advisory_unlock\n\n t\n step s3_update: <... completed>\n+error in steps s1_advunlock2 s3_update: ERROR: could not serialize access due to concurrent update\n step s1_advunlock3: SELECT pg_sleep(5), pg_advisory_unlock(571428);\n pg_sleep pg_advisory_unlock\n\n t\n step s4_update: <... completed>\n+error in steps s1_advunlock3 s4_update: ERROR: could not serialize access due to concurrent update\n step s1_commit: COMMIT;\n\ndiff -du10 /home/andres/src/postgresql/src/pl/plpgsql/src/expected/plpgsql_transaction.out /home/andres/build/postgres/dev-assert/vpath/src/pl/plpgsql/src/results/plpgsql_transaction.out\n--- /home/andres/src/postgresql/src/pl/plpgsql/src/expected/plpgsql_transaction.out 2019-04-23 20:22:04.774775860 -0700\n+++ /home/andres/build/postgres/dev-assert/vpath/src/pl/plpgsql/src/results/plpgsql_transaction.out 2019-05-19 15:49:18.071358893 -0700\n@@ -455,21 +455,21 @@\n PERFORM 1;\n RAISE INFO '%', current_setting('transaction_isolation');\n COMMIT;\n SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n RESET TRANSACTION ISOLATION LEVEL;\n PERFORM 1;\n RAISE INFO '%', current_setting('transaction_isolation');\n COMMIT;\n END;\n $$;\n-INFO: read committed\n+INFO: serializable\n INFO: repeatable read\n INFO: read committed\n -- error cases\n DO LANGUAGE plpgsql $$\n BEGIN\n SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n END;\n $$;\n ERROR: SET TRANSACTION ISOLATION LEVEL must be called before any query\n CONTEXT: SQL statement \"SET TRANSACTION ISOLATION LEVEL REPEATABLE READ\"\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 19 May 2019 15:55:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Do we expect tests to work with\n default_transaction_isolation=serializable"
},
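To make the "set it to a different level where needed" convention concrete, a test that depends on READ COMMITTED semantics can pin the level explicitly rather than inherit the default. A minimal sketch, not taken from the actual test suite:

```sql
-- Pinning the isolation level keeps the expected output stable even
-- when the suite runs under
-- PGOPTIONS='-c default_transaction_isolation=serializable'.
BEGIN ISOLATION LEVEL READ COMMITTED;
SELECT current_setting('transaction_isolation');  -- always 'read committed'
COMMIT;
```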
{
"msg_contents": "On Mon, May 20, 2019 at 10:55 AM Andres Freund <andres@anarazel.de> wrote:\n> I seem to recall that we expect tests to either work with\n> default_transaction_isolation=serializable, or to set it to a different\n> level where needed.\n\nHere are a couple of bits where that is no longer necessary after bb16aba5.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Mon, 20 May 2019 16:38:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we expect tests to work with\n default_transaction_isolation=serializable"
},
{
"msg_contents": "On Sun, May 19, 2019 at 03:55:06PM -0700, Andres Freund wrote:\n> I seem to recall that we expect tests to either work with\n> default_transaction_isolation=serializable, or to set it to a different\n> level where needed.\n> \n> Currently that's not the case. When running check-world with PGOPTIONS\n> set to -c default_transaction_isolation=serializable I get easy to fix\n> failures (isolation, plpgsql) but also some apparently hanging tests\n> (003_recovery_targets.pl, 003_standby_2.pl).\n> \n> Do we expect this to work? If it's desirable I'll set up an animal that\n> forces it to on.\n\nI'm +1 for making it a project expectation, with an animal to confirm. It's\nnot expected to work today.\n\n\n",
"msg_date": "Sat, 15 Jun 2019 11:47:39 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we expect tests to work with\n default_transaction_isolation=serializable"
},
{
"msg_contents": "On Sat, Jun 15, 2019 at 11:47:39AM -0700, Noah Misch wrote:\n> On Sun, May 19, 2019 at 03:55:06PM -0700, Andres Freund wrote:\n>> Currently that's not the case. When running check-world with PGOPTIONS\n>> set to -c default_transaction_isolation=serializable I get easy to fix\n>> failures (isolation, plpgsql) but also some apparently hanging tests\n>> (003_recovery_targets.pl, 003_standby_2.pl).\n\nThese sound strange and may point to actual bugs.\n\n>> Do we expect this to work? If it's desirable I'll set up an animal that\n>> forces it to on.\n> \n> I'm +1 for making it a project expectation, with an animal to confirm. It's\n> not expected to work today.\n\n+1.\n--\nMichael",
"msg_date": "Mon, 17 Jun 2019 16:19:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Do we expect tests to work with\n default_transaction_isolation=serializable"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a one-off regression test failure of a sort that commit\n624e440a intended to fix. a_star unexpectedly sorted higher. I\nchecked the space weather forecast for this morning but no sign of\nsolar flares. More seriously, it did the same in all 3 Parallel\nAppend queries. Recent commits look irrelevant. Could a UDP stats\npacket dropped on the floor cause that? Otherwise maybe you'd need a\nweird result from FileSize() to explain it. Based on log output no\nother tests ran around the same time.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=aye-aye&dt=2019-05-19%2018%3A30%3A10\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 May 2019 15:36:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Here's a one-off regression test failure of a sort that commit\n> 624e440a intended to fix.\n\nNote that in the discussion that led up to 624e440a, we never did\nthink that we'd completely explained the original irreproducible\nfailure.\n\nI think I've seen a couple of other cases of this same failure\nin the buildfarm recently, but too tired to go looking right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 00:46:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "On Mon, May 20, 2019 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Here's a one-off regression test failure of a sort that commit\n> > 624e440a intended to fix.\n>\n> Note that in the discussion that led up to 624e440a, we never did\n> think that we'd completely explained the original irreproducible\n> failure.\n>\n> I think I've seen a couple of other cases of this same failure\n> in the buildfarm recently, but too tired to go looking right now.\n\nI think it might be dependent on incidental vacuum/analyze activity\nhaving updated reltuples. With the attached script, I get the two\nplan variants depending on whether I comment out \"analyze a_star\". I\nguess we should explicitly analyze these X_star tables somewhere?\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Tue, 21 May 2019 11:31:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
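A rough reconstruction of the dependency Thomas describes; his attached script is not preserved in this archive, so the statements below are an assumption, not the original:

```sql
-- Whether a_star has stats changes its estimated size relative to its
-- siblings, and Parallel Append orders subplans by estimated cost, so
-- toggling the ANALYZE below can flip the subplan order.
EXPLAIN (COSTS OFF) SELECT round(avg(aa)), sum(aa) FROM a_star;
ANALYZE a_star;  -- comment this out to get the other plan variant
EXPLAIN (COSTS OFF) SELECT round(avg(aa)), sum(aa) FROM a_star;
```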
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, May 20, 2019 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Note that in the discussion that led up to 624e440a, we never did\n>> think that we'd completely explained the original irreproducible\n>> failure.\n\n> I think it might be dependent on incidental vacuum/analyze activity\n> having updated reltuples.\n\nThe problem is to explain where said activity came from. a_star and\nits children are too small to attract autovacuum's attention. They\nget created/filled in create_table.sql/create_misc.sql, and then they\nget explicitly vacuum'd by sanity_check.sql, and then after that\nthings are 100% stable. Or should be.\n\nThere are some incidental ALTER TABLEs on them in misc.sql and\nselect_parallel.sql, but those shouldn't have any interesting\neffects on the rowcount estimates ... and even if they do,\nwhy would such effects not be reproducible?\n\nSo I'm not excited about sticking in an extra vacuum or analyze\nwithout actually understanding why the irreproducible behavior\nhappens. It's not exactly implausible that that'd make it\nworse not better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 20:07:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "On Tue, 21 May 2019 at 11:32, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, May 20, 2019 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Here's a one-off regression test failure of a sort that commit\n> > > 624e440a intended to fix.\n> >\n> > Note that in the discussion that led up to 624e440a, we never did\n> > think that we'd completely explained the original irreproducible\n> > failure.\n> >\n> > I think I've seen a couple of other cases of this same failure\n> > in the buildfarm recently, but too tired to go looking right now.\n>\n> I think it might be dependent on incidental vacuum/analyze activity\n> having updated reltuples. With the attached script, I get the two\n> plan variants depending on whether I comment out \"analyze a_star\". I\n> guess we should explicitly analyze these X_star tables somewhere?\n\nThat's the only theory I came up with yesterday when thinking about\nthis. We can't really go adding an ANALYZE in a test in a parallel\ngroup though since there'd be race conditions around other parallel\ntests which could cause plan changes.\n\nAt the moment, these tables are only vacuumed in sanity_check.sql,\nwhich as you can see is run by itself.\n\n# ----------\n# sanity_check does a vacuum, affecting the sort order of SELECT *\n# results. So it should not run parallel to other tests.\n# ----------\ntest: sanity_check\n\nI did add the following query just before the failing one and included\nthe expected output from below. The tests pass for me in make check\nand the post-upgrade test passes in make check-world too. I guess we\ncould commit that and see if it fails along with the other mentioned\nfailure. Alternatively, we could just invent some local tables\ninstead of using the ?_star tables and analyze them just before the\ntest, although, that does not guarantee a fix as there may be\nsomething else to blame that we've not thought of.\n\nselect relname,last_vacuum is null,last_analyze is\nnull,last_autovacuum is null,last_autoanalyze is null from\npg_stat_all_tables where relname like '__star' order by relname;\n relname | ?column? | ?column? | ?column? | ?column?\n---------+----------+----------+----------+----------\n a_star | f | t | t | t\n b_star | f | t | t | t\n c_star | f | t | t | t\n d_star | f | t | t | t\n e_star | f | t | t | t\n f_star | f | t | t | t\n(6 rows)\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 12:43:03 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, May 20, 2019 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Note that in the discussion that led up to 624e440a, we never did\n>> think that we'd completely explained the original irreproducible\n>> failure.\n>> \n>> I think I've seen a couple of other cases of this same failure\n>> in the buildfarm recently, but too tired to go looking right now.\n\n> I think it might be dependent on incidental vacuum/analyze activity\n> having updated reltuples.\n\nI got around to excavating in the buildfarm archives, and found a round\ndozen of more-or-less-similar incidents. I went back 18 months, which\nby coincidence (i.e., I didn't realize it till just now) is just about\nthe time since 624e440a:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2018-01-14%2006%3A30%3A02\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2018-03-02%2011%3A30%3A19\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2018-03-11%2023%3A25%3A46\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2018-03-15%2000%3A02%3A04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=spurfowl&dt=2018-04-05%2003%3A22%3A05\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2018-04-07%2018%3A32%3A02\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=termite&dt=2018-04-08%2019%3A55%3A06\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2018-04-23%2010%3A00%3A15\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2019-04-19%2001%3A50%3A08\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2019-04-23%2021%3A23%3A12\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2019-05-14%2014%3A59%3A43\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=aye-aye&dt=2019-05-19%2018%3A30%3A10\n\nThere are two really interesting things about this list:\n\n* All the failures are on HEAD. This implies that the issue was\nnot there when we forked off v11, else we'd surely have seen an\ninstance on that branch by now. The dates above are consistent\nwith the idea that we eliminated the problem in roughly May 2018,\nand then it came back about a month ago. (Of course, maybe this\njust traces to unrelated changes in test timing.)\n\n* All the failures are in the pg_upgrade test (and some are before,\nsome after, we switched that from serial to parallel schedule).\nThis makes very little sense; how is that meaningfully different\nfrom the buildfarm's straight-up invocations of \"make check\" and\n\"make installcheck\"?\n\nNote that I excluded a bunch of cases where we managed to run\nselect_parallel despite having suffered failures earlier in the\ntest run, typically failures that caused the sanity_check test\nto not run. These led to diffs in the X_star queries that look\nroughly similar to these, but not the same.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 May 2019 23:15:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> I did add the following query just before the failing one and included\n> the expected output from below. The tests pass for me in make check\n> and the post-upgrade test passes in make check-world too. I guess we\n> could commit that and see if it fails along with the other mentioned\n> failure.\n\nI'm thinking this is a good idea, although I think we could be more\naggressive about the data collected, as attached. Since all of these\nought to be single-page tables, the relpages and reltuples counts\nshould be machine-independent. In theory anyway.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 21 May 2019 10:39:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "On Wed, May 22, 2019 at 2:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > I did add the following query just before the failing one and included\n> > the expected output from below. The tests pass for me in make check\n> > and the post-upgrade test passes in make check-world too. I guess we\n> > could commit that and see if it fails along with the other mentioned\n> > failure.\n>\n> I'm thinking this is a good idea, although I think we could be more\n> aggressive about the data collected, as attached. Since all of these\n> ought to be single-page tables, the relpages and reltuples counts\n> should be machine-independent. In theory anyway.\n\nHuh, idiacanthus failed showing vacuum_count 0, in select_parallel.\nSo ... the VACUUM command somehow skipped those tables?\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 May 2019 16:24:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Huh, idiacanthus failed showing vacuum_count 0, in select_parallel.\n> So ... the VACUUM command somehow skipped those tables?\n\nNo, because the reltuples counts are correct. I think what we're\nlooking at there is the stats collector dropping a packet that\ntold it about vacuum activity.\n\nI'm surprised that we saw such a failure so quickly. I'd always\nfigured that the collector mechanism, while it's designed to be\nunreliable, is only a little bit unreliable. Maybe it's more\nthan a little bit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 00:44:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "On Mon, May 20, 2019 at 11:15:47PM -0400, Tom Lane wrote:\n> I got around to excavating in the buildfarm archives, and found a round\n> dozen of more-or-less-similar incidents. I went back 18 months, which\n> by coincidence (i.e., I didn't realize it till just now) is just about\n> the time since 624e440a:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2018-01-14%2006%3A30%3A02\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2018-03-02%2011%3A30%3A19\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2018-03-11%2023%3A25%3A46\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2018-03-15%2000%3A02%3A04\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=spurfowl&dt=2018-04-05%2003%3A22%3A05\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2018-04-07%2018%3A32%3A02\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=termite&dt=2018-04-08%2019%3A55%3A06\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=damselfly&dt=2018-04-23%2010%3A00%3A15\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2019-04-19%2001%3A50%3A08\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2019-04-23%2021%3A23%3A12\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2019-05-14%2014%3A59%3A43\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=aye-aye&dt=2019-05-19%2018%3A30%3A10\n\n> * All the failures are in the pg_upgrade test (and some are before,\n> some after, we switched that from serial to parallel schedule).\n> This makes very little sense; how is that meaningfully different\n> from the buildfarm's straight-up invocations of \"make check\" and\n> \"make installcheck\"?\n\nTwo behaviors are unique to pg_upgrade's check and have been in place since\n2018-01-14. It uses \"initdb --wal-segsize 1\". It creates three additional\ndatabases, having long names. Neither of those is clearly meaningful in this\ncontext, but it would be a simple matter of programming to make pg_regress.c\ndo those things and see if the buildfarm starts witnessing this failure mode\noutside the pg_upgrade check.\n\n\n",
"msg_date": "Tue, 4 Jun 2019 22:00:37 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "[ reviving a thread that's been idle for awhile ]\n\nI wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Huh, idiacanthus failed showing vacuum_count 0, in select_parallel.\n>> So ... the VACUUM command somehow skipped those tables?\n\n> No, because the reltuples counts are correct. I think what we're\n> looking at there is the stats collector dropping a packet that\n> told it about vacuum activity.\n\n> I'm surprised that we saw such a failure so quickly. I'd always\n> figured that the collector mechanism, while it's designed to be\n> unreliable, is only a little bit unreliable. Maybe it's more\n> than a little bit.\n\nSo that data-collection patch has been in place for nearly 2 months\n(since 2019-05-21), and in that time we've seen a grand total of\nno repeats of the original problem, as far as I've seen. That's\nfairly annoying considering we'd had four repeats in the month\nprior to putting the patch in, but such is life.\n\nIn the meantime, we've had *lots* of buildfarm failures in the\nadded pg_stat_all_tables query, which indicate that indeed the\nstats collector mechanism isn't terribly reliable. But that\ndoesn't directly prove anything about the original problem,\nsince the planner doesn't look at stats collector data.\n\nAnyway, I'm now starting to feel that these failures are more\nof a pain than they're worth, especially since there's not much\nreason to hope that the original problem will recur soon.\n\nWhat I propose to do is remove the pg_stat_all_tables query\nbut keep the relpages/reltuples query. That should fix the\nbuildfarm instability, but we can still hope to get at least\nsome insight if the original problem ever does recur.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 20:21:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "I wrote:\n> So that data-collection patch has been in place for nearly 2 months\n> (since 2019-05-21), and in that time we've seen a grand total of\n> no repeats of the original problem, as far as I've seen.\n\nOh ... wait a minute. I decided to go scrape the buildfarm logs to\nconfirm my impression that there were no matching failures, and darn\nif I didn't find one:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-06-04%2021%3A00%3A22\n\nFor the archives' sake, that's a pg_upgradeCheck failure, and here\nare the regression diffs:\n\n=========================== regression.diffs ================\ndiff -w -U3 c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/select_parallel.out c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/select_parallel.out\n--- c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/select_parallel.out\t2019-05-21 14:00:23 -0400\n+++ c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/select_parallel.out\t2019-06-04 17:42:27 -0400\n@@ -21,12 +21,12 @@\n Workers Planned: 3\n -> Partial Aggregate\n -> Parallel Append\n+ -> Parallel Seq Scan on a_star\n -> Parallel Seq Scan on d_star\n -> Parallel Seq Scan on f_star\n -> Parallel Seq Scan on e_star\n -> Parallel Seq Scan on b_star\n -> Parallel Seq Scan on c_star\n- -> Parallel Seq Scan on a_star\n (11 rows)\n \n select round(avg(aa)), sum(aa) from a_star a1;\n@@ -49,10 +49,10 @@\n -> Parallel Append\n -> Seq Scan on d_star\n -> Seq Scan on c_star\n+ -> Parallel Seq Scan on a_star\n -> Parallel Seq Scan on f_star\n -> Parallel Seq Scan on e_star\n -> Parallel Seq Scan on b_star\n- -> Parallel Seq Scan on a_star\n (11 rows)\n \n select round(avg(aa)), sum(aa) from a_star a2;\n@@ -75,12 +75,12 @@\n Workers Planned: 3\n -> Partial Aggregate\n -> Parallel Append\n+ -> Seq Scan on a_star\n -> Seq Scan on d_star\n -> Seq Scan on f_star\n -> Seq Scan on e_star\n -> Seq Scan on b_star\n -> Seq Scan on c_star\n- -> Seq Scan on a_star\n (11 rows)\n \n select round(avg(aa)), sum(aa) from a_star a3;\n@@ -95,7 +95,7 @@\n where relname like '__star' order by relname;\n relname | relpages | reltuples \n ---------+----------+-----------\n- a_star | 1 | 3\n+ a_star | 0 | 0\n b_star | 1 | 4\n c_star | 1 | 4\n d_star | 1 | 16\ndiff -w -U3 c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/stats.out c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/stats.out\n--- c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/../pgsql/src/test/regress/expected/stats.out\t2019-05-21 14:00:23 -0400\n+++ c:/mingw/msys/1.0/home/pgrunner/bf/root/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/stats.out\t2019-06-04 17:43:06 -0400\n@@ -205,7 +205,7 @@\n where relname like '__star' order by relname;\n relname | relpages | reltuples \n ---------+----------+-----------\n- a_star | 1 | 3\n+ a_star | 0 | 0\n b_star | 1 | 4\n c_star | 1 | 4\n d_star | 1 | 16\n\n\nThis plan shape change matches some, though by no means all, of the\nprevious failures. And we can now see why the planner did that: a_star\nhas a smaller recorded size than the other tables in the query.\n\nSo what happened there? 
There's no diff in the pg_stat_all_tables\nquery, which proves that a vacuum on a_star did happen, since it\ntransmitted a vacuum_count increment to the stats collector.\nIt seems like there are two possible theories:\n\n(1) The vacuum for some reason saw the table's size as zero\n (whereupon it'd read no blocks and count no tuples).\n(2) The vacuum's update of the pg_class row failed to \"take\".\n\nTheory (2) seems a bit more plausible, but still very unsettling.\n\nThe similar failures that this result doesn't exactly match\nall look, in the light of this data, like some one of the \"X_star\"\ntables unexpectedly moved to the top of the parallel plan, which\nwe can now hypothesize means that that table had zero relpages/\nreltuples after supposedly being vacuumed. So it's not only\na_star that's got the issue, which lets out my half-formed theory\nthat being the topmost parent of the inheritance hierarchy has\nsomething to do with it. But I bet that these tables forming\nan inheritance hierarchy (with multiple inheritance even) does\nhave something to do with it somehow, because if this were a\ngeneric VACUUM bug surely we'd be seeing it elsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Jul 2019 21:12:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "Hi,\n\nOn 2019-07-15 21:12:32 -0400, Tom Lane wrote:\n> But I bet that these tables forming\n> an inheritance hierarchy (with multiple inheritance even) does\n> have something to do with it somehow, because if this were a\n> generic VACUUM bug surely we'd be seeing it elsewhere.\n\nIt's possible that it's hidden in other cases, because of\n\nvoid\ntable_block_relation_estimate_size(Relation rel, int32 *attr_widths,\n\t\t\t\t\t\t\t\t BlockNumber *pages, double *tuples,\n\t\t\t\t\t\t\t\t double *allvisfrac,\n\t\t\t\t\t\t\t\t Size overhead_bytes_per_tuple,\n\t\t\t\t\t\t\t\t Size usable_bytes_per_page)\n...\n\t * If the table has inheritance children, we don't apply this heuristic.\n\t * Totally empty parent tables are quite common, so we should be willing\n\t * to believe that they are empty.\n\t */\n\tif (curpages < 10 &&\n\t\trelpages == 0 &&\n\t\t!rel->rd_rel->relhassubclass)\n\t\tcurpages = 10;\n\nwhich'd not make us actually take a relpages=0 into account for tables\nwithout inheritance. A lot of these tables never get 10+ pages long, so\nthe heuristic would always apply...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Jul 2019 12:23:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
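To illustrate the heuristic in the code Andres quotes: a childless table with relpages = 0 has its size estimate clamped to 10 pages, so a bogus zero never shows up in its plans; only tables with inheritance children (like the X_star family) are believed to be truly empty. A minimal sketch with made-up table names:

```sql
-- Never-analyzed and childless: relpages = 0 in pg_class, but the
-- planner clamps curpages to 10, hiding a zero reading.
CREATE TABLE t_childless (x int);
EXPLAIN SELECT * FROM t_childless;  -- row estimate reflects ~10 pages

-- Add a child: relhassubclass becomes true, the clamp no longer
-- applies, and relpages = 0 is taken at face value, as in the
-- failing X_star plans.
CREATE TABLE t_child () INHERITS (t_childless);
EXPLAIN SELECT * FROM ONLY t_childless;
```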
{
"msg_contents": "On Wed, 17 Jul 2019 at 07:23, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2019-07-15 21:12:32 -0400, Tom Lane wrote:\n> > But I bet that these tables forming\n> > an inheritance hierarchy (with multiple inheritance even) does\n> > have something to do with it somehow, because if this were a\n> > generic VACUUM bug surely we'd be seeing it elsewhere.\n>\n> It's possible that it's hidden in other cases, because of\n>\n> void\n> table_block_relation_estimate_size(Relation rel, int32 *attr_widths,\n> BlockNumber *pages, double *tuples,\n> double *allvisfrac,\n> Size overhead_bytes_per_tuple,\n> Size usable_bytes_per_page)\n> ...\n> * If the table has inheritance children, we don't apply this heuristic.\n> * Totally empty parent tables are quite common, so we should be willing\n> * to believe that they are empty.\n> */\n> if (curpages < 10 &&\n> relpages == 0 &&\n> !rel->rd_rel->relhassubclass)\n> curpages = 10;\n>\n> which'd not make us actually take a relpages=0 into account for tables\n> without inheritance. A lot of these tables never get 10+ pages long, so\n> the heuristic would always apply...\n\nSurely it can't be that since that just sets what *pages gets set to.\nTom mentioned that following was returning 0 pages and tuples:\n\n-- Temporary hack to investigate whether extra vacuum/analyze is happening\nselect relname, relpages, reltuples\nfrom pg_class\nwhere relname like '__star' order by relname;\n relname | relpages | reltuples\n---------+----------+-----------\n a_star | 1 | 3\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jul 2019 13:20:18 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Surely it can't be that since that just sets what *pages gets set to.\n> Tom mentioned that following was returning 0 pages and tuples:\n\n> -- Temporary hack to investigate whether extra vacuum/analyze is happening\n> select relname, relpages, reltuples\n> from pg_class\n> where relname like '__star' order by relname;\n> relname | relpages | reltuples\n> ---------+----------+-----------\n> a_star | 1 | 3\n\nI poked around a little and came up with a much simpler theory:\nVACUUM will not change relpages/reltuples if it does not scan any pages\n(cf. special case for tupcount_pages == 0 in heap_vacuum_rel, at line 343\nin HEAD's vacuumlazy.c). And, because sanity_check.sql's VACUUM is a\nplain unaggressive vacuum, all that it takes to make it skip over a_star's\none page is for somebody else to have a pin on that page. So a chance\ncollision with the bgwriter or checkpointer could cause the observed\nsymptom, not just for a_star but for the other single-page relations that\nare at stake here. Those pages are certainly dirty after create_misc.sql,\nso it's hardly implausible for one of these processes to be holding pin\nwhile trying to write out the buffer at the time sanity_check.sql runs.\n\nA brute-force way to fix this (or at least reduce the odds quite a bit)\nwould be to have sanity_check.sql issue a CHECKPOINT before its VACUUM,\nthereby guaranteeing that none of these pages are still in need of being\nwritten. Not sure how much that'd penalize the regression tests' runtime,\nor whether we'd have a loss of test coverage of VACUUM behaviors.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 11:53:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
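Spelled out, the brute-force idea is just a reordering in sanity_check.sql: flush dirty buffers first, so that neither the bgwriter nor the checkpointer is likely to still be holding a pin when the vacuum reaches these single-page tables. A sketch of the proposal (not a committed change):

```sql
-- Write everything out first; an unaggressive VACUUM should then find
-- no pinned pages to skip, so relpages/reltuples get set for the
-- X_star tables.
CHECKPOINT;
VACUUM;
```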
{
"msg_contents": "On 2019-07-17 11:53:48 -0400, Tom Lane wrote:\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > Surely it can't be that since that just sets what *pages gets set to.\n> > Tom mentioned that following was returning 0 pages and tuples:\n> \n> > -- Temporary hack to investigate whether extra vacuum/analyze is happening\n> > select relname, relpages, reltuples\n> > from pg_class\n> > where relname like '__star' order by relname;\n> > relname | relpages | reltuples\n> > ---------+----------+-----------\n> > a_star | 1 | 3\n> \n> I poked around a little and came up with a much simpler theory:\n> VACUUM will not change relpages/reltuples if it does not scan any pages\n> (cf. special case for tupcount_pages == 0 in heap_vacuum_rel, at line 343\n> in HEAD's vacuumlazy.c). And, because sanity_check.sql's VACUUM is a\n> plain unaggressive vacuum, all that it takes to make it skip over a_star's\n> one page is for somebody else to have a pin on that page.\n\nI wonder if we could set log_min_messages to DEBUG2 on occasionally\nfailing machines to test that theory. That ought to hit\n\n\tappendStringInfo(&buf, ngettext(\"Skipped %u page due to buffer pins, \",\n\t\t\t\t\t\t\t\t\t\"Skipped %u pages due to buffer pins, \",\n\t\t\t\t\t\t\t\t\tvacrelstats->pinskipped_pages),\n ...\n\tereport(elevel,\n\t\t\t(errmsg(\"\\\"%s\\\": found %.0f removable, %.0f nonremovable row versions in %u out of %u pages\",\n\t\t\t\t\tRelationGetRelationName(onerel),\n\t\t\t\t\ttups_vacuumed, num_tuples,\n\t\t\t\t\tvacrelstats->scanned_pages, nblocks),\n\t\t\t errdetail_internal(\"%s\", buf.data)));\n\n\n\n> So a chance\n> collision with the bgwriter or checkpointer could cause the observed\n> symptom, not just for a_star but for the other single-page relations that\n> are at stake here. Those pages are certainly dirty after create_misc.sql,\n> so it's hardly implausible for one of these processes to be holding pin\n> while trying to write out the buffer at the time sanity_check.sql runs.\n> \n> A brute-force way to fix this (or at least reduce the odds quite a bit)\n> would be to have sanity_check.sql issue a CHECKPOINT before its VACUUM,\n> thereby guaranteeing that none of these pages are still in need of being\n> written. Not sure how much that'd penalize the regression tests' runtime,\n> or whether we'd have a loss of test coverage of VACUUM behaviors.\n\nAlternatively we could VACUUM FREEZE the relevant tables? That then\nought to hit the blocking codepath in lazu_scan_heap()?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Jul 2019 16:12:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-07-17 11:53:48 -0400, Tom Lane wrote:\n>> A brute-force way to fix this (or at least reduce the odds quite a bit)\n>> would be to have sanity_check.sql issue a CHECKPOINT before its VACUUM,\n>> thereby guaranteeing that none of these pages are still in need of being\n>> written. Not sure how much that'd penalize the regression tests' runtime,\n>> or whether we'd have a loss of test coverage of VACUUM behaviors.\n\n> Alternatively we could VACUUM FREEZE the relevant tables? That then\n> ought to hit the blocking codepath in lazu_scan_heap()?\n\nIf we want to target just the X_star tables, I'd be inclined to do\nan ANALYZE instead. (Although that would create inheritance-tree\nstatistics, which might change some plan choices? Haven't tried.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Jul 2019 19:20:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
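A sketch of the targeted ANALYZE alternative; the exact statement list is an assumption (the fix actually committed comes later in the thread):

```sql
-- ANALYZE the inheritance family directly so relpages/reltuples are
-- set deterministically, independent of whether the earlier VACUUM
-- managed to scan every page.
ANALYZE a_star;
ANALYZE b_star;
ANALYZE c_star;
ANALYZE d_star;
ANALYZE e_star;
ANALYZE f_star;
```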
{
"msg_contents": "On Tue, Jul 16, 2019 at 12:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In the meantime, we've had *lots* of buildfarm failures in the\n> added pg_stat_all_tables query, which indicate that indeed the\n> stats collector mechanism isn't terribly reliable. But that\n> doesn't directly prove anything about the original problem,\n> since the planner doesn't look at stats collector data.\n\nI noticed that if you look at the list of failures of this type, there\nare often pairs of animals belonging to Andres that failed at the same\ntime. I wonder if he might be running a bunch of animals on one\nkernel, and need to increase net.core.rmem_max and\nnet.core.rmem_default (or maybe the write side variants, or both, or\nsomething like that).\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Jul 2019 11:59:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "On Wed, Jul 24, 2019 at 11:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Jul 16, 2019 at 12:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > In the meantime, we've had *lots* of buildfarm failures in the\n> > added pg_stat_all_tables query, which indicate that indeed the\n> > stats collector mechanism isn't terribly reliable. But that\n> > doesn't directly prove anything about the original problem,\n> > since the planner doesn't look at stats collector data.\n>\n> I noticed that if you look at the list of failures of this type, there\n> are often pairs of animals belonging to Andres that failed at the same\n> time. I wonder if he might be running a bunch of animals on one\n> kernel, and need to increase net.core.rmem_max and\n> net.core.rmem_default (or maybe the write side variants, or both, or\n> something like that).\n\nIn further support of that theory, here are the counts of 'stats'\nfailures (excluding bogus reports due to crashes) for the past 90\ndays:\n\n owner | animal | count\n-------------------------+--------------+-------\n andres-AT-anarazel.de | desmoxytes | 5\n andres-AT-anarazel.de | dragonet | 9\n andres-AT-anarazel.de | flaviventris | 1\n andres-AT-anarazel.de | idiacanthus | 5\n andres-AT-anarazel.de | komodoensis | 11\n andres-AT-anarazel.de | pogona | 1\n andres-AT-anarazel.de | serinus | 3\n andrew-AT-dunslane.net | lorikeet | 1\n buildfarm-AT-coelho.net | moonjelly | 1\n buildfarm-AT-coelho.net | seawasp | 17\n clarenceho-AT-gmail.com | mayfly | 2\n\nAndres's animals report the same hostname and run at the same time, so\nit'd be interesting to know what net.core.rmem_max is set to and\nwhether these problems go away if it's cranked up 10x higher or\nsomething. In a quick test I can see that make installcheck is\ncapable of sending a *lot* of 936 byte messages in the same\nmillisecond.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Aug 2019 17:58:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 24, 2019 at 11:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Tue, Jul 16, 2019 at 12:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> In the meantime, we've had *lots* of buildfarm failures in the\n>>> added pg_stat_all_tables query, which indicate that indeed the\n>>> stats collector mechanism isn't terribly reliable. But that\n>>> doesn't directly prove anything about the original problem,\n>>> since the planner doesn't look at stats collector data.\n\n>> I noticed that if you look at the list of failures of this type, there\n>> are often pairs of animals belonging to Andres that failed at the same\n>> time. I wonder if he might be running a bunch of animals on one\n>> kernel, and need to increase net.core.rmem_max and\n>> net.core.rmem_default (or maybe the write side variants, or both, or\n>> something like that).\n\n> Andres's animals report the same hostname and run at the same time, so\n> it'd be interesting to know what net.core.rmem_max is set to and\n> whether these problems go away if it's cranked up 10x higher or\n> something. In a quick test I can see that make installcheck is\n> capable of sending a *lot* of 936 byte messages in the same\n> millisecond.\n\nYeah. I think we've had quite enough of the stats-transmission-related\nfailures, and they're no longer proving anything about the original\nproblem. So I will go do what I proposed in mid-July and revert the\nstats queries, while keeping the reltuples/relpages check. (I'd kind\nof like to get more confirmation that the plan shape change is associated\nwith those fields reading as zeroes, before we decide what to do about the\nunderlying instability.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Aug 2019 18:41:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
},
{
"msg_contents": "I wrote:\n> Yeah. I think we've had quite enough of the stats-transmission-related\n> failures, and they're no longer proving anything about the original\n> problem. So I will go do what I proposed in mid-July and revert the\n> stats queries, while keeping the reltuples/relpages check. (I'd kind\n> of like to get more confirmation that the plan shape change is associated\n> with those fields reading as zeroes, before we decide what to do about the\n> underlying instability.)\n\nChristoph Berg's recent complaint reminded me to scan the buildfarm\ndatabase again for info related to this issue, and I found this:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=moonjelly&dt=2019-07-02%2017%3A17%3A02\n\nin which the failure diffs are\n\ndiff -U3 /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/test/regress/expected/select_parallel.out /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/select_parallel.out\n--- /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/test/regress/expected/select_parallel.out\t2019-05-21 19:17:03.472207619 +0200\n+++ /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/select_parallel.out\t2019-07-02 19:21:53.643095637 +0200\n@@ -98,7 +98,7 @@\n a_star | 1 | 3\n b_star | 1 | 4\n c_star | 1 | 4\n- d_star | 1 | 16\n+ d_star | 0 | 0\n e_star | 1 | 7\n f_star | 1 | 16\n (6 rows)\n@@ -130,7 +130,7 @@\n -----------------------------------------------------\n Finalize Aggregate\n -> Gather\n- Workers Planned: 1\n+ Workers Planned: 3\n -> Partial Aggregate\n -> Append\n -> Parallel Seq Scan on a_star\ndiff -U3 /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/stats.out\n--- /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/test/regress/expected/stats.out\t2019-05-21 19:17:03.472207619 +0200\n+++ /home/fabien/pg/build-farm-10/buildroot/HEAD/pgsql.build/src/bin/pg_upgrade/tmp_check/regress/results/stats.out\t2019-07-02 19:21:57.891105601 +0200\n@@ -208,7 +208,7 @@\n a_star | 1 | 3\n b_star | 1 | 4\n c_star | 1 | 4\n- d_star | 1 | 16\n+ d_star | 0 | 0\n e_star | 1 | 7\n f_star | 1 | 16\n (6 rows)\n\nWhile this fails to show the plan ordering difference we were looking for,\nit does show that relpages/reltuples can sometimes read as zeroes for one\nof these tables. (It also indicates that at least some of the\nworker-count instability we've seen might trace to this same issue.)\n\nThat's the only related failure I could find in the last three months,\nwhich makes me think that we've changed the regression tests enough that\nthe chance timing needed to cause this is (once again) very improbable.\nSo I'm prepared to give up waiting for more buildfarm evidence.\n\nI propose to finish reverting f03a9ca43 in HEAD, and instead install\nthe attached in HEAD and v12. This follows the upthread suggestions\nfrom Thomas and myself to use ANALYZE to ensure that these tables\nhave the expected relpages/reltuples entries.\n\nIn principle, we might need this further back than v12, but without having\nseen a test failure in the wild I'm not tempted to back-patch further.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Sep 2019 13:14:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Parallel Append subplan order instability on aye-aye"
}
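A minimal sketch of the ANALYZE-based stabilization proposed above, assuming the regression *_star tables (the committed test may differ in detail). ANALYZE writes relpages/reltuples directly into pg_class, so the subsequent check no longer depends on stats-collector timing:

-- Guarantee nonzero relpages/reltuples before inspecting them.
ANALYZE a_star, b_star, c_star, d_star, e_star, f_star;
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname ~ '^[a-f]_star$'
ORDER BY relname;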
] |
[
{
"msg_contents": "Hello,\n\nThis week I upgraded one of my large(2.8TB), high-volume databases from 9\nto 11. The upgrade itself went fine. About two days later, we unexpectedly\nhit transaction ID wraparound. What was perplexing about this was that the\nage of our oldest `datfrozenxid` was only 1.2 billion - far away from where\nI'd expect a wraparound. Curiously, the wraparound error referred to a\nmysterious database of `OID 0`:\n\nUPDATE ERROR: database is not accepting commands to avoid wraparound data\nloss in database with OID 0\n\nWe were able to recover after a few hours by greatly speeding up our vacuum\non our largest table.\n\nIn a followup investigation I uncovered the reason we hit the wraparound so\nearly, and also the cause of the mysterious OID 0 message. When pg_upgrade\nexecutes, it calls pg_resetwal to set the next transaction ID. Within\npg_resetwal is the following code:\nhttps://github.com/postgres/postgres/blob/6cd404b344f7e27f4d64555bb133f18a758fe851/src/bin/pg_resetwal/pg_resetwal.c#L440-L450\n\nThis sets the controldata to have a fake database (OID 0) on the brink of\ntransaction wraparound. Specifically, after pg_upgrade is ran, wraparound\nwill occur within around 140 million transactions (provided the autovacuum\ndoesn't finish first). I confirmed by analyzing our controldata before and\nafter the upgrade that this was the cause of our early wraparound.\n\nGiven the size and heavy volume of our database, we tend to complete a\nvacuum in the time it takes around 250 million transactions to execute.\nWith our tunings this tends to be rather safe and we stay well away from\nthe wraparound point under normal circumstances.\n\nUnfortunately we had no obvious way of knowing that the upgrade would place\nour database upon the brink of wraparound. In fact, since this info is only\npersisted in the controldata, the only way to discover this state to my\nknowledge would be to inspect the controldata itself. Other standard means\nof monitoring for wraparound risk involve watching `pg_database` or\n`pg_class`, which in this case tells us nothing helpful since the fake\ndatabase present in the controldata is not represented in those stats.\n\nI'd like to suggest that either the pg_upgrade->pg_resetwal behaviour be\nadjusted, or the pg_upgrade documentation highlight this potential\nscenario. I'm happy to contribute code and/or documentation pull requests\nto accomplish this.\n\nThank you,\nJason Harvey\nreddit.com\n\nHello,This week I upgraded one of my large(2.8TB), high-volume databases from 9 to 11. The upgrade itself went fine. About two days later, we unexpectedly hit transaction ID wraparound. What was perplexing about this was that the age of our oldest `datfrozenxid` was only 1.2 billion - far away from where I'd expect a wraparound. Curiously, the wraparound error referred to a mysterious database of `OID 0`:UPDATE ERROR: database is not accepting commands to avoid wraparound data loss in database with OID 0We were able to recover after a few hours by greatly speeding up our vacuum on our largest table.In a followup investigation I uncovered the reason we hit the wraparound so early, and also the cause of the mysterious OID 0 message. When pg_upgrade executes, it calls pg_resetwal to set the next transaction ID. 
Within pg_resetwal is the following code: https://github.com/postgres/postgres/blob/6cd404b344f7e27f4d64555bb133f18a758fe851/src/bin/pg_resetwal/pg_resetwal.c#L440-L450This sets the controldata to have a fake database (OID 0) on the brink of transaction wraparound. Specifically, after pg_upgrade is ran, wraparound will occur within around 140 million transactions (provided the autovacuum doesn't finish first). I confirmed by analyzing our controldata before and after the upgrade that this was the cause of our early wraparound.Given the size and heavy volume of our database, we tend to complete a vacuum in the time it takes around 250 million transactions to execute. With our tunings this tends to be rather safe and we stay well away from the wraparound point under normal circumstances.Unfortunately we had no obvious way of knowing that the upgrade would place our database upon the brink of wraparound. In fact, since this info is only persisted in the controldata, the only way to discover this state to my knowledge would be to inspect the controldata itself. Other standard means of monitoring for wraparound risk involve watching `pg_database` or `pg_class`, which in this case tells us nothing helpful since the fake database present in the controldata is not represented in those stats.I'd like to suggest that either the pg_upgrade->pg_resetwal behaviour be adjusted, or the pg_upgrade documentation highlight this potential scenario. I'm happy to contribute code and/or documentation pull requests to accomplish this.Thank you,Jason Harveyreddit.com",
"msg_date": "Mon, 20 May 2019 02:10:17 -0800",
"msg_from": "Jason Harvey <jason@reddit.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
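A sketch of the standard monitoring Jason refers to, which is blind to the fake entry because that entry lives only in pg_control:

-- Per-database age of the oldest unfrozen XID; wraparound protection
-- engages as this approaches ~2.1 billion. The pg_resetwal fake
-- database (OID 0) never shows up here.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;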
{
"msg_contents": "On Mon, May 20, 2019 at 3:10 AM Jason Harvey <jason@reddit.com> wrote:\n> This week I upgraded one of my large(2.8TB), high-volume databases from 9 to 11. The upgrade itself went fine. About two days later, we unexpectedly hit transaction ID wraparound. What was perplexing about this was that the age of our oldest `datfrozenxid` was only 1.2 billion - far away from where I'd expect a wraparound. Curiously, the wraparound error referred to a mysterious database of `OID 0`:\n>\n> UPDATE ERROR: database is not accepting commands to avoid wraparound data loss in database with OID 0\n>\n> We were able to recover after a few hours by greatly speeding up our vacuum on our largest table.\n>\n> In a followup investigation I uncovered the reason we hit the wraparound so early, and also the cause of the mysterious OID 0 message. When pg_upgrade executes, it calls pg_resetwal to set the next transaction ID. Within pg_resetwal is the following code: https://github.com/postgres/postgres/blob/6cd404b344f7e27f4d64555bb133f18a758fe851/src/bin/pg_resetwal/pg_resetwal.c#L440-L450\n>\n> This sets the controldata to have a fake database (OID 0) on the brink of transaction wraparound. Specifically, after pg_upgrade is ran, wraparound will occur within around 140 million transactions (provided the autovacuum doesn't finish first). I confirmed by analyzing our controldata before and after the upgrade that this was the cause of our early wraparound.\n>\n> Given the size and heavy volume of our database, we tend to complete a vacuum in the time it takes around 250 million transactions to execute. With our tunings this tends to be rather safe and we stay well away from the wraparound point under normal circumstances.\n\nThis does seem like an unfriendly behavior. Moving the thread over to\nthe -hackers list for further discussion...\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 May 2019 15:23:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
{
"msg_contents": "On Tue, May 21, 2019 at 03:23:00PM -0700, Peter Geoghegan wrote:\n> On Mon, May 20, 2019 at 3:10 AM Jason Harvey <jason@reddit.com> wrote:\n> > This week I upgraded one of my large(2.8TB), high-volume databases from 9 to 11. The upgrade itself went fine. About two days later, we unexpectedly hit transaction ID wraparound. What was perplexing about this was that the age of our oldest `datfrozenxid` was only 1.2 billion - far away from where I'd expect a wraparound. Curiously, the wraparound error referred to a mysterious database of `OID 0`:\n> >\n> > UPDATE ERROR: database is not accepting commands to avoid wraparound data loss in database with OID 0\n\nThat's bad.\n\n> > We were able to recover after a few hours by greatly speeding up our vacuum on our largest table.\n\nFor what it's worth, a quicker workaround is to VACUUM FREEZE any database,\nhowever small. That forces a vac_truncate_clog(), which recomputes the wrap\npoint from pg_database.datfrozenxid values. This demonstrates the workaround:\n\n--- a/src/bin/pg_upgrade/test.sh\n+++ b/src/bin/pg_upgrade/test.sh\n@@ -248,7 +248,10 @@ case $testhost in\n esac\n \n pg_dumpall --no-sync -f \"$temp_root\"/dump2.sql || pg_dumpall2_status=$?\n+pg_controldata \"${PGDATA}\"\n+vacuumdb -F template1\n pg_ctl -m fast stop\n+pg_controldata \"${PGDATA}\"\n \n if [ -n \"$pg_dumpall2_status\" ]; then\n \techo \"pg_dumpall of post-upgrade database cluster failed\"\n\n> > In a followup investigation I uncovered the reason we hit the wraparound so early, and also the cause of the mysterious OID 0 message. When pg_upgrade executes, it calls pg_resetwal to set the next transaction ID. Within pg_resetwal is the following code: https://github.com/postgres/postgres/blob/6cd404b344f7e27f4d64555bb133f18a758fe851/src/bin/pg_resetwal/pg_resetwal.c#L440-L450\n\npg_upgrade should set oldestXID to the same value as the source cluster or set\nit like vac_truncate_clog() would set it. Today's scheme is usually too\npessimistic, but it can be too optimistic if the source cluster was on the\nbring of wrap. Thanks for the report.\n\n\n",
"msg_date": "Sat, 15 Jun 2019 11:37:59 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
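The same workaround in SQL form, run from a session in any database, however small (Noah's demo drives it through vacuumdb -F template1):

-- A database-wide VACUUM FREEZE ends in vac_truncate_clog(), which
-- recomputes the cluster's wrap point from the real
-- pg_database.datfrozenxid values instead of pg_resetwal's guess.
VACUUM FREEZE;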
{
"msg_contents": "Hi,\n\nOn 2019-06-15 11:37:59 -0700, Noah Misch wrote:\n> On Tue, May 21, 2019 at 03:23:00PM -0700, Peter Geoghegan wrote:\n> > On Mon, May 20, 2019 at 3:10 AM Jason Harvey <jason@reddit.com> wrote:\n> > > This week I upgraded one of my large(2.8TB), high-volume databases from 9 to 11. The upgrade itself went fine. About two days later, we unexpectedly hit transaction ID wraparound. What was perplexing about this was that the age of our oldest `datfrozenxid` was only 1.2 billion - far away from where I'd expect a wraparound. Curiously, the wraparound error referred to a mysterious database of `OID 0`:\n> > >\n> > > UPDATE ERROR: database is not accepting commands to avoid wraparound data loss in database with OID 0\n> \n> That's bad.\n\nYea. The code triggering it in pg_resetwal is bogus as far as I can\ntell. That pg_upgrade triggers it makes this quite bad.\n\nI just hit issues related to it when writing a wraparound handling\ntest. Peter remembered this issue (how?)...\n\nEspecially before 13 (inserts triggering autovacuum) it is quite common\nto have tables that only ever get vacuumed due to anti-wraparound\nvacuums. And it's common for larger databases to increase\nautovacuum_freeze_max_age. Which makes it fairly likely for this to\nguess an oldestXid value that's *newer* than an accurate one. Since\noldestXid is used in a few important-ish places (like triggering\nvacuums, and in 14 also some snapshot related logic) I think that's bad.\n\nThe relevant code:\n\n if (set_xid != 0)\n {\n ControlFile.checkPointCopy.nextXid =\n FullTransactionIdFromEpochAndXid(EpochFromFullTransactionId(ControlFile.checkPointCopy.nextXid),\n set_xid);\n\n /*\n * For the moment, just set oldestXid to a value that will force\n * immediate autovacuum-for-wraparound. It's not clear whether adding\n * user control of this is useful, so let's just do something that's\n * reasonably safe. The magic constant here corresponds to the\n * maximum allowed value of autovacuum_freeze_max_age.\n */\n ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n if (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n ControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n ControlFile.checkPointCopy.oldestXidDB = InvalidOid;\n }\n\nOriginally from:\n\ncommit 25ec228ef760eb91c094cc3b6dea7257cc22ffb5\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2009-08-31 02:23:23 +0000\n\n Track the current XID wrap limit (or more accurately, the oldest unfrozen\n XID) in checkpoint records. This eliminates the need to recompute the value\n from scratch during database startup, which is one of the two remaining\n reasons for the flatfile code to exist. It should also simplify life for\n hot-standby operation.\n\n\nI think we should remove the oldestXid guessing logic, and expose it as\nan explicit option. I think it's important that pg_upgrade sets an\naccurate value. Probably not worth caring about oldestXidDB though?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Apr 2021 16:42:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
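The values Andres is describing can be inspected from SQL on 9.6 and later via the documented pg_control_checkpoint() function, without shelling out to pg_controldata:

-- After a pre-fix pg_upgrade this shows oldest_xid roughly 2 billion
-- behind next_xid, with oldest_xid_dbid = 0 (the mysterious OID 0).
SELECT next_xid, oldest_xid, oldest_xid_dbid
FROM pg_control_checkpoint();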
{
"msg_contents": "On Fri, Apr 23, 2021 at 04:42:56PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-06-15 11:37:59 -0700, Noah Misch wrote:\n> > On Tue, May 21, 2019 at 03:23:00PM -0700, Peter Geoghegan wrote:\n> > > On Mon, May 20, 2019 at 3:10 AM Jason Harvey <jason@reddit.com> wrote:\n> > > > This week I upgraded one of my large(2.8TB), high-volume databases from 9 to 11. The upgrade itself went fine. About two days later, we unexpectedly hit transaction ID wraparound. What was perplexing about this was that the age of our oldest `datfrozenxid` was only 1.2 billion - far away from where I'd expect a wraparound. Curiously, the wraparound error referred to a mysterious database of `OID 0`:\n> > > >\n> > > > UPDATE ERROR: database is not accepting commands to avoid wraparound data loss in database with OID 0\n> > \n> > That's bad.\n> \n> Yea. The code triggering it in pg_resetwal is bogus as far as I can\n> tell. That pg_upgrade triggers it makes this quite bad.\n> \n> I just hit issues related to it when writing a wraparound handling\n> test. Peter remembered this issue (how?)...\n> \n> Especially before 13 (inserts triggering autovacuum) it is quite common\n> to have tables that only ever get vacuumed due to anti-wraparound\n> vacuums. And it's common for larger databases to increase\n> autovacuum_freeze_max_age. Which makes it fairly likely for this to\n> guess an oldestXid value that's *newer* than an accurate one. Since\n> oldestXid is used in a few important-ish places (like triggering\n> vacuums, and in 14 also some snapshot related logic) I think that's bad.\n> \n> The relevant code:\n> \n> if (set_xid != 0)\n> {\n> ControlFile.checkPointCopy.nextXid =\n> FullTransactionIdFromEpochAndXid(EpochFromFullTransactionId(ControlFile.checkPointCopy.nextXid),\n> set_xid);\n> \n> /*\n> * For the moment, just set oldestXid to a value that will force\n> * immediate autovacuum-for-wraparound. It's not clear whether adding\n> * user control of this is useful, so let's just do something that's\n> * reasonably safe. The magic constant here corresponds to the\n> * maximum allowed value of autovacuum_freeze_max_age.\n> */\n> ControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n> if (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n> ControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n> ControlFile.checkPointCopy.oldestXidDB = InvalidOid;\n> }\n> \n> Originally from:\n> \n> commit 25ec228ef760eb91c094cc3b6dea7257cc22ffb5\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: 2009-08-31 02:23:23 +0000\n> \n> Track the current XID wrap limit (or more accurately, the oldest unfrozen\n> XID) in checkpoint records. This eliminates the need to recompute the value\n> from scratch during database startup, which is one of the two remaining\n> reasons for the flatfile code to exist. It should also simplify life for\n> hot-standby operation.\n> \n> I think we should remove the oldestXid guessing logic, and expose it as\n> an explicit option. I think it's important that pg_upgrade sets an\n> accurate value. 
Probably not worth caring about oldestXidDB though?\n\nThis (combination of) thread(s) seems relevant.\n\nSubject: pg_upgrade failing for 200+ million Large Objects\nhttps://www.postgresql.org/message-id/flat/12601596dbbc4c01b86b4ac4d2bd4d48%40EX13D05UWC001.ant.amazon.com\nhttps://www.postgresql.org/message-id/flat/a9f9376f1c3343a6bb319dce294e20ac%40EX13D05UWC001.ant.amazon.com\nhttps://www.postgresql.org/message-id/flat/cc089cc3-fc43-9904-fdba-d830d8222145%40enterprisedb.com#3eec85391c6076a4913e96a86fece75e\n> Allows the user to provide a constant via pg_upgrade command-line, that\n>overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n>(window of) Transaction IDs available for pg_upgrade to complete.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 23 Apr 2021 19:28:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-23 19:28:27 -0500, Justin Pryzby wrote:\n> This (combination of) thread(s) seems relevant.\n> \n> Subject: pg_upgrade failing for 200+ million Large Objects\n> https://www.postgresql.org/message-id/flat/12601596dbbc4c01b86b4ac4d2bd4d48%40EX13D05UWC001.ant.amazon.com\n> https://www.postgresql.org/message-id/flat/a9f9376f1c3343a6bb319dce294e20ac%40EX13D05UWC001.ant.amazon.com\n> https://www.postgresql.org/message-id/flat/cc089cc3-fc43-9904-fdba-d830d8222145%40enterprisedb.com#3eec85391c6076a4913e96a86fece75e\n\nHuh. Thanks for digging these up.\n\n\n> > Allows the user to provide a constant via pg_upgrade command-line, that\n> >overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n> >(window of) Transaction IDs available for pg_upgrade to complete.\n\nThat seems the entirely the wrong approach to me, buying further into\nthe broken idea of inventing random wrong values for oldestXid.\n\nWe drive important things like the emergency xid limits off oldestXid. On\ndatabases with tables that are older than ~147million xids (i.e. not even\naffected by the default autovacuum_freeze_max_age) the current constant leads\nto setting the oldestXid to a value *in the future*/wrapped around. Any\ndifferent different constant (or pg_upgrade parameter) will do that too in\nother scenarios.\n\nAs far as I can tell there is precisely *no* correct behaviour here other than\nexactly copying the oldestXid limit from the source database.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Apr 2021 18:00:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
{
"msg_contents": "Hi,\n\nOn 4/24/21 3:00 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-04-23 19:28:27 -0500, Justin Pryzby wrote:\n>> This (combination of) thread(s) seems relevant.\n>>\n>> Subject: pg_upgrade failing for 200+ million Large Objects\n>> https://www.postgresql.org/message-id/flat/12601596dbbc4c01b86b4ac4d2bd4d48%40EX13D05UWC001.ant.amazon.com\n>> https://www.postgresql.org/message-id/flat/a9f9376f1c3343a6bb319dce294e20ac%40EX13D05UWC001.ant.amazon.com\n>> https://www.postgresql.org/message-id/flat/cc089cc3-fc43-9904-fdba-d830d8222145%40enterprisedb.com#3eec85391c6076a4913e96a86fece75e\n> Huh. Thanks for digging these up.\n>\n>\n>>> Allows the user to provide a constant via pg_upgrade command-line, that\n>>> overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n>>> (window of) Transaction IDs available for pg_upgrade to complete.\n> That seems the entirely the wrong approach to me, buying further into\n> the broken idea of inventing random wrong values for oldestXid.\n>\n> We drive important things like the emergency xid limits off oldestXid. On\n> databases with tables that are older than ~147million xids (i.e. not even\n> affected by the default autovacuum_freeze_max_age) the current constant leads\n> to setting the oldestXid to a value *in the future*/wrapped around. Any\n> different different constant (or pg_upgrade parameter) will do that too in\n> other scenarios.\n>\n> As far as I can tell there is precisely *no* correct behaviour here other than\n> exactly copying the oldestXid limit from the source database.\n>\nPlease find attached a patch proposal doing so: it adds a new (- u) \nparameter to pg_resetwal that allows to specify the oldest unfrozen XID \nto set.\nThen this new parameter is being used in pg_upgrade to copy the source \nLatest checkpoint's oldestXID.\n\nQuestions:\n\n * Should we keep the old behavior in case -x is being used without -u?\n (The proposed patch does not set an arbitrary oldestXID anymore in\n case -x is used.)\n * Also shouldn't we ensure that the xid provided with -x or -u is >=\n FirstNormalTransactionId (Currently the only check is that it is # 0)?\n\nI'm adding this patch to the commitfest.\n\nBertrand",
"msg_date": "Tue, 4 May 2021 10:17:49 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade can result in early wraparound on databases with high\n transaction load"
},
{
"msg_contents": "Hi,\n\nOn 5/4/21 10:17 AM, Drouvot, Bertrand wrote:\n>\n> Hi,\n>\n> On 4/24/21 3:00 AM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-04-23 19:28:27 -0500, Justin Pryzby wrote:\n>>> This (combination of) thread(s) seems relevant.\n>>>\n>>> Subject: pg_upgrade failing for 200+ million Large Objects\n>>> https://www.postgresql.org/message-id/flat/12601596dbbc4c01b86b4ac4d2bd4d48%40EX13D05UWC001.ant.amazon.com\n>>> https://www.postgresql.org/message-id/flat/a9f9376f1c3343a6bb319dce294e20ac%40EX13D05UWC001.ant.amazon.com\n>>> https://www.postgresql.org/message-id/flat/cc089cc3-fc43-9904-fdba-d830d8222145%40enterprisedb.com#3eec85391c6076a4913e96a86fece75e\n>> Huh. Thanks for digging these up.\n>>\n>>\n>>>> Allows the user to provide a constant via pg_upgrade command-line, that\n>>>> overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n>>>> (window of) Transaction IDs available for pg_upgrade to complete.\n>> That seems the entirely the wrong approach to me, buying further into\n>> the broken idea of inventing random wrong values for oldestXid.\n>>\n>> We drive important things like the emergency xid limits off oldestXid. On\n>> databases with tables that are older than ~147million xids (i.e. not even\n>> affected by the default autovacuum_freeze_max_age) the current constant leads\n>> to setting the oldestXid to a value *in the future*/wrapped around. Any\n>> different different constant (or pg_upgrade parameter) will do that too in\n>> other scenarios.\n>>\n>> As far as I can tell there is precisely *no* correct behaviour here other than\n>> exactly copying the oldestXid limit from the source database.\n>>\n> Please find attached a patch proposal doing so: it adds a new (- u) \n> parameter to pg_resetwal that allows to specify the oldest unfrozen \n> XID to set.\n> Then this new parameter is being used in pg_upgrade to copy the source \n> Latest checkpoint's oldestXID.\n>\n> Questions:\n>\n> * Should we keep the old behavior in case -x is being used without\n> -u? (The proposed patch does not set an arbitrary oldestXID\n> anymore in case -x is used.)\n> * Also shouldn't we ensure that the xid provided with -x or -u is >=\n> FirstNormalTransactionId (Currently the only check is that it is # 0)?\n>\n\nCopy/pasting Andres feedback (Thanks Andres for this feedback) on those \nquestions from another thread [1].\n\n > I was also wondering if:\n >\n > * We should keep the old behavior in case pg_resetwal -x is being used\n > without -u?
(The proposed patch does not set an arbitrary oldestXID\n > anymore in
case -x is used)\n\nAndres: I don't think we should. I don't see anything in the old \nbehaviour worth\nmaintaining.\n\n > * We should ensure that the xid provided with -x or -u is\n > >=
FirstNormalTransactionId (Currently the only check is that it is\n > # 0)?\n\nAndres: Applying TransactionIdIsNormal() seems like a good idea.\n\n=> I am attaching a new version that makes use of \nTransactionIdIsNormal() checks.\n\nAndres: I think it's important to verify that the xid provided with -x \nis within a reasonable range of the oldest xid.\n\n=> What do you mean by \"a reasonable range\"?\n\nThanks\n\nBertrand\n\n[1]: \nhttps://www.postgresql.org/message-id/20210517185646.pwe4klaufwmdhe2a%40alap3.anarazel.de",
"msg_date": "Tue, 18 May 2021 13:26:38 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "\nThis patch has been applied back to 9.6 and will appear in the next\nminor release.\n\n---------------------------------------------------------------------------\n\nOn Tue, May 18, 2021 at 01:26:38PM +0200, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 5/4/21 10:17 AM, Drouvot, Bertrand wrote:\n> \n> \n> Hi,\n> \n> On 4/24/21 3:00 AM, Andres Freund wrote:\n> \n> Hi,\n> \n> On 2021-04-23 19:28:27 -0500, Justin Pryzby wrote:\n> \n> This (combination of) thread(s) seems relevant.\n> \n> Subject: pg_upgrade failing for 200+ million Large Objects\n> https://www.postgresql.org/message-id/flat/12601596dbbc4c01b86b4ac4d2bd4d48%40EX13D05UWC001.ant.amazon.com\n> https://www.postgresql.org/message-id/flat/a9f9376f1c3343a6bb319dce294e20ac%40EX13D05UWC001.ant.amazon.com\n> https://www.postgresql.org/message-id/flat/cc089cc3-fc43-9904-fdba-d830d8222145%40enterprisedb.com#3eec85391c6076a4913e96a86fece75e\n> \n> Huh. Thanks for digging these up.\n> \n> \n> \n> Allows the user to provide a constant via pg_upgrade command-line, that\n> overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n> (window of) Transaction IDs available for pg_upgrade to complete.\n> \n> That seems the entirely the wrong approach to me, buying further into\n> the broken idea of inventing random wrong values for oldestXid.\n> \n> We drive important things like the emergency xid limits off oldestXid. On\n> databases with tables that are older than ~147million xids (i.e. not even\n> affected by the default autovacuum_freeze_max_age) the current constant leads\n> to setting the oldestXid to a value *in the future*/wrapped around. Any\n> different different constant (or pg_upgrade parameter) will do that too in\n> other scenarios.\n> \n> As far as I can tell there is precisely *no* correct behaviour here other than\n> exactly copying the oldestXid limit from the source database.\n> \n> \n> Please find attached a patch proposal doing so: it adds a new (- u)\n> parameter to pg_resetwal that allows to specify the oldest unfrozen XID to\n> set.\n> Then this new parameter is being used in pg_upgrade to copy the source\n> Latest checkpoint's oldestXID.\n> \n> Questions:\n> \n> □ Should we keep the old behavior in case -x is being used without -u?\n> (The proposed patch does not set an arbitrary oldestXID anymore in case\n> -x is used.)\n> □ Also shouldn't we ensure that the xid provided with -x or -u is >=\n> FirstNormalTransactionId (Currently the only check is that it is # 0)?\n> \n> \n> Copy/pasting Andres feedback (Thanks Andres for this feedback) on those\n> questions from another thread [1].\n> \n> > I was also wondering if:\n> >\n> > * We should keep the old behavior in case pg_resetwal -x is being used\n> > without -u?
(The proposed patch does not set an arbitrary oldestXID\n> > anymore in
case -x is used)\n> \n> Andres: I don't think we should. I don't see anything in the old behaviour\n> worth\n> maintaining.\n> \n> > * We should ensure that the xid provided with -x or -u is\n> > >=
FirstNormalTransactionId (Currently the only check is that it is\n> > # 0)?\n> \n> Andres: Applying TransactionIdIsNormal() seems like a good idea.\n> \n> => I am attaching a new version that makes use of TransactionIdIsNormal()\n> checks.\n> \n> Andres: I think it's important to verify that the xid provided with -x is\n> within a reasonable range of the oldest xid.\n> \n> => What do you mean by \"a reasonable range\"?\n> \n> Thanks\n> \n> Bertrand\n> \n> [1]: https://www.postgresql.org/message-id/\n> 20210517185646.pwe4klaufwmdhe2a%40alap3.anarazel.de\n> \n> \n> \n> \n\n> src/bin/pg_resetwal/pg_resetwal.c | 65 ++++++++++++++++++++++-----------------\n> src/bin/pg_upgrade/controldata.c | 17 +++++++++-\n> src/bin/pg_upgrade/pg_upgrade.c | 6 ++++\n> src/bin/pg_upgrade/pg_upgrade.h | 1 +\n> 4 files changed, 60 insertions(+), 29 deletions(-)\n> diff --git a/src/bin/pg_resetwal/pg_resetwal.c b/src/bin/pg_resetwal/pg_resetwal.c\n> index 805dafef07..5e864760ed 100644\n> --- a/src/bin/pg_resetwal/pg_resetwal.c\n> +++ b/src/bin/pg_resetwal/pg_resetwal.c\n> @@ -65,6 +65,7 @@ static bool guessed = false;\t/* T if we had to guess at any values */\n> static const char *progname;\n> static uint32 set_xid_epoch = (uint32) -1;\n> static TransactionId set_xid = 0;\n> +static TransactionId set_oldest_unfrozen_xid = 0;\n> static TransactionId set_oldest_commit_ts_xid = 0;\n> static TransactionId set_newest_commit_ts_xid = 0;\n> static Oid\tset_oid = 0;\n> @@ -102,6 +103,7 @@ main(int argc, char *argv[])\n> \t\t{\"next-oid\", required_argument, NULL, 'o'},\n> \t\t{\"multixact-offset\", required_argument, NULL, 'O'},\n> \t\t{\"next-transaction-id\", required_argument, NULL, 'x'},\n> +\t\t{\"oldest-transaction-id\", required_argument, NULL, 'u'},\n> \t\t{\"wal-segsize\", required_argument, NULL, 1},\n> \t\t{NULL, 0, NULL, 0}\n> \t};\n> @@ -135,7 +137,7 @@ main(int argc, char *argv[])\n> \t}\n> \n> \n> -\twhile ((c = getopt_long(argc, argv, \"c:D:e:fl:m:no:O:x:\", long_options, NULL)) != -1)\n> +\twhile ((c = getopt_long(argc, argv, \"c:D:e:fl:m:no:O:x:u:\", long_options, NULL)) != -1)\n> \t{\n> \t\tswitch (c)\n> \t\t{\n> @@ -176,9 +178,24 @@ main(int argc, char *argv[])\n> \t\t\t\t\tfprintf(stderr, _(\"Try \\\"%s --help\\\" for more information.\\n\"), progname);\n> \t\t\t\t\texit(1);\n> \t\t\t\t}\n> -\t\t\t\tif (set_xid == 0)\n> +\t\t\t\tif (!TransactionIdIsNormal(set_xid))\n> \t\t\t\t{\n> -\t\t\t\t\tpg_log_error(\"transaction ID (-x) must not be 0\");\n> +\t\t\t\t\tpg_log_error(\"transaction ID (-x) must be greater or equal to %u\", FirstNormalTransactionId);\n> +\t\t\t\t\texit(1);\n> +\t\t\t\t}\n> +\t\t\t\tbreak;\n> +\n> +\t\t\tcase 'u':\n> +\t\t\t\tset_oldest_unfrozen_xid = strtoul(optarg, &endptr, 0);\n> +\t\t\t\tif (endptr == optarg || *endptr != '\\0')\n> +\t\t\t\t{\n> +\t\t\t\t\tpg_log_error(\"invalid argument for option %s\", \"-u\");\n> +\t\t\t\t\tfprintf(stderr, _(\"Try \\\"%s --help\\\" for more information.\\n\"), progname);\n> +\t\t\t\t\texit(1);\n> +\t\t\t\t}\n> +\t\t\t\tif (!TransactionIdIsNormal(set_oldest_unfrozen_xid))\n> +\t\t\t\t{\n> +\t\t\t\t\tpg_log_error(\"oldest unfrozen transaction ID (-u) must be greater or equal to %u\", FirstNormalTransactionId);\n> \t\t\t\t\texit(1);\n> \t\t\t\t}\n> \t\t\t\tbreak;\n> @@ -429,21 +446,12 @@ main(int argc, char *argv[])\n> \t\t\t\t\t\t\t\t\t\t\t XidFromFullTransactionId(ControlFile.checkPointCopy.nextXid));\n> \n> \tif (set_xid != 0)\n> -\t{\n> \t\tControlFile.checkPointCopy.nextXid =\n> 
\t\t\tFullTransactionIdFromEpochAndXid(EpochFromFullTransactionId(ControlFile.checkPointCopy.nextXid),\n> \t\t\t\t\t\t\t\t\t\t\t set_xid);\n> \n> -\t\t/*\n> -\t\t * For the moment, just set oldestXid to a value that will force\n> -\t\t * immediate autovacuum-for-wraparound. It's not clear whether adding\n> -\t\t * user control of this is useful, so let's just do something that's\n> -\t\t * reasonably safe. The magic constant here corresponds to the\n> -\t\t * maximum allowed value of autovacuum_freeze_max_age.\n> -\t\t */\n> -\t\tControlFile.checkPointCopy.oldestXid = set_xid - 2000000000;\n> -\t\tif (ControlFile.checkPointCopy.oldestXid < FirstNormalTransactionId)\n> -\t\t\tControlFile.checkPointCopy.oldestXid += FirstNormalTransactionId;\n> +\tif (set_oldest_unfrozen_xid != 0) {\n> +\t\tControlFile.checkPointCopy.oldestXid = set_oldest_unfrozen_xid;\n> \t\tControlFile.checkPointCopy.oldestXidDB = InvalidOid;\n> \t}\n> \n> @@ -1209,20 +1217,21 @@ usage(void)\n> \tprintf(_(\"Usage:\\n %s [OPTION]... DATADIR\\n\\n\"), progname);\n> \tprintf(_(\"Options:\\n\"));\n> \tprintf(_(\" -c, --commit-timestamp-ids=XID,XID\\n\"\n> -\t\t\t \" set oldest and newest transactions bearing\\n\"\n> -\t\t\t \" commit timestamp (zero means no change)\\n\"));\n> -\tprintf(_(\" [-D, --pgdata=]DATADIR data directory\\n\"));\n> -\tprintf(_(\" -e, --epoch=XIDEPOCH set next transaction ID epoch\\n\"));\n> -\tprintf(_(\" -f, --force force update to be done\\n\"));\n> -\tprintf(_(\" -l, --next-wal-file=WALFILE set minimum starting location for new WAL\\n\"));\n> -\tprintf(_(\" -m, --multixact-ids=MXID,MXID set next and oldest multitransaction ID\\n\"));\n> -\tprintf(_(\" -n, --dry-run no update, just show what would be done\\n\"));\n> -\tprintf(_(\" -o, --next-oid=OID set next OID\\n\"));\n> -\tprintf(_(\" -O, --multixact-offset=OFFSET set next multitransaction offset\\n\"));\n> -\tprintf(_(\" -V, --version output version information, then exit\\n\"));\n> -\tprintf(_(\" -x, --next-transaction-id=XID set next transaction ID\\n\"));\n> -\tprintf(_(\" --wal-segsize=SIZE size of WAL segments, in megabytes\\n\"));\n> -\tprintf(_(\" -?, --help show this help, then exit\\n\"));\n> +\t\t\t \" set oldest and newest transactions bearing\\n\"\n> +\t\t\t \" commit timestamp (zero means no change)\\n\"));\n> +\tprintf(_(\" [-D, --pgdata=]DATADIR data directory\\n\"));\n> +\tprintf(_(\" -e, --epoch=XIDEPOCH set next transaction ID epoch\\n\"));\n> +\tprintf(_(\" -f, --force force update to be done\\n\"));\n> +\tprintf(_(\" -l, --next-wal-file=WALFILE set minimum starting location for new WAL\\n\"));\n> +\tprintf(_(\" -m, --multixact-ids=MXID,MXID set next and oldest multitransaction ID\\n\"));\n> +\tprintf(_(\" -n, --dry-run no update, just show what would be done\\n\"));\n> +\tprintf(_(\" -o, --next-oid=OID set next OID\\n\"));\n> +\tprintf(_(\" -O, --multixact-offset=OFFSET set next multitransaction offset\\n\"));\n> +\tprintf(_(\" -u, --oldest-transaction-id=XID set oldest unfrozen transaction ID\\n\"));\n> +\tprintf(_(\" -V, --version output version information, then exit\\n\"));\n> +\tprintf(_(\" -x, --next-transaction-id=XID set next transaction ID\\n\"));\n> +\tprintf(_(\" --wal-segsize=SIZE size of WAL segments, in megabytes\\n\"));\n> +\tprintf(_(\" -?, --help show this help, then exit\\n\"));\n> \tprintf(_(\"\\nReport bugs to <%s>.\\n\"), PACKAGE_BUGREPORT);\n> \tprintf(_(\"%s home page: <%s>\\n\"), PACKAGE_NAME, PACKAGE_URL);\n> }\n> diff --git a/src/bin/pg_upgrade/controldata.c b/src/bin/pg_upgrade/controldata.c\n> 
index 4f647cdf33..a4b6375403 100644\n> --- a/src/bin/pg_upgrade/controldata.c\n> +++ b/src/bin/pg_upgrade/controldata.c\n> @@ -44,6 +44,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)\n> \tbool\t\tgot_oid = false;\n> \tbool\t\tgot_multi = false;\n> \tbool\t\tgot_oldestmulti = false;\n> +\tbool\t\tgot_oldestxid = false;\n> \tbool\t\tgot_mxoff = false;\n> \tbool\t\tgot_nextxlogfile = false;\n> \tbool\t\tgot_float8_pass_by_value = false;\n> @@ -312,6 +313,17 @@ get_control_data(ClusterInfo *cluster, bool live_check)\n> \t\t\tcluster->controldata.chkpnt_nxtmulti = str2uint(p);\n> \t\t\tgot_multi = true;\n> \t\t}\n> +\t\telse if ((p = strstr(bufin, \"Latest checkpoint's oldestXID:\")) != NULL)\n> +\t\t{\n> +\t\t\tp = strchr(p, ':');\n> +\n> +\t\t\tif (p == NULL || strlen(p) <= 1)\n> +\t\t\t\tpg_fatal(\"%d: controldata retrieval problem\\n\", __LINE__);\n> +\n> +\t\t\tp++;\t\t\t\t/* remove ':' char */\n> +\t\t\tcluster->controldata.chkpnt_oldstxid = str2uint(p);\n> +\t\t\tgot_oldestxid = true;\n> +\t\t}\n> \t\telse if ((p = strstr(bufin, \"Latest checkpoint's oldestMultiXid:\")) != NULL)\n> \t\t{\n> \t\t\tp = strchr(p, ':');\n> @@ -544,7 +556,7 @@ get_control_data(ClusterInfo *cluster, bool live_check)\n> \n> \t/* verify that we got all the mandatory pg_control data */\n> \tif (!got_xid || !got_oid ||\n> -\t\t!got_multi ||\n> +\t\t!got_multi || !got_oldestxid ||\n> \t\t(!got_oldestmulti &&\n> \t\t cluster->controldata.cat_ver >= MULTIXACT_FORMATCHANGE_CAT_VER) ||\n> \t\t!got_mxoff || (!live_check && !got_nextxlogfile) ||\n> @@ -575,6 +587,9 @@ get_control_data(ClusterInfo *cluster, bool live_check)\n> \t\t\tcluster->controldata.cat_ver >= MULTIXACT_FORMATCHANGE_CAT_VER)\n> \t\t\tpg_log(PG_REPORT, \" latest checkpoint oldest MultiXactId\\n\");\n> \n> +\t\tif (!got_oldestxid)\n> +\t\t\tpg_log(PG_REPORT, \" latest checkpoint oldestXID\\n\");\n> +\n> \t\tif (!got_mxoff)\n> \t\t\tpg_log(PG_REPORT, \" latest checkpoint next MultiXactOffset\\n\");\n> \n> diff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c\n> index e23b8ca88d..950ff24980 100644\n> --- a/src/bin/pg_upgrade/pg_upgrade.c\n> +++ b/src/bin/pg_upgrade/pg_upgrade.c\n> @@ -473,6 +473,12 @@ copy_xact_xlog_xid(void)\n> \t\t\t \"\\\"%s/pg_resetwal\\\" -f -x %u \\\"%s\\\"\",\n> \t\t\t new_cluster.bindir, old_cluster.controldata.chkpnt_nxtxid,\n> \t\t\t new_cluster.pgdata);\n> +\tcheck_ok();\n> +\tprep_status(\"Setting oldest XID for new cluster\");\n> +\texec_prog(UTILITY_LOG_FILE, NULL, true, true,\n> +\t\t\t \"\\\"%s/pg_resetwal\\\" -f -u %u \\\"%s\\\"\",\n> +\t\t\t new_cluster.bindir, old_cluster.controldata.chkpnt_oldstxid,\n> +\t\t\t new_cluster.pgdata);\n> \texec_prog(UTILITY_LOG_FILE, NULL, true, true,\n> \t\t\t \"\\\"%s/pg_resetwal\\\" -f -e %u \\\"%s\\\"\",\n> \t\t\t new_cluster.bindir, old_cluster.controldata.chkpnt_nxtepoch,\n> diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h\n> index a5f71c5294..dd0204902c 100644\n> --- a/src/bin/pg_upgrade/pg_upgrade.h\n> +++ b/src/bin/pg_upgrade/pg_upgrade.h\n> @@ -207,6 +207,7 @@ typedef struct\n> \tuint32\t\tchkpnt_nxtmulti;\n> \tuint32\t\tchkpnt_nxtmxoff;\n> \tuint32\t\tchkpnt_oldstMulti;\n> +\tuint32\t\tchkpnt_oldstxid;\n> \tuint32\t\talign;\n> \tuint32\t\tblocksz;\n> \tuint32\t\tlargesz;\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 26 Jul 2021 22:39:04 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early\n wraparound on databases with high transaction load"
},
{
"msg_contents": "Hi,\n\nOn 7/27/21 4:39 AM, Bruce Momjian wrote:\n> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n>\n>\n>\n> This patch has been applied back to 9.6 and will appear in the next\n> minor release.\n\nThank you!\n\nBertrand\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 09:25:22 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on\n databases with high transaction load"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 09:25:22AM +0200, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 7/27/21 4:39 AM, Bruce Momjian wrote:\n> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\n> > \n> > \n> > \n> > This patch has been applied back to 9.6 and will appear in the next\n> > minor release.\n> \n> Thank you!\n\nThank you for the patch --- this was a tricky problem, and frankly, I am\ndisappointed that we (and I) took so long to address this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 08:50:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early\n wraparound on databases with high transaction load"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> This patch has been applied back to 9.6 and will appear in the next\n> minor release.\n\nI have just discovered that this patch broke pg_upgrade's ability\nto upgrade from 8.4:\n\n$ pg_upgrade -b ~/version84/bin -d ...\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nThe source cluster lacks some required control information:\n latest checkpoint oldestXID\n\nCannot continue without required control information, terminating\nFailure, exiting\n\nSure enough, 8.4's pg_controldata doesn't print anything about\noldestXID, because that info wasn't there then.\n\nGiven the lack of field complaints, it's probably not worth trying\nto do anything to restore that capability. But we really ought to\nupdate pg_upgrade's code and docs in pre-v15 branches to say that\nthe minimum supported source version is 9.0.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 12:59:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "\nOn 2022-07-05 Tu 12:59, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n>> This patch has been applied back to 9.6 and will appear in the next\n>> minor release.\n> I have just discovered that this patch broke pg_upgrade's ability\n> to upgrade from 8.4:\n>\n> $ pg_upgrade -b ~/version84/bin -d ...\n> Performing Consistency Checks\n> -----------------------------\n> Checking cluster versions ok\n> The source cluster lacks some required control information:\n> latest checkpoint oldestXID\n>\n> Cannot continue without required control information, terminating\n> Failure, exiting\n>\n> Sure enough, 8.4's pg_controldata doesn't print anything about\n> oldestXID, because that info wasn't there then.\n>\n> Given the lack of field complaints, it's probably not worth trying\n> to do anything to restore that capability. But we really ought to\n> update pg_upgrade's code and docs in pre-v15 branches to say that\n> the minimum supported source version is 9.0.\n\n\nSo it's taken us a year to discover the issue :-( Perhaps if we're going\nto say we support upgrades back to 9.0 we should have some testing to be\nassured we don't break it without knowing like this. I'll see if I can\ncoax crake to do that - it already tests back to 9.2.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:52:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> So it's taken us a year to discover the issue :-( Perhaps if we're going\n> to say we support upgrades back to 9.0 we should have some testing to be\n> assured we don't break it without knowing like this. I'll see if I can\n> coax crake to do that - it already tests back to 9.2.\n\nHmm ... could you first look into why 09878cdd4 broke it? I'd supposed\nthat that was just detecting situations we must already have dealt with\nin order for the pg_upgrade test to work, but crake's not happy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 15:17:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 11:53 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Sure enough, 8.4's pg_controldata doesn't print anything about\n> > oldestXID, because that info wasn't there then.\n> >\n> > Given the lack of field complaints, it's probably not worth trying\n> > to do anything to restore that capability. But we really ought to\n> > update pg_upgrade's code and docs in pre-v15 branches to say that\n> > the minimum supported source version is 9.0.\n>\n>\n> So it's taken us a year to discover the issue :-(\n\nI'm not surprised at all, given the history here. There were at least\na couple of bugs affecting how pg_upgrade carries forward information\nabout these cutoffs. See commits 74cf7d46 and a61daa14.\n\nActually, commit 74cf7d46 was where pg_resetxlog/pg_resetwal's -u\nargument was first added, for use by pg_upgrade. That commit is only\nabout a year old, and was only backpatched to 9.6. Unfortunately the\nprevious approach to carrying forward oldestXID was an accident that\nusually worked. So...yeah, things are bad here. At least we now have\nthe ability to detect any downstream problems that this might cause by\nusing pg_amcheck.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Jul 2022 12:41:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
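A sketch of the verification Peter alludes to, expressed through amcheck's SQL-level heap checker (verify_heapam is available from PostgreSQL 14; the table name is illustrative):

-- Reports tuples whose visibility data conflicts with the cluster's
-- recorded XID horizons, the kind of damage a bogus oldestXid can hide.
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT * FROM verify_heapam('my_table');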
{
"msg_contents": "On Tue, Jul 5, 2022 at 12:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually, commit 74cf7d46 was where pg_resetxlog/pg_resetwal's -u\n> argument was first added, for use by pg_upgrade. That commit is only\n> about a year old, and was only backpatched to 9.6.\n\nI just realized that this thread was where that work was first\ndiscussed. That explains why it took a year to discover that we broke\n8.4!\n\nOn further reflection I think that breaking pg_upgrade for 8.4 might\nhave been a good thing. The issue was fairly visible and obvious if\nyou actually ran into it, which is vastly preferable to what would\nhave happened before commit 74cf7d46.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Jul 2022 12:50:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "\nOn 2022-07-05 Tu 15:17, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> So it's taken us a year to discover the issue :-( Perhaps if we're going\n>> to say we support upgrades back to 9.0 we should have some testing to be\n>> assured we don't break it without knowing like this. I'll see if I can\n>> coax crake to do that - it already tests back to 9.2.\n> Hmm ... could you first look into why 09878cdd4 broke it? I'd supposed\n> that that was just detecting situations we must already have dealt with\n> in order for the pg_upgrade test to work, but crake's not happy.\n\n\nIt's complaining about this:\n\n\nandrew@emma:HEAD $ cat\n./inst/REL9_6_STABLE-20220705T160820.039/incompatible_polymorphics.txt\nIn database: regression\n aggregate: public.first_el_agg_f8(double precision)\n\nI can have TestUpgradeXVersion.pm search for and remove offending\nfunctions, if that's the right fix.\n\nI note too that drongo is failing similarly, but its pg_upgrade output\ndirectory is missing, so 4fff78f009 seems possibly shy of a load w.r.t.\nMSVC. I will investigate.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 5 Jul 2022 17:25:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-07-05 Tu 15:17, Tom Lane wrote:\n>> Hmm ... could you first look into why 09878cdd4 broke it? I'd supposed\n>> that that was just detecting situations we must already have dealt with\n>> in order for the pg_upgrade test to work, but crake's not happy.\n\n> It's complaining about this:\n\n> andrew@emma:HEAD $ cat\n> ./inst/REL9_6_STABLE-20220705T160820.039/incompatible_polymorphics.txt\n> In database: regression\n> aggregate: public.first_el_agg_f8(double precision)\n\nThanks.\n\n> I can have TestUpgradeXVersion.pm search for and remove offending\n> functions, if that's the right fix.\n\nI'm not sure. It seems like the new check must be too strict,\nbecause it was only meant to detect cases that would cause a subsequent\ndump/reload failure, and evidently this did not. I'll have to look\ncloser to figure out what to do. Anyway, it's off topic for this\nthread ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 18:05:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] Re: pg_upgrade can result in early wraparound\n on databases with high transaction load"
},
{
"msg_contents": "> On 5 Jul 2022, at 18:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Given the lack of field complaints, it's probably not worth trying\n> to do anything to restore that capability. But we really ought to\n> update pg_upgrade's code and docs in pre-v15 branches to say that\n> the minimum supported source version is 9.0.\n\n(reviving an old thread from the TODO)\n\nSince we never got around to doing this we still refer to 8.4 as a possible\nupgrade path in v14 and older.\n\nThere seems to be two alternatives here, either we bump the minimum version in\nv14-v12 to 9.0 which is the technical limitation brought by 695b4a113ab, or we\nfollow the direction taken by e469f0aaf3c and set 9.2. e469f0aaf3c raised the\nminimum supported version to 9.2 based on the complexity of compiling anything\nolder using a modern toolchain.\n\nIt can be argued that making a change we don't cover with testing is unwise,\nbut we clearly don't test the current code either since it's broken.\n\nThe attached takes the conservative approach of raising the minimum supported\nversion to 9.0 while leaving the code to handle 8.4 in place. While it can be\nremoved, the risk/reward tradeoff of gutting code in backbranches doesn't seem\nappealing since the code will be unreachable with this check anyways.\n\nThoughts?\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 16 May 2024 08:11:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] pg_upgrade can result in early wraparound on\n databases with high transaction load"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 5 Jul 2022, at 18:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Given the lack of field complaints, it's probably not worth trying\n>> to do anything to restore that capability. But we really ought to\n>> update pg_upgrade's code and docs in pre-v15 branches to say that\n>> the minimum supported source version is 9.0.\n\n> (reviving an old thread from the TODO)\n\n> Since we never got around to doing this we still refer to 8.4 as a possible\n> upgrade path in v14 and older.\n\nOh, yeah, that seems to have fallen through a crack.\n\n> The attached takes the conservative approach of raising the minimum supported\n> version to 9.0 while leaving the code to handle 8.4 in place. While it can be\n> removed, the risk/reward tradeoff of gutting code in backbranches doesn't seem\n> appealing since the code will be unreachable with this check anyways.\n\nYeah, it's not worth working harder than this. I do see one typo\nin your comment: s/supported then/supported when/. LGTM otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 May 2024 13:47:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] pg_upgrade can result in early wraparound on\n databases with high transaction load"
},
{
"msg_contents": "> On 16 May 2024, at 19:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yeah, it's not worth working harder than this. I do see one typo\n> in your comment: s/supported then/supported when/. LGTM otherwise.\n\nThanks for review, I've pushed this (with the fix from above) to 14 through 12.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 17 May 2024 14:35:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [UNVERIFIED SENDER] pg_upgrade can result in early wraparound on\n databases with high transaction load"
}
] |
[
{
"msg_contents": "Hi,\n\nThis patch series is to add support for spgist quadtree @<(point,circle)\noperator. The first two patches are to refactor existing code before\nimplemention the new feature. The third commit is the actual implementation\nprovided with a set of simple unit tests.\n\nChanges since v2:\n - fix coding style\n - add comment to spg_quad_inner_consistent_circle_helper()\n - rework spg_quad_inner_consistent_circle_helper() using HYPOT() to make the\n search consistent with filter scan\n\nMatwey V. Kornilov (3):\n Introduce helper variable in spgquadtreeproc.c\n Introduce spg_quad_inner_consistent_box_helper() in spgquadtreeproc.c\n Add initial support for spgist quadtree @<(point,circle) operator\n\n src/backend/access/spgist/spgquadtreeproc.c | 147 +++++++++++++++-------\n src/include/catalog/pg_amop.dat | 3 +\n src/test/regress/expected/create_index_spgist.out | 96 ++++++++++++++\n src/test/regress/sql/create_index_spgist.sql | 32 +++++\n 4 files changed, 234 insertions(+), 44 deletions(-)\n\n-- \n2.13.7\n\n-- \nWith best regards,\nMatwey V. Kornilov",
"msg_date": "Mon, 20 May 2019 14:32:39 +0300",
"msg_from": "\"Matwey V. Kornilov\" <matwey.kornilov@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH v2] Introduce spgist quadtree @<(point,circle) operator"
},
{
"msg_contents": "On Mon, May 20, 2019 at 02:32:39PM +0300, Matwey V. Kornilov wrote:\n> This patch series is to add support for spgist quadtree @<(point,circle)\n> operator. The first two patches are to refactor existing code before\n> implemention the new feature. The third commit is the actual implementation\n> provided with a set of simple unit tests.\n\nCould you add that to the next commit fest please? Here you go:\nhttps://commitfest.postgresql.org/23/\n--\nMichael",
"msg_date": "Tue, 21 May 2019 14:42:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v2] Introduce spgist quadtree @<(point,circle) operator"
},
{
"msg_contents": "вт, 21 мая 2019 г. в 08:43, Michael Paquier <michael@paquier.xyz>:\n>\n> On Mon, May 20, 2019 at 02:32:39PM +0300, Matwey V. Kornilov wrote:\n> > This patch series is to add support for spgist quadtree @<(point,circle)\n> > operator. The first two patches are to refactor existing code before\n> > implemention the new feature. The third commit is the actual implementation\n> > provided with a set of simple unit tests.\n>\n> Could you add that to the next commit fest please? Here you go:\n> https://commitfest.postgresql.org/23/\n\nDone\n\n> --\n> Michael\n\n\n\n-- \nWith best regards,\nMatwey V. Kornilov\n\n\n",
"msg_date": "Tue, 21 May 2019 10:22:48 +0300",
"msg_from": "\"Matwey V. Kornilov\" <matwey.kornilov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v2] Introduce spgist quadtree @<(point,circle) operator"
},
{
"msg_contents": "Hi Matwey,\n\nOn Tue, May 21, 2019 at 10:23 AM Matwey V. Kornilov\n<matwey.kornilov@gmail.com> wrote:\n> вт, 21 мая 2019 г. в 08:43, Michael Paquier <michael@paquier.xyz>:\n> >\n> > On Mon, May 20, 2019 at 02:32:39PM +0300, Matwey V. Kornilov wrote:\n> > > This patch series is to add support for spgist quadtree @<(point,circle)\n> > > operator. The first two patches are to refactor existing code before\n> > > implemention the new feature. The third commit is the actual implementation\n> > > provided with a set of simple unit tests.\n> >\n> > Could you add that to the next commit fest please? Here you go:\n> > https://commitfest.postgresql.org/23/\n>\n> Done\n\nThank you for posting this patch. A took a look at it.\n\nIt appears that you make quadrant-based checks. But it seems to be\nlossy in comparison with box-based checks. Let me explain this on the\nexample. Imagine centroids (0,1) and (1,0). Square (0,0)-(1,1) is\nintersection of quadrant 2 of (0,1) and quadrant 4 of (1,0). And then\nimagine circle with center in (2,2) of radius 1. It intersects with\nboth quadrants, but doesn't intersect with square.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 21 Jul 2019 02:07:44 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v2] Introduce spgist quadtree @<(point,circle) operator"
}
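Alexander's counterexample can be double-checked with the built-in geometric operators; here the box stands in for the intersection of the two quadrants:

-- Distance from the circle's center (2,2) to the square (0,0)-(1,1) is
-- sqrt(2) ~ 1.414, which exceeds the radius 1: the circle reaches into
-- both quadrants yet never touches the square.
SELECT point '(2,2)' <-> box '((0,0),(1,1))' AS center_to_square;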
] |
[
{
"msg_contents": "Someone probably forgot to update the comment when changing the arguments.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 20 May 2019 16:03:40 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Inaccurate header comment of issue_xlog_fsync_comment"
},
{
"msg_contents": "On Mon, May 20, 2019 at 11:04 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Someone probably forgot to update the comment when changing the arguments.\n\nThanks for the patch! Committed.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Tue, 21 May 2019 01:44:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inaccurate header comment of issue_xlog_fsync_comment"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile trying to setup a test environment under Windows I have managed to\nbuild the source using the latest Visual Studio 2019 [1].\n\nIt's only been tested in this one environment, Windows 10 x64, but the\nchanges seem tool dependant only.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1] https://visualstudio.microsoft.com/vs/",
"msg_date": "Mon, 20 May 2019 19:46:00 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Compile using the Visual Studio 2019"
},
{
"msg_contents": "On Tue, 21 May 2019 at 05:46, Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n> While trying to setup a test environment under Windows I have managed to build the source using the latest Visual Studio 2019 [1].\n>\n> It's only been tested in this one environment, Windows 10 x64, but the changes seem tool dependant only.\n\nThanks for doing work on this. Just to let you know there's already\nsome work pending to do this in\nhttps://www.postgresql.org/message-id/CAJrrPGegJG_gtQBMQffCNBny3i3fpe8QfE0DUkPSQEZf-FoY9w@mail.gmail.com\n\nIt's marked in the next commitfest entry in\nhttps://commitfest.postgresql.org/23/2122/\n\nMaybe you could take a look at that and maybe sign up to review it?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 21 May 2019 14:37:10 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Compile using the Visual Studio 2019"
},
{
"msg_contents": "On Tue, May 21, 2019 at 02:37:10PM +1200, David Rowley wrote:\n> Maybe you could take a look at that and maybe sign up to review it?\n\nYes, that would be great. New VS environments are a pain to set up so\nany input is welcome.\n\n- # visual 2017 hasn't changed the nmake version to 15, so still\n using the older version for comparison.\n+ # visual 2019 hasn't changed the nmake version to\n 15, so still using the older version for comparison.\n if ($major > 14)\n\nI have not checked the other patch and I am pretty sure that you are\ndoing the same thing. Still, for the notice, this comment update is\nincorrect as VS 2017 also marks nmake with version 15.\n--\nMichael",
"msg_date": "Tue, 21 May 2019 14:36:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compile using the Visual Studio 2019"
},
{
"msg_contents": "On Tue, May 21, 2019 at 7:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 21, 2019 at 02:37:10PM +1200, David Rowley wrote:\n> > Maybe you could take a look at that and maybe sign up to review it?\n>\n> Yes, that would be great. New VS environments are a pain to set up so\n> any input is welcome.\n>\n>\nAt some point I did check, but that previous work went unnoticed. Now that\nI have a better knowledge about building on Windows I will take a look at\nit.\n\n\n>\n> I have not checked the other patch and I am pretty sure that you are\n> doing the same thing. Still, for the notice, this comment update is\n> incorrect as VS 2017 also marks nmake with version 15.\n>\n>\nI don't want to keep this thread going any further, so I will check the\nother patch and see how it goes through this point.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, May 21, 2019 at 7:36 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, May 21, 2019 at 02:37:10PM +1200, David Rowley wrote:\n> Maybe you could take a look at that and maybe sign up to review it?\n\nYes, that would be great. New VS environments are a pain to set up so\nany input is welcome.\nAt some point I did check, but that previous work went unnoticed. Now that I have a better knowledge about building on Windows I will take a look at it. \nI have not checked the other patch and I am pretty sure that you are\ndoing the same thing. Still, for the notice, this comment update is\nincorrect as VS 2017 also marks nmake with version 15.I don't want to keep this thread going any further, so I will check the other patch and see how it goes through this point.Regards,Juan José Santamaría Flecha",
"msg_date": "Tue, 21 May 2019 08:38:42 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Compile using the Visual Studio 2019"
}
] |
[
{
"msg_contents": "I have a question about this (really exciting) feature coming in pg12:\n\nAllow ALTER TABLE .. SET DATA TYPE timestamp/timestamptz to avoid a table\nrewrite when the session time zone is UTC (Noah Misch)\n\nIn the UTC time zone, the data types are binary compatible.\n\nWe actually want to migrate all of our databases to timestamptz\neverywhere. But some of them have historically saved data in a *local*\ntime zone with data type timestamp.\n\nI assume there is no similarly easy way to do this alter type without a\ntable rewrite for a local time zone? I would assume DST changes would be\nan issue here.\n\nBut it would be really nice if we have a table with timestamp data saved @\nAmerica/Chicago time zone, to set the session to 'America/Chicago' and\nalter type to timestamptz, and similarly avoid a table rewrite. Is this\npossible or feasible?\n\nThank you!\nJeremy\n\nI have a question about this (really exciting) feature coming in pg12:Allow ALTER TABLE .. SET DATA TYPE timestamp/timestamptz to avoid a table rewrite when the session time zone is UTC (Noah Misch)In the UTC time zone, the data types are binary compatible.We actually want to migrate all of our databases to timestamptz everywhere. But some of them have historically saved data in a *local* time zone with data type timestamp.I assume there is no similarly easy way to do this alter type without a table rewrite for a local time zone? I would assume DST changes would be an issue here.But it would be really nice if we have a table with timestamp data saved @ America/Chicago time zone, to set the session to 'America/Chicago' and alter type to timestamptz, and similarly avoid a table rewrite. Is this possible or feasible?Thank you!Jeremy",
"msg_date": "Mon, 20 May 2019 13:13:50 -0500",
"msg_from": "Jeremy Finzel <finzelj@gmail.com>",
"msg_from_op": true,
"msg_subject": "Question about new pg12 feature no-rewrite timestamp to timestamptz\n conversion"
},
{
"msg_contents": "On Mon, May 20, 2019 at 01:13:50PM -0500, Jeremy Finzel wrote:\n> I have a question about this (really exciting) feature coming in pg12:\n> \n> Allow ALTER TABLE .. SET DATA TYPE timestamp/timestamptz to avoid a table\n> rewrite when the session time zone is UTC (Noah Misch)\n> \n> In the UTC time zone, the data types are binary compatible.\n> \n> We actually want to migrate all of our databases to timestamptz everywhere.�\n> But some of them have historically saved data in a *local* time zone with data\n> type timestamp.\n> \n> I assume there is no similarly easy way to do this alter type without a table\n> rewrite for a local time zone?� I would assume DST changes would be an issue\n> here.\n> \n> But it would be really nice if we have a table with timestamp data saved @\n> America/Chicago time zone, to set the session to 'America/Chicago' and alter\n> type to timestamptz, and similarly avoid a table rewrite.� Is this possible or\n> feasible?\n\nWell, the timestamptz data type stores the date/time in UTC internally,\nand then shifts it to whatever timezone you have set in the client. If\nyou did the conversion from timestamp _without_ time zone columns, the\nnew data would take your local time and assume it was stored in UTC,\nwhich I don't think you want. I don't know of a way to make the\nadjustment you want without a table rewrite. It is unfortunate that the\nSQL standard requires timestamp _without_ time zone to be the default\nfor 'timestamp'.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 20 May 2019 16:08:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about new pg12 feature no-rewrite timestamp to\n timestamptz conversion"
}
] |
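A sketch of the two conversion paths discussed above (the table and column names here are hypothetical): with a UTC session time zone PG12 can skip the rewrite because the stored bits are identical, while data saved as Chicago local time has to be shifted row by row (DST-dependent), so a rewrite cannot be avoided.

-- Binary-compatible case: with TimeZone set to UTC, PG12 skips the rewrite.
SET timezone = 'UTC';
ALTER TABLE events ALTER COLUMN created_at TYPE timestamptz;

-- Local-time data: every value must be reinterpreted as America/Chicago,
-- which changes the stored representation, so the table is rewritten.
SET timezone = 'America/Chicago';
ALTER TABLE events_chicago ALTER COLUMN created_at TYPE timestamptz
    USING created_at AT TIME ZONE 'America/Chicago';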
[
{
"msg_contents": "Hello,\n\nI posted this to the \"clean up docs for v12\" thread and it was\nsuggested I make a new thread instead, so here it is. Sorry for the\nextra noise! :-)\n\nI noticed the docs at\nhttps://www.postgresql.org/docs/devel/ddl-partitioning.html still say\nyou can't create a foreign key referencing a partitioned table, even\nthough the docs for\nhttps://www.postgresql.org/docs/devel/sql-createtable.html have been\nupdated (compared to v11). My understanding is that foreign keys\n*still* don't work as expected when pointing at traditional INHERITS\ntables, but they *will* work with declaratively-partitioned tables. In\nthat case I suggest this change:\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex a0a7435a03..3b4f43bbad 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -3966,14 +3966,6 @@ ALTER TABLE measurement ATTACH PARTITION\nmeasurement_y2008m02\n\n- <listitem>\n- <para>\n- While primary keys are supported on partitioned tables, foreign\n- keys referencing partitioned tables are not supported. (Foreign key\n- references from a partitioned table to some other table are supported.)\n- </para>\n- </listitem>\n-\n <listitem>\n <para>\n <literal>BEFORE ROW</literal> triggers, if necessary, must be defined\n on individual partitions, not the partitioned table.\n </para>\n@@ -4366,6 +4358,14 @@ ALTER TABLE measurement_y2008m02 INHERIT measurement;\n </para>\n </listitem>\n\n+ <listitem>\n+ <para>\n+ While primary keys are supported on inheritance-partitioned\ntables, foreign\n+ keys referencing these tables are not supported. (Foreign key\n+ references from an inheritance-partitioned table to some other\ntable are supported.)\n+ </para>\n+ </listitem>\n+\n <listitem>\n <para>\n If you are using manual <command>VACUUM</command> or\n\n(I've also attached it as a patch file.) In other words, we should\nmove this caveat from the section on declaratively-partitioned tables\nto the section on inheritance-partitioned tables.\n\nYours,\nPaul",
"msg_date": "Mon, 20 May 2019 21:42:52 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On 2019/05/21 13:42, Paul A Jungwirth wrote:\n> I noticed the docs at\n> https://www.postgresql.org/docs/devel/ddl-partitioning.html still say\n> you can't create a foreign key referencing a partitioned table, even\n> though the docs for\n> https://www.postgresql.org/docs/devel/sql-createtable.html have been\n> updated (compared to v11). My understanding is that foreign keys\n> *still* don't work as expected when pointing at traditional INHERITS\n> tables, but they *will* work with declaratively-partitioned tables.\n\nThanks Paul.\n\nWould you like me to edit the wiki to add this to open items?\n\nRegards,\nAmit\n\n\n\n",
"msg_date": "Tue, 21 May 2019 13:58:46 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Mon, May 20, 2019 at 9:59 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> Would you like me to edit the wiki to add this to open items?\n\nThat would be helpful for sure. Thanks!\n\nPaul\n\n\n",
"msg_date": "Mon, 20 May 2019 22:05:01 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On 2019/05/21 14:05, Paul A Jungwirth wrote:\n> On Mon, May 20, 2019 at 9:59 PM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>> Would you like me to edit the wiki to add this to open items?\n> \n> That would be helpful for sure. Thanks!\n\nOK, done.\n\nThanks,\nAmit\n\n\n\n\n",
"msg_date": "Tue, 21 May 2019 14:10:09 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Mon, May 20, 2019 at 09:42:52PM -0700, Paul A Jungwirth wrote:\n> I noticed the docs at\n> https://www.postgresql.org/docs/devel/ddl-partitioning.html still say\n> you can't create a foreign key referencing a partitioned table, even\n> though the docs for\n> https://www.postgresql.org/docs/devel/sql-createtable.html have been\n> updated (compared to v11). My understanding is that foreign keys\n> *still* don't work as expected when pointing at traditional INHERITS\n> tables, but they *will* work with declaratively-partitioned tables. In\n> that case I suggest this change:\n\nCould you define what is an \"inheritance-partitioned\" table? I know\nof partitioned tables, inherited tables and tables which make use\nof inheritance for partitioning (hence Inheritance Partitioning), but\nthe paragraph you are adding introduces a new term in the whole tree.\nThis makes things confusing and the previous paragraph is not, even if\nit is incorrect since f56f8f8 as Amit has noted.\n\nA simple rewording could be \"tables using inheritance for\npartitioning\".\n\n> (I've also attached it as a patch file.) In other words, we should\n> move this caveat from the section on declaratively-partitioned tables\n> to the section on inheritance-partitioned tables.\n\nAs this restriction only applies only to tables partitioned with\ninheritance, then this move should be fine.\n--\nMichael",
"msg_date": "Tue, 21 May 2019 14:18:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Mon, May 20, 2019 at 10:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Could you define what is an \"inheritance-partitioned\" table? I know\n> of partitioned tables, inherited tables and tables which make use\n> of inheritance for partitioning (hence Inheritance Partitioning), but\n> the paragraph you are adding introduces a new term in the whole tree.\n> This makes things confusing and the previous paragraph is not, even if\n> it is incorrect since f56f8f8 as Amit has noted.\n>\n> A simple rewording could be \"tables using inheritance for\n> partitioning\".\n\nI agree that sounds better. To avoid repeating it I changed the second\ninstance to just \"inherited tables\". New patch attached.\n\nPaul",
"msg_date": "Mon, 20 May 2019 22:35:31 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Mon, May 20, 2019 at 10:35:31PM -0700, Paul A Jungwirth wrote:\n> I agree that sounds better. To avoid repeating it I changed the second\n> instance to just \"inherited tables\". New patch attached.\n\nLooking closer, you are adding that:\n+ <listitem>\n+ <para>\n+ While primary keys are supported on tables using inheritance\n+ for partitioning, foreign keys referencing these tables are not\n+ supported. (Foreign key references from an inherited table to\n+ some other table are supported.)\n+ </para>\n+ </listitem>\n\nHowever that's just fine:\n=# create table aa (a int primary key);\nCREATE TABLE\n=# create table aa_child (a int primary key, inherits aa, foreign key\n(a) references aa);\nCREATE TABLE\n=# create table aa_grandchild (a int primary key, inherits aa_child,\nforeign key (a) references aa_child);\nCREATE TABLE\n\nThe paragraph you are removing from 5.11.2.3 (limitations of\ndeclarative partitioning) only applies to partitioned tables, not to\nplain tables. And there is no such thing for paritioning based on\ninheritance, so we should just remove one paragraph, and not add the\nextra one, no?\n--\nMichael",
"msg_date": "Thu, 23 May 2019 12:06:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Wed, May 22, 2019 at 8:06 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Looking closer, you are adding that:\n> + <listitem>\n> + <para>\n> + While primary keys are supported on tables using inheritance\n> + for partitioning, foreign keys referencing these tables are not\n> + supported. (Foreign key references from an inherited table to\n> + some other table are supported.)\n> + </para>\n> + </listitem>\n>\n> However that's just fine:\n> =# create table aa (a int primary key);\n> CREATE TABLE\n> =# create table aa_child (a int primary key, inherits aa, foreign key\n> (a) references aa);\n> CREATE TABLE\n> =# create table aa_grandchild (a int primary key, inherits aa_child,\n> foreign key (a) references aa_child);\n> CREATE TABLE\n\nPostgres will let you define the FK, but it doesn't work in a meaningful way:\n\npaul=# create table t1 (id int primary key, foo text);\nCREATE TABLE\npaul=# create table t2 (bar text) inherits (t1);\nCREATE TABLE\npaul=# insert into t2 values (1, 'f', 'b');\nINSERT 0 1\npaul=# select * from t1;\n id | foo\n----+-----\n 1 | f\n(1 row)\n\npaul=# create table ch (id int, t_id int references t1 (id));\nCREATE TABLE\npaul=# insert into ch values (1, 1);\nERROR: insert or update on table \"ch\" violates foreign key constraint\n\"ch_t_id_fkey\"\nDETAIL: Key (t_id)=(1) is not present in table \"t1\".\n\nThe section in the docs (5.10) just before the one I changed has\nsimilar warnings:\n\n> Other types of constraints (unique, primary key, and foreign key constraints) are not inherited.\n\nand\n\n> A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign key constraints only apply to single tables, not to their inheritance children.\n\n> The paragraph you are removing from 5.11.2.3 (limitations of\n> declarative partitioning) only applies to partitioned tables, not to\n> plain tables. And there is no such thing for paritioning based on\n> inheritance, so we should just remove one paragraph, and not add the\n> extra one, no?\n\nI moved the paragraph to a section describing inheritance as an\nalternative partitioning solution to declarative partitions. Since\nusing inheritance to partition a table requires giving up foreign\nkeys, it seems worthwhile to include that among the other caveats. (It\nwasn't necessary to include it before because declarative partitions\nhad the same drawback, and it was already expressed in the paragraph I\ntook out.) In my opinion mentioning this limitation would be helpful\nto people.\n\nPerhaps the wording is too strong though:\n\n> + . . . foreign keys referencing these tables are not\n> + supported. . . .\n\nI was trying to make a minimal change by keeping most of the original\nwording, but I agree that different language would be more accurate.\nWhat do you think of something like this?:\n\n+ <listitem>\n+ <para>\n+ While foreign keys may be defined that reference a parent\n+ table, they will not see records from its child tables. Since\n+ the parent table is typically empty, adding any record (with a\n+ non-null foreign key) to the referencing table will raise an error.\n+ </para>\n+ </listitem>\n\nPaul\n\n\n",
"msg_date": "Thu, 23 May 2019 00:02:56 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Thu, May 23, 2019 at 12:02:56AM -0700, Paul A Jungwirth wrote:\n> The section in the docs (5.10) just before the one I changed has\n> similar warnings:\n> \n>> Other types of constraints (unique, primary key, and foreign key\n>> constraints) are not inherited. \n> \n> and\n> \n>> A serious limitation of the inheritance feature is that indexes\n>> (including unique constraints) and foreign key constraints only\n>> apply to single tables, not to their inheritance children.\n\nYes.\n\n> I moved the paragraph to a section describing inheritance as an\n> alternative partitioning solution to declarative partitions. Since\n> using inheritance to partition a table requires giving up foreign\n> keys, it seems worthwhile to include that among the other caveats. (It\n> wasn't necessary to include it before because declarative partitions\n> had the same drawback, and it was already expressed in the paragraph I\n> took out.) In my opinion mentioning this limitation would be helpful\n> to people.\n\nWell, the point I would like to outline is that section 5.11.2 about\ndeclarative partitioning and 5.11.3 about partitioning with\ninheritance treat about two separate, independent partitioning\nmethods. So removing the paragraph from the declarative partitioning\nsection mentioning foreign keys referencing partitioned tables is\nfine, because that's not the case anymore...\n\n> I was trying to make a minimal change by keeping most of the original\n> wording, but I agree that different language would be more accurate.\n> What do you think of something like this?:\n> \n> + <listitem>\n> + <para>\n> + While foreign keys may be defined that reference a parent\n> + table, they will not see records from its child tables. Since\n> + the parent table is typically empty, adding any record (with a\n> + non-null foreign key) to the referencing table will raise an error.\n> + </para>\n> + </listitem>\n\n... However you are adding a paragraph for something which is\ncompletely unrelated to the issue we are trying to fix. If I were to\nadd something, I think that I would be more general than what you are\ntrying here and just mention a link to the previous paragraph about\nthe caveats of inheritance as they apply to single table members of an\ninheritance tree and not a full set:\n\"Indexes and foreign key constraint apply to single tables and not\ntheir inheritance children, hence they have some <link>caveats</> to\nbe aware of.\"\nStill this is a duplicate of a sentence which is just a couple of\nparagraphs back.\n--\nMichael",
"msg_date": "Mon, 27 May 2019 11:49:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Sun, May 26, 2019 at 7:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Well, the point I would like to outline is that section 5.11.2 about\n> declarative partitioning and 5.11.3 about partitioning with\n> inheritance treat about two separate, independent partitioning\n> methods. So removing the paragraph from the declarative partitioning\n> section mentioning foreign keys referencing partitioned tables is\n> fine, because that's not the case anymore...\n> [snip]\n> ... However you are adding a paragraph for something which is\n> completely unrelated to the issue we are trying to fix. If I were to\n> add something, I think that I would be more general than what you are\n> trying here and just mention a link to the previous paragraph about\n> the caveats of inheritance as they apply to single table members of an\n> inheritance tree and not a full set:\n> \"Indexes and foreign key constraint apply to single tables and not\n> their inheritance children, hence they have some <link>caveats</> to\n> be aware of.\"\n\nThat seems reasonable to me. Here is a patch file if that is helpful\n(minor typo corrected).\n\nYours,\nPaul",
"msg_date": "Tue, 28 May 2019 22:01:35 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn 2019/05/29 14:01, Paul A Jungwirth wrote:\n> On Sun, May 26, 2019 at 7:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> ... However you are adding a paragraph for something which is\n>> completely unrelated to the issue we are trying to fix. If I were to\n>> add something, I think that I would be more general than what you are\n>> trying here and just mention a link to the previous paragraph about\n>> the caveats of inheritance as they apply to single table members of an\n>> inheritance tree and not a full set:\n>> \"Indexes and foreign key constraint apply to single tables and not\n>> their inheritance children, hence they have some <link>caveats</> to\n>> be aware of.\"\n> \n> That seems reasonable to me. Here is a patch file if that is helpful\n> (minor typo corrected).\n\nThe patch looks good, thanks.\n\nMichael commented upthread that the new next might be repeating what's\nalready said elsewhere. I did find that to be the case by looking at\n5.10. Inheritance. For example, 5.10.1. Caveats says exactly the same thing:\n\n A serious limitation of the inheritance feature is that indexes\n (including unique constraints) and foreign key constraints only apply\n to single tables, not to their inheritance children. This is true on\n both the referencing and referenced sides of a foreign key constraint.\n\nBut couple of other points mentioned in 5.11.3.3. Caveats (of 5.11. Table\nPartitioning) are already repeated in 5.10.1. Caveats; for example, note\nthe point about VACUUM, ANALYZE, INSERT ON CONFLICT, etc. applying to\nsingle tables. So, perhaps it won't hurt to repeat the caveat about\nindexes and foreign keys too.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 29 May 2019 14:33:26 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
},
{
"msg_contents": "On Wed, May 29, 2019 at 02:33:26PM +0900, Amit Langote wrote:\n> But couple of other points mentioned in 5.11.3.3. Caveats (of 5.11. Table\n> Partitioning) are already repeated in 5.10.1. Caveats; for example, note\n> the point about VACUUM, ANALYZE, INSERT ON CONFLICT, etc. applying to\n> single tables. So, perhaps it won't hurt to repeat the caveat about\n> indexes and foreign keys too.\n\nOK, committed as such. Your patch linked to the top of the\ninheritance section, so I redirected that to the actual section about\ncaveats for clarity.\n--\nMichael",
"msg_date": "Wed, 29 May 2019 11:20:25 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs about FKs referencing partitioned tables"
}
] |
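For contrast with the inheritance examples above, a minimal sketch (hypothetical table names) of the PG12 behavior that prompted this doc change: a foreign key referencing a declaratively partitioned table does see the rows stored in its partitions.

CREATE TABLE measurement (logdate date PRIMARY KEY) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2019 PARTITION OF measurement
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
CREATE TABLE reading (logdate date REFERENCES measurement (logdate));

INSERT INTO measurement VALUES ('2019-05-20');  -- routed to measurement_y2019
INSERT INTO reading VALUES ('2019-05-20');      -- succeeds: the FK sees partition rows
INSERT INTO reading VALUES ('2018-01-01');      -- fails with a foreign key violation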
[
{
"msg_contents": "Hello,\nCommit 578b229718e8f remove oids option from pg_dump but its is\nstill in pg_dumpall .The attach patch remove it\nregards\nSurafel",
"msg_date": "Tue, 21 May 2019 09:31:48 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "with oids option not removed in pg_dumpall"
},
{
"msg_contents": "On Tue, May 21, 2019 at 09:31:48AM +0300, Surafel Temesgen wrote:\n> Commit 578b229718e8f remove oids option from pg_dump but its is\n> still in pg_dumpall .The attach patch remove it\n\nGood catch. Your cleanup looks correct to me. Andres, perhaps you\nwould prefer doing the cleanup yourself?\n--\nMichael",
"msg_date": "Tue, 21 May 2019 17:24:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: with oids option not removed in pg_dumpall"
},
{
"msg_contents": "On Tue, May 21, 2019 at 05:24:57PM +0900, Michael Paquier wrote:\n> Good catch. Your cleanup looks correct to me. Andres, perhaps you\n> would prefer doing the cleanup yourself?\n\nAs I am cleaning up the area for another issue, applied.\n--\nMichael",
"msg_date": "Thu, 23 May 2019 09:42:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: with oids option not removed in pg_dumpall"
},
{
"msg_contents": "Thank you for applying\n\nregards\nSurafel\n\nOn Thu, May 23, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 21, 2019 at 05:24:57PM +0900, Michael Paquier wrote:\n> > Good catch. Your cleanup looks correct to me. Andres, perhaps you\n> > would prefer doing the cleanup yourself?\n>\n> As I am cleaning up the area for another issue, applied.\n> --\n> Michael\n>\n\nThank you for applying regards Surafel On Thu, May 23, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, May 21, 2019 at 05:24:57PM +0900, Michael Paquier wrote:\n> Good catch. Your cleanup looks correct to me. Andres, perhaps you\n> would prefer doing the cleanup yourself?\n\nAs I am cleaning up the area for another issue, applied.\n--\nMichael",
"msg_date": "Thu, 23 May 2019 16:26:38 +0300",
"msg_from": "Surafel Temesgen <surafel3000@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: with oids option not removed in pg_dumpall"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 16:26:38 +0300, Surafel Temesgen wrote:\n> Thank you for applying\n> \n> regards\n> Surafel\n> \n> On Thu, May 23, 2019 at 3:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Tue, May 21, 2019 at 05:24:57PM +0900, Michael Paquier wrote:\n> > > Good catch. Your cleanup looks correct to me. Andres, perhaps you\n> > > would prefer doing the cleanup yourself?\n> >\n> > As I am cleaning up the area for another issue, applied.\n\nThanks for finding and applying.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 08:55:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: with oids option not removed in pg_dumpall"
}
] |
[
{
"msg_contents": "Perhaps it's just a matter of taste, but I think the TupleTableSlotOps\nstructure, once initialized, should be used wherever possible. At least for me\npersonally, when I read the code, the particular callback function name is a\nbit disturbing wherever it's not necessary.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Tue, 21 May 2019 14:01:47 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "A few more opportunities to use TupleTableSlotOps fields"
},
{
"msg_contents": "On Tue, May 21, 2019 at 8:02 AM Antonin Houska <ah@cybertec.at> wrote:\n> Perhaps it's just a matter of taste, but I think the TupleTableSlotOps\n> structure, once initialized, should be used wherever possible. At least for me\n> personally, when I read the code, the particular callback function name is a\n> bit disturbing wherever it's not necessary.\n\nBut it's significantly more efficient.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 May 2019 08:40:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A few more opportunities to use TupleTableSlotOps fields"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, May 21, 2019 at 8:02 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Perhaps it's just a matter of taste, but I think the TupleTableSlotOps\n> > structure, once initialized, should be used wherever possible. At least for me\n> > personally, when I read the code, the particular callback function name is a\n> > bit disturbing wherever it's not necessary.\n> \n> But it's significantly more efficient.\n\nDo you refer to the fact that for example the address of\n\n\ttts_virtual_clear(dstslot);\n\nis immediately available in the text section while in this case\n\n\tdstslot->tts_ops->clear(dstslot);\n\nthe CPU first needs to fetch the address of tts_ops and also that of the\n->clear function?\n\nI admit I didn't think about this problem. Nevertheless I imagine that due to\nconstness of the variables like TTSOpsVirtual (and due to several other const\ndeclarations) the compiler might be able to compute the address of the\ntts_ops->clear() expression. Thus the only extra work for the CPU would be to\nfetch tts_ops from dstslot, but that should not be a big deal because other\nfields of dstslot are accessed by the surrounding code (so all of them might\nbe available in the CPU cache).\n\nI don't pretend to be an expert in this area though, it's possible that I\nstill miss something.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 21 May 2019 16:47:50 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: A few more opportunities to use TupleTableSlotOps fields"
},
{
"msg_contents": "On Tue, May 21, 2019 at 10:48 AM Antonin Houska <ah@cybertec.at> wrote:\n> Do you refer to the fact that for example the address of\n>\n> tts_virtual_clear(dstslot);\n>\n> is immediately available in the text section while in this case\n>\n> dstslot->tts_ops->clear(dstslot);\n>\n> the CPU first needs to fetch the address of tts_ops and also that of the\n> ->clear function?\n\nYes. And since tts_virtual_clear() is marked static, it seems like it\nmight even possible for it to inline that function at compile time.\n\n> I admit I didn't think about this problem. Nevertheless I imagine that due to\n> constness of the variables like TTSOpsVirtual (and due to several other const\n> declarations) the compiler might be able to compute the address of the\n> tts_ops->clear() expression. Thus the only extra work for the CPU would be to\n> fetch tts_ops from dstslot, but that should not be a big deal because other\n> fields of dstslot are accessed by the surrounding code (so all of them might\n> be available in the CPU cache).\n\nI think the issue is pipeline stalls. If the compiler can see a\ndirect call coming up, it can start fetching the instructions from the\ntarget address. If it sees an indirect call, that's probably harder\nto do.\n\nBut really, I'm not an expect on this area. I would write the code as\nAndres did on the general principle of making things as easy for the\ncompiler and CPU as possible, but I do not know how much it really\nmatters. Andres probably does know...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 May 2019 12:16:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A few more opportunities to use TupleTableSlotOps fields"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 12:16:22 -0400, Robert Haas wrote:\n> On Tue, May 21, 2019 at 10:48 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Do you refer to the fact that for example the address of\n> >\n> > tts_virtual_clear(dstslot);\n> >\n> > is immediately available in the text section while in this case\n> >\n> > dstslot->tts_ops->clear(dstslot);\n> >\n> > the CPU first needs to fetch the address of tts_ops and also that of the\n> > ->clear function?\n> \n> Yes. And since tts_virtual_clear() is marked static, it seems like it\n> might even possible for it to inline that function at compile time.\n\nI sure hope so. And I did verify that at some point. We, for example,\ndon't want changes to slot->tts_flags tts_virtual_clear does to be\nactually written to memory, if it's called from a callsite that knows\nit's a virtual slot.\n\nI just checked, and for me tts_virtual_copyslot() inlines all of\ntts_virtual_clear(), and eliminates most of its contents except for the\npfree() branch. All the rest of the state changes are overwritten by\ntts_virtual_copyslot() anyway.\n\n\n> > I admit I didn't think about this problem. Nevertheless I imagine that due to\n> > constness of the variables like TTSOpsVirtual (and due to several other const\n> > declarations) the compiler might be able to compute the address of the\n> > tts_ops->clear() expression. Thus the only extra work for the CPU would be to\n> > fetch tts_ops from dstslot, but that should not be a big deal because other\n> > fields of dstslot are accessed by the surrounding code (so all of them might\n> > be available in the CPU cache).\n> \n> I think the issue is pipeline stalls. If the compiler can see a\n> direct call coming up, it can start fetching the instructions from the\n> target address. If it sees an indirect call, that's probably harder\n> to do.\n\nSome CPUs can do so, but it'll often be more expensive / have a higher\nchance of misspeculating (rather than the 0 chance in case of a straight\nline code.\n\n\n> But really, I'm not an expect on this area. I would write the code as\n> Andres did on the general principle of making things as easy for the\n> compiler and CPU as possible, but I do not know how much it really\n> matters. Andres probably does know...\n\nI think the inlining bit is more crucial, but that having as few\nindirect calls as possible matters here. It wasn't that easy to get the\nslot virtualization to not regress performance meaningfully.\n\nIf anything, I really want to go the *opposite* direction, i.e. remove\n*more* indirect calls from within execTuples.c, and get the compiler to\nrealize at callsites external to execTuples.c that it can cache tts_ops\nin more places.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 09:34:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: A few more opportunities to use TupleTableSlotOps fields"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-21 16:47:50 +0200, Antonin Houska wrote:\n> I admit I didn't think about this problem. Nevertheless I imagine that due to\n> constness of the variables like TTSOpsVirtual (and due to several other const\n> declarations) the compiler might be able to compute the address of the\n> tts_ops->clear() expression.\n\nIt really can't, without actually fetching tts_ops, and reading the\ncallback's location. How would e.g. tts_virtual_copyslot() know that the\nslot's tts_ops point to TTSOpsVirtual? There's simply no way to express\nthat in C. If this were a class in C++, the compiler would have decent\nchance at it these days (because if it's a final method it can infer\nthat it has to be, and because whole program optimization allows\ndevirtualization passes to do so), but well, it's not.\n\nAnd then there's the whole inlining issue explained in my other recent\nmail on the topic.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 May 2019 09:39:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: A few more opportunities to use TupleTableSlotOps fields"
}
] |
[
{
"msg_contents": "Consider:\n\nCREATE TABLE testwid\n(\n txtnotnull text,\n txtnull text,\n int8notnull int8,\n int8null int8\n);\nINSERT INTO testwid\nSELECT 'a' || g.i,\n NULL,\n g.i,\n NULL\nFROM generate_series(1,10000) AS g(i);\nANALYZE testwid;\nSELECT attname, avg_width FROM pg_stats WHERE tablename = 'testwid';\n attname | avg_width\n-------------+-----------\n txtnotnull | 5\n txtnull | 0\n int8notnull | 8\n int8null | 8\n(4 rows)\n\n\nI see in analyze.c\n8<-----------------\n/* We can only compute average width if we found some non-null values.*/\nif (nonnull_cnt > 0)\n\n [snip]\n\nelse if (null_cnt > 0)\n{\n /* We found only nulls; assume the column is entirely null */\n stats->stats_valid = true;\n stats->stanullfrac = 1.0;\n if (is_varwidth)\n stats->stawidth = 0; /* \"unknown\" */\n else\n stats->stawidth = stats->attrtype->typlen;\n stats->stadistinct = 0.0; /* \"unknown\" */\n}\n8<-----------------\n\nSo apparently intentional, but seems gratuitously inconsistent. Could\nthis cause any actual inconsistent behaviors? In any case that first\ncomment does not reflect the code.\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Tue, 21 May 2019 15:48:37 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "stawidth inconsistency with all NULL columns"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> else if (null_cnt > 0)\n> {\n> /* We found only nulls; assume the column is entirely null */\n> stats->stats_valid = true;\n> stats->stanullfrac = 1.0;\n> if (is_varwidth)\n> stats->stawidth = 0; /* \"unknown\" */\n> else\n> stats->stawidth = stats->attrtype->typlen;\n> stats->stadistinct = 0.0; /* \"unknown\" */\n> }\n> 8<-----------------\n\n> So apparently intentional, but seems gratuitously inconsistent. Could\n> this cause any actual inconsistent behaviors? In any case that first\n> comment does not reflect the code.\n\nAre you suggesting that we should set stawidth to zero even for a\nfixed-width datatype? That seems pretty silly. We know exactly what\nthe value should be, and would be if we'd chanced to find even one\nnon-null entry.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 15:55:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stawidth inconsistency with all NULL columns"
},
{
"msg_contents": "On 5/21/19 3:55 PM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> else if (null_cnt > 0)\n>> {\n>> /* We found only nulls; assume the column is entirely null */\n>> stats->stats_valid = true;\n>> stats->stanullfrac = 1.0;\n>> if (is_varwidth)\n>> stats->stawidth = 0; /* \"unknown\" */\n>> else\n>> stats->stawidth = stats->attrtype->typlen;\n>> stats->stadistinct = 0.0; /* \"unknown\" */\n>> }\n>> 8<-----------------\n> \n>> So apparently intentional, but seems gratuitously inconsistent. Could\n>> this cause any actual inconsistent behaviors? In any case that first\n>> comment does not reflect the code.\n> \n> Are you suggesting that we should set stawidth to zero even for a\n> fixed-width datatype? That seems pretty silly. We know exactly what\n> the value should be, and would be if we'd chanced to find even one\n> non-null entry.\n\nWell you could argue in similar fashion for variable width values -- if\nwe find even one of those, it will be at least 4 bytes. So why set those\nto zero?\n\nNot a big deal, but it struck me as odd when I was looking at the\ncurrent state of affairs.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Tue, 21 May 2019 16:07:50 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "Re: stawidth inconsistency with all NULL columns"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 5/21/19 3:55 PM, Tom Lane wrote:\n>> Are you suggesting that we should set stawidth to zero even for a\n>> fixed-width datatype? That seems pretty silly. We know exactly what\n>> the value should be, and would be if we'd chanced to find even one\n>> non-null entry.\n\n> Well you could argue in similar fashion for variable width values -- if\n> we find even one of those, it will be at least 4 bytes. So why set those\n> to zero?\n\nUm, really the minimum width is 1 byte, given short headers. But as\nthe code notes, zero means we don't know what a sane estimate would\nbe, which is certainly not the case for fixed-width types.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 16:54:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stawidth inconsistency with all NULL columns"
}
] |
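A hedged follow-up showing where stawidth's "unknown" value actually surfaces (reusing the testwid table from above; the exact default figure may vary by version): with stawidth = 0 the planner falls back to a per-type default width, while the fixed-width int8 keeps its known width.

-- stawidth = 0 makes the planner fall back to a type-based default width
-- (around 32 for text without a typmod) rather than estimating 0 bytes.
EXPLAIN SELECT txtnull FROM testwid;   -- width reflects the type default
EXPLAIN SELECT int8null FROM testwid;  -- width=8, the known fixed length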
[
{
"msg_contents": "> On Wed, Mar 27, 2019 at 11:42 AM Haribabu Kommi\n<kommi(dot)haribabu(at)gmail(dot)com>\n> wrote:\n>\n> Visual Studio 2019 is officially released. There is no major change in the\n> patch, except some small comments update.\n>\n> Also attached patches for the back branches also.\n>\n\nI have gone through path\n'0001-Support-building-with-visual-studio-2019.patch' only, but I am sure\nsome comments will also apply to back branches.\n\n1. The VisualStudioVersion value looks odd:\n\n\n+ $self->{VisualStudioVersion} = '16.0.32.32432';\n\nAre you using a pre-release version [1]?\n\n\n2. There is a typo: s/stuido/studio/:\n\n+ # The major visual stuido that is suppored has nmake version >= 14.20 and\n< 15.\n\n\nThere is something in the current code that I think should be also updated.\nThe code for _GetVisualStudioVersion contains:\n\n if ($major > 14)\n {\n carp\n \"The determined version of Visual Studio is newer than the latest\nsupported version. Returning the latest supported version instead.\";\n return '14.00';\n }\n\nShouldn't the returned value be '14.20' for Visual Studio 2019?\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1]\nhttps://docs.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers\n\n> On Wed, Mar 27, 2019 at 11:42 AM Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>> wrote:>> Visual Studio 2019 is officially released. There is no major change in the> patch, except some small comments update.>> Also attached patches for the back branches also.>I have gone through path '0001-Support-building-with-visual-studio-2019.patch' only, but I am sure some comments will also apply to back branches.1. The VisualStudioVersion value looks odd:+\t$self->{VisualStudioVersion} = '16.0.32.32432';Are you using a pre-release version [1]?2. There is a typo: s/stuido/studio/:+\t# The major visual stuido that is suppored has nmake version >= 14.20 and < 15.There is something in the current code that I think should be also updated. The code for _GetVisualStudioVersion contains: if ($major > 14) \t{ \t\tcarp \t\t \"The determined version of Visual Studio is newer than the latest supported version. Returning the latest supported version instead.\"; \t\treturn '14.00'; \t}Shouldn't the returned value be '14.20' for Visual Studio 2019?Regards,Juan José Santamaría Flecha[1] https://docs.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers",
"msg_date": "Tue, 21 May 2019 22:48:58 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MSVC Build support with visual studio 2019"
}
] |
[
{
"msg_contents": "Given the number of different people that have sent in patches\nfor building with VS2019, it doesn't seem to me that we ought\nto let that wait for v13. We could treat it as something that\nwe only intend to go into v12, or we could think that we ought\nto back-patch it, but either way it should be on the open-items\npage somewhere.\n\nOf course, I'm not volunteering to do the work, but still ...\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 May 2019 17:06:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Should MSVC 2019 support be an open item for v12?"
},
{
"msg_contents": "El mar., 21 may. 2019 23:06, Tom Lane <tgl@sss.pgh.pa.us> escribió:\n\n> Given the number of different people that have sent in patches\n> for building with VS2019, it doesn't seem to me that we ought\n> to let that wait for v13.\n\n\nI am not so sure if there are actually that many people or it's just me\nmaking too much noise about this single issue, if that is the case I want\nto apologize.\n\n We could treat it as something that\n> we only intend to go into v12, or we could think that we ought\n> to back-patch it, but either way it should be on the open-items\n> page somewhere.\n>\n\nThere is already one item about this in the commitfest [1].\n\n\n\n> Of course, I'm not volunteering to do the work, but still ...\n>\n\nAfter all the noise I will help to review the patch.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n[1] https://commitfest.postgresql.org/23/2122/\n\n>\n\nEl mar., 21 may. 2019 23:06, Tom Lane <tgl@sss.pgh.pa.us> escribió:Given the number of different people that have sent in patches\nfor building with VS2019, it doesn't seem to me that we ought\nto let that wait for v13.I am not so sure if there are actually that many people or it's just me making too much noise about this single issue, if that is the case I want to apologize. We could treat it as something that\nwe only intend to go into v12, or we could think that we ought\nto back-patch it, but either way it should be on the open-items\npage somewhere.There is already one item about this in the commitfest [1].\nOf course, I'm not volunteering to do the work, but still ... After all the noise I will help to review the patch.Regards,Juan José Santamaría Flecha[1] https://commitfest.postgresql.org/23/2122/",
"msg_date": "Tue, 21 May 2019 23:53:51 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should MSVC 2019 support be an open item for v12?"
},
{
"msg_contents": "On Tue, May 21, 2019 at 11:53:51PM +0200, Juan José Santamaría Flecha wrote:\n> El mar., 21 may. 2019 23:06, Tom Lane <tgl@sss.pgh.pa.us> escribió:\n>> Given the number of different people that have sent in patches\n>> for building with VS2019, it doesn't seem to me that we ought\n>> to let that wait for v13.\n> \n> I am not so sure if there are actually that many people or it's just me\n> making too much noise about this single issue, if that is the case I want\n> to apologize.\n\nWell, you are the second person caring enough about that matter and\npost a patch on the lists, so my take is that there is no need to wait\nfor v13 to open, and that we should do that now also because support\nfor new MSVC versions gain back-patching. Something I think we should\nhave is also a new animal running VS2019 (no plans to maintain one\nmyself). I can take care of this patch, I just need to set up a VM\nwith this version of MSVC to make sure that it works.. One thing we\nneed to be careful is handling of local on Windows, this stuff changes\nmore or less at each release of VS.\n--\nMichael",
"msg_date": "Wed, 22 May 2019 16:32:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Should MSVC 2019 support be an open item for v12?"
}
] |
[
{
"msg_contents": "Attached is a patch for a write after allocated memory which we found in\ntesting. Its an obscure case but can happen if the same column is used in\ndifferent grouping keys, as in the example below, which uses tables from\nthe regress test suite (build with --enable-cassert in order to turn on\nmemory warnings). Patch is against master.\n\nThe hashed aggregate state has an array for the column indices that is\nsized using the number of non-aggregated columns in the set that includes\nthe agg's targetlist, quals and input grouping columns. The duplicate\nelimination of columns can result in under-allocation, as below. Sizing\nbased on the number of grouping columns and number of quals/targetlists not\nin the grouping columns avoids this.\n\nRegards,\nColm McHugh (Salesforce)\n\nexplain (costs off) select 1 from tenk where (hundred, thousand) in (select\ntwothousand, twothousand from onek);\n\npsql: WARNING: problem in alloc set ExecutorState: detected write past\nchunk end in block 0x7f8b8901fa00, chunk 0x7f8b89020cd0\n\npsql: WARNING: problem in alloc set ExecutorState: detected write past\nchunk end in block 0x7f8b8901fa00, chunk 0x7f8b89020cd0\n\n QUERY PLAN\n\n-------------------------------------------------------------\n\n Hash Join\n\n Hash Cond: (tenk.hundred = onek.twothousand)\n\n -> Seq Scan on tenk\n\n Filter: (hundred = thousand)\n\n -> Hash\n\n -> HashAggregate\n\n Group Key: onek.twothousand, onek.twothousand\n\n -> Seq Scan on onek\n(8 rows)",
"msg_date": "Tue, 21 May 2019 18:03:46 -0700",
"msg_from": "Colm McHugh <colm.mchugh@gmail.com>",
"msg_from_op": true,
"msg_subject": "Patch to fix write after end of array in hashed agg initialization"
},
{
"msg_contents": "Colm McHugh <colm.mchugh@gmail.com> writes:\n> Attached is a patch for a write after allocated memory which we found in\n> testing. Its an obscure case but can happen if the same column is used in\n> different grouping keys, as in the example below, which uses tables from\n> the regress test suite (build with --enable-cassert in order to turn on\n> memory warnings). Patch is against master.\n\nI confirm the appearance of the memory-overwrite warnings in HEAD.\n\nIt looks like the bad code is (mostly) the fault of commit b5635948.\nAndrew, can you take a look at this fix?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 11:11:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to fix write after end of array in hashed agg\n initialization"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n >> Attached is a patch for a write after allocated memory which we\n >> found in testing. Its an obscure case but can happen if the same\n >> column is used in different grouping keys, as in the example below,\n >> which uses tables from the regress test suite (build with\n >> --enable-cassert in order to turn on memory warnings). Patch is\n >> against master.\n\n Tom> I confirm the appearance of the memory-overwrite warnings in HEAD.\n\n Tom> It looks like the bad code is (mostly) the fault of commit\n Tom> b5635948. Andrew, can you take a look at this fix?\n\nI'll look into it.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 23 May 2019 01:36:10 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Patch to fix write after end of array in hashed agg\n initialization"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n >>> Attached is a patch for a write after allocated memory which we\n >>> found in testing. Its an obscure case but can happen if the same\n >>> column is used in different grouping keys, as in the example below,\n >>> which uses tables from the regress test suite (build with\n >>> --enable-cassert in order to turn on memory warnings). Patch is\n >>> against master.\n\n Andrew> I'll look into it.\n\nOK, so my first impression is that this is down to (a) the fact that\nwhen planning a GROUP BY, we eliminate duplicate grouping keys; (b) due\nto (a), the executor code isn't expecting to have to deal with\nduplicates, but (c) when using a HashAgg to implement a Unique path, the\nplanner code isn't making any attempt to eliminate duplicates so they\nget through.\n\nIt was wrong before commit b5635948, looks like Andres' fc4b3dea2 which\nintroduced the arrays and the concept of narrowing the stored tuples is\nthe actual culprit. But I'll deal with fixing it anyway unless Andres\nhas a burning desire to step in.\n\nMy inclination is to fix this in the planner rather than the executor;\nthere seems no good reason to actually hash a duplicate column more than\nonce.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 23 May 2019 04:49:57 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Patch to fix write after end of array in hashed agg\n initialization"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> My inclination is to fix this in the planner rather than the executor;\n> there seems no good reason to actually hash a duplicate column more than\n> once.\n\nSounds reasonable --- but would it make sense to introduce some\nassertions, or other cheap tests, into the executor to check that\nit's not being given a case it can't handle?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 00:02:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch to fix write after end of array in hashed agg\n initialization"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n > Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n >> My inclination is to fix this in the planner rather than the\n >> executor; there seems no good reason to actually hash a duplicate\n >> column more than once.\n\n Tom> Sounds reasonable --- but would it make sense to introduce some\n Tom> assertions, or other cheap tests, into the executor to check that\n Tom> it's not being given a case it can't handle?\n\nOh definitely, I was planning on it.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 23 May 2019 05:11:57 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Patch to fix write after end of array in hashed agg\n initialization"
},
{
"msg_contents": ">>>>> \"Andrew\" == Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n\n Andrew> My inclination is to fix this in the planner rather than the\n Andrew> executor; there seems no good reason to actually hash a\n Andrew> duplicate column more than once.\n\nI take this back; I don't believe it's possible to eliminate duplicates\nin all cases. Consider (a,b) IN (select c,c from...), where a,b,c are\ndifferent types; I don't think we can assume that (a=c) and (b=c)\ncross-type comparisons will necessarily induce the same hash function on\nc, and so we might legitimately need to keep it duplicated.\n\nSo I'm going with a simpler method of ensuring the array is adequately\nsized at execution time and not touching the planner at all. Draft patch\nis attached, will commit it later.\n\n-- \nAndrew (irc:RhodiumToad)",
"msg_date": "Thu, 23 May 2019 11:44:20 +0100",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Patch to fix write after end of array in hashed agg\n initialization"
}
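To make the cross-type case above concrete, here is a minimal sketch of the kind of query that can leave a duplicated column in the hashed grouping keys (table and column names are hypothetical, not from the original report):

```
-- A semijoin like this one may be implemented with a HashAgg acting as
-- a Unique path over the subquery output.  Since c appears twice, the
-- hash grouping keys contain a duplicate column, and with a, b and c of
-- different types the (a = c) and (b = c) comparisons need not share a
-- hash function, so the duplicate cannot simply be dropped.
CREATE TABLE t_outer (a int, b bigint);
CREATE TABLE t_inner (c smallint);

SELECT *
  FROM t_outer
 WHERE (a, b) IN (SELECT c, c FROM t_inner);
```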
] |
[
{
"msg_contents": "Hi,\n\nAttached is a draft of the PG12 Beta 1 press release that is going out\nthis Thursday. The primary goals of this release announcement are to\nintroduce new features, enhancements, and changes that are available in\nPG12, as well as encourage our users to test and provide feedback to\nhelp ensure the stability of the release.\n\nSpeaking of feedback, please provide me with your feedback on the\ntechnical correctness of this announcement so I can incorporate changes\nprior to the release.\n\nThanks!\n\nJonathan",
"msg_date": "Tue, 21 May 2019 23:39:38 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "Hi!\n\nOn Wed, May 22, 2019 at 6:40 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Attached is a draft of the PG12 Beta 1 press release that is going out\n> this Thursday. The primary goals of this release announcement are to\n> introduce new features, enhancements, and changes that are available in\n> PG12, as well as encourage our users to test and provide feedback to\n> help ensure the stability of the release.\n\nGreat work! Thank you for your efforts.\n\n> Speaking of feedback, please provide me with your feedback on the\n> technical correctness of this announcement so I can incorporate changes\n> prior to the release.\n\nI suggest renaming \"Most-common Value Statistics\" to \"Multicolumn\nMost-common Value Statistics\". Looking on current title one may think\nwe didn't support MCV statistics at all, but we did support\nsingle-column case for a long time.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 22 May 2019 06:54:39 +0300",
"msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "For CTEs, is forcing inlining the example we want to give, rather than the\nexample of forcing materialization given?\n\nAccording to the docs, virtual generated columns aren't yet supported. I'm\npretty sure the docs are right. Do we still want to mention it?\n\nOtherwise it looks good to me.\n\nOn Tue, May 21, 2019 at 11:39 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> Hi,\n>\n> Attached is a draft of the PG12 Beta 1 press release that is going out\n> this Thursday. The primary goals of this release announcement are to\n> introduce new features, enhancements, and changes that are available in\n> PG12, as well as encourage our users to test and provide feedback to\n> help ensure the stability of the release.\n>\n> Speaking of feedback, please provide me with your feedback on the\n> technical correctness of this announcement so I can incorporate changes\n> prior to the release.\n>\n> Thanks!\n>\n> Jonathan\n>\n\nFor CTEs, is forcing inlining the example we want to give, rather than the example of forcing materialization given?According to the docs, virtual generated columns aren't yet supported. I'm pretty sure the docs are right. Do we still want to mention it?Otherwise it looks good to me.On Tue, May 21, 2019 at 11:39 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:Hi,\n\nAttached is a draft of the PG12 Beta 1 press release that is going out\nthis Thursday. The primary goals of this release announcement are to\nintroduce new features, enhancements, and changes that are available in\nPG12, as well as encourage our users to test and provide feedback to\nhelp ensure the stability of the release.\n\nSpeaking of feedback, please provide me with your feedback on the\ntechnical correctness of this announcement so I can incorporate changes\nprior to the release.\n\nThanks!\n\nJonathan",
"msg_date": "Wed, 22 May 2019 01:01:44 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On 2019-05-22 05:39, Jonathan S. Katz wrote:\n> \n> Speaking of feedback, please provide me with your feedback on the\n> technical correctness of this announcement so I can incorporate changes\n> prior to the release.\n\nHere are a few changes.\n\nMain change: generated columns exist only in the STORED variety. VIRTUAL \nwill hopefully later be added.\n\n\nthanks,\n\nErik Rijkers",
"msg_date": "Wed, 22 May 2019 07:05:49 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
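For reference, the STORED variety Erik refers to looks like this (a sketch along the lines of the documentation's example, not text taken from the draft itself):

```
-- PG12 ships only the STORED kind: the value is computed on INSERT and
-- UPDATE and written to disk.  A VIRTUAL kind may come later.
CREATE TABLE people (
    height_cm numeric,
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
```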
{
"msg_contents": "Find some corrections inline.\n\nOn Tue, May 21, 2019 at 11:39:38PM -0400, Jonathan S. Katz wrote:\n> PostgreSQL 12 Beta 1 Released\n> =============================\n> \n> The PostgreSQL Global Development Group announces that the first beta release of\n> PostgreSQL 12 is now available for download. This release contains previews of\n> all features that will be available in the final release of PostgreSQL 12,\n> though some details of the release could change before then.\n> \n> In the spirit of the open source PostgreSQL community, we strongly encourage you\n> to test the new features of PostgreSQL 12 in your database systems to help us\n> eliminate any bugs or other issues that may exist. While we do not advise for\n> you to run PostgreSQL 12 Beta 1 in your production environments, we encourage\n> you to find ways to run your typical application workloads against this beta\n> release.\n> \n> Your testing and feedback will help the community ensure that the PostgreSQL 12\n> release upholds our standards of providing a stable, reliable release of the\n> world's most advanced open source relational database.\n> \n> PostgreSQL 12 Features Highlights\n> ---------------------------------\n> \n> ### Indexing Performance, Functionality, and Management\n> \n> PostgreSQL 12 improves the overall performance of the standard B-tree indexes\n> with improvements to the overall space management of these indexes as well.\n> These improvements also provide an overall reduction of bloating when using\n> B-tree for specific use cases, in addition to a performance gain.\n> \n> Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently,\n> which lets you perform a [`REINDEX`](https://www.postgresql.org/docs/devel/sql-reindex.html) operation\n\n\n> without blocking any writes to the index. The inclusion of this feature should\n> help with length index rebuilds that could cause potential downtime evens when\n\nevents\n\n> administration a PostgreSQL database in a production environment.\n> \n> PostgreSQL 12 extends the abilities of several of the specialized indexing\n> mechanisms. The ability to create covering indexes, i.e. the `INCLUDE` clause\n> that was introduced in PostgreSQL 11, have now been added to GiST indexes.\n\nhas now\n\n> SP-GiST indexes now support the ability to perform K-nearest neighbor (K-NN)\n> queries for data types that support the distance (`<->`) operation.\n> \n> The amount of write-ahead log (WAL) overhead generated when creating a GiST,\n> GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which\n> provides several benefits to the overall disk utilization of a PostgreSQL\n> cluster as well as using features such as continuous archiving and streaming\n> replication.\n> \n> ### Inlined WITH queries (Common table expressions)\n> \n> Common table expressions (aka `WITH` queries) can now be automatically inlined\n> in a query if they are a) not recursive, b) do not have any side-effects and\n\nI think \"are\" should be rearranged:\na) are not recursive\n\n> c) are only referenced once in a later part of a query. 
These removes a known\n> \"optimization fence\" that has existed since the introduction of the `WITH`\n> clause in PostgreSQL 8.4\n> \n> You can force a `WITH` query to be inlined using the `NOT MATERIALIZED` clause,\n> e.g.\n> \n> ```\n> WITH c AS NOT MATERIALIZED (\n> SELECT * FROM a WHERE a.x % 4\n> )\n> SELECT * FROM c JOIN d ON d.y = a.x;\n> ```\n> \n> ### Partitioning\n> \n> PostgreSQL 12 improves on the performance of processing tables with thousands\n\n=> improves on the performance WHEN processing tables with thousands\n\n> of partitions for operations that only need to use a small number of partitions.\n> PostgreSQL 12 also provides improvements to the performance of both\n> using `COPY` with a partitioned table as well as the `ATTACH PARTITION`\n> operation. Additionally, the ability to use foreign keys to reference\n\n=> Additionally, the ability for foreign keys to reference partitioned\n\n> partitioned tables is now allowed in PostgreSQL 12.\n> \n> ### JSON path queries per SQL/JSON specification\n> \n> PostgreSQL 12 now lets you execute [JSON path queries](https://www.postgresql.org/docs/devel/functions-json.html#FUNCTIONS-SQLJSON-PATH)\n\n=> allows execution of\n\n> per the SQL/JSON specification in the SQL:2016 standard. Similar to XPath\n> expressions for XML, JSON path expressions let you evaluate a variety of\n\n> arithmetic expressions and functions in addition to comparing values within JSON\n> documents.\n> \n> A subset of these expressions can be accelerated with GIN indexes, letting you\n\n=> allowing execution of\n\n> execute highly performant lookups across sets of JSON data.\n> \n> ### Collations\n> \n> PostgreSQL 12 now supports case-insensitive and accent-insensitive collations\n> for ICU provided collations, also known as \"[nondeterministic collations](https://www.postgresql.org/docs/devel/collation.html#COLLATION-NONDETERMINISTIC)\".\n> When used, these collations can provide convenience for comparisons and sorts,\n> but can also lead to a performance penalty depending as a collation may need to\n> make additional checks on a string.\n> \n> ### Most-common Value Statistics\n> \n> [`CREATE STATISTICS`](https://www.postgresql.org/docs/devel/sql-createstatistics.html),\n> introduced in PostgreSQL 10 to help collect more complex statistics to improve\n> query planning, now supports most-common value statistics. This leads to\n> improved query plans for distributions that are non-uniform.\n> \n> ### Generated Columns\n> \n> PostgreSQL 12 lets you create [generated columns](https://www.postgresql.org/docs/devel/ddl-generated-columns.html)\n> that compute their values based on the contents of other columns. This feature\n> provides two types of generated columns:\n> \n> - Stored generated columns, which are computed on inserts and updated and are saved on disk\n> - Virtual generated columns, which are computed only when a column is read as part of a query\n> \n> ### Pluggable Table Storage Interface\n> \n> PostgreSQL 12 introduces the pluggable table storage interface that allows for\n> the creation and use of different storage mechanisms for table storage. 
New\n> access methods can be added to a PostgreSQL cluster using the [`CREATE ACCESS METHOD`](https://www.postgresql.org/docs/devel/sql-create-access-method.html)\n> and subsequently added to tables with the new `USING` clause on `CREATE TABLE`.\n> \n> A table storage interface can be defined by creating a new [table access method](https://www.postgresql.org/docs/devel/tableam.html).\n> \n> In PostgreSQL 12, the storage interface that is used by default is the `heap`\n> access method, which is currently the only supported method.\n> \n> ### Page Checksums\n> \n> The `pg_verify_checkums` command has been renamed to [`pg_checksums`](https://www.postgresql.org/docs/devel/app-pgchecksums.html)\n> and now supports the ability to enable and disable page checksums across an\n> PostgreSQL cluster that is offline. Previously, page checksums could only be\n> enabled during the initialization of a cluster with `initdb`.\n> \n> ### Authentication\n> \n> GSSAPI now supports client and server-side encryption and can be specified in\n> the [`pg_hba.conf`](https://www.postgresql.org/docs/devel/auth-pg-hba-conf.html)\n> file using the `hostgssenc` and `hostnogssenc` record types. PostgreSQL 12 also\n> allows for LDAP servers to be discovered based on `DNS SRV` records if\n\n=> allows discovery of LDAP servers\n\n> PostgreSQL was compiled with OpenLDAP.\n> \n> Noted Behavior Changes\n> ----------------------\n> \n> There are several changes introduced in PostgreSQL 12 that can affect the\n> behavior as well as management of your ongoing operations. A few of these are\n> noted below; for information about other changes, please review the\n> \"Migrating to Version 12\" section of the [release notes](https://www.postgresql.org/docs/devel/release-12.html).\n> \n> 1. The `recovery.conf` configuration file is now merged into the main\n> `postgresql.conf` file. PostgreSQL will not start if it detects that\n> `recovery.conf` is present. To put PostgreSQL into a non-primary mode, you can\n> use the `recovery.signal` and the `standby.signal` files.\n> \n> You can read more about [archive recovery](https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY) here:\n> \n> https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY\n> \n> 2. Just-in-Time (JIT) compilation is now enabled by default.\n> \n> Additional Features\n> -------------------\n> \n> Many other new features and improvements have been added to PostgreSQL 12, some\n> of which may be as or more important to specific users than what is mentioned above. Please see the [Release Notes](https://www.postgresql.org/docs/devel/release-12.html) for a complete list of new and changed features.\n> \n> Testing for Bugs & Compatibility\n> --------------------------------\n> \n> The stability of each PostgreSQL release greatly depends on YOUR, the community,\n\nyou\n\n> to test the upcoming version with your workloads and testing tools in order to\n> find bugs and regressions before the release of PostgreSQL 12. As this is a\n\nRemove: \"of PostgreSQL 12\" ?\n\n> Beta, minor changes to database behaviors, feature details, and APIs are still\n> possible. Your feedback and testing will help determine the final tweaks on the\n> new features, so please test in the near future. The quality of user testing\n> helps determine when we can make a final release.\n> \n> A list of [open issues](https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items)\n> is publicly available in the PostgreSQL wiki. 
You can\n> [report bugs](https://www.postgresql.org/account/submitbug/) using this form on\n> the PostgreSQL website:\n> \n> https://www.postgresql.org/account/submitbug/\n> \n> Beta Schedule\n> -------------\n> \n> This is the first beta release of version 12. The PostgreSQL Project will\n> release additional betas as required for testing, followed by one or more\n> release candidates, until the final release in late 2019. For further\n> information please see the [Beta Testing](https://www.postgresql.org/developer/beta/) page.\n> \n> Links\n> -----\n> \n> * [Download](https://www.postgresql.org/download/)\n> * [Beta Testing Information](https://www.postgresql.org/developer/beta/)\n> * [PostgreSQL 12 Beta Release Notes](https://www.postgresql.org/docs/devel/release-12.html)\n> * [PostgreSQL 12 Open Issues](https://wiki.postgresql.org/wiki/PostgreSQL_12_Open_Items)\n> * [Submit a Bug](https://www.postgresql.org/account/submitbug/)\n\n\nOn Wed, May 22, 2019 at 07:05:49AM +0200, Erik Rijkers wrote:\n> --- 12beta1.md.orig\t2019-05-22 06:33:16.286099932 +0200\n> +++ 12beta1.md\t2019-05-22 06:48:24.279966057 +0200\n> @@ -30,12 +30,12 @@\n> Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently,\n> which lets you perform a [`REINDEX`](https://www.postgresql.org/docs/devel/sql-reindex.html) operation\n> without blocking any writes to the index. The inclusion of this feature should\n> -help with length index rebuilds that could cause potential downtime evens when\n> -administration a PostgreSQL database in a production environment.\n> +help with lengthy index rebuilds that could cause potential downtime when\n> +administrating a PostgreSQL database in a production environment.\n\nShould be \"administering\"\n\n> Common table expressions (aka `WITH` queries) can now be automatically inlined\n> in a query if they are a) not recursive, b) do not have any side-effects and\n> -c) are only referenced once in a later part of a query. These removes a known\n> +c) are only referenced once in a later part of a query. This removes a known\n\nI would remove \"known\".\n\n\n",
"msg_date": "Wed, 22 May 2019 00:50:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
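For beta testers following along with the draft quoted above, the concurrent reindex feature can be exercised directly; checksums can likewise be toggled on a stopped cluster with `pg_checksums --enable -D <datadir>`. A minimal sketch (the index name is hypothetical):

```
-- Rebuild an index without blocking concurrent writes (new in PG12):
REINDEX INDEX CONCURRENTLY my_index;
```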
{
"msg_contents": "On Wed, 22 May 2019 at 15:39, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Speaking of feedback, please provide me with your feedback on the\n> technical correctness of this announcement so I can incorporate changes\n> prior to the release.\n\nSeems like a pretty good summary. Thanks for writing that up.\n\nCouple notes from my read through:\n\n> help with length index rebuilds that could cause potential downtime evens when\n> administration a PostgreSQL database in a production environment.\n\nlength -> lengthy?\nevens -> events? (mentioned by Justin)\nadministration -> administering? (mentioned by Justin)\n\n> PostgreSQL 12 also provides improvements to the performance of both\n> using `COPY` with a partitioned table as well as the `ATTACH PARTITION`\n> operation. Additionally, the ability to use foreign keys to reference\n> partitioned tables is now allowed in PostgreSQL 12.\n\nI'd say nothing has been done to improve the performance of ATTACH\nPARTITION. Robert did reduce the lock level required for that\noperation, but that should make it any faster.\n\nI think it would be good to write:\n\nPostgreSQL 12 also provides improvements to the performance of both\n`INSERT` and `COPY` into a partitioned table. `ATTACH PARTITION` can\nnow also be performed without blocking concurrent queries on the\npartitioned table. Additionally, the ability to use foreign keys to\nreference partitioned tables is now allowed in PostgreSQL 12.\n\n\n> ### Most-common Value Statistics\n\nI think this might be better titled:\n\n### Most-common Value Extended Statistics\n\nwhich is slightly different from what Alexander mentioned. I think we\ngenerally try to call them \"extended statistics\", even if the name of\nthe command does not quite agree. git grep -i \"extended stat\" shows\nmore interesting results than git grep -i \"column stat\" when done in\nthe doc directory. Either way, I think it's slightly confusing to\ntitle this the way it is since we already have MCV stats and have had\nfor a long time.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 22 May 2019 18:47:57 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On 2019/05/22 15:47, David Rowley wrote:\n> On Wed, 22 May 2019 at 15:39, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> PostgreSQL 12 also provides improvements to the performance of both\n>> using `COPY` with a partitioned table as well as the `ATTACH PARTITION`\n>> operation. Additionally, the ability to use foreign keys to reference\n>> partitioned tables is now allowed in PostgreSQL 12.\n> \n> I'd say nothing has been done to improve the performance of ATTACH\n> PARTITION. Robert did reduce the lock level required for that\n> operation, but that should make it any faster.\n\nMaybe you meant \"..., but that shouldn't make it any faster.\"\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 22 May 2019 15:52:30 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n\n> Attached is a draft of the PG12 Beta 1 press release that is going out\n> this Thursday. The primary goals of this release announcement are to\n> introduce new features, enhancements, and changes that are available in\n> PG12, as well as encourage our users to test and provide feedback to\n> help ensure the stability of the release.\n\nAwesome!\n\n> ### Authentication\n>\n> GSSAPI now supports client and server-side encryption and can be\n> specified in the\n> [`pg_hba.conf`](https://www.postgresql.org/docs/devel/auth-pg-hba-conf.html)\n> file using the `hostgssenc` and `hostnogssenc` record\n> types. PostgreSQL 12 also allows for LDAP servers to be discovered\n> based on `DNS SRV` records if PostgreSQL was compiled with OpenLDAP.\n\nMaybe a better title for this section would be \"Authentication /\nEncryption\" or maybe even \"Connection security\"? I get that this is a\npress release though, so feel free to disregard.\n\nThanks,\n--Robbie",
"msg_date": "Wed, 22 May 2019 10:55:18 -0400",
"msg_from": "Robbie Harwood <rharwood@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "Hi,\n\n> ### Indexing Performance, Functionality, and Management\n> \n> PostgreSQL 12 improves the overall performance of the standard B-tree indexes\n> with improvements to the overall space management of these indexes as well.\n> These improvements also provide an overall reduction of bloating when using\n> B-tree for specific use cases, in addition to a performance gain.\n\nI'm not sure everyone will understand bloating as a term? Perhaps just\nsaying 'also reduce index size when the index is modified frequently' or\nsuch?\n\n\n\n> ### Inlined WITH queries (Common table expressions)\n> \n> Common table expressions (aka `WITH` queries) can now be automatically inlined\n> in a query if they are a) not recursive, b) do not have any side-effects and\n> c) are only referenced once in a later part of a query. These removes a known\n> \"optimization fence\" that has existed since the introduction of the `WITH`\n> clause in PostgreSQL 8.4\n> \n> You can force a `WITH` query to be inlined using the `NOT MATERIALIZED` clause,\n> e.g.\n> \n> ```\n> WITH c AS NOT MATERIALIZED (\n> SELECT * FROM a WHERE a.x % 4\n> )\n> SELECT * FROM c JOIN d ON d.y = a.x;\n> ```\n\nWouldn't it be more important to reference how they can be *forced* to\nbe materialized? Because that'll be what users will need. And I think if\nwe reference NOT MATERIALIZED it also sounds like CTEs will not\nautomatically inlined ever.\n\n\n> ### Pluggable Table Storage Interface\n> \n> PostgreSQL 12 introduces the pluggable table storage interface that allows for\n> the creation and use of different storage mechanisms for table storage. New\n> access methods can be added to a PostgreSQL cluster using the [`CREATE ACCESS METHOD`](https://www.postgresql.org/docs/devel/sql-create-access-method.html)\n> and subsequently added to tables with the new `USING` clause on `CREATE TABLE`.\n> \n> A table storage interface can be defined by creating a new [table access method](https://www.postgresql.org/docs/devel/tableam.html).\n> \n> In PostgreSQL 12, the storage interface that is used by default is the `heap`\n> access method, which is currently the only supported method.\n\nI think s/which is currently the only supported method/which currently\nis the only built-in method/ or such would be good. I don't know what\n\"supported\" would actually mean here.\n\n\n> ### Authentication\n> \n> GSSAPI now supports client and server-side encryption and can be specified in\n> the [`pg_hba.conf`](https://www.postgresql.org/docs/devel/auth-pg-hba-conf.html)\n> file using the `hostgssenc` and `hostnogssenc` record types. PostgreSQL 12 also\n> allows for LDAP servers to be discovered based on `DNS SRV` records if\n> PostgreSQL was compiled with OpenLDAP.\n\nIs this really accurately categorized under authentication? Because it's\nreally ongoing encryption, as an alternative to tls?\n\n\n> Noted Behavior Changes\n> ----------------------\n> \n> There are several changes introduced in PostgreSQL 12 that can affect the\n> behavior as well as management of your ongoing operations. A few of these are\n> noted below; for information about other changes, please review the\n> \"Migrating to Version 12\" section of the [release notes](https://www.postgresql.org/docs/devel/release-12.html).\n> \n> 1. The `recovery.conf` configuration file is now merged into the main\n> `postgresql.conf` file. PostgreSQL will not start if it detects that\n> `recovery.conf` is present. 
To put PostgreSQL into a non-primary mode, you can\n> use the `recovery.signal` and the `standby.signal` files.\n> \n> You can read more about [archive recovery](https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY) here:\n> \n> https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY\n> \n> 2. Just-in-Time (JIT) compilation is now enabled by default.\n\nI think we should probably list the removal of WITH OIDs.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 12:44:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On 5/21/19 11:39 PM, Jonathan S. Katz wrote:\n> Hi,\n> \n> Attached is a draft of the PG12 Beta 1 press release that is going out\n> this Thursday. The primary goals of this release announcement are to\n> introduce new features, enhancements, and changes that are available in\n> PG12, as well as encourage our users to test and provide feedback to\n> help ensure the stability of the release.\n> \n> Speaking of feedback, please provide me with your feedback on the\n> technical correctness of this announcement so I can incorporate changes\n> prior to the release.\n\nThank you everyone for your feedback. I have incorporated most of it\ninto this latest revision. For your convenience I have also attached a diff.\n\nPlease let me know if you have any questions. If you have additional\nfeedback please provide it before 7am EDT tomorrow.\n\nThanks!\n\nJonathan",
"msg_date": "Wed, 22 May 2019 18:07:21 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "Hi Jonathan,\n\n\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n\n> If you have additional feedback please provide it before 7am EDT\n> tomorrow.\n\nThanks for writing this up. Below are some things I noticed when\nreading through (disclaimer: I'm not a native speaker).\n\n> PostgreSQL 12 extends the abilities of several of the specialized indexing\n> mechanisms. The ability to create covering indexes, i.e. the `INCLUDE` clause\n> that was introduced in PostgreSQL 11, havs now been added to GiST indexes.\n\n\"havs\" should be \"has\"\n\n> The amount of write-ahead log (WAL) overhead generated when creating a GiST,\n> GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which\n> provides several benefits to the overall disk utilization of a PostgreSQL\n> cluster as well as using features such as continuous archiving and streaming\n> replication.\n\nThe \"using\" reads odd to me. I think it would be better either omitted,\nor expanded to \"when using\".\n\n> ### Partitioning\n>\n> PostgreSQL 12 improves on the performance when processing tables with thousands\n> of partitions for operations that only need to use a small number of partitions.\n>\n> PostgreSQL 12 also provides improvements to the performance of both `INSERT` and\n> `COPY` into a partitioned table. `ATTACH PARTITION` can now also be performed\n> without blocking concurrent queries on the partitioned table. Additionally, the\n> ability to use foreign keys to reference partitioned tables is now allowed in\n> PostgreSQL 12.\n\n\"the ability to use ... is now allowed\" doesn't look right. How about\n\"the ability to use ... is now provided\" or \"using ... is now allowed\"?\n\n> ### Collations\n>\n> PostgreSQL 12 now supports case-insensitive and accent-insensitive collations\n> for ICU provided collations,\n\n\"collations for ... collations\" doesn't look right. I think either\n\"comparison for ... collations\" or \"collation ... for collations\" would\nbe better, but I'm not sure which.\n\n> ### Generated Columns\n>\n> PostgreSQL 12 lets you create [generated columns](https://www.postgresql.org/docs/devel/ddl-generated-columns.html)\n> that compute their values based on the contents of other columns. This feature\n> provides stored generated columns, which are computed on inserts and updated and\n> are saved on disk.\n\nShould be \"on inserts and updates\".\n\n> ### Pluggable Table Storage Interface\n>\n> PostgreSQL 12 introduces the pluggable table storage interface that allows for\n> the creation and use of different storage mechanisms for table storage. New\n> access methods can be added to a PostgreSQL cluster using the [`CREATE ACCESS METHOD`](https://www.postgresql.org/docs/devel/sql-create-access-method.html)\n> and subsequently added to tables with the new `USING` clause on `CREATE TABLE`.\n\nShould be either \"the CREATE ACCESS METHOD command\" or just \"CREATE\nACCESS METHOD\".\n\n> ### Page Checksums\n>\n> The `pg_verify_checkums` command has been renamed to [`pg_checksums`](https://www.postgresql.org/docs/devel/app-pgchecksums.html)\n> and now supports the ability to enable and disable page checksums across an\n> PostgreSQL cluster that is offline.\n\nShould be \"a PostgreSQL cluster\", not \"an\".\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n",
"msg_date": "Wed, 22 May 2019 23:52:13 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "Hi Ilmari,\n\nOn 5/22/19 6:52 PM, Dagfinn Ilmari Mannsåker wrote:\n> Hi Jonathan,\n> \n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> \n>> If you have additional feedback please provide it before 7am EDT\n>> tomorrow.\n> \n> Thanks for writing this up. Below are some things I noticed when\n> reading through (disclaimer: I'm not a native speaker).\n\nThanks for the fixes + suggestions. I accepted most of them. Attached is\nv3 of the patch, along with a diff.\n\nBest,\n\nJonathan",
"msg_date": "Wed, 22 May 2019 23:30:49 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On Thu, 23 May 2019 at 15:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Attached is\n> v3 of the patch, along with a diff.\n\nMinor details, but this query is not valid:\n\n> WITH c AS MATERIALIZED (\n> SELECT * FROM a WHERE a.x % 4\n> )\n> SELECT * FROM c JOIN d ON d.y = a.x;\n\na.x % 4 is not a boolean clause, and \"a\" is not in the main query, so\na.x can't be referenced there.\n\nHow about:\n\nWITH c AS MATERIALIZED (\n SELECT * FROM a WHERE a.x % 4 = 0\n)\nSELECT * FROM c JOIN d ON d.y = c.x;\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 17:45:23 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On 5/23/19 1:45 AM, David Rowley wrote:\n> On Thu, 23 May 2019 at 15:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> Attached is\n>> v3 of the patch, along with a diff.\n> \n> Minor details, but this query is not valid:\n> \n>> WITH c AS MATERIALIZED (\n>> SELECT * FROM a WHERE a.x % 4\n>> )\n>> SELECT * FROM c JOIN d ON d.y = a.x;\n> \n> a.x % 4 is not a boolean clause, and \"a\" is not in the main query, so\n> a.x can't be referenced there.\n\n...that's the only gotcha I'm actually embarrassed about. Fixed.\n\nThanks,\n\nJonathan",
"msg_date": "Thu, 23 May 2019 08:01:07 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On Thu, May 23, 2019 at 1:01 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 5/23/19 1:45 AM, David Rowley wrote:\n> > On Thu, 23 May 2019 at 15:31, Jonathan S. Katz <jkatz@postgresql.org>\n> wrote:\n> >> Attached is\n> >> v3 of the patch, along with a diff.\n> >\n> > Minor details, but this query is not valid:\n> >\n> >> WITH c AS MATERIALIZED (\n> >> SELECT * FROM a WHERE a.x % 4\n> >> )\n> >> SELECT * FROM c JOIN d ON d.y = a.x;\n> >\n> > a.x % 4 is not a boolean clause, and \"a\" is not in the main query, so\n> > a.x can't be referenced there.\n>\n> ...that's the only gotcha I'm actually embarrassed about. Fixed.\n>\n>\nThe ON d.y = a.x still needs to be changed to ON d.y = c.x\n\nPantelis\n\nOn Thu, May 23, 2019 at 1:01 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:On 5/23/19 1:45 AM, David Rowley wrote:\n> On Thu, 23 May 2019 at 15:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> Attached is\n>> v3 of the patch, along with a diff.\n> \n> Minor details, but this query is not valid:\n> \n>> WITH c AS MATERIALIZED (\n>> SELECT * FROM a WHERE a.x % 4\n>> )\n>> SELECT * FROM c JOIN d ON d.y = a.x;\n> \n> a.x % 4 is not a boolean clause, and \"a\" is not in the main query, so\n> a.x can't be referenced there.\n\n...that's the only gotcha I'm actually embarrassed about. Fixed.\n The ON d.y = a.x still needs to be changed to ON d.y = c.xPantelis",
"msg_date": "Thu, 23 May 2019 16:36:06 +0100",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
},
{
"msg_contents": "On Thu, May 23, 2019 at 4:36 PM Pantelis Theodosiou <ypercube@gmail.com>\nwrote:\n\n>\n> On Thu, May 23, 2019 at 1:01 PM Jonathan S. Katz <jkatz@postgresql.org>\n> wrote:\n>\n>> On 5/23/19 1:45 AM, David Rowley wrote:\n>> > On Thu, 23 May 2019 at 15:31, Jonathan S. Katz <jkatz@postgresql.org>\n>> wrote:\n>> >> Attached is\n>> >> v3 of the patch, along with a diff.\n>> >\n>> > Minor details, but this query is not valid:\n>> >\n>> >> WITH c AS MATERIALIZED (\n>> >> SELECT * FROM a WHERE a.x % 4\n>> >> )\n>> >> SELECT * FROM c JOIN d ON d.y = a.x;\n>> >\n>> > a.x % 4 is not a boolean clause, and \"a\" is not in the main query, so\n>> > a.x can't be referenced there.\n>>\n>> ...that's the only gotcha I'm actually embarrassed about. Fixed.\n>>\n>>\n> The ON d.y = a.x still needs to be changed to ON d.y = c.x\n>\n> Pantelis\n>\n\nAnother minor point in the sentence \"... which is currently is ...\":\n\n> In PostgreSQL 12, the storage interface that is used by default is the\nheap access method, which is currently is the only built-in method.\n\nBut I forgot the most important. Thank you for the new version and all the\nwork that has gone into it!\n\nOn Thu, May 23, 2019 at 4:36 PM Pantelis Theodosiou <ypercube@gmail.com> wrote:On Thu, May 23, 2019 at 1:01 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:On 5/23/19 1:45 AM, David Rowley wrote:\n> On Thu, 23 May 2019 at 15:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> Attached is\n>> v3 of the patch, along with a diff.\n> \n> Minor details, but this query is not valid:\n> \n>> WITH c AS MATERIALIZED (\n>> SELECT * FROM a WHERE a.x % 4\n>> )\n>> SELECT * FROM c JOIN d ON d.y = a.x;\n> \n> a.x % 4 is not a boolean clause, and \"a\" is not in the main query, so\n> a.x can't be referenced there.\n\n...that's the only gotcha I'm actually embarrassed about. Fixed.\n The ON d.y = a.x still needs to be changed to ON d.y = c.xPantelisAnother minor point in the sentence \"... which is currently is ...\":> In PostgreSQL 12, the storage interface that is used by default is the heap\naccess method, which is currently is the only built-in method.But I forgot the most important. Thank you for the new version and all the work that has gone into it!",
"msg_date": "Thu, 23 May 2019 16:50:57 +0100",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12 Beta 1 press release draft"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs some may have noticed, I have been looking at the ACL dump ordering\nfor databases, and I have noticed the same issue with tablespaces:\nhttps://www.postgresql.org/message-id/20190522062626.GC1486@paquier.xyz\n\nFor the sake of avoiding looking at the other email, here is how to\nreproduce the issue:\n1) First issue those SQLs:\n\\! rm -rf /tmp/tbspc/\n\\! mkdir -p /tmp/tbspc/\nCREATE ROLE a_user;\nCREATE ROLE b_user WITH SUPERUSER;\nCREATE ROLE c_user;\nCREATE TABLESPACE poo LOCATION '/tmp/tbspc/';\nSET SESSION AUTHORIZATION b_user;\nREVOKE ALL ON TABLESPACE poo FROM public;\nGRANT CREATE ON TABLESPACE poo TO c_user WITH GRANT OPTION;\nSET SESSION AUTHORIZATION c_user;\nGRANT CREATE ON TABLESPACE poo TO a_user\n2) Use pg_dumpall -g, where you would notice the following set of\nGRANT queries:\nCREATE TABLESPACE poo OWNER postgres LOCATION '/tmp/tbspc';\nSET SESSION AUTHORIZATION c_user;\nGRANT ALL ON TABLESPACE poo TO a_user;\nRESET SESSION AUTHORIZATION;\nGRANT ALL ON TABLESPACE poo TO c_user WITH GRANT OPTION;\n3) Trying to restore results in a failure for the first GRANT query,\nas the second one has not set yet the authorizations for c_user.\n\nAttached is a patch to fix that, so as pg_dumpall does not complain\nwhen piling up GRANT commands using WITH GRANT OPTION. Are there any\ncomplains to apply that down to 9.6?\n\nWhen applying the patch, the set of GRANT queries is reordered:\n CREATE TABLESPACE poo OWNER postgres LOCATION '/tmp/tbspc';\n+GRANT ALL ON TABLESPACE poo TO c_user WITH GRANT OPTION;\n SET SESSION AUTHORIZATION c_user;\n GRANT ALL ON TABLESPACE poo TO a_user;\n RESET SESSION AUTHORIZATION;\n-GRANT ALL ON TABLESPACE poo TO c_user WITH GRANT OPTION;\n\nAs the problem is kind of different than the database case, I wanted\nto spawn anyway a new thread, but I got a bonus question: what would\nit take to support pg_init_privs for databases and tablespaces? If we\ncould get that to work, then all the ACL-related queries built for all\nobjects could make use of buildACLQueries(), which would avoid extra\ndiffs in the dump code for dbs and tbspaces.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 22 May 2019 16:15:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "ACL dump ordering broken as well for tablespaces"
},
{
"msg_contents": "On 5/22/19, 12:16 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> Attached is a patch to fix that, so as pg_dumpall does not complain\r\n> when piling up GRANT commands using WITH GRANT OPTION. Are there any\r\n> complains to apply that down to 9.6?\r\n\r\nThe patch looks good to me.\r\n\r\n> As the problem is kind of different than the database case, I wanted\r\n> to spawn anyway a new thread, but I got a bonus question: what would\r\n> it take to support pg_init_privs for databases and tablespaces? If we\r\n> could get that to work, then all the ACL-related queries built for all\r\n> objects could make use of buildACLQueries(), which would avoid extra\r\n> diffs in the dump code for dbs and tbspaces.\r\n\r\nA bit of digging led me to the commit that removed databases and\r\ntablespaces from pg_init_privs [0] and to a related thread [1]. IIUC\r\nthe problem is that using pg_init_privs for databases is complicated\r\nby the ability to drop and recreate the template databases.\r\n\r\nNathan\r\n\r\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=47f5bb9f539a7fff089724b1cbacc31613031895\r\n[1] https://www.postgresql.org/message-id/9f25cb66-df67-8d81-ed6a-d18692a03410%402ndquadrant.com\r\n\r\n",
"msg_date": "Wed, 22 May 2019 18:35:31 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ACL dump ordering broken as well for tablespaces"
},
{
"msg_contents": "On Wed, May 22, 2019 at 06:35:31PM +0000, Bossart, Nathan wrote:\n> The patch looks good to me.\n\nThanks for double-checking. I have applied and back-patched. The good\nthing here is that there were zero conflicts.\n\n> A bit of digging led me to the commit that removed databases and\n> tablespaces from pg_init_privs [0] and to a related thread [1]. IIUC\n> the problem is that using pg_init_privs for databases is complicated\n> by the ability to drop and recreate the template databases.\n\nI don't quite get this argument. If a user is willing to drop\ntemplate1, then it is logic to also drop its initial privilege entries\nand recreate new ones from scratch. I think that this deserves a\ncloser lookup. For tablespaces, we are limited by the ability of not\nsharing pg_init_privs?\n--\nMichael",
"msg_date": "Thu, 23 May 2019 10:51:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: ACL dump ordering broken as well for tablespaces"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Wed, May 22, 2019 at 06:35:31PM +0000, Bossart, Nathan wrote:\n> > A bit of digging led me to the commit that removed databases and\n> > tablespaces from pg_init_privs [0] and to a related thread [1]. IIUC\n> > the problem is that using pg_init_privs for databases is complicated\n> > by the ability to drop and recreate the template databases.\n> \n> I don't quite get this argument. If a user is willing to drop\n> template1, then it is logic to also drop its initial privilege entries\n> and recreate new ones from scratch. I think that this deserves a\n> closer lookup. For tablespaces, we are limited by the ability of not\n> sharing pg_init_privs?\n\nWhy do you feel that's the case? The point of pg_init_privs is to\nprovide a way to go from initdb-time state to\nwhatever-the-current-privs-are. Dropping initdb-time privs and\nreverting back to a no-privs default wouldn't be right in any case\nwhere the initdb-time privs are different from no-privs.\n\nAs an example- if someone dropped the template1 database and then\nre-created it, it won't have the same privileges as the initdb-time\ntemplate1 and instead wouldn't have any privileges- but an actual new\ninstallation would bring back template1 with the initdb-time privileges.\n\nBasically, the pg_dump output in that case *should* include the\nequivilant of \"REVOKE initdb-time privs\" if we want it to be a minimal\nchange from what is there at initdb-time, but the only way to have that\nhappen is to know what the initdb-time privs were, and that's not what\nwill be in pg_init_privs if you drop the entries from pg_init_priv when\ntemplate1 is dropped.\n\nI'm not sure where/why tablespaces came into this discussion at all\nsince we don't have any initdb-time tablespaces that can be dropped or\nrecreated, and the ones we do have exist with no-privs at initdb-time,\nso I don't think there's any reason we'd need to have entries in\npg_init_privs for those, and it seems like they should be able to just\nuse the buildACLQueries magic, though it looks like you might have to\nadd a flag that will basically be the same as the binary_upgrade flag to\nsay \"this object doesn't have any init privs, so we aren't joining\nagainst pg_init_privs, and don't include that in the query\".\n\nUnfortunately, that isn't going to work for databases though. I haven't\ngot any particularly great solution there. We could possibly have a\ncatalog which defined the initdb-time objects using their name instead\nof the initdb-time OID but that seems awful grotty.. Or we could try to\ndo the same thing the user did and just DROP template1 and then recreate\nit, in which case it'd be created with no-privs and then do the same as\ntablespaces above, if it has a too-high OID, but again, that seems\npretty ugly.\n\nOpen to other thoughts, but I really don't think we can just stick\nthings into pg_init_privs for global objects and then remove them if\nthat object is dropped, and it isn't a shared catalog either, so there's\nalso that... Having a shared catalog just in case someone wants to\ndrop/recreate template1 strikes me as pretty massive overkill.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 23 May 2019 10:46:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: ACL dump ordering broken as well for tablespaces"
}
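A sketch of the template1 scenario Stephen describes, to make the problem concrete (standard SQL against a scratch cluster; the initdb-time ACL described in the comments is illustrative, not quoted from the thread):

```
-- A template database cannot be dropped while flagged as a template:
UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;

-- The recreated database comes back with default (NULL) ACLs, which is
-- not the initdb-time state, where CREATE/TEMPORARY were revoked from
-- PUBLIC.  Without the original pg_init_privs entries, pg_dump cannot
-- reconstruct the delta between the two.
CREATE DATABASE template1 TEMPLATE template0 IS_TEMPLATE true;
```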
] |
[
{
"msg_contents": "Hi all,\n\nTrying to do pg_dump[all] on a 9.5 or older server results in spurious\nfailures:\npg_dump: column number -1 is out of range 0..36\n\nAfter looking around, the problem comes from\ncheck_tuple_field_number(), more specifically from getTables() where\nsomeone has forgotten to add NULL values for amname when querying\nolder server versions.\n\nAttached is a patch to fix that. I am not seeing other failures with\nan instance that includes all the contents of installcheck, so it\nseems that the rest is fine.\n\nThis needs to be applied to HEAD, so I am adding an open item.\n\nAny objections to the attached?\n--\nMichael",
"msg_date": "Wed, 22 May 2019 17:34:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
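The failure mode comes from PQfnumber() returning -1 for a column that the older-server query never selected, and that -1 then being fed to PQgetisnull(). A minimal standalone libpq illustration (not pg_dump code; the connection relies on the usual PG* environment variables):

```
/* Compile with: cc demo.c -lpq -I$(pg_config --includedir) */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");
	PGresult   *res;
	int			i_amname;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "%s", PQerrorMessage(conn));
		return 1;
	}

	res = PQexec(conn, "SELECT relname FROM pg_class LIMIT 1");

	/* The column was never selected, so PQfnumber() returns -1 ... */
	i_amname = PQfnumber(res, "amname");

	/* ... and this emits the "column number -1 is out of range" notice
	 * while pretending the field is NULL, which is why the dumps still
	 * came out usable. */
	printf("i_amname = %d, isnull = %d\n",
		   i_amname, PQgetisnull(res, 0, i_amname));

	PQclear(res);
	PQfinish(conn);
	return 0;
}
```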
{
"msg_contents": "> On Wed, May 22, 2019 at 10:34 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Trying to do pg_dump[all] on a 9.5 or older server results in spurious\n> failures:\n> pg_dump: column number -1 is out of range 0..36\n>\n> After looking around, the problem comes from\n> check_tuple_field_number(), more specifically from getTables() where\n> someone has forgotten to add NULL values for amname when querying\n> older server versions.\n\nYeah, sorry, looks like it was my fault.\n\n> Attached is a patch to fix that. I am not seeing other failures with\n> an instance that includes all the contents of installcheck, so it\n> seems that the rest is fine.\n>\n> This needs to be applied to HEAD, so I am adding an open item.\n>\n> Any objections to the attached?\n\nI've checked it too (on 9.4), don't see any issues after applying this patch,\nso +1.\n\n\n",
"msg_date": "Wed, 22 May 2019 11:05:07 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Trying to do pg_dump[all] on a 9.5 or older server results in spurious\n> failures:\n> pg_dump: column number -1 is out of range 0..36\n\n> After looking around, the problem comes from\n> check_tuple_field_number(), more specifically from getTables() where\n> someone has forgotten to add NULL values for amname when querying\n> older server versions.\n\n> Attached is a patch to fix that. I am not seeing other failures with\n> an instance that includes all the contents of installcheck, so it\n> seems that the rest is fine.\n\nLooks like the right fix. I'm sad that the buildfarm did not catch\nthis ... why wouldn't the cross-version-upgrade tests have seen it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 09:46:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 09:46:19 -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > Trying to do pg_dump[all] on a 9.5 or older server results in spurious\n> > failures:\n> > pg_dump: column number -1 is out of range 0..36\n> \n> > After looking around, the problem comes from\n> > check_tuple_field_number(), more specifically from getTables() where\n> > someone has forgotten to add NULL values for amname when querying\n> > older server versions.\n\nThanks for catching!\n\n\n> > Attached is a patch to fix that.\n\nWouldn't the better fix be to change\n\n\t\tif (PQgetisnull(res, i, i_amname))\n\t\t\ttblinfo[i].amname = NULL;\n\ninto\n\n\t\tif (i_amname == -1 || PQgetisnull(res, i, i_amname))\n\t\t\ttblinfo[i].amname = NULL;\n\nit's much more scalable than adding useless columns everywhere, and we\nalready use that approach with i_checkoption (and at a number of other\nplaces).\n\n\n> > Attached is a patch to fix that. I am not seeing other failures with\n> > an instance that includes all the contents of installcheck, so it\n> > seems that the rest is fine.\n> \n> Looks like the right fix. I'm sad that the buildfarm did not catch\n> this ... why wouldn't the cross-version-upgrade tests have seen it?\n\nI suspect we just didn't notice that it saw that:\n\n\tif (field_num < 0 || field_num >= res->numAttributes)\n\t{\n\t\tpqInternalNotice(&res->noticeHooks,\n\t\t\t\t\t\t \"column number %d is out of range 0..%d\",\n\t\t\t\t\t\t field_num, res->numAttributes - 1);\n\t\treturn false;\n\t}\n\nas it's just a notice, not a failure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 11:06:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Wouldn't the better fix be to change\n> \t\tif (PQgetisnull(res, i, i_amname))\n> \t\t\ttblinfo[i].amname = NULL;\n> into\n> \t\tif (i_amname == -1 || PQgetisnull(res, i, i_amname))\n> \t\t\ttblinfo[i].amname = NULL;\n> it's much more scalable than adding useless columns everywhere, and we\n> already use that approach with i_checkoption (and at a number of other\n> places).\n\nFWIW, I think that's a pretty awful idea, and the fact that some\npeople have had it before doesn't make it less awful. It's giving\nup the ability to detect errors-of-omission, which might easily\nbe harmful rather than harmless errors.\n\nIt does seem like we're overdue to rethink how pg_dump handles\ncross-version query differences ... but inconsistently lobotomizing\nits internal error detection is not a good start on that.\n\n>> Looks like the right fix. I'm sad that the buildfarm did not catch\n>> this ... why wouldn't the cross-version-upgrade tests have seen it?\n\n> I suspect we just didn't notice that it saw that:\n> as it's just a notice, not a failure.\n\nHm. But shouldn't we have gotten garbage output from the pg_dump run?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 14:17:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 14:17:41 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Wouldn't the better fix be to change\n> > \t\tif (PQgetisnull(res, i, i_amname))\n> > \t\t\ttblinfo[i].amname = NULL;\n> > into\n> > \t\tif (i_amname == -1 || PQgetisnull(res, i, i_amname))\n> > \t\t\ttblinfo[i].amname = NULL;\n> > it's much more scalable than adding useless columns everywhere, and we\n> > already use that approach with i_checkoption (and at a number of other\n> > places).\n> \n> FWIW, I think that's a pretty awful idea, and the fact that some\n> people have had it before doesn't make it less awful. It's giving\n> up the ability to detect errors-of-omission, which might easily\n> be harmful rather than harmless errors.\n\nWell, if we explicitly have to check for -1, it's not really an error of\nomission for everything. Yes, we could forget returning the amname in a\nnewer version of the query, but given that we usually just forward copy\nthe query that's not that likely. And instead having to change a lot of\nper-branch queries also adds potential for error, and also makes it more\npainful to fix cross-branch bugs etc.\n\n\n> >> Looks like the right fix. I'm sad that the buildfarm did not catch\n> >> this ... why wouldn't the cross-version-upgrade tests have seen it?\n> \n> > I suspect we just didn't notice that it saw that:\n> > as it's just a notice, not a failure.\n> \n> Hm. But shouldn't we have gotten garbage output from the pg_dump run?\n\nWhat garbage? We'd take the column as NULL, and assume it doesn't have\nan assigned AM. Which is the right behaviour when dumping from < 12?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 11:24:21 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-22 14:17:41 -0400, Tom Lane wrote:\n>> FWIW, I think that's a pretty awful idea, and the fact that some\n>> people have had it before doesn't make it less awful. It's giving\n>> up the ability to detect errors-of-omission, which might easily\n>> be harmful rather than harmless errors.\n\n> Well, if we explicitly have to check for -1, it's not really an error of\n> omission for everything. Yes, we could forget returning the amname in a\n> newer version of the query, but given that we usually just forward copy\n> the query that's not that likely.\n\nNo, the concerns I have are about (1) failure to insert the extra return\ncolumn into all branches where it's needed; (2) some unexpected run-time\nproblem causing the PGresult to not have the expected column.\n\nI think we've had some discussions about restructuring those giant\nif-nests so that they build up the query in pieces, making it possible\nto write things along the lines of\n\n appendPQExpBuffer(query,\n \"SELECT c.tableoid, c.oid, c.relname, \"\n // version-independent stuff here\n ...);\n ...\n if (fout->remoteVersion >= 120000)\n appendPQExpBuffer(query, \"am.amname, \");\n else\n appendPQExpBuffer(query, \"NULL as amname, \");\n ...\n\nI'm not sure if it'd work quite that cleanly when we need changes in the\nFROM part, but certainly for newly-added result fields this would be\nhugely better than repeating the whole query. And yes, I'd still insist\nthat explicitly providing the alternative NULL value is not optional.\n\n\n>> Hm. But shouldn't we have gotten garbage output from the pg_dump run?\n\n> What garbage? We'd take the column as NULL, and assume it doesn't have\n> an assigned AM. Which is the right behaviour when dumping from < 12?\n\nOh, I see:\n\nint\nPQgetisnull(const PGresult *res, int tup_num, int field_num)\n{\n if (!check_tuple_field_number(res, tup_num, field_num))\n return 1; /* pretend it is null */\n\nwhich just happens to be the right thing --- in this case --- for\nthe back branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 14:39:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
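Filling in the sketch above a little (a hypothetical fragment, not the actual pg_dump code; it assumes getTables()'s usual `query` PQExpBuffer and `fout` Archive pointer):

```
/* Build the column list in pieces: version-dependent columns always get
 * an explicit NULL fallback, so the result set has the same shape on
 * every server version and PQfnumber() never returns -1. */
appendPQExpBufferStr(query,
					 "SELECT c.tableoid, c.oid, c.relname, ");

if (fout->remoteVersion >= 120000)
	appendPQExpBufferStr(query, "am.amname, ");
else
	appendPQExpBufferStr(query, "NULL AS amname, ");

appendPQExpBufferStr(query, "c.relkind FROM pg_class c ");

/* Version-dependent FROM additions can be handled the same way. */
if (fout->remoteVersion >= 120000)
	appendPQExpBufferStr(query,
						 "LEFT JOIN pg_am am ON (c.relam = am.oid) ");
```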
{
"msg_contents": "\nOn 5/22/19 9:46 AM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Trying to do pg_dump[all] on a 9.5 or older server results in spurious\n>> failures:\n>> pg_dump: column number -1 is out of range 0..36\n>> After looking around, the problem comes from\n>> check_tuple_field_number(), more specifically from getTables() where\n>> someone has forgotten to add NULL values for amname when querying\n>> older server versions.\n>> Attached is a patch to fix that. I am not seeing other failures with\n>> an instance that includes all the contents of installcheck, so it\n>> seems that the rest is fine.\n> Looks like the right fix. I'm sad that the buildfarm did not catch\n> this ... why wouldn't the cross-version-upgrade tests have seen it?\n\n\nThat's a good question.\n\n\nIt's in the output - see for example\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=crake&dt=2019-05-22%2017%3A47%3A30&stg=xversion-upgrade-REL9_4_STABLE-HEAD>\nand scroll down a bit.\n\n\nBut since this doesn't cause pg_dumpall to fail, it doesn't on its own\ncause the buildfarm to fail either, and this is apparently sufficiently\nbenign to allow the tests to succeed.\n\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n",
"msg_date": "Wed, 22 May 2019 14:48:19 -0400",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "On Wed, May 22, 2019 at 02:39:54PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Well, if we explicitly have to check for -1, it's not really an error of\n>> omission for everything. Yes, we could forget returning the amname in a\n>> newer version of the query, but given that we usually just forward copy\n>> the query that's not that likely.\n> \n> No, the concerns I have are about (1) failure to insert the extra return\n> column into all branches where it's needed; (2) some unexpected run-time\n> problem causing the PGresult to not have the expected column.\n\nUsing a -1 check is not something I find much helpful, because this\nmasks the real problem that some queries may not have the output they\nexpect.\n\n> I'm not sure if it'd work quite that cleanly when we need changes in the\n> FROM part, but certainly for newly-added result fields this would be\n> hugely better than repeating the whole query. And yes, I'd still insist\n> that explicitly providing the alternative NULL value is not optional.\n\nThis makes the addition of JOIN clauses and WHERE quals harder to\nfollow and read, and it makes back-patching harder (with testing to\nolder versions it is already complicated enough) so I don't think that\nthis is a good idea. One extra idea I have would be to add a\ncompile-time flag which we could use to enforce a hard failure with an\nassertion or such in those code paths, because we never expect it in\nthe in-core clients. And that would cause any failure to be\nimmediately visible, at the condition of using the flag of course.\n\n> int\n> PQgetisnull(const PGresult *res, int tup_num, int field_num)\n> {\n> if (!check_tuple_field_number(res, tup_num, field_num))\n> return 1; /* pretend it is null */\n> \n> which just happens to be the right thing --- in this case --- for\n> the back branches.\n\nYes. I don't think that this is completely wrong. So, are there any\nobjections if I just apply the patch at the top of this thread and fix\nthe issue?\n--\nMichael",
"msg_date": "Thu, 23 May 2019 09:11:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
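A sketch of the compile-time enforcement idea floated above; the flag and the helper name are invented for illustration, not anything in the tree:

    /*
     * Hypothetical wrapper around libpq's PQfnumber(): under a
     * developer-only compile flag, a missing result column becomes a
     * hard failure so the omission is immediately visible when testing
     * the in-core clients.
     */
    static int
    get_field_number(PGresult *res, const char *colname)
    {
        int         fnum = PQfnumber(res, colname);

    #ifdef ENFORCE_QUERY_COLUMNS    /* hypothetical compile-time flag */
        if (fnum == -1)
        {
            fprintf(stderr, "missing expected result column \"%s\"\n",
                    colname);
            exit(1);
        }
    #endif

        return fnum;
    }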
{
"msg_contents": "Hi,\n\nOn 2019-05-23 09:11:33 +0900, Michael Paquier wrote:\n> On Wed, May 22, 2019 at 02:39:54PM -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> >> Well, if we explicitly have to check for -1, it's not really an error of\n> >> omission for everything. Yes, we could forget returning the amname in a\n> >> newer version of the query, but given that we usually just forward copy\n> >> the query that's not that likely.\n> > \n> > No, the concerns I have are about (1) failure to insert the extra return\n> > column into all branches where it's needed; (2) some unexpected run-time\n> > problem causing the PGresult to not have the expected column.\n> \n> Using a -1 check is not something I find much helpful, because this\n> masks the real problem that some queries may not have the output they\n> expect.\n\nI don't buy this, at all. The likelihood of introducing failures by\nhaving to modify a lot of queries nobody runs is much higher than what\nwe gain by the additional \"checks\". If this were something on the type\nsystem level, where the compiler would detect the error, even without\nrunning the query: Yea, ok. But it's not.\n\n\n> Yes. I don't think that this is completely wrong. So, are there any\n> objections if I just apply the patch at the top of this thread and fix\n> the issue?\n\nWell, I think the approach of duplicating code all over is a bad idea,\nand the fix is many times too big. But it's better than not fixing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 15:31:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "On Thu, May 23, 2019 at 03:31:30PM -0700, Andres Freund wrote:\n> Well, I think the approach of duplicating code all over is a bad idea,\n> and the fix is many times too big. But it's better than not fixing.\n\nWell, I can see why the current solution is not perfect, but we have\nbeen doing that for some time now, and redesigning that part has a\nmuch larger impact than a single column. I have committed the initial\nfix now. We can always break the wheel later on in 13~.\n--\nMichael",
"msg_date": "Fri, 24 May 2019 08:32:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-24 08:32:29 +0900, Michael Paquier wrote:\n> On Thu, May 23, 2019 at 03:31:30PM -0700, Andres Freund wrote:\n> > Well, I think the approach of duplicating code all over is a bad idea,\n> > and the fix is many times too big. But it's better than not fixing.\n> \n> Well, I can see why the current solution is not perfect, but we have\n> been doing that for some time now, and redesigning that part has a\n> much larger impact than a single column. I have committed the initial\n> fix now.\n\nThat argument would hold some sway if we there weren't a number of cases\ndoing it differently in the tree already:\n\n\t\tif (i_checkoption == -1 || PQgetisnull(res, i, i_checkoption))\n\t\t\ttblinfo[i].checkoption = NULL;\n\t\telse\n\t\t\ttblinfo[i].checkoption = pg_strdup(PQgetvalue(res, i, i_checkoption));\n\n\tif (PQfnumber(res, \"protrftypes\") != -1)\n\t\tprotrftypes = PQgetvalue(res, 0, PQfnumber(res, \"protrftypes\"));\n\telse\n\t\tprotrftypes = NULL;\n\n\tif (PQfnumber(res, \"proparallel\") != -1)\n\t\tproparallel = PQgetvalue(res, 0, PQfnumber(res, \"proparallel\"));\n\telse\n\t\tproparallel = NULL;\n\n\tif (i_proparallel != -1)\n\t\tproparallel = PQgetvalue(res, 0, PQfnumber(res, \"proparallel\"));\n\telse\n\t\tproparallel = NULL;\n\nAnd no, I don't buy the consistency argument. Continuing to do redundant\nwork just because we always have, isn't better than having one useful\nand one redundant approach. And yes, a full blown redesign would be\nbetter, but that doesn't get harder by having the -1 checks.\n\n- Andres\n\n\n",
"msg_date": "Thu, 23 May 2019 16:39:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump throwing \"column number -1 is out of range 0..36\" on HEAD"
}
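The four variants quoted above could also be collapsed into a single helper; a sketch, with the function name invented (pg_strdup is pg_dump's existing allocator, PQfnumber/PQgetisnull/PQgetvalue are libpq):

    /*
     * Return a copy of an optional column's value, or NULL when the
     * query run against an older server did not select the column, or
     * the value itself is NULL.
     */
    static char *
    getOptionalField(PGresult *res, int tup_num, const char *colname)
    {
        int         fnum = PQfnumber(res, colname);

        if (fnum == -1 || PQgetisnull(res, tup_num, fnum))
            return NULL;
        return pg_strdup(PQgetvalue(res, tup_num, fnum));
    }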
] |
[
{
"msg_contents": "Our Solaris packager reports that 12beta1 is failing to build for him\non some Solaris variants:\n\n> The link failure is:\n\n> ---\n> Undefined\t\t\tfirst referenced\n> symbol \t\t\t in file\n> ReadNextFullTransactionId pg_checksums.o\n> ld: fatal: symbol referencing errors. No output written to pg_checksums\n> ---\n\n> Now, ReadNextFullTransactionId() is implemented in\n> src/backend/access/transam/varsup.c but I cannot see varsop.o being\n> included in any of the libraries pg_checksum is linked against\n> (libpgcommon.a and libpgport.a).\n\n> When I check the pg_checksum.o I find that it references\n> ReadNextFullTransactionId on the platforms that fail but not where it\n> doesn't. The failed platforms are all sparc variants plus 64-bit x86\n> on Solaris 11.\n\n> The compiler used in Sun Studio 12u1, very old and and I can try to\n> upgrade and see if that helps.\n> [ it didn't ]\n\nI'm a bit mystified why we did not see this problem in the buildfarm,\nespecially since we have at least one critter (damselfly) running an\nOpenSolaris variant. Nonetheless, it sure looks like a \"somebody\nwas sloppy about frontend/backend separation\" problem.\n\nFix ideas anyone? I think we need to not only solve the immediate\nproblem (which might just take an #ifndef FRONTEND somewhere) but\nalso close the testing gap so we don't get blindsided like this\nagain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 09:56:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "I wrote:\n> Our Solaris packager reports that 12beta1 is failing to build for him\n> on some Solaris variants:\n\n>> The link failure is:\n>> ---\n>> Undefined\t\t\tfirst referenced\n>> symbol \t\t\t in file\n>> ReadNextFullTransactionId pg_checksums.o\n>> ld: fatal: symbol referencing errors. No output written to pg_checksums\n>> ---\n\nOn looking closer, the fix is simple and matches what we've done\nelsewhere: transam.h needs to have \"#ifndef FRONTEND\" to protect\nits static inline function from being compiled into frontend code.\n\nSo the disturbing thing here is that we no longer have any active\nbuildfarm members that can build HEAD but have the won't-elide-\nunused-static-functions problem. Clearly we'd better close that\ngap somehow ... anyone have an idea about how to test it better?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 10:48:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
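Roughly the shape of that guard, sketched from the description above rather than quoted from the commit (ReadNewTransactionId being the static inline in transam.h that references the backend-only symbol):

    /* src/include/access/transam.h */
    #ifndef FRONTEND
    static inline TransactionId
    ReadNewTransactionId(void)
    {
        return XidFromFullTransactionId(ReadNextFullTransactionId());
    }
    #endif                          /* FRONTEND */

Frontend code that includes the header then never sees the inline function, so no reference to ReadNextFullTransactionId can leak into it even on compilers that don't elide unused static functions.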
{
"msg_contents": "I wrote:\n>> Our Solaris packager reports that 12beta1 is failing to build for him\n>> on some Solaris variants:\n>>> The link failure is:\n>>> Undefined\t\t\tfirst referenced\n>>> symbol \t\t\t in file\n>>> ReadNextFullTransactionId pg_checksums.o\n\n> On looking closer, the fix is simple and matches what we've done\n> elsewhere: transam.h needs to have \"#ifndef FRONTEND\" to protect\n> its static inline function from being compiled into frontend code.\n\n> So the disturbing thing here is that we no longer have any active\n> buildfarm members that can build HEAD but have the won't-elide-\n> unused-static-functions problem. Clearly we'd better close that\n> gap somehow ... anyone have an idea about how to test it better?\n\nAh-hah --- some study of the gcc manual finds that modern versions\nof gcc have\n\n`-fkeep-inline-functions'\n In C, emit `static' functions that are declared `inline' into the\n object file, even if the function has been inlined into all of its\n callers. This switch does not affect functions using the `extern\n inline' extension in GNU C89. In C++, emit any and all inline\n functions into the object file.\n\nThis seems to do exactly what we need to test for this problem.\nI've confirmed that with it turned on, a modern platform finds\nthe ReadNextFullTransactionId problem with yesterday's sources,\nand that everything seems green as of HEAD.\n\nSo, we'd obviously not want to turn this on for normal builds,\nbut could we get a buildfarm animal or two to use this switch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 15:55:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 15:55:50 -0400, Tom Lane wrote:\n> I wrote:\n> >> Our Solaris packager reports that 12beta1 is failing to build for him\n> >> on some Solaris variants:\n> >>> The link failure is:\n> >>> Undefined\t\t\tfirst referenced\n> >>> symbol \t\t\t in file\n> >>> ReadNextFullTransactionId pg_checksums.o\n> \n> > On looking closer, the fix is simple and matches what we've done\n> > elsewhere: transam.h needs to have \"#ifndef FRONTEND\" to protect\n> > its static inline function from being compiled into frontend code.\n> \n> > So the disturbing thing here is that we no longer have any active\n> > buildfarm members that can build HEAD but have the won't-elide-\n> > unused-static-functions problem. Clearly we'd better close that\n> > gap somehow ... anyone have an idea about how to test it better?\n\nI'm somewhat inclined to just declare that people using such old\ncompilers ought to just use something newer. Having to work around\nbroken compilers that are so old that we don't even have a buildfarm\nanimal actually exposing that behaviour, seems like wasted effort. IMO\nit'd make sense to just treat this as part of the requirements for a C99\ncompiler.\n\n\n> Ah-hah --- some study of the gcc manual finds that modern versions\n> of gcc have\n> \n> `-fkeep-inline-functions'\n> In C, emit `static' functions that are declared `inline' into the\n> object file, even if the function has been inlined into all of its\n> callers. This switch does not affect functions using the `extern\n> inline' extension in GNU C89. In C++, emit any and all inline\n> functions into the object file.\n> \n> This seems to do exactly what we need to test for this problem.\n> I've confirmed that with it turned on, a modern platform finds\n> the ReadNextFullTransactionId problem with yesterday's sources,\n> and that everything seems green as of HEAD.\n> \n> So, we'd obviously not want to turn this on for normal builds,\n> but could we get a buildfarm animal or two to use this switch?\n\nI could easily add that to one of mine, if we decide to go for that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 May 2019 13:03:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-22 15:55:50 -0400, Tom Lane wrote:\n>>> So the disturbing thing here is that we no longer have any active\n>>> buildfarm members that can build HEAD but have the won't-elide-\n>>> unused-static-functions problem. Clearly we'd better close that\n>>> gap somehow ... anyone have an idea about how to test it better?\n\n> I'm somewhat inclined to just declare that people using such old\n> compilers ought to just use something newer. Having to work around\n> broken compilers that are so old that we don't even have a buildfarm\n> animal actually exposing that behaviour, seems like wasted effort. IMO\n> it'd make sense to just treat this as part of the requirements for a C99\n> compiler.\n\nTBH, I too supposed the requirement for this had gone away with the C99\nmove. But according to the discussion today on -packagers, there are\nstill supported variants of Solaris that have compilers that speak C99\nbut don't have this behavior. Per Bjorn's report:\n\n>> The compiler used in Sun Studio 12u1, very old and and I can try to\n>> upgrade and see if that helps.\n> I tried Sun Studio 12u2 and then a more drastic upgrade to Developer\n> Studio 12.5 but both had the same effect.\n\nIt doesn't sound like \"use a newer compiler\" is going to be a helpful\nanswer there.\n\n(It is annoying that nobody is running such a platform in the buildfarm,\nI agree. But I don't have the resources to spin up a real-Solaris\nbuildfarm member.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 May 2019 16:13:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-22 16:13:02 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-22 15:55:50 -0400, Tom Lane wrote:\n> Per Bjorn's report:\n> >> The compiler used in Sun Studio 12u1, very old and and I can try to\n> >> upgrade and see if that helps.\n> > I tried Sun Studio 12u2 and then a more drastic upgrade to Developer\n> > Studio 12.5 but both had the same effect.\n> \n> It doesn't sound like \"use a newer compiler\" is going to be a helpful\n> answer there.\n\nWell, GCC is available on solaris, and IIRC not that hard to install\n(isn't it just a 'pkg install gcc' or such?). Don't think we need to\ninvest a lot of energy fixing a compiler / OS combo that effectively\nisn't developed further.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 10:11:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-22 16:13:02 -0400, Tom Lane wrote:\n>> It doesn't sound like \"use a newer compiler\" is going to be a helpful\n>> answer there.\n\n> Well, GCC is available on solaris, and IIRC not that hard to install\n> (isn't it just a 'pkg install gcc' or such?). Don't think we need to\n> invest a lot of energy fixing a compiler / OS combo that effectively\n> isn't developed further.\n\nI'm not really excited about adopting a position that PG will only\nbuild on GCC and clones thereof.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 13:46:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 13:46:15 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-22 16:13:02 -0400, Tom Lane wrote:\n> >> It doesn't sound like \"use a newer compiler\" is going to be a helpful\n> >> answer there.\n> \n> > Well, GCC is available on solaris, and IIRC not that hard to install\n> > (isn't it just a 'pkg install gcc' or such?). Don't think we need to\n> > invest a lot of energy fixing a compiler / OS combo that effectively\n> > isn't developed further.\n> \n> I'm not really excited about adopting a position that PG will only\n> build on GCC and clones thereof.\n\nThat's not what I said though? Not supporting one compiler, on an OS\nthat's effectively not being developed anymore, with a pretty\nindefensible behaviour, requiring not insignificant work by everyone,\nisn't the same as standardizing on gcc. I mean, we obviously are going\nto continue at the absolute very least gcc, llvm/clang and msvc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 10:52:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-23 13:46:15 -0400, Tom Lane wrote:\n>> I'm not really excited about adopting a position that PG will only\n>> build on GCC and clones thereof.\n\n> That's not what I said though? Not supporting one compiler, on an OS\n> that's effectively not being developed anymore, with a pretty\n> indefensible behaviour, requiring not insignificant work by everyone,\n> isn't the same as standardizing on gcc. I mean, we obviously are going\n> to continue at the absolute very least gcc, llvm/clang and msvc.\n\nI think you're vastly overstating the case for refusing support for this.\nAdding \"#ifndef FRONTEND\" to relevant headers isn't a huge amount of work\n--- it's certainly far less of a problem than the Microsoft-droppings\nwe've had to put in in so many places. The only real issue in my mind\nis the lack of buildfarm support for detecting that we need to do so.\n\nAlso relevant here is that you have no evidence for the assumption that\nthese old Solaris compilers are the only live platform with the problem.\nYeah, we wish our buildfarm covered everything of interest, but it does\nnot. Maybe, if we get to beta2 without any additional reports of build\nfailures on beta1, that would be a bit of evidence that nobody else cares\n--- but we have no such evidence right now. We certainly can't assume\nthat any pre-v12 release provides evidence of that, because up till\nI retired pademelon, it was forcing us to keep this case supported.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 14:05:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 14:05:19 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-23 13:46:15 -0400, Tom Lane wrote:\n> >> I'm not really excited about adopting a position that PG will only\n> >> build on GCC and clones thereof.\n> \n> > That's not what I said though? Not supporting one compiler, on an OS\n> > that's effectively not being developed anymore, with a pretty\n> > indefensible behaviour, requiring not insignificant work by everyone,\n> > isn't the same as standardizing on gcc. I mean, we obviously are going\n> > to continue at the absolute very least gcc, llvm/clang and msvc.\n> \n> I think you're vastly overstating the case for refusing support for this.\n> Adding \"#ifndef FRONTEND\" to relevant headers isn't a huge amount of work\n> --- it's certainly far less of a problem than the Microsoft-droppings\n> we've had to put in in so many places. The only real issue in my mind\n> is the lack of buildfarm support for detecting that we need to do so.\n\nWell, doing it for every single inline function is pretty annoying, just\nfrom a bulkiness perspective. And figuring out exactly which inline\nfunction needs this isn't easy without something that actually shows the\nproblem.\n\n\n> Also relevant here is that you have no evidence for the assumption that\n> these old Solaris compilers are the only live platform with the problem.\n> Yeah, we wish our buildfarm covered everything of interest, but it does\n> not. Maybe, if we get to beta2 without any additional reports of build\n> failures on beta1, that would be a bit of evidence that nobody else cares\n> --- but we have no such evidence right now. We certainly can't assume\n> that any pre-v12 release provides evidence of that, because up till\n> I retired pademelon, it was forcing us to keep this case supported.\n\nI don't think I'm advocating for not fixing the issue we had for\nsolaris, for 12. I just don't think this a reasonable approach going\nforward.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 11:49:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-23 14:05:19 -0400, Tom Lane wrote:\n>> I think you're vastly overstating the case for refusing support for this.\n>> Adding \"#ifndef FRONTEND\" to relevant headers isn't a huge amount of work\n>> --- it's certainly far less of a problem than the Microsoft-droppings\n>> we've had to put in in so many places. The only real issue in my mind\n>> is the lack of buildfarm support for detecting that we need to do so.\n\n> Well, doing it for every single inline function is pretty annoying, just\n> from a bulkiness perspective.\n\nOh, I certainly wasn't suggesting we do that.\n\n> And figuring out exactly which inline\n> function needs this isn't easy without something that actually shows the\n> problem.\n\n... which is why we need a buildfarm animal that shows the problem.\nWe had some, up until the C99 move.\n\nIf the only practical way to detect the issue were to run some ancient\nplatform or other, I'd tend to agree with you that it's not worth the\ntrouble. But if we can spot it just by using -fkeep-inline-functions\non an animal or two, I think it's a reasonable thing to keep supporting\nthe case for a few years more.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 18:27:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: FullTransactionId changes are causing portability issues"
}
] |
[
{
"msg_contents": "Hello,\n\nI have recently observed a deadlock on one of our production servers related\nto locking only a single row in a job table. There were two functions involved\nin the deadlock, the first one acquires a “for key share” lock on the row that\nrepresents the job it works on and subsequently updates it with the job’s end\ntime (we need multiple jobs to be operating on a single row concurrently,\nthat’s why there is a \"for key share\" lock). The other function starts by\nacquiring the “for update” lock on the job row and then performs actions that\nshould not be run in parallel with other jobs.\n\nThe deadlock can be easily reproduced with the following statements. The\nqueries run against a table job (id integer primary key, name text) with a\nsingle row of (1,'a'))\n\nX1: select id from job where name = 'a' for key share;\nY: select id from job where name = 'a' for update; -- starts waiting for X1\nX2: select id from job where name = 'a' for key share;\nX1: update job set name = 'b' where id = 1;\nX2: update job set name = 'c' where id = 1; -- starts waiting for X1\nX1: rollback;\n\nAt this point, Y is terminated by the deadlock detector:\n\n\"deadlock detected\",\nProcess 53937 waits for ShareLock on transaction 488; blocked by process 53953.\nProcess 53953 waits for ExclusiveLock on tuple (0,1) of relation 16386 of database 12931;\nblocked by process 53937.\nProcess 53937: select id from job where name = 'a' for update;\nProcess 53953: update job set name = 'c' where id = 1;\",\n\nThe deadlock is between X2 and Y. Y waits for X2 to finish, as X2 holds a \"key\nshare\" lock, incompatible with \"for update\" that Y attempts to acquire. On the\nother hand, X2 needs to acquire the row lock to perform the update, and that\nis a two-phase process: first, get the tuple lock and then wait for\nconflicting transactions to finish, releasing the tuple lock afterward. X2\ntries to acquire the tuple lock, but it is owned by Y. PostgreSQL detects the\ndeadlock and terminates Y.\n\nSuch a deadlock only occurs when three or more sessions locking the same row\nare present and the lock is upgraded in at least one session. With only two\nsessions the upgrade does not go through the lock manager, as there are no\nconflicts with locks stored in the tuple. \n\nThat gave me an idea on how to change PostgreSQL to avoid deadlocking under\nthe condition above. When detecting the lock upgrade from the multixact, we\ncan avoid acquiring the tuple lock; however, we should still wait for the\nmutlixact members that hold conflicting locks, to avoid acquiring incompatible\nones.\n\nThe patch is attached. I had to tweak heap_update and heap_delete alongside\nthe heap_lock_tuple, as they acquire row locks as well. I am not very happy\nwith overloading DoesMultiXactIdConflict with another function to check if\ncurrent transaction id is among the multixact members, perhaps it is worth to\nhave a separate function for this. We can figure this out if we agree this is\nthe problem that needs to be solved and on the solution. The other possible\nobjection is related to the statement from README.tuplock that we need to go\nthrough the lock manager to avoid starving waiting exclusive-lockers. Since\nthis patch omits the tuple lock only when the lock upgrade happens, it does\nlimit the starvation condition to the cases when the lock compatible with the\none the waiting process asks for is acquired first and then gets upgraded to\nthe incompatible one. 
Since under present conditions the same operation will\nlikely deadlock and cancel the exclusive waiter altogether, I don't see this\nas a strong objection.\n\nCheers,\nOleksii",
"msg_date": "Wed, 22 May 2019 17:27:15 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "upgrades in row-level locks can deadlock"
},
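A rough sketch of the idea, not the attached patch itself; the shape is modeled on heapam.c's row-locking path and the out-parameter name is illustrative:

    bool        current_is_member = false;

    if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
                                mode, &current_is_member))
    {
        /*
         * If we are already a member of the multixact, we are merely
         * upgrading our own lock; taking the heavyweight tuple lock
         * here could wait behind another upgrader and deadlock, so
         * skip it in that case.
         */
        if (!current_is_member)
            heap_acquire_tuplock(relation, tid, mode, wait_policy,
                                 &have_tuple_lock);

        /* ...then sleep on the conflicting multixact members... */
    }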
{
"msg_contents": "Hello\n\nOn 2019-May-22, Oleksii Kliukin wrote:\n\n> I have recently observed a deadlock on one of our production servers related\n> to locking only a single row in a job table. There were two functions involved\n> in the deadlock, the first one acquires a “for key share” lock on the row that\n> represents the job it works on and subsequently updates it with the job’s end\n> time (we need multiple jobs to be operating on a single row concurrently,\n> that’s why there is a \"for key share\" lock). The other function starts by\n> acquiring the “for update” lock on the job row and then performs actions that\n> should not be run in parallel with other jobs.\n> \n> The deadlock can be easily reproduced with the following statements. The\n> queries run against a table job (id integer primary key, name text) with a\n> single row of (1,'a'))\n\nHmm, great find.\n\n> X1: select id from job where name = 'a' for key share;\n> Y: select id from job where name = 'a' for update; -- starts waiting for X1\n> X2: select id from job where name = 'a' for key share;\n> X1: update job set name = 'b' where id = 1;\n> X2: update job set name = 'c' where id = 1; -- starts waiting for X1\n> X1: rollback;\n> \n> At this point, Y is terminated by the deadlock detector:\n\nYeah, this seems undesirable in general terms. Here's a quick\nisolationtester spec that reproduces the problem:\n\nsetup {\n\tdrop table if exists tlu_job;\n\tcreate table tlu_job (id integer primary key, name text);\n\tinsert into tlu_job values (1, 'a');\n}\n\nteardown {\n\tdrop table tlu_job;\n}\n\nsession \"s1\"\nsetup\t\t\t\t{ begin; }\nstep \"s1_keyshare\"\t{ select id from tlu_job where name = 'a' for key share; }\nstep \"s1_update\"\t{ update tlu_job set name = 'b' where id = 1; }\nstep \"s1_rollback\"\t{ rollback; }\n\nsession \"s2\"\nsetup\t\t\t\t{ begin; }\nstep \"s2_keyshare\"\t{ select id from tlu_job where name = 'a' for key share; }\nstep \"s2_update\"\t{ update tlu_job set name = 'c' where id = 1; }\nstep \"s2_commit\"\t{ commit; }\n\nsession \"s3\"\nsetup\t\t\t\t{ begin; }\nstep \"s3_forupd\"\t{ select id from tlu_job where name = 'a' for update; }\nteardown\t\t\t{ commit; }\n\n# Alexey's permutation\npermutation \"s1_keyshare\" \"s3_forupd\" \"s2_keyshare\" \"s1_update\" \"s2_update\" \"s1_rollback\" \"s2_commit\"\n\n(X1 is s1, X2 is s2, Y is s3).\n\nPermutations such as that one report a deadlock with the original code,\nand does not report a deadlock after your proposed patch.\n\npermutation \"s1_keyshare\" \"s1_update\" \"s2_keyshare\" \"s3_forupd\" \"s2_update\" \"s1_rollback\" \"s2_commit\"\n\nBut semantically, I wonder if your transactions are correct. If you\nintend to modify the row in s1 and s2, shouldn't you be acquiring FOR NO\nKEY UPDATE lock instead? I don't see how can s1 and s2 coexist\npeacefully. Also, can your Y transaction use FOR NO KEY UPDATE instead\n.. unless you intend to delete the tuple in that transaction?\n\n\nI'm mulling over your proposed fix. I don't much like the idea that\nDoesMultiXactIdConflict() returns that new boolean -- seems pretty\nad-hoc -- but I don't see any way to do better than that ... (If we get\ndown to details, DoesMultiXactIdConflict needn't initialize that\nboolean: better let the callers do.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 16:40:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "Oleksii Kliukin <alexk@hintbits.com> wrote:\n\n> Hi,\n> \n> Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n>> On 2019-May-22, Oleksii Kliukin wrote:\n>> \n>>> X1: select id from job where name = 'a' for key share;\n>>> Y: select id from job where name = 'a' for update; -- starts waiting for X1\n>>> X2: select id from job where name = 'a' for key share;\n>>> X1: update job set name = 'b' where id = 1;\n>>> X2: update job set name = 'c' where id = 1; -- starts waiting for X1\n>>> X1: rollback;\n>>> \n>>> At this point, Y is terminated by the deadlock detector:\n>> \n>> Yeah, this seems undesirable in general terms. Here's a quick\n>> isolationtester spec that reproduces the problem:\n>> \n>> setup {\n>> \tdrop table if exists tlu_job;\n>> \tcreate table tlu_job (id integer primary key, name text);\n>> \tinsert into tlu_job values (1, 'a');\n>> }\n>> \n>> teardown {\n>> \tdrop table tlu_job;\n>> }\n>> \n>> session \"s1\"\n>> setup\t\t\t\t{ begin; }\n>> step \"s1_keyshare\"\t{ select id from tlu_job where name = 'a' for key share; }\n>> step \"s1_update\"\t{ update tlu_job set name = 'b' where id = 1; }\n>> step \"s1_rollback\"\t{ rollback; }\n>> \n>> session \"s2\"\n>> setup\t\t\t\t{ begin; }\n>> step \"s2_keyshare\"\t{ select id from tlu_job where name = 'a' for key share; }\n>> step \"s2_update\"\t{ update tlu_job set name = 'c' where id = 1; }\n>> step \"s2_commit\"\t{ commit; }\n>> \n>> session \"s3\"\n>> setup\t\t\t\t{ begin; }\n>> step \"s3_forupd\"\t{ select id from tlu_job where name = 'a' for update; }\n>> teardown\t\t\t{ commit; }\n> \n> Thank you! I can make it even simpler; s1 just acquires for share lock, s3\n> gets for update one and s2 takes for share lock first, and then tries to\n> acquire for update one; once s1 finishes, s3 deadlocks.\n> \n>> But semantically, I wonder if your transactions are correct. If you\n>> intend to modify the row in s1 and s2, shouldn't you be acquiring FOR NO\n>> KEY UPDATE lock instead? I don't see how can s1 and s2 coexist\n>> peacefully. Also, can your Y transaction use FOR NO KEY UPDATE instead\n>> .. unless you intend to delete the tuple in that transaction?\n> \n> It is correct. I wanted to make sure jobs that acquire for key share lock\n> can run concurrently most of the time; they only execute one update at the\n> end of the job, and it is just to update the last run timestamp.\n> \n>> I'm mulling over your proposed fix. I don't much like the idea that\n>> DoesMultiXactIdConflict() returns that new boolean -- seems pretty\n>> ad-hoc -- but I don't see any way to do better than that ... (If we get\n>> down to details, DoesMultiXactIdConflict needn't initialize that\n>> boolean: better let the callers do.)\n> \n> I am also not happy about the new parameter to DoesMultiXactIdConflict, but\n> calling a separate function to fetch the presence of the current transaction\n> in the multixact would mean doing the job of DoesMultiXactIdConflict twice.\n> I am open to suggestions on how to make it nicer.\n> \n> Attached is a slightly modified patch that avoids initializing\n> has_current_xid inside DoesMultiXactIdConflict and should apply cleanly to\n> the current master.\n\nAnd here is the v3 that also includes the isolation test I described above\n(quoting my previous message in full as I accidentally sent it off-list,\nsorry about that).\n\nCheers,\nOleksii",
"msg_date": "Wed, 12 Jun 2019 17:14:13 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "On 2019-Jun-12, Oleksii Kliukin wrote:\n\n> Thank you! I can make it even simpler; s1 just acquires for share lock, s3\n> gets for update one and s2 takes for share lock first, and then tries to\n> acquire for update one; once s1 finishes, s3 deadlocks.\n\nCool. I think it would be worthwhile to include a number of reasonable\npermutations instead of just one, and make sure they all work correctly.\nI don't think we need to include all possible permutations, just a few.\nI think we need at least enough permutations to cover the two places of\nthe code that are modified by the patch, for both values of\nhave_current_xid (so there should be four permutations, I think).\n\nPlease don't simplify the table name to just \"t\" -- the reason I used\nanother name is that we want these tests to be able to run concurrently\nat some point; ref.\nhttps://postgr.es/m/20180124231006.z7spaz5gkzbdvob5@alvherre.pgsql\n\n> > But semantically, I wonder if your transactions are correct. If you\n> > intend to modify the row in s1 and s2, shouldn't you be acquiring FOR NO\n> > KEY UPDATE lock instead? I don't see how can s1 and s2 coexist\n> > peacefully. Also, can your Y transaction use FOR NO KEY UPDATE instead\n> > .. unless you intend to delete the tuple in that transaction?\n> \n> It is correct. I wanted to make sure jobs that acquire for key share lock\n> can run concurrently most of the time; they only execute one update at the\n> end of the job, and it is just to update the last run timestamp.\n\nI see. Under READ COMMITTED it works okay, I suppose.\n\n> > I'm mulling over your proposed fix. I don't much like the idea that\n> > DoesMultiXactIdConflict() returns that new boolean -- seems pretty\n> > ad-hoc -- but I don't see any way to do better than that ... (If we get\n> > down to details, DoesMultiXactIdConflict needn't initialize that\n> > boolean: better let the callers do.)\n> \n> I am also not happy about the new parameter to DoesMultiXactIdConflict, but\n> calling a separate function to fetch the presence of the current transaction\n> in the multixact would mean doing the job of DoesMultiXactIdConflict twice.\n> I am open to suggestions on how to make it nicer.\n\nYeah, I didn't find anything better either. We could make things more\ncomplex that we resolve the multixact once and then extract the two\nsepraate bits of information that we need from that ... but it ends up\nbeing uglier and messier for no real gain. So let's go with your\noriginal idea.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 12:46:49 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "Hello,\n\nAlvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Jun-12, Oleksii Kliukin wrote:\n> \n>> Thank you! I can make it even simpler; s1 just acquires for share lock, s3\n>> gets for update one and s2 takes for share lock first, and then tries to\n>> acquire for update one; once s1 finishes, s3 deadlocks.\n> \n> Cool. I think it would be worthwhile to include a number of reasonable\n> permutations instead of just one, and make sure they all work correctly.\n> I don't think we need to include all possible permutations, just a few.\n> I think we need at least enough permutations to cover the two places of\n> the code that are modified by the patch, for both values of\n> have_current_xid (so there should be four permutations, I think).\n\nMakes sense. For the symmetry I have included those that perform lock\nupgrades in one session and those that doesn’t, while the other sessions\nacquire locks, do updates or deletes. For those that don’t upgrade locks the\ntest checks that the locks are acquired in the correct order.\n\n> Please don't simplify the table name to just \"t\" -- the reason I used\n> another name is that we want these tests to be able to run concurrently\n> at some point; ref.\n> https://postgr.es/m/20180124231006.z7spaz5gkzbdvob5@alvherre.pgsql\n\nAlright, thanks.\n\n> \n>>> But semantically, I wonder if your transactions are correct. If you\n>>> intend to modify the row in s1 and s2, shouldn't you be acquiring FOR NO\n>>> KEY UPDATE lock instead? I don't see how can s1 and s2 coexist\n>>> peacefully. Also, can your Y transaction use FOR NO KEY UPDATE instead\n>>> .. unless you intend to delete the tuple in that transaction?\n>> \n>> It is correct. I wanted to make sure jobs that acquire for key share lock\n>> can run concurrently most of the time; they only execute one update at the\n>> end of the job, and it is just to update the last run timestamp.\n> \n> I see. Under READ COMMITTED it works okay, I suppose.\n> \n>>> I'm mulling over your proposed fix. I don't much like the idea that\n>>> DoesMultiXactIdConflict() returns that new boolean -- seems pretty\n>>> ad-hoc -- but I don't see any way to do better than that ... (If we get\n>>> down to details, DoesMultiXactIdConflict needn't initialize that\n>>> boolean: better let the callers do.)\n>> \n>> I am also not happy about the new parameter to DoesMultiXactIdConflict, but\n>> calling a separate function to fetch the presence of the current transaction\n>> in the multixact would mean doing the job of DoesMultiXactIdConflict twice.\n>> I am open to suggestions on how to make it nicer.\n> \n> Yeah, I didn't find anything better either. We could make things more\n> complex that we resolve the multixact once and then extract the two\n> sepraate bits of information that we need from that ... but it ends up\n> being uglier and messier for no real gain. So let's go with your\n> original idea.\n\nOk, the v4 is attached. I have addressed your suggestion for the isolation\ntests, added a paragraph to README.tuplock explaining why do we skip\nLockTuple to avoid a deadlock in the session that upgrades its lock.\n\nCheers,\nOleksii",
"msg_date": "Thu, 13 Jun 2019 15:42:53 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 12:47 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> Please don't simplify the table name to just \"t\" -- the reason I used\n> another name is that we want these tests to be able to run concurrently\n> at some point; ref.\n> https://postgr.es/m/20180124231006.z7spaz5gkzbdvob5@alvherre.pgsql\n\nNot only that, but 't' is completely ungreppable. If you name the\ntable 'walrus' and five years from now somebody sees an error about it\nin some buildfarm log or whatever, they can type 'git grep walrus' to\nfind the test, and they'll probably only get that one hit. If you\nname it 't', well...\n\n[rhaas pgsql]$ git grep t | wc -l\n 1653468\n\nNot very helpful.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jun 2019 13:32:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "On 2019-Jun-13, Oleksii Kliukin wrote:\n\n> Makes sense. For the symmetry I have included those that perform lock\n> upgrades in one session and those that doesn’t, while the other sessions\n> acquire locks, do updates or deletes. For those that don’t upgrade locks the\n> test checks that the locks are acquired in the correct order.\n\nThanks for the updated patch! I'm about to push to branches 9.6-master.\nIt applies semi-cleanly (only pgindent-maturity whitespace conflicts).\n\nThe [pg11 version of the] patch does applies to 9.5 cleanly ... but the\nisolation test doesn't work, because isolationtester was not smart\nenough back then. Since there have been no previous reports of this\nproblem, and to avoid pushing untested code, I'm going to refrain from\nback-patching there. My guess is that it should work ...\n\nIn 9.4 there are quite some severe conflicts, because 27846f02c176 was\nnot back-patched there. (The bug number \"#8470\" still floats in my\nmemory from time to time. Shudder)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 14:00:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "On 2019-Jun-13, Alvaro Herrera wrote:\n\n> On 2019-Jun-13, Oleksii Kliukin wrote:\n> \n> > Makes sense. For the symmetry I have included those that perform lock\n> > upgrades in one session and those that doesn’t, while the other sessions\n> > acquire locks, do updates or deletes. For those that don’t upgrade locks the\n> > test checks that the locks are acquired in the correct order.\n> \n> Thanks for the updated patch! I'm about to push to branches 9.6-master.\n> It applies semi-cleanly (only pgindent-maturity whitespace conflicts).\n\nDone, thanks for the report and patch!\n\nI tried hard to find a scenario that this patch breaks, but couldn't\nfind anything.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 17:37:31 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> On 2019-Jun-13, Alvaro Herrera wrote:\n> \n>> On 2019-Jun-13, Oleksii Kliukin wrote:\n>> \n>>> Makes sense. For the symmetry I have included those that perform lock\n>>> upgrades in one session and those that doesn’t, while the other sessions\n>>> acquire locks, do updates or deletes. For those that don’t upgrade locks the\n>>> test checks that the locks are acquired in the correct order.\n>> \n>> Thanks for the updated patch! I'm about to push to branches 9.6-master.\n>> It applies semi-cleanly (only pgindent-maturity whitespace conflicts).\n> \n> Done, thanks for the report and patch!\n> \n> I tried hard to find a scenario that this patch breaks, but couldn't\n> find anything.\n\nThank you very much for reviewing and committing it!\n\nCheers,\nOleksii\n\n",
"msg_date": "Fri, 14 Jun 2019 09:52:18 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": true,
"msg_subject": "Re: upgrades in row-level locks can deadlock"
}
] |
[
{
"msg_contents": "Hi,\n\na customer reported excessive memory usage and out-of-memory ERRORs\nafter introducing native partitioning in one of their databases. We\ncould narrow it down to the overhead introduced by the partitioning when\nissuing multiple statements in a single query. I could reduce the\nproblem to the following recipe:\n\n--8<---------------cut here---------------start------------->8---\n#!/bin/bash\n\n# create 100 partitions\npsql -c 'create table t(c int primary key) partition by range(c)'\nfor i in {1..100}; do\n psql -e -c \"create table t$i partition of t for values\n from ($(((i-1)*100))) to ($((i*100-1))) \"\ndone\n\n# artificially limit per-process memory by setting a resource limit for\n# the postmaster to 256MB\n\nprlimit -d$((256*1024*1024)) -p $POSTMASTER_PID\n--8<---------------cut here---------------end--------------->8---\n\nNow, updates to a partition are fine with 4000 update statements:\n\n,----\n| $ psql -c \"$(yes update t2 set c=c where c=6 \\; | head -n 4000)\"\n| UPDATE 0\n`----\n\n…but when doing it on the parent relation, even 100 statements are\nenough to exceed the limit:\n\n,----\n| $ psql -c \"$(yes update t set c=c where c=6 \\; | head -n 100)\"\n| FEHLER: Speicher aufgebraucht\n| DETAIL: Failed on request of size 200 in memory context \"MessageContext\".\n`----\n\nThe memory context dump shows plausible values except for the MessageContext:\n\nTopMemoryContext: 124336 total in 8 blocks; 18456 free (11 chunks); 105880 used\n [...]\n MessageContext: 264241152 total in 42 blocks; 264 free (0 chunks); 264240888 used\n [...]\n\nMaybe some tactically placed pfrees or avoiding putting redundant stuff\ninto MessageContext can relax the situation?\n\nregards,\nAndreas\n\n\n",
"msg_date": "Wed, 22 May 2019 21:15:40 +0200",
"msg_from": "Andreas Seltenreich <andreas.seltenreich@credativ.de>",
"msg_from_op": true,
"msg_subject": "Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On Thu, 23 May 2019 at 17:55, Andreas Seltenreich\n<andreas.seltenreich@credativ.de> wrote:\n> a customer reported excessive memory usage and out-of-memory ERRORs\n> after introducing native partitioning in one of their databases. We\n> could narrow it down to the overhead introduced by the partitioning when\n> issuing multiple statements in a single query.\n\n\"multiple statements in a single query\", did you mean to write session\nor maybe transaction there?\n\nWhich version?\n\nI tried your test case with REL_11_STABLE and I see nowhere near as\nmuch memory used in MessageContext.\n\nAfter repeating the query twice, I see:\n\nMessageContext: 8388608 total in 11 blocks; 3776960 free (1 chunks);\n4611648 used\nGrand total: 8388608 bytes in 11 blocks; 3776960 free (1 chunks); 4611648 used\nMessageContext: 8388608 total in 11 blocks; 3776960 free (1 chunks);\n4611648 used\nGrand total: 8388608 bytes in 11 blocks; 3776960 free (1 chunks); 4611648 used\n\nwhich is quite a long way off the 252MB you're getting.\n\nperhaps I'm not testing with the same version as you are.\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 23 May 2019 20:47:48 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Hey David,\n\n\n> \"multiple statements in a single query\", did you mean to write\n> session\n> or maybe transaction there?\n\nMaybe the wording isn't perfect. It is required that the querys are\nsent as a single batch. Try the exact bash-script Andreas used for\nupdating the parent.\n\n> Which version?\n\nTested including 11.2. Initially found on 11.1. Memory-consumption\nScales somewhat linearly with existing partitions and ';' delimited\nQuerys per single Batch.\n\n\nregards\n-- \nJulian Schauder\n\n\n\n",
"msg_date": "Thu, 23 May 2019 11:18:37 +0200",
"msg_from": "Julian Schauder <julian.schauder@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/\n partitioning"
},
{
"msg_contents": "On Thu, 23 May 2019 at 21:19, Julian Schauder\n<julian.schauder@credativ.de> wrote:\n> > \"multiple statements in a single query\", did you mean to write\n> > session\n> > or maybe transaction there?\n>\n> Maybe the wording isn't perfect. It is required that the querys are\n> sent as a single batch. Try the exact bash-script Andreas used for\n> updating the parent.\n\nThanks for explaining.\n\n> > Which version?\n>\n> Tested including 11.2. Initially found on 11.1. Memory-consumption\n> Scales somewhat linearly with existing partitions and ';' delimited\n> Querys per single Batch.\n\nYeah, unfortunately, if the batch contains 100 of those statements\nthen the planner is going to eat 100 times the memory since it stores\nall 100 plans at once.\n\nSince your pruning all but 1 partition then the situation should be\nmuch better for you when you can upgrade to v12. Unfortunately, that's\nstill about 5 months away.\n\nThe best thing you can do for now is going to be either reduce the\nnumber of partitions or reduce the number of statements in the\nbatch... or install more memory.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 22:02:52 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Hi,\n\nOn 2019/05/23 4:15, Andreas Seltenreich wrote:\n> …but when doing it on the parent relation, even 100 statements are\n> enough to exceed the limit:\n> \n> ,----\n> | $ psql -c \"$(yes update t set c=c where c=6 \\; | head -n 100)\"\n> | FEHLER: Speicher aufgebraucht\n> | DETAIL: Failed on request of size 200 in memory context \"MessageContext\".\n> `----\n> \n> The memory context dump shows plausible values except for the MessageContext:\n> \n> TopMemoryContext: 124336 total in 8 blocks; 18456 free (11 chunks); 105880 used\n> [...]\n> MessageContext: 264241152 total in 42 blocks; 264 free (0 chunks); 264240888 used\n> [...]\n\nAs David Rowley said, planning that query hundreds of times under a single\nMessageContext is not something that will end well on 11.3, because even a\nsingle instance takes up tons of memory that's only released when\nMessageContext is reset.\n\n> Maybe some tactically placed pfrees or avoiding putting redundant stuff\n> into MessageContext can relax the situation?\n\nI too have had similar thoughts on the matter. If the planner had built\nall its subsidiary data structures in its own private context (or tree of\ncontexts) which is reset once a plan for a given query is built and passed\non, then there wouldn't be an issue of all of that subsidiary memory\nleaking into MessageContext. However, the problem may really be that\nwe're subjecting the planner to use cases that it wasn't perhaps designed\nto perform equally well under -- running it many times while handling the\nsame message. It is worsened by the fact that the query in question is\nsomething that ought to have been documented as not well supported by the\nplanner; David has posted a documentation patch for that [1]. PG 12 has\nalleviated the situation to a large degree, so you won't see the OOM\noccurring for this query, but not for all queries unfortunately.\n\nWith that said, we may want to look into the planner sometimes hoarding\nmemory, especially when planning complex queries involving partitions.\nAFAIK, one of the reasons for partition-wise join, aggregate to be turned\noff by default is that its planning consumes a lot of CPU and memory,\npartly because of the fact that planner doesn't actively release the\nmemory of its subsidiary structures, or maybe because of inferior ways in\nwhich partitions and partitioning properties are represented in the\nplanner. Though if there's evidence that it's the latter, maybe we should\nfix that before pondering any sophisticated planner memory management.\n\nThanks,\nAmit\n\n[1]\nhttps://www.postgresql.org/message-id/CAKJS1f-2rx%2BE9mG3xrCVHupefMjAp1%2BtpczQa9SEOZWyU7fjEA%40mail.gmail.com\n\n\n\n",
"msg_date": "Fri, 24 May 2019 14:47:00 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On 5/24/19 1:47 AM, Amit Langote wrote:\n> On 2019/05/23 4:15, Andreas Seltenreich wrote:\n>> …but when doing it on the parent relation, even 100 statements are\n>> enough to exceed the limit:\n>> \n>> ,----\n>> | $ psql -c \"$(yes update t set c=c where c=6 \\; | head -n 100)\"\n>> | FEHLER: Speicher aufgebraucht\n>> | DETAIL: Failed on request of size 200 in memory context \"MessageContext\".\n>> `----\n>> \n>> The memory context dump shows plausible values except for the MessageContext:\n>> \n>> TopMemoryContext: 124336 total in 8 blocks; 18456 free (11 chunks); 105880 used\n>> [...]\n>> MessageContext: 264241152 total in 42 blocks; 264 free (0 chunks); 264240888 used\n>> [...]\n> \n> As David Rowley said, planning that query hundreds of times under a single\n> MessageContext is not something that will end well on 11.3, because even a\n> single instance takes up tons of memory that's only released when\n> MessageContext is reset.\n> \n>> Maybe some tactically placed pfrees or avoiding putting redundant stuff\n>> into MessageContext can relax the situation?\n> \n> I too have had similar thoughts on the matter. If the planner had built\n> all its subsidiary data structures in its own private context (or tree of\n> contexts) which is reset once a plan for a given query is built and passed\n> on, then there wouldn't be an issue of all of that subsidiary memory\n> leaking into MessageContext. However, the problem may really be that\n> we're subjecting the planner to use cases that it wasn't perhaps designed\n> to perform equally well under -- running it many times while handling the\n> same message. It is worsened by the fact that the query in question is\n> something that ought to have been documented as not well supported by the\n> planner; David has posted a documentation patch for that [1]. PG 12 has\n> alleviated the situation to a large degree, so you won't see the OOM\n> occurring for this query, but not for all queries unfortunately.\n\n\nI admittedly haven't followed this thread too closely, but if having 100\npartitions causes out of memory on pg11, that sounds like a massive\nregression to me.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 24 May 2019 08:18:49 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On Sat, 25 May 2019 at 00:18, Joe Conway <mail@joeconway.com> wrote:\n> I admittedly haven't followed this thread too closely, but if having 100\n> partitions causes out of memory on pg11, that sounds like a massive\n> regression to me.\n\nFor it to have regressed it would have had to once have been better,\nbut where was that mentioned? The only thing I saw was\nnon-partitioned tables compared to partitioned tables, but you can't\nreally say it's a regression if you're comparing apples to oranges.\n\nI think the only regression here is in the documents from bebc46931a1\nhaving removed the warning about too many partitions in a partitioned\ntable at the end of ddl.sgml. As Amit mentions, we'd like to put\nsomething back about that.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 25 May 2019 01:33:05 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On 5/24/19 9:33 AM, David Rowley wrote:\n> On Sat, 25 May 2019 at 00:18, Joe Conway <mail@joeconway.com> wrote:\n>> I admittedly haven't followed this thread too closely, but if having 100\n>> partitions causes out of memory on pg11, that sounds like a massive\n>> regression to me.\n> \n> For it to have regressed it would have had to once have been better,\n> but where was that mentioned? The only thing I saw was\n> non-partitioned tables compared to partitioned tables, but you can't\n> really say it's a regression if you're comparing apples to oranges.\n\n\nI have very successfully used multiple hundreds and even low thousands\nof partitions without running out of memory under the older inheritance\nbased \"partitioning\", and declarative partitioning is supposed to be\n(and we have advertised it to be) better, not worse, isn't it?\n\nAt least from my point of view if 100 partitions is unusable due to\nmemory leaks it is a regression. Perhaps not *technically* a regression\nassuming it behaves this way in pg10 also, but I bet lots of users would\nsee it that way.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 24 May 2019 10:17:21 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> On 5/24/19 9:33 AM, David Rowley wrote:\n>> For it to have regressed it would have had to once have been better,\n>> but where was that mentioned? The only thing I saw was\n>> non-partitioned tables compared to partitioned tables, but you can't\n>> really say it's a regression if you're comparing apples to oranges.\n\n> I have very successfully used multiple hundreds and even low thousands\n> of partitions without running out of memory under the older inheritance\n> based \"partitioning\", and declarative partitioning is supposed to be\n> (and we have advertised it to be) better, not worse, isn't it?\n\nHave you done the exact thing described in the test case? I think\nthat's going to be quite unpleasantly memory-intensive in any version.\n\nThe real issue here is that we have designed around the assumption\nthat MessageContext will be used to parse and plan one single statement\nbefore being reset. The described usage breaks that assumption.\nNo matter how memory-efficient any one statement is or isn't,\nif you throw enough of them at the backend without giving it a chance\nto reset MessageContext, it won't end well.\n\nSo my thought, if we want to do something about this, is not \"find\nsome things we can pfree at the end of planning\" but \"find a way\nto use a separate context for each statement in the query string\".\nMaybe multi-query strings could be handled by setting up a child\ncontext of MessageContext (after we've done the raw parsing there\nand determined that indeed there are multiple queries), running\nparse analysis and planning in that context, and resetting that\ncontext after each query.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 May 2019 10:28:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
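For illustration, the per-statement context pattern sketched in the message above might look roughly like the following C fragment. This is a minimal sketch, not the actual patch: the loop shape and the names parsetree_list and stmt_context are assumptions, while AllocSetContextCreate, MemoryContextSwitchTo and MemoryContextDelete are the real memory-context APIs.

    ListCell   *lc;

    foreach(lc, parsetree_list)
    {
        RawStmt    *parsetree = lfirst_node(RawStmt, lc);

        /* Child of MessageContext, so error cleanup still reclaims it. */
        MemoryContext stmt_context =
            AllocSetContextCreate(MessageContext,
                                  "per-statement context",
                                  ALLOCSET_DEFAULT_SIZES);
        MemoryContext oldcontext = MemoryContextSwitchTo(stmt_context);

        /* ... parse analysis, rewriting and planning for this query ... */

        MemoryContextSwitchTo(oldcontext);

        /* ... execute the plan built above ... */

        /* Drop everything parsing/planning allocated for this query. */
        MemoryContextDelete(stmt_context);
    }

Deleting the child context only after execution keeps the query and plan trees alive for as long as they are needed.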
{
"msg_contents": "On 5/24/19 10:28 AM, Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>> On 5/24/19 9:33 AM, David Rowley wrote:\n>>> For it to have regressed it would have had to once have been better,\n>>> but where was that mentioned? The only thing I saw was\n>>> non-partitioned tables compared to partitioned tables, but you can't\n>>> really say it's a regression if you're comparing apples to oranges.\n> \n>> I have very successfully used multiple hundreds and even low thousands\n>> of partitions without running out of memory under the older inheritance\n>> based \"partitioning\", and declarative partitioning is supposed to be\n>> (and we have advertised it to be) better, not worse, isn't it?\n> \n> Have you done the exact thing described in the test case? I think\n> that's going to be quite unpleasantly memory-intensive in any version.\n\n\nOk, fair point. Will test and report back.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Fri, 24 May 2019 12:54:46 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On 2019/05/24 23:28, Tom Lane wrote:\n> So my thought, if we want to do something about this, is not \"find\n> some things we can pfree at the end of planning\" but \"find a way\n> to use a separate context for each statement in the query string\".\n> Maybe multi-query strings could be handled by setting up a child\n> context of MessageContext (after we've done the raw parsing there\n> and determined that indeed there are multiple queries), running\n> parse analysis and planning in that context, and resetting that\n> context after each query.\n\nMaybe like the attached? I'm not sure if we need to likewise be concerned\nabout exec_sql_string() being handed multi-query strings.\n\nThanks,\nAmit",
"msg_date": "Mon, 27 May 2019 14:58:12 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Hi,\n\nOn 2019/05/24 21:18, Joe Conway wrote:\n> On 5/24/19 1:47 AM, Amit Langote wrote:\n>> I too have had similar thoughts on the matter. If the planner had built\n>> all its subsidiary data structures in its own private context (or tree of\n>> contexts) which is reset once a plan for a given query is built and passed\n>> on, then there wouldn't be an issue of all of that subsidiary memory\n>> leaking into MessageContext. However, the problem may really be that\n>> we're subjecting the planner to use cases that it wasn't perhaps designed\n>> to perform equally well under -- running it many times while handling the\n>> same message. It is worsened by the fact that the query in question is\n>> something that ought to have been documented as not well supported by the\n>> planner; David has posted a documentation patch for that [1]. PG 12 has\n>> alleviated the situation to a large degree, so you won't see the OOM\n>> occurring for this query, but not for all queries unfortunately.\n> \n> I admittedly haven't followed this thread too closely, but if having 100\n> partitions causes out of memory on pg11, that sounds like a massive\n> regression to me.\n\nYou won't run out of memory if you are running just one query per message,\nbut that's not the case in this discussion. With multi-query submissions\nlike in this case, memory taken up by parsing and planning of *all*\nqueries adds up to a single MessageContext, so can lead to OOM if there\nare enough queries to load up MessageContext beyond limit. The only point\nI was trying to make in what I wrote is that reaching OOM of this sort is\neasier with partitioning, because of the age-old behavior that planning\nUPDATE/DELETE queries on inherited tables (and so partitioned tables)\nneeds tons of memory that grows as the number of child tables / partitions\nincreases.\n\nWe fixed things in PG 12, at least for partitioning, so that as long as a\nquery needs to affect only a small number of partitions of the total\npresent, its planning will use only a fixed amount of CPU and memory, so\nincreasing the number of partitions won't lead to explosive growth in\nmemory used. You might be able to tell however that that effort had\nnothing to do improving the situation with multi-query submissions.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 27 May 2019 15:41:50 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/05/24 23:28, Tom Lane wrote:\n>> So my thought, if we want to do something about this, is not \"find\n>> some things we can pfree at the end of planning\" but \"find a way\n>> to use a separate context for each statement in the query string\".\n\n> Maybe like the attached? I'm not sure if we need to likewise be concerned\n> about exec_sql_string() being handed multi-query strings.\n\nPlease add this to the upcoming CF so we don't forget about it.\n(I don't think there's anything very new about this behavior, so\nI don't feel that we should consider it an open item for v12 ---\nanyone think differently?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 May 2019 08:56:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On 2019/05/27 21:56, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> On 2019/05/24 23:28, Tom Lane wrote:\n>>> So my thought, if we want to do something about this, is not \"find\n>>> some things we can pfree at the end of planning\" but \"find a way\n>>> to use a separate context for each statement in the query string\".\n> \n>> Maybe like the attached? I'm not sure if we need to likewise be concerned\n>> about exec_sql_string() being handed multi-query strings.\n> \n> Please add this to the upcoming CF so we don't forget about it.\n\nDone; added to Performance for lack of a better topic for this.\n\nhttps://commitfest.postgresql.org/23/2131/\n\n> (I don't think there's anything very new about this behavior, so\n> I don't feel that we should consider it an open item for v12 ---\n> anyone think differently?)\n\nAgree that there's nothing new about the behavior itself. What may be new\nthough is people getting increasingly bitten by it if they query tables\ncontaining large numbers of partitions most of which need to be scanned\n[1]. That is, provided they have use cases where a single client request\ncontains hundreds of such queries to begin with.\n\nThanks,\nAmit\n\n\n[1] AFAICT, that's the only class of queries where planner needs to keep a\nlot of stuff around, the memory cost of which increases with the number of\npartitions. I was thinking that the planning complex queries involving\ngoing through tons of indexes, joins, etc. also hoards tons of memory, but\napparently not, because the planner seems fairly good at cleaning after\nitself as it's doing the work.\n\n\n\n",
"msg_date": "Tue, 28 May 2019 13:56:29 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On Tue, May 28, 2019 at 6:57 AM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/05/27 21:56, Tom Lane wrote:\n> > Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> >> On 2019/05/24 23:28, Tom Lane wrote:\n> >>> So my thought, if we want to do something about this, is not \"find\n> >>> some things we can pfree at the end of planning\" but \"find a way\n> >>> to use a separate context for each statement in the query string\".\n> >\n> >> Maybe like the attached? I'm not sure if we need to likewise be concerned\n> >> about exec_sql_string() being handed multi-query strings.\n\nthe whole extension sql script is passed to execute_sql_string(), so I\nthink that it's a good thing to have similar workaround there.\n\nAbout the patch:\n\n - * Switch to appropriate context for constructing querytrees (again,\n- * these must outlive the execution context).\n+ * Switch to appropriate context for constructing querytrees.\n+ * Memory allocated during this construction is released before\n+ * the generated plan is executed.\n\nThe comment should mention query and plan trees, everything else seems ok to me.\n\n\n",
"msg_date": "Thu, 4 Jul 2019 11:52:22 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Hi Julien,\n\nThanks for taking a look at this.\n\nOn Thu, Jul 4, 2019 at 6:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Tue, May 28, 2019 at 6:57 AM Amit Langote wrote:\n> > >> Maybe like the attached? I'm not sure if we need to likewise be concerned\n> > >> about exec_sql_string() being handed multi-query strings.\n>\n> the whole extension sql script is passed to execute_sql_string(), so I\n> think that it's a good thing to have similar workaround there.\n\nThat makes sense, although it is perhaps much less likely for memory\nusage explosion to occur in execute_sql_strings(), because the scripts\npassed to execute_sql_strings() mostly contain utility statements and\nrarely anything whose planning will explode in memory usage.\n\nAnyway, I've added similar handling in execute_sql_strings() for consistency.\n\nNow I wonder if we'll need to consider another path which calls\npg_plan_queries() on a possibly multi-statement query --\nBuildCachedPlan()...\n\n> About the patch:\n>\n> - * Switch to appropriate context for constructing querytrees (again,\n> - * these must outlive the execution context).\n> + * Switch to appropriate context for constructing querytrees.\n> + * Memory allocated during this construction is released before\n> + * the generated plan is executed.\n>\n> The comment should mention query and plan trees, everything else seems ok to me.\n\nOkay, fixed.\n\nAttached updated patch. Thanks again.\n\nRegards,\nAmit",
"msg_date": "Mon, 8 Jul 2019 17:52:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Hi Amit,\n\nOn Mon, Jul 8, 2019 at 10:52 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jul 4, 2019 at 6:52 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Tue, May 28, 2019 at 6:57 AM Amit Langote wrote:\n> > > >> Maybe like the attached? I'm not sure if we need to likewise be concerned\n> > > >> about exec_sql_string() being handed multi-query strings.\n> >\n> > the whole extension sql script is passed to execute_sql_string(), so I\n> > think that it's a good thing to have similar workaround there.\n>\n> That makes sense, although it is perhaps much less likely for memory\n> usage explosion to occur in execute_sql_strings(), because the scripts\n> passed to execute_sql_strings() mostly contain utility statements and\n> rarely anything whose planning will explode in memory usage.\n>\n> Anyway, I've added similar handling in execute_sql_strings() for consistency.\n\nThanks!\n\n> Now I wonder if we'll need to consider another path which calls\n> pg_plan_queries() on a possibly multi-statement query --\n> BuildCachedPlan()...\n\nI also thought about this when reviewing the patch, but AFAICS you\ncan't provide a multi-statement query to BuildCachedPlan() using\nprepared statements and I'm not sure that SPI is worth the trouble.\nI'll mark this patch as ready for committer.\n\n>\n> > About the patch:\n> >\n> > - * Switch to appropriate context for constructing querytrees (again,\n> > - * these must outlive the execution context).\n> > + * Switch to appropriate context for constructing querytrees.\n> > + * Memory allocated during this construction is released before\n> > + * the generated plan is executed.\n> >\n> > The comment should mention query and plan trees, everything else seems ok to me.\n>\n> Okay, fixed.\n>\n> Attached updated patch. Thanks again.\n>\n> Regards,\n> Amit\n\n\n",
"msg_date": "Mon, 8 Jul 2019 15:15:18 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> [ parse-plan-memcxt_v2.patch ]\n\nI got around to looking at this finally. I'm not at all happy with\nthe fact that it's added a plantree copy step to the only execution\npath through exec_simple_query. That's a very significant overhead,\nin many use-cases, to solve something that nobody had complained\nabout for a couple of decades before now. I don't see the need for\nany added copy step anyway. The only reason you're doing it AFAICS\nis so you can release the per-statement context a bit earlier, which\nis a completely unnecessary optimization. Just wait to release it\ntill the bottom of the loop.\n\nAlso, creating/deleting the sub-context is in itself an added overhead\nthat accomplishes exactly nothing in the typical case where there's\nnot multiple statements. I thought the idea was to do that only if\nthere was more than one raw parsetree (or, maybe better, do it for\nall but the last parsetree).\n\nTo show that this isn't an empty concern, I did a quick pgbench\ntest. Using a single-client select-only test (\"pgbench -S -T 60\"\nin an -s 10 database), I got these numbers in three trials with HEAD:\n\ntps = 9593.818478 (excluding connections establishing)\ntps = 9570.189163 (excluding connections establishing)\ntps = 9596.579038 (excluding connections establishing)\n\nand these numbers after applying the patch:\n\ntps = 9411.918165 (excluding connections establishing)\ntps = 9389.279079 (excluding connections establishing)\ntps = 9409.350175 (excluding connections establishing)\n\nThat's about a 2% dropoff. Now it's possible that that can be\nexplained away as random variations from a slightly different layout\nof critical loops vs cacheline boundaries ... but I don't believe it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Jul 2019 17:20:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On Tue, Jul 9, 2019 at 6:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > [ parse-plan-memcxt_v2.patch ]\n>\n> I got around to looking at this finally.\n\nThanks for the review.\n\n> I'm not at all happy with\n> the fact that it's added a plantree copy step to the only execution\n> path through exec_simple_query. That's a very significant overhead,\n> in many use-cases, to solve something that nobody had complained\n> about for a couple of decades before now. I don't see the need for\n> any added copy step anyway. The only reason you're doing it AFAICS\n> is so you can release the per-statement context a bit earlier, which\n> is a completely unnecessary optimization. Just wait to release it\n> till the bottom of the loop.\n\nAh, that makes sense. I've removed the copying of plan tree and also\nmoved the temporary context deletion to the bottom of the loop.\n\n> Also, creating/deleting the sub-context is in itself an added overhead\n> that accomplishes exactly nothing in the typical case where there's\n> not multiple statements. I thought the idea was to do that only if\n> there was more than one raw parsetree (or, maybe better, do it for\n> all but the last parsetree).\n\nThat makes sense too. I've made it (creation/deletion of the child\ncontext) conditional on whether there are more than one queries to\nplan.\n\n> To show that this isn't an empty concern, I did a quick pgbench\n> test. Using a single-client select-only test (\"pgbench -S -T 60\"\n> in an -s 10 database), I got these numbers in three trials with HEAD:\n>\n> tps = 9593.818478 (excluding connections establishing)\n> tps = 9570.189163 (excluding connections establishing)\n> tps = 9596.579038 (excluding connections establishing)\n>\n> and these numbers after applying the patch:\n>\n> tps = 9411.918165 (excluding connections establishing)\n> tps = 9389.279079 (excluding connections establishing)\n> tps = 9409.350175 (excluding connections establishing)\n>\n> That's about a 2% dropoff.\n\nWith the updated patch, here are the numbers on my machine (HEAD vs patch)\n\nHEAD:\n\ntps = 3586.233815 (excluding connections establishing)\ntps = 3569.252542 (excluding connections establishing)\ntps = 3559.027733 (excluding connections establishing)\n\nPatched:\n\ntps = 3586.988057 (excluding connections establishing)\ntps = 3585.169589 (excluding connections establishing)\ntps = 3526.437968 (excluding connections establishing)\n\nA bit noisy but not much degradation.\n\nAttached updated patch. Thanks again.\n\nRegards,\nAmit",
"msg_date": "Wed, 10 Jul 2019 16:35:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
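The conditional setup described in the message above could reduce to something like the sketch below (variable names are again assumed; list_length is the real List API). The point is to avoid paying for a child context in the common case of a single-statement message.

    MemoryContext stmt_context = NULL;
    MemoryContext oldcontext = CurrentMemoryContext;

    /* Only set up a child context when there are several statements. */
    if (list_length(parsetree_list) > 1)
    {
        stmt_context = AllocSetContextCreate(MessageContext,
                                             "per-statement context",
                                             ALLOCSET_DEFAULT_SIZES);
        oldcontext = MemoryContextSwitchTo(stmt_context);
    }

    /* ... parse analysis and planning, as in the earlier sketch ... */

    if (stmt_context != NULL)
    {
        MemoryContextSwitchTo(oldcontext);
        /* ... and, at the bottom of the loop, after execution: */
        MemoryContextDelete(stmt_context);
    }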
{
"msg_contents": "Hi,\n\nAt Wed, 10 Jul 2019 16:35:18 +0900, Amit Langote <amitlangote09@gmail.com> wrote in <CA+HiwqFCO4c8tdQmXcDNzyaD43A81caapYLJ6CEh8H3P0tPL4A@mail.gmail.com>\n> On Tue, Jul 9, 2019 at 6:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > [ parse-plan-memcxt_v2.patch ]\n> >\n> > I got around to looking at this finally.\n> \n> Thanks for the review.\n> \n> > I'm not at all happy with\n> > the fact that it's added a plantree copy step to the only execution\n> > path through exec_simple_query. That's a very significant overhead,\n> > in many use-cases, to solve something that nobody had complained\n> > about for a couple of decades before now. I don't see the need for\n> > any added copy step anyway. The only reason you're doing it AFAICS\n> > is so you can release the per-statement context a bit earlier, which\n> > is a completely unnecessary optimization. Just wait to release it\n> > till the bottom of the loop.\n> \n> Ah, that makes sense. I've removed the copying of plan tree and also\n> moved the temporary context deletion to the bottom of the loop.\n\n- * Switch to appropriate context for constructing querytrees (again,\n- * these must outlive the execution context).\n+ * Switch to appropriate context for constructing query and plan trees\n+ * (again, these must outlive the execution context). Normally, it's\n+ * MessageContext, but if there are more queries to plan, we use a\n+ * temporary child context that will be reset after executing this\n+ * query. We avoid that overhead of setting up a separate context\n+ * for the common case of having just a single query.\n\nMight be stupid, but I feel uneasy that \"usually it must live in\nMessageContxt, but not necessarily if there is succeeding\nquery\".. *I* need more explanation why it is safe to use that\nshort-lived context.\n\n> > Also, creating/deleting the sub-context is in itself an added overhead\n> > that accomplishes exactly nothing in the typical case where there's\n> > not multiple statements. I thought the idea was to do that only if\n> > there was more than one raw parsetree (or, maybe better, do it for\n> > all but the last parsetree).\n> \n> That makes sense too. I've made it (creation/deletion of the child\n> context) conditional on whether there are more than one queries to\n> plan.\n>\n> > To show that this isn't an empty concern, I did a quick pgbench\n> > test. 
Using a single-client select-only test (\"pgbench -S -T 60\"\n> > in an -s 10 database), I got these numbers in three trials with HEAD:\n> >\n> > tps = 9593.818478 (excluding connections establishing)\n> > tps = 9570.189163 (excluding connections establishing)\n> > tps = 9596.579038 (excluding connections establishing)\n> >\n> > and these numbers after applying the patch:\n> >\n> > tps = 9411.918165 (excluding connections establishing)\n> > tps = 9389.279079 (excluding connections establishing)\n> > tps = 9409.350175 (excluding connections establishing)\n> >\n> > That's about a 2% dropoff.\n> \n> With the updated patch, here are the numbers on my machine (HEAD vs patch)\n> \n> HEAD:\n> \n> tps = 3586.233815 (excluding connections establishing)\n> tps = 3569.252542 (excluding connections establishing)\n> tps = 3559.027733 (excluding connections establishing)\n> \n> Patched:\n> \n> tps = 3586.988057 (excluding connections establishing)\n> tps = 3585.169589 (excluding connections establishing)\n> tps = 3526.437968 (excluding connections establishing)\n> \n> A bit noisy but not much degradation.\n> \n> Attached updated patch. Thanks again.\n> \n> Regards,\n> Amit\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 10 Jul 2019 17:38:58 +0900 (Tokyo Standard Time)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/\n partitioning"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Attached updated patch. Thanks again.\n\nPushed with a bit of further cleanup --- most notably, the way\nyou had execute_sql_string(), it was still leaking any cruft\nProcessUtility might generate. We can fix that by running\nProcessUtility in the per-statement context too.\n\nI also dropped the optimization for a single/last statement in\nexecute_sql_string(), and simplified it to just always create\nand delete a child context. This was based on a couple of\nthoughts. The norm in this code path is that there's multiple\nstatements, probably lots of them, so that the percentage savings\nfrom getting rid of one context creation is likely negligible.\nAlso, unlike exec_simple_query, we *don't* know that the outer\ncontext is due to be cleared right afterwards. Since\nexecute_sql_string() can run multiple times in one extension\ncommand, in principle we could get bloat from not cleaning up\nafter the last command of each string. Admittedly, it's not\nlikely that you'd have so many strings involved that that\namounts to a lot, but between following upgrade-script chains\nand cascaded module loads, there could be more than a couple.\nSo it seems like the argument for saving a context creation is\nmuch weaker here than in exec_simple_query.\n\nI tried to improve the comments too. I noticed that the bit about\n\"(again, these must outlive the execution context)\" seemed to be\na dangling reference --- whatever previous comment it was referring\nto is not to be found anymore. So I made that self-contained.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Jul 2019 14:46:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "On Thu, Jul 11, 2019 at 3:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > Attached updated patch. Thanks again.\n>\n> Pushed with a bit of further cleanup\n\nThanks a lot.\n\n> --- most notably, the way\n> you had execute_sql_string(), it was still leaking any cruft\n> ProcessUtility might generate. We can fix that by running\n> ProcessUtility in the per-statement context too.\n\nAh, I was thinking only about planning.\n\n> I also dropped the optimization for a single/last statement in\n> execute_sql_string(), and simplified it to just always create\n> and delete a child context. This was based on a couple of\n> thoughts. The norm in this code path is that there's multiple\n> statements, probably lots of them, so that the percentage savings\n> from getting rid of one context creation is likely negligible.\n> Also, unlike exec_simple_query, we *don't* know that the outer\n> context is due to be cleared right afterwards. Since\n> execute_sql_string() can run multiple times in one extension\n> command, in principle we could get bloat from not cleaning up\n> after the last command of each string. Admittedly, it's not\n> likely that you'd have so many strings involved that that\n> amounts to a lot, but between following upgrade-script chains\n> and cascaded module loads, there could be more than a couple.\n> So it seems like the argument for saving a context creation is\n> much weaker here than in exec_simple_query.\n\nAgreed.\n\n> I tried to improve the comments too. I noticed that the bit about\n> \"(again, these must outlive the execution context)\" seemed to be\n> a dangling reference --- whatever previous comment it was referring\n> to is not to be found anymore. So I made that self-contained.\n\nThanks.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Fri, 12 Jul 2019 13:49:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
},
{
"msg_contents": "Horiguchi-san,\n\nThanks for the comment. My reply is a bit late now, but....\n\nOn Wed, Jul 10, 2019 at 5:39 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 10 Jul 2019 16:35:18 +0900, Amit Langote <amitlangote09@gmail.com> wrote:\n> - * Switch to appropriate context for constructing querytrees (again,\n> - * these must outlive the execution context).\n> + * Switch to appropriate context for constructing query and plan trees\n> + * (again, these must outlive the execution context). Normally, it's\n> + * MessageContext, but if there are more queries to plan, we use a\n> + * temporary child context that will be reset after executing this\n> + * query. We avoid that overhead of setting up a separate context\n> + * for the common case of having just a single query.\n>\n> Might be stupid, but I feel uneasy that \"usually it must live in\n> MessageContxt, but not necessarily if there is succeeding\n> query\".. *I* need more explanation why it is safe to use that\n> short-lived context.\n\nSo the problem we're trying solve with this is that memory consumed\nwhen parsing/planning individual statements pile up in a single\ncontext (currently, MessageContext), which can lead to severe bloat\nespecially if the planning of individual statements consumes huge\namount of memory. The solution is to use a sub-context of\nMessageContext for each statement that's reset when its execution is\nfinished. I think it's safe to use a shorter-lived context for each\nstatement because the memory of a given statement should not need to\nbe referenced when its execution finishes. Do you see any problem\nwith that assumption?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 12 Jul 2019 14:22:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory usage in multi-statement queries w/ partitioning"
}
] |
[
{
"msg_contents": "[ redirected from a thread in pgsql-committers[1] ]\n\nAs of commit eb9812f27 you can run a manual check-world with\nstdout dumped to /dev/null, and get fairly clean results:\n\n$ time make check-world -j10 >/dev/null\nNOTICE: database \"regression\" does not exist, skipping\n\nreal 1m43.875s\nuser 2m50.659s\nsys 1m22.518s\n$\n\nThis is a productive way to work because if you do get a failure,\nmake's bleating gives you enough context to see which subdirectory\nto check the log files in; so you don't really need to see all the\nnoise that goes to stdout. (OTOH, if you don't discard stdout,\nit's a mess; if you get a failure it could easily scroll off the\nscreen before you ever see it, leaving you with a false impression\nthat the test succeeded.)\n\nHowever ... there is that one NOTICE, which is annoying just because\nit's the only one left. That's coming from the pg_upgrade test's\ninvocation of \"make installcheck\" in the instance it's just built.\n(Every other test lets pg_regress build its own temp instance,\nand then pg_regress knows it needn't bother with \"DROP DATABASE\nregression\".)\n\nI experimented with the attached quick-hack patch to make pg_regress\nsuppress notices from its various initial DROP/CREATE IF [NOT] EXISTS\ncommands. I'm not entirely convinced whether suppressing them is\na good idea though. Perhaps some hack with effects confined to\npg_upgrade's test would be better. I don't have a good idea what\nthat would look like, however.\n\nOr we could just say this isn't annoying enough to fix.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/E1hSk9C-0002hH-Vp@gemulon.postgresql.org",
"msg_date": "Wed, 22 May 2019 18:57:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Suppressing noise in successful check-world runs"
},
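To make the suppression idea concrete: since NOTICE ranks below WARNING, raising client_min_messages around the IF [NOT] EXISTS setup commands silences them. The sketch below is standalone libpq code, not pg_regress internals, and whether the quick-hack patch works exactly this way is an assumption; PQexec, PQclear, PQresultStatus and PQerrorMessage are the real libpq calls.

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Run a setup command without NOTICE chatter from IF [NOT] EXISTS. */
    static void
    run_setup_command(PGconn *conn, const char *sql)
    {
        PGresult   *res;

        /* NOTICEs rank below WARNING, so this keeps the client quiet. */
        res = PQexec(conn, "SET client_min_messages = warning");
        PQclear(res);

        res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "setup command failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    /* e.g. run_setup_command(conn, "DROP DATABASE IF EXISTS regression"); */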
{
"msg_contents": "On Wed, May 22, 2019 at 3:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I experimented with the attached quick-hack patch to make pg_regress\n> suppress notices from its various initial DROP/CREATE IF [NOT] EXISTS\n> commands. I'm not entirely convinced whether suppressing them is\n> a good idea though. Perhaps some hack with effects confined to\n> pg_upgrade's test would be better. I don't have a good idea what\n> that would look like, however.\n>\n> Or we could just say this isn't annoying enough to fix.\n\nI think it's worth fixing.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 24 May 2019 12:31:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Suppressing noise in successful check-world runs"
},
{
"msg_contents": "On Fri, May 24, 2019 at 12:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, May 22, 2019 at 3:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I experimented with the attached quick-hack patch to make pg_regress\n> > suppress notices from its various initial DROP/CREATE IF [NOT] EXISTS\n> > commands. I'm not entirely convinced whether suppressing them is\n> > a good idea though. Perhaps some hack with effects confined to\n> > pg_upgrade's test would be better. I don't have a good idea what\n> > that would look like, however.\n> >\n> > Or we could just say this isn't annoying enough to fix.\n>\n> I think it's worth fixing.\n\nMy development machine has 8 logical cores, and like you I only see\nthe NOTICE from pg_upgrade's tests with \"-j10\":\n\npg@bat:/code/postgresql/patch/build$ time make check-world -j10 >/dev/null\nNOTICE: database \"regression\" does not exist, skipping\nmake check-world -j10 > /dev/null 86.40s user 34.10s system 140% cpu\n1:25.94 total\n\nHowever, I see something else with \"-j16\", even after a precautionary\nclean + rebuild:\n\npg@bat:/code/postgresql/patch/build$ time make check-world -j16 >/dev/null\nNOTICE: database \"regression\" does not exist, skipping\npg_regress: could not open file\n\"/code/postgresql/patch/build/src/test/regress/regression.diffs\" for\nreading: No such file or directory\nmake check-world -j16 > /dev/null 96.35s user 37.45s system 152% cpu\n1:27.49 total\n\nI suppose this might be because of a pg_regress/make file\n\"regression.diffs\" race. This is also a problem for my current\nworkflow for running \"make check-world\" in parallel [1], though only\nwhen there is definitely a regression.diffs file with actual\nregressions -- there is no regression that I'm missing here, and as\nfar as I know this output about \"regression.diffs\" is just more noise.\nI had intended to figure out a way of keeping \"regression.diffs\" with\nmy existing workflow, since losing the details of a test failure is a\nreal annoyance. Especially when there is a test that doesn't fail\nreliably.\n\n[1] https://wiki.postgresql.org/wiki/Committing_checklist#Basic_checks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 24 May 2019 12:50:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Suppressing noise in successful check-world runs"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> My development machine has 8 logical cores, and like you I only see\n> the NOTICE from pg_upgrade's tests with \"-j10\":\n\n> pg@bat:/code/postgresql/patch/build$ time make check-world -j10 >/dev/null\n> NOTICE: database \"regression\" does not exist, skipping\n> make check-world -j10 > /dev/null 86.40s user 34.10s system 140% cpu\n> 1:25.94 total\n\n> However, I see something else with \"-j16\", even after a precautionary\n> clean + rebuild:\n\n> pg@bat:/code/postgresql/patch/build$ time make check-world -j16 >/dev/null\n> NOTICE: database \"regression\" does not exist, skipping\n> pg_regress: could not open file\n> \"/code/postgresql/patch/build/src/test/regress/regression.diffs\" for\n> reading: No such file or directory\n> make check-world -j16 > /dev/null 96.35s user 37.45s system 152% cpu\n> 1:27.49 total\n\nYes, I see that too with sufficiently high -j. I believe this is\nwhat Noah was trying to fix in bd1592e85, but that patch evidently\nneeds a bit more work :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 May 2019 19:18:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suppressing noise in successful check-world runs"
},
{
"msg_contents": "On Fri, May 24, 2019 at 4:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yes, I see that too with sufficiently high -j. I believe this is\n> what Noah was trying to fix in bd1592e85, but that patch evidently\n> needs a bit more work :-(\n\nIt would be nice if this was fixed, but I don't see a problem when I\nuse the optimum number of jobs, so I don't consider it to be urgent.\n\nI'm happy with the new approach, since it avoids the problem of\nregression.diffs files that get deleted before I have a chance to take\na look. I should thank Noah for his work on this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 24 May 2019 16:56:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Suppressing noise in successful check-world runs"
}
] |
[
{
"msg_contents": "Hackers,\n\nI have been auditing the v12 source code for places\nwhich inappropriately ignore the return value of a function\nand have found another example which seems to me\na fertile source of future bugs.\n\nIn src/backend/nodes/list.c, list_delete_cell frees the list\nand returns NIL when you delete the last element of a\nlist, placing a responsibility on any caller to check the\nreturn value.\n\nIn tablecmds.c, MergeAttributes fails to do this. My\ninspection of the surrounding code leads me to suspect\nthat logically the cell being deleted can never be the\nlast cell, and hence the failure to check the return value\ndoes not manifest as a bug. But the surrounding\ncode is rather large and convoluted, and I have\nlittle confidence that the code couldn't be changed such\nthat the return value would be NIL, possibly leading\nto memory bugs.\n\nWhat to do about this is harder to say. In the following\npatch, I'm just doing what I think is standard for callers\nof list_delete_cell, and assigning the return value back\nto the list (similar to how a call to repalloc should do).\nBut since there is an implicit assumption that the list\nis never emptied by this operation, perhaps checking\nagainst NIL and elog'ing makes more sense?\n\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 602a8dbd1c..96d6833274 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -2088,7 +2088,7 @@ MergeAttributes(List *schema, List *supers, char\nrelpersistence,\n coldef->cooked_default =\nrestdef->cooked_default;\n coldef->constraints =\nrestdef->constraints;\n coldef->is_from_type = false;\n- list_delete_cell(schema, rest, prev);\n+ schema =\nlist_delete_cell(schema, rest, prev);\n }\n else\n ereport(ERROR,\n\n\n",
"msg_date": "Wed, 22 May 2019 18:20:01 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "On Wed, May 22, 2019 at 06:20:01PM -0700, Mark Dilger wrote:\n> What to do about this is harder to say. In the following\n> patch, I'm just doing what I think is standard for callers\n> of list_delete_cell, and assigning the return value back\n> to the list (similar to how a call to repalloc should do).\n> But since there is an implicit assumption that the list\n> is never emptied by this operation, perhaps checking\n> against NIL and elog'ing makes more sense?\n\nYes, I agree that this is a bit fuzzy, and this code is new as of\n705d433. As you say, I agree that making sure that the return value\nof list_delete_cell is not NIL is a sensible choice.\n\nI don't think that an elog() is in place here though as this does not\nrely directly on catalog contents, what about just an assertion?\n\nHere is an idea of message for the elog(ERROR) if we go that way:\n\"no remaining columns after merging column \\\"%s\\\"\".\n--\nMichael",
"msg_date": "Thu, 23 May 2019 14:21:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "On Wed, May 22, 2019 at 10:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 22, 2019 at 06:20:01PM -0700, Mark Dilger wrote:\n> > What to do about this is harder to say. In the following\n> > patch, I'm just doing what I think is standard for callers\n> > of list_delete_cell, and assigning the return value back\n> > to the list (similar to how a call to repalloc should do).\n> > But since there is an implicit assumption that the list\n> > is never emptied by this operation, perhaps checking\n> > against NIL and elog'ing makes more sense?\n>\n> Yes, I agree that this is a bit fuzzy, and this code is new as of\n> 705d433. As you say, I agree that making sure that the return value\n> of list_delete_cell is not NIL is a sensible choice.\n>\n> I don't think that an elog() is in place here though as this does not\n> rely directly on catalog contents, what about just an assertion?\n\nI think assigning the return value (as I did in my small patch) and\nthen asserting that 'schema' is not NIL would be good.\n\n> Here is an idea of message for the elog(ERROR) if we go that way:\n> \"no remaining columns after merging column \\\"%s\\\"\".\n\nPerhaps. I like your idea of adding an assertion better.\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 06:23:10 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, May 22, 2019 at 06:20:01PM -0700, Mark Dilger wrote:\n>> What to do about this is harder to say. In the following\n>> patch, I'm just doing what I think is standard for callers\n>> of list_delete_cell, and assigning the return value back\n>> to the list (similar to how a call to repalloc should do).\n>> But since there is an implicit assumption that the list\n>> is never emptied by this operation, perhaps checking\n>> against NIL and elog'ing makes more sense?\n\n> Yes, I agree that this is a bit fuzzy, and this code is new as of\n> 705d433. As you say, I agree that making sure that the return value\n> of list_delete_cell is not NIL is a sensible choice.\n\nAre we sure that's not just a newly-introduced bug, ie it has not\nbeen tested in cases where the tlist could become empty? My first\nthought would be to assign the list pointer value back as per usual\ncoding convention, not to double down on the assumption that this\nwas well-considered code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 10:54:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "On Thu, May 23, 2019 at 7:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Wed, May 22, 2019 at 06:20:01PM -0700, Mark Dilger wrote:\n> >> What to do about this is harder to say. In the following\n> >> patch, I'm just doing what I think is standard for callers\n> >> of list_delete_cell, and assigning the return value back\n> >> to the list (similar to how a call to repalloc should do).\n> >> But since there is an implicit assumption that the list\n> >> is never emptied by this operation, perhaps checking\n> >> against NIL and elog'ing makes more sense?\n>\n> > Yes, I agree that this is a bit fuzzy, and this code is new as of\n> > 705d433. As you say, I agree that making sure that the return value\n> > of list_delete_cell is not NIL is a sensible choice.\n>\n> Are we sure that's not just a newly-introduced bug, ie it has not\n> been tested in cases where the tlist could become empty? My first\n> thought would be to assign the list pointer value back as per usual\n> coding convention, not to double down on the assumption that this\n> was well-considered code.\n\nI don't think that is disputed. I was debating between assigning\nit back and also asserting that it is not NIL vs. assigning it back\nand elog/ereporting if it is NIL. Of course, this is assuming the\ncode was designed with the expectation that the list can never\nbecome empty. If you think it might become empty, and that the\nsurrounding code can handle that sensibly, then perhaps we\nneed neither the assertion nor the elog/ereport, though we still\nneed the assignment.\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 08:27:19 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "On Thu, May 23, 2019 at 08:27:19AM -0700, Mark Dilger wrote:\n> On Thu, May 23, 2019 at 7:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Are we sure that's not just a newly-introduced bug, ie it has not\n>> been tested in cases where the tlist could become empty? My first\n>> thought would be to assign the list pointer value back as per usual\n>> coding convention, not to double down on the assumption that this\n>> was well-considered code.\n> \n> I don't think that is disputed. I was debating between assigning\n> it back and also asserting that it is not NIL vs. assigning it back\n> and elog/ereporting if it is NIL. Of course, this is assuming the\n> code was designed with the expectation that the list can never\n> become empty. If you think it might become empty, and that the\n> surrounding code can handle that sensibly, then perhaps we\n> need neither the assertion nor the elog/ereport, though we still\n> need the assignment.\n\nLooking closer, this code is not new as of v12. We have that since\ne7b3349 which has introduced CREATE TABLE OF. Anyway, I think that\nassigning the result of list_delete_cell and adding an assertion like\nin the attached are saner things to do. This code scans each entry in\nthe list and removes columns with duplicate names, so we should never\nfinish with an empty list as we will in the first case always merge\ndown to at least one column. That's rather a nit, but I guess that\nthis is better than the previous code which assumed that silently?\n--\nMichael",
"msg_date": "Fri, 24 May 2019 09:24:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
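Concretely, the fix being described amounts to the following few lines (using the pre-v13 list_delete_cell signature; the surrounding MergeAttributes code is elided):

    /*
     * Assign the result back, per the usual convention for
     * list_delete_cell().  Merging a duplicate column can never
     * empty the column list, so assert that it survived.
     */
    schema = list_delete_cell(schema, rest, prev);
    Assert(schema != NIL);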
{
"msg_contents": "On Thu, May 23, 2019 at 5:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 23, 2019 at 08:27:19AM -0700, Mark Dilger wrote:\n> > On Thu, May 23, 2019 at 7:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Are we sure that's not just a newly-introduced bug, ie it has not\n> >> been tested in cases where the tlist could become empty? My first\n> >> thought would be to assign the list pointer value back as per usual\n> >> coding convention, not to double down on the assumption that this\n> >> was well-considered code.\n> >\n> > I don't think that is disputed. I was debating between assigning\n> > it back and also asserting that it is not NIL vs. assigning it back\n> > and elog/ereporting if it is NIL. Of course, this is assuming the\n> > code was designed with the expectation that the list can never\n> > become empty. If you think it might become empty, and that the\n> > surrounding code can handle that sensibly, then perhaps we\n> > need neither the assertion nor the elog/ereport, though we still\n> > need the assignment.\n>\n> Looking closer, this code is not new as of v12. We have that since\n> e7b3349 which has introduced CREATE TABLE OF. Anyway, I think that\n> assigning the result of list_delete_cell and adding an assertion like\n> in the attached are saner things to do. This code scans each entry in\n> the list and removes columns with duplicate names, so we should never\n> finish with an empty list as we will in the first case always merge\n> down to at least one column. That's rather a nit, but I guess that\n> this is better than the previous code which assumed that silently?\n\nI like it better because it makes static analysis of the code easier,\nand because if anybody ever changed list_delete_cell to return a\ndifferent list object in more cases than just when the list is completely\nempty, this call site would be silently wrong.\n\nThanks for the patch!\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 17:59:39 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "On 2019-May-23, Mark Dilger wrote:\n\n> On Thu, May 23, 2019 at 5:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > Looking closer, this code is not new as of v12. We have that since\n> > e7b3349 which has introduced CREATE TABLE OF.\n\nYeah, I was not quite understanding why it was being blamed on a commit\nthat actually *removed* one other callsite that did the same thing. (I\ndidn't actually realize at the time that this bug was there, mind.)\n\n> > Anyway, I think that assigning the result of list_delete_cell and\n> > adding an assertion like in the attached are saner things to do.\n\nLooks good to me.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Jun 2019 16:18:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 04:18:30PM -0400, Alvaro Herrera wrote:\n> Yeah, I was not quite understanding why it was being blamed on a commit\n> that actually *removed* one other callsite that did the same thing. (I\n> didn't actually realize at the time that this bug was there, mind.)\n\nI completely forgot about this thread as an effect of last week's\nactivity. Committed now. Thanks for the input, Alvaro.\n--\nMichael",
"msg_date": "Wed, 5 Jun 2019 15:03:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: nitpick about poor style in MergeAttributes"
}
] |
[
{
"msg_contents": "Hi,\n\nHere are some tiny things I noticed in passing.\n\n-- \nThomas Munro\nhttps://enterprisedb.com",
"msg_date": "Thu, 23 May 2019 13:28:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Minor typos and copyright year slippage"
},
{
"msg_contents": "On Thu, May 23, 2019 at 01:28:45PM +1200, Thomas Munro wrote:\n> Here are some tiny things I noticed in passing.\n\nGood catches. And you have spotted all the blank spots for the\ncopyright notices.\n--\nMichael",
"msg_date": "Thu, 23 May 2019 10:55:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor typos and copyright year slippage"
},
{
"msg_contents": "On Thu, May 23, 2019 at 1:55 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, May 23, 2019 at 01:28:45PM +1200, Thomas Munro wrote:\n> > Here are some tiny things I noticed in passing.\n>\n> Good catches. And you have spotted all the blank spots for the\n> copyright notices.\n\nThanks, pushed. There are also a few 2018 copyright messages in .po\nfiles but I understand that those are managed with a different\nworkflow.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 May 2019 12:07:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor typos and copyright year slippage"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Thanks, pushed. There are also a few 2018 copyright messages in .po\n> files but I understand that those are managed with a different\n> workflow.\n\nRight. I'm not sure what the copyright-maintenance process is for the\n.po files, but in any case the .po files in our gitmaster repo are\ndownstream from where that would need to happen. There's no point\nin editing them here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 22:25:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Minor typos and copyright year slippage"
}
] |
[
{
"msg_contents": "Hackers,\n\nThe return value of gimme_edge_table is not used anywhere in the\ncore code, so far as I can see. But the value is computed as\n\n /* return average number of edges per index */\n return ((float) (edge_total * 2) / (float) num_gene);\n\nwhich involves some floating point math. I'm not sure that this matters\nmuch, but (1) it deceives a reader of this code into thinking that this\ncalculation is meaningful, which it is not, and (2) gimme_edge_table is\ncalled inside a loop, so this is happening repeatedly, though admittedly\nthat loop is perhaps not terribly large.\n\nmark\n\n\n",
"msg_date": "Wed, 22 May 2019 20:57:49 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "nitpick about useless floating point division in gimme_edge_table"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> Hackers,\n> The return value of gimme_edge_table is not used anywhere in the\n> core code, so far as I can see. But the value is computed as\n\n> /* return average number of edges per index */\n> return ((float) (edge_total * 2) / (float) num_gene);\n\n> which involves some floating point math. I'm not sure that this matters\n> much, but (1) it deceives a reader of this code into thinking that this\n> calculation is meaningful, which it is not, and (2) gimme_edge_table is\n> called inside a loop, so this is happening repeatedly, though admittedly\n> that loop is perhaps not terribly large.\n\nHmm, probably there was use for that once upon a time, but I agree it's\ndead code now. Want to send a patch to change it to returns-void?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 12:50:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: nitpick about useless floating point division in gimme_edge_table"
}
] |
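The returns-void change suggested above would look roughly like this; the parameter list is abbreviated, and the Edge type and num_gene parameter are taken from the thread's description rather than checked against the geqo sources.

    /* Before: computes an unused per-index average, with float math. */
    float
    gimme_edge_table(/* ... other args ... */ int num_gene, Edge *edge_table)
    {
        int         edge_total = 0;

        /* ... populate edge_table, accumulating edge_total ... */

        return ((float) (edge_total * 2) / (float) num_gene);
    }

    /* After: same work, no misleading result to compute or ignore. */
    void
    gimme_edge_table(/* ... other args ... */ int num_gene, Edge *edge_table)
    {
        /* ... populate edge_table as before ... */
    }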
[
{
"msg_contents": "Over on [1] I raised a concern about the lack of any warning in our\ndocuments to inform users that they might not want to use thousands of\npartitions. More recently there's [2], also suffering from OOM using\n100 partitions. Perhaps there's more too this, but the planner using\na lot of memory planning updates and deletes to partitioned tables\ndoes seem to be a surprise to many people.\n\nI had hoped we could get something it the documents sooner rather than\nlater about this. Probably the v12 patch will need to be adjusted now\nthat the memory consumption will be reduced when many partitions are\npruned, but I still think v12 needs to have some sort of warning in\nthere.\n\nhttps://commitfest.postgresql.org/23/2065/\n\nI'm moving this to a new thread with a better title, rather than\ntagging onto that old thread that's become rather long.\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8RW-mHQ8aEWD5Dv0+8A1wH5tHHdYMGW9y5sXqnE0X9wA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/87ftp6l2qr.fsf@credativ.de\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 23 May 2019 21:02:40 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Should we warn against using too many partitions?"
},
{
"msg_contents": "Hi David,\n\nOn 2019/05/23 18:02, David Rowley wrote:\n> Over on [1] I raised a concern about the lack of any warning in our\n> documents to inform users that they might not want to use thousands of\n> partitions. More recently there's [2], also suffering from OOM using\n> 100 partitions. Perhaps there's more too this, but the planner using\n> a lot of memory planning updates and deletes to partitioned tables\n> does seem to be a surprise to many people.\n> \n> I had hoped we could get something it the documents sooner rather than\n> later about this. Probably the v12 patch will need to be adjusted now\n> that the memory consumption will be reduced when many partitions are\n> pruned, but I still think v12 needs to have some sort of warning in\n> there.\n> \n> https://commitfest.postgresql.org/23/2065/\n\nThe latest patch on the thread linked from this CF entry (a modified\nversion of your patch sent by Justin Pryzby) looks good to me. Why not\npost it on this thread and link this one to the CF entry? Or maybe, make\nthis an open item, because we should update documentation back to v11?\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 24 May 2019 11:04:27 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, 24 May 2019 at 14:04, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> The latest patch on the thread linked from this CF entry (a modified\n> version of your patch sent by Justin Pryzby) looks good to me. Why not\n> post it on this thread and link this one to the CF entry?\n\nI'm not much of a fan of that patch:\n\n+ <para>\n+ When using table inheritance, partition hierarchies with more than a few\n+ hundred partitions are not recommended. Larger partition hierarchies may\n+ incur long planning time, and, in the case of <command>UPDATE</command>\n+ and <command>DELETE</command>, excessive memory usage. When inheritance\n+ is used, see also the limitations described in\n+ <xref linkend=\"ddl-partitioning-constraint-exclusion\"/>.\n+ </para>\n\nI'm a bit confused about this paragraph. It introduces itself as\ntalking about table inheritance, then uses the word \"partition\" in\nvarious places. I think that can be dropped. The final sentence\nthrows me off as it tries to reduce the scope to only inheritance, but\nas far as I understand that was already the scope of the paragraph,\nunless of course \"table inheritance\" is not the same as \"inheritance\".\nWithout any insider knowledge on it, I've no idea if this\nUPDATE/DELETE issue affects native partitioning too.\n\n+ <para>\n+ When using declarative partitioning, the overhead of query planning\n+ is directly related to the number of unpruned partitions. Planning is\n+ generally fast with small numbers of unpruned partitions, even in\n+ partition hierarchies containing many thousands of partitions. However,\n+ long planning time will be incurred by large partition hierarchies if\n+ partition pruning is not possible during the planning phase.\n+ </para>\n\nThis should really mention the excessive memory usage when many\npartitions survive pruning.\n\nI've attached 3 patches of what I think should go into master, pg11, and pg10.\n\n> Or maybe, make\n> this an open item, because we should update documentation back to v11?\n\nI'll add this to the open items list since it includes master, and\nshift the CF entry to point to this thread.\n\nAuthors are Robert Haas and Justin Pryzby, who I've included in the email.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 24 May 2019 16:37:35 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
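The plan-time pruning that the proposed wording leans on can be sketched as follows; this reuses the hypothetical orders table from the earlier sketch and is illustrative only:

    -- A constant comparison on the partition key prunes at plan time,
    -- so only one partition appears in the plan:
    EXPLAIN SELECT * FROM orders WHERE created = DATE '2019-06-01';
    -- A value unknown until execution cannot be pruned during planning;
    -- all partitions survive the plan (v11 and later may still prune
    -- them at run time):
    EXPLAIN SELECT * FROM orders WHERE created = (SELECT max(created) FROM orders);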
{
"msg_contents": "On 2019/05/24 13:37, David Rowley wrote:\n> I've attached 3 patches of what I think should go into master, pg11, and pg10.\n\nThanks for the updated patches.\n\nIn pg11 and pg10 patches, I see this text:\n\n+ Whether using table inheritance or native partitioning, hierarchies\n\nMaybe, it would better to use the word \"declarative\" instead of \"native\",\nif only to be consistent; neighboring paragraphs use \"declarative\".\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Fri, 24 May 2019 14:57:54 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, 24 May 2019 at 17:58, Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> + Whether using table inheritance or native partitioning, hierarchies\n>\n> Maybe, it would better to use the word \"declarative\" instead of \"native\",\n> if only to be consistent; neighboring paragraphs use \"declarative\".\n\nThanks for having a look.\n\nI've attached the pg10 and pg11 patches with that updated... and also\nthe master one (unchanged) with the hopes that the CF bot picks that\none.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 24 May 2019 22:00:51 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, 24 May 2019 at 22:00, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> I've attached the pg10 and pg11 patches with that updated... and also\n> the master one (unchanged) with the hopes that the CF bot picks that\n> one.\n\nI got talking to Andres about this at PGCon after a use case of 250k\npartitions was brought to our attention. I was thinking about the best\nway to handle this on the long flight home and after studying the\ncurrent docs I really feel that they fairly well describe what we've\ndone so far implementing table partitioning, but they offer next to\nnothing on best practices on how to make the most of the feature.\n\nI've done some work on this today and what I've ended up with is an\nentirely new section to the partitioning docs about best practices\nwhich provides a bit of detail on how you might go about choosing the\npartition key. It gives an example of why LIST partitioning on a set\nof values that may grow significantly over time might be a bad idea.\nIt talks about memory growth with more partitions and mentions that\nrel cache might become a problem even if queries are touching a small\nnumber of partitions per query, but a large number per session.\n\nThe attached patch is aimed at master. PG11 will need the planner\nmemory and performance part tweaked and for PG10 I'll do that plus\nremove the mention of PRIMARY KEY and UNIQUE constraints on the\npartitioned table.\n\nDoes anyone see anything wrong with doing this? I don't think there\nshould be an issue adding a section to the docs right at the end as\nit's not causing any resequencing.\n\nOr does anyone have any better ideas or better examples to give? or\nany comments?\n\nIf it looks okay I can post version for PG11 and PG10 for review, but\nI'd like to get this in fairly soon.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 6 Jun 2019 16:43:48 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
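For the LIST-per-customer versus HASH trade-off raised above, a rough sketch with hypothetical names (not text from the patch); HASH partitioning needs PG11 or later:

    -- Hashing customer_id into a fixed number of partitions keeps the
    -- partition count stable no matter how many customers sign up:
    CREATE TABLE measurements (customer_id int, reading numeric)
        PARTITION BY HASH (customer_id);
    CREATE TABLE measurements_p0 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    CREATE TABLE measurements_p1 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 4, REMAINDER 1);
    CREATE TABLE measurements_p2 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 4, REMAINDER 2);
    CREATE TABLE measurements_p3 PARTITION OF measurements
        FOR VALUES WITH (MODULUS 4, REMAINDER 3);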
{
"msg_contents": "I suggest just minor variations on language.\n\nOn Thu, Jun 06, 2019 at 04:43:48PM +1200, David Rowley wrote:\n\n>diff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\n>index cce1618fc1..ab26630199 100644\n>--- a/doc/src/sgml/ddl.sgml\n>+++ b/doc/src/sgml/ddl.sgml\n>@@ -4674,6 +4675,76 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';\n> </itemizedlist>\n> </para>\n> </sect2>\n>+ \n>+ <sect2 id=\"ddl-partitioning-declarative-best-practices\">\n>+ <title>Declarative Partitioning Best Practices</title>\n>+\n>+ <para>\n>+ The choice of how to partition a table should be considered carefully as\n\nEither say \"How to partition consider should be ..\" or \"The choice should MADE carefully\" ?\n\n>+ <para>\n>+ One of the most critical design decisions will be the column or columns\n>+ which you partition your data by. Often the best choice will be to\n\nby which ?\n\n>+ <para>\n>+ Choosing the number of partitions to divide the table into is also a\n\nthe TARGET number of partitions BY WHICH to divide the table ?\n\n>+ critical decision to make. Not having enough partitions may mean that\n>+ indexes remain too large and that data locality remains poor which could\n>+ result in poor cache hit ratios. However, dividing the table into too\n>+ many partitions can also cause issues. Too many partitions can mean\n>+ slower query planning times and higher memory consumption during both\n>+ query planning and execution. It's also important to consider what\n>+ changes may occur in the future when choosing how to partition your table.\n>+ For example, if you choose to have one partition per customer and you\n>+ currently have a small number of large customers, what will the\n\nhave ONLY ?\n\n>+ implications be if in several years you obtain a large number of small\n>+ customers. In this case, it may be better to choose to partition by\n>+ <literal>HASH</literal> and choose a reasonable amount of partitions\n\nreasonable NUMBER ?\n\n>+ <para>\n>+ It is also important to consider the overhead of partitioning during\n>+ query planning and execution. The query planner is generally able to\n>+ handle partition hierarchies up a few thousand partitions fairly well,\n>+ providing that the vast majority of them can be pruned during query\n\nprovided ?\n\nI would say: \"provided that typical queries prune all but a small number of\npartitions during planning time\".\n\n>+ <command>DELETE</command> commands. Also, even if most queries are\n>+ able to prune a high number of partitions during query planning, it still\n\nLARGE number?\n\n>+ may be undesirable to have a large number of partitions as each partition\n\nmay still ?\n\n>+ also will obtain a relation cache entry in each session which uses the\n\nwill require ? Or occupy ?\n\n>+ <para>\n>+ With data warehouse type workloads it can make sense to use a larger\n>+ number of partitions than with an OLTP type workload. Generally, in data\n>+ warehouses, query planning time is less of a concern as the majority of\n>+ processing time is generally spent during query execution. With either of\n\nremove the 2nd \"generally\"\n\n>+ these two types of workload, it is important to make the right decisions\n>+ early as re-partitioning large quantities of data can be painstakingly\n\nearly COMMA ?\n\nPAINFULLY slow\n\n>+ When performance is critical, performing workload simulations to\n>+ assist in making the correct decisions can be beneficial. 
\n\nI would say:\nSimulations of the intended workload are beneficial for optimizing partitioning\nstrategy.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Thu, 6 Jun 2019 00:29:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "Hi,\n\nOn Thu, Jun 6, 2019 at 1:44 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Fri, 24 May 2019 at 22:00, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> > I've attached the pg10 and pg11 patches with that updated... and also\n> > the master one (unchanged) with the hopes that the CF bot picks that\n> > one.\n>\n> I got talking to Andres about this at PGCon after a use case of 250k\n> partitions was brought to our attention. I was thinking about the best\n> way to handle this on the long flight home and after studying the\n> current docs I really feel that they fairly well describe what we've\n> done so far implementing table partitioning, but they offer next to\n> nothing on best practices on how to make the most of the feature.\n\nAgreed that some \"best practices\" text is overdue, so thanks for taking that up.\n\n> I've done some work on this today and what I've ended up with is an\n> entirely new section to the partitioning docs about best practices\n> which provides a bit of detail on how you might go about choosing the\n> partition key. It gives an example of why LIST partitioning on a set\n> of values that may grow significantly over time might be a bad idea.\n\nDesign advice like this is good.\n\n> It talks about memory growth with more partitions and mentions that\n> rel cache might become a problem even if queries are touching a small\n> number of partitions per query, but a large number per session.\n\nI wasn't sure at first if stuff like this should be mentioned in the\nuser-facing documentation, but your wording seems fine in general.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 6 Jun 2019 14:46:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On 2019-Jun-06, David Rowley wrote:\n\n> The attached patch is aimed at master. PG11 will need the planner\n> memory and performance part tweaked and for PG10 I'll do that plus\n> remove the mention of PRIMARY KEY and UNIQUE constraints on the\n> partitioned table.\n\nI think in PG10 something should be mentioned about PK and UNIQUE, so\nthat people doing their partitioning on that release can think ahead.\nWe don't want them to have to redesign and redo the whole setup when\nupgrading to a newer release. If we had written the pg10 material back\nwhen pg10 was fresh, it wouldn't make sense, but now that we know the\nfuture, I don't see why we wouldn't do it. Maybe something like \"The\ncurrent version does not support <this>, but future Postgres versions\ndo; consult their manuals for some limitations that may affect the\nchoice of partitioning strategy\".\n\nIn the PG10 version you'll need to elide the mention of HASH\npartitioning strategy.\n\nGenerally speaking, your material looks good to me. Also generally I +1\nJustin's suggestions. The part that mentions a \"relation cache entry\"\nseems too low-level as-is, though ... maybe just say it uses some memory\nper partition without being too specific.\n\nI think it'd be worthwhile to mention sub-partitioning.\n\n\nI wonder if the PG10 manual should just suggest to skip to PG11 if\nthey're setting up partitioning for the first time.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 6 Jun 2019 11:12:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
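Since sub-partitioning is suggested above, a minimal sketch of what it looks like, with hypothetical names:

    -- A LIST partition expected to outgrow its siblings can itself be
    -- partitioned, here by RANGE on a date column:
    CREATE TABLE events (region text, logdate date) PARTITION BY LIST (region);
    CREATE TABLE events_emea PARTITION OF events
        FOR VALUES IN ('emea') PARTITION BY RANGE (logdate);
    CREATE TABLE events_emea_2019 PARTITION OF events_emea
        FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');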
{
"msg_contents": "On Thu, 6 Jun 2019 at 17:29, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >+\n> >+ <sect2 id=\"ddl-partitioning-declarative-best-practices\">\n> >+ <title>Declarative Partitioning Best Practices</title>\n> >+\n> >+ <para>\n> >+ The choice of how to partition a table should be considered carefully as\n>\n> Either say \"How to partition consider should be ..\" or \"The choice should MADE carefully\" ?\n\nI've changed \"considered\" to \"made\". I'm unable to make sense of the\nfirst suggestion there :(\n\n> >+ <para>\n> >+ One of the most critical design decisions will be the column or columns\n> >+ which you partition your data by. Often the best choice will be to\n>\n> by which ?\n\nokay. I've moved the \"by\" from after \"data\" to before \"which\"\n\n> >+ <para>\n> >+ Choosing the number of partitions to divide the table into is also a\n>\n> the TARGET number of partitions BY WHICH to divide the table ?\n\nChanged.\n\n> >+ critical decision to make. Not having enough partitions may mean that\n> >+ indexes remain too large and that data locality remains poor which could\n> >+ result in poor cache hit ratios. However, dividing the table into too\n> >+ many partitions can also cause issues. Too many partitions can mean\n> >+ slower query planning times and higher memory consumption during both\n> >+ query planning and execution. It's also important to consider what\n> >+ changes may occur in the future when choosing how to partition your table.\n> >+ For example, if you choose to have one partition per customer and you\n> >+ currently have a small number of large customers, what will the\n>\n> have ONLY ?\n\nI assume you mean after the \"have\" before \"one partition per\ncustomer\"? I don't quite understand that since in the scenario we're\npartitioning by customer, so it's not possible to have more than one\npartition per customer, only the reverse is possible. It seems to me\ninjecting \"only\" there would just confuse things.\n\n> >+ implications be if in several years you obtain a large number of small\n> >+ customers. In this case, it may be better to choose to partition by\n> >+ <literal>HASH</literal> and choose a reasonable amount of partitions\n>\n> reasonable NUMBER ?\n\nchanged.\n\n> >+ <para>\n> >+ It is also important to consider the overhead of partitioning during\n> >+ query planning and execution. The query planner is generally able to\n> >+ handle partition hierarchies up a few thousand partitions fairly well,\n> >+ providing that the vast majority of them can be pruned during query\n>\n> provided ?\n>\n> I would say: \"provided that typical queries prune all but a small number of\n> partitions during planning time\".\n\nchanged, only I used \"during query planning\" rather than \"during planning time\".\n\n> >+ <command>DELETE</command> commands. Also, even if most queries are\n> >+ able to prune a high number of partitions during query planning, it still\n>\n> LARGE number?\n\nchanged\n\n> >+ may be undesirable to have a large number of partitions as each partition\n>\n> may still ?\n>\n> >+ also will obtain a relation cache entry in each session which uses the\n>\n> will require ? Or occupy ?\n\n\"require\" seems better. Although, this may need to be reworded a bit\nfurther per what Alvaro mentions.\n\n> >+ <para>\n> >+ With data warehouse type workloads it can make sense to use a larger\n> >+ number of partitions than with an OLTP type workload. 
Generally, in data\n> >+ warehouses, query planning time is less of a concern as the majority of\n> >+ processing time is generally spent during query execution. With either of\n>\n> remove the 2nd \"generally\"\n\nOops. I should have caught that.\n\n> >+ these two types of workload, it is important to make the right decisions\n> >+ early as re-partitioning large quantities of data can be painstakingly\n>\n> early COMMA ?\n\nremoved\n\n> PAINFULLY slow\n\nyeah\n\n> >+ When performance is critical, performing workload simulations to\n> >+ assist in making the correct decisions can be beneficial.\n>\n> I would say:\n> Simulations of the intended workload are beneficial for optimizing partitioning\n> strategy.\n\nI took that but added \"often\" before \"beneficial\"\n\nI'll write the patches for PG10 and PG11 and send them all a bit later.\n\nThanks for the review.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 06:46:59 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, Jun 07, 2019 at 06:46:59AM +1200, David Rowley wrote:\n> On Thu, 6 Jun 2019 at 17:29, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >+\n> > >+ <sect2 id=\"ddl-partitioning-declarative-best-practices\">\n> > >+ <title>Declarative Partitioning Best Practices</title>\n> > >+\n> > >+ <para>\n> > >+ The choice of how to partition a table should be considered carefully as\n> >\n> > Either say \"How to partition consider should be ..\" or \"The choice should MADE carefully\" ?\n> \n> I've changed \"considered\" to \"made\". I'm unable to make sense of the\n> first suggestion there :(\n\nThe first option was intended to be:\n|How to partition a table should be considered carefully.\n\n(The idea being that the \"choice\" doesn't need to be considered carefully but\nthe thing itself).\n\n> > >+ critical decision to make. Not having enough partitions may mean that\n> > >+ indexes remain too large and that data locality remains poor which could\n> > >+ result in poor cache hit ratios. However, dividing the table into too\n> > >+ many partitions can also cause issues. Too many partitions can mean\n> > >+ slower query planning times and higher memory consumption during both\n> > >+ query planning and execution. It's also important to consider what\n> > >+ changes may occur in the future when choosing how to partition your table.\n> > >+ For example, if you choose to have one partition per customer and you\n> > >+ currently have a small number of large customers, what will the\n> >\n> > have ONLY ?\n> \n> I assume you mean after the \"have\" before \"one partition per\n> customer\"?\n\nNo, I meant \"currently have ONLY\".\n\n> I don't quite understand that since in the scenario we're\n> partitioning by customer, so it's not possible to have more than one\n> partition per customer, only the reverse is possible. It seems to me\n> injecting \"only\" there would just confuse things.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Thu, 6 Jun 2019 13:54:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, 7 Jun 2019 at 03:12, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think in PG10 something should be mentioned about PK and UNIQUE, so\n> that people doing their partitioning on that release can think ahead.\n\nThat seems reasonable, but I feel caution would be required as we\ndon't want to provide any details about what a future version will\nsupport, such information might not age very well. We could say that\nfuture versions of PostgreSQL support PRIMARY KEY and UNIQUE\nconstraints, but we'll be unable to detail out that these must be a\nsuper-set of the partition columns as if we get global indexes one day\nthat will no longer be a restriction. I'll have a think about it and\npost a PG10 patch later.\n\n> We don't want them to have to redesign and redo the whole setup when\n> upgrading to a newer release. If we had written the pg10 material back\n> when pg10 was fresh, it wouldn't make sense, but now that we know the\n> future, I don't see why we wouldn't do it. Maybe something like \"The\n> current version does not support <this>, but future Postgres versions\n> do; consult their manuals for some limitations that may affect the\n> choice of partitioning strategy\".\n\n> In the PG10 version you'll need to elide the mention of HASH\n> partitioning strategy.\n\nGood point. I might need to rethink that example completely as I'm not\nsure if swapping HASH for RANGE is such a great fix.\n\n> Generally speaking, your material looks good to me. Also generally I +1\n> Justin's suggestions. The part that mentions a \"relation cache entry\"\n> seems too low-level as-is, though ... maybe just say it uses some memory\n> per partition without being too specific.\n\nYeah, I wondered about that. I did grep the docs for \"relation cache\"\nand saw two other mentions, that's why I ended up going with it, but I\ndo agree that it may be a problem since there's nothing in the docs\nthat explain what that actually means.\n\n> I think it'd be worthwhile to mention sub-partitioning.\n\nI'll try to come up with something for that.\n\n> I wonder if the PG10 manual should just suggest to skip to PG11 if\n> they're setting up partitioning for the first time.\n\nI don't think so. I mean, if they just happened to have just installed\nPG10 that might be okay, but they may already be heavily invested in\nthat version already. Suggesting an upgrade may not be a well-received\nrecommendation for some. Maybe a suggestion that significant\nimprovements have been made in later versions might be enough, but I'm\na bit on the fence about that.\n\nThanks for having a look. I'll post PG10 and 11 patches later.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 06:59:51 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, 7 Jun 2019 at 06:54, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >+ critical decision to make. Not having enough partitions may mean that\n> > > >+ indexes remain too large and that data locality remains poor which could\n> > > >+ result in poor cache hit ratios. However, dividing the table into too\n> > > >+ many partitions can also cause issues. Too many partitions can mean\n> > > >+ slower query planning times and higher memory consumption during both\n> > > >+ query planning and execution. It's also important to consider what\n> > > >+ changes may occur in the future when choosing how to partition your table.\n> > > >+ For example, if you choose to have one partition per customer and you\n> > > >+ currently have a small number of large customers, what will the\n> > >\n> > > have ONLY ?\n> >\n> > I assume you mean after the \"have\" before \"one partition per\n> > customer\"?\n>\n> No, I meant \"currently have ONLY\".\n\nI see, thanks for explaining. I've left that one out as I think adding\n\"only\" would imply that having a small number of large customers is\nless significant that a large number of small customers. I don't\nreally see why either of those has significance over the other, so I\nthink \"only\" is out of place there.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 07:36:03 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Fri, 7 Jun 2019 at 03:12, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> I think it'd be worthwhile to mention sub-partitioning.\n\nIn the attached I did briefly mention about sub-partitioning, however,\nI didn't feel I had any very wise words to write about it other than\nit can be useful to split up larger partitions.\n\nI rather cheaply did the PG10 ones and just removed the mention about\nPRIMARY KEYS and UNIQUE constraints. I also mention that PG11 is able\nto handle \"a few hundred partitions fairly well\", and for PG10 I just\nwrote that it's able to handle \"a few hundred partitions\" without the\n\"fairly well\" part. master gets \"a few thousand partitions fairly\nwell\".\n\nI also swapped out HASH for RANGE in the PG10 version which is not\nquite perfect since its likely a customer ID would be a serial and\nwould fill the partitions one-by-one rather than more evenly as HASH\npartitioning would.\n\nAnyway comments welcome. If I had a few more minutes to spare I'd\nhave wrapped OLTP in <acronym> tags, but out of time for now.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 7 Jun 2019 17:34:20 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
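The PG10-era RANGE substitute mentioned above might look like the following sketch, with hypothetical names; as noted in the message, a serial customer ID fills such ranges one at a time rather than evenly:

    -- Fixed-width customer_id ranges standing in for HASH on PG10:
    CREATE TABLE accounts (customer_id int, balance numeric)
        PARTITION BY RANGE (customer_id);
    CREATE TABLE accounts_p1 PARTITION OF accounts
        FOR VALUES FROM (1) TO (100001);
    CREATE TABLE accounts_p2 PARTITION OF accounts
        FOR VALUES FROM (100001) TO (200001);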
{
"msg_contents": "Hi,\n\nThanks for the updated patches.\n\nOn Fri, Jun 7, 2019 at 2:34 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> Anyway comments welcome. If I had a few more minutes to spare I'd\n> have wrapped OLTP in <acronym> tags, but out of time for now.\n\nSome rewording suggestions.\n\n1.\n\n+ ... Removal of unwanted data is also a factor to consider when\n+ planning your partitioning strategy as an entire partition can be removed\n+ fairly quickly. However, if data that you want to keep exists in that\n+ partition then that means having to resort to using\n+ <command>DELETE</command> instead of removing the partition.\n\nNot sure if the 2nd sentence is necessary or perhaps should be\nrewritten in a way that helps to design to benefit from this.\n\nMaybe:\n\n... Removal of unwanted data is also a factor to consider when\nplanning your partitioning strategy as an entire partition can be\nremoved fairly quickly, especially if the partition keys are chosen\nsuch that all data that can be deleted together are grouped into\nseparate partitions.\n\n2.\n\n+ ... For example, if you choose to have one partition\n+ per customer and you currently have a small number of large customers,\n+ what will the implications be if in several years you obtain a large\n+ number of small customers.\n\nThe sentence could be rewritten a bit. Maybe as:\n\n... For example, choosing a design with one partition per customer,\nbecause you currently have a small number of large customers, will not\nscale well several years down the line when you might have a large\nnumber of small customers.\n\nBtw, doesn't it suffice here to say \"large number of customers\"\ninstead of \"large number of small customers\"?\n\n3.\n\n+ ... In this case, it may be better to choose to\n+ partition by <literal>RANGE</literal> and choose a reasonable number of\n+ partitions\n\nMaybe:\n\n... and choose reasonable number of partitions, each containing the\ndata of a fixed number of customers.\n\n4.\n\n+ ... It also\n+ may be undesirable to have a large number of partitions as each partition\n+ requires metadata about the partition to be stored in each session that\n+ touches it. If each session touches a large number of partitions over a\n+ period of time then the memory consumption for this may become\n+ significant.\n\nIt might be a good idea to reorder the sentences here to put the\nproblem first and the cause later. Maybe like this:\n\nAnother reason to be concerned about having a large number of\npartitions is that the server's memory consumption may grow\nsignificantly over a period of time, especially if many sessions touch\nlarge numbers of partitions. That's because each partition requires\nits own metadata that must be loaded into the local memory of each\nsession that touches it.\n\n5.\n\n+ With data warehouse type workloads it can make sense to use a larger\n+ number of partitions than with an OLTP type workload.\n\nIs there a comma missing between \"With data warehouse type workloads\"\nand the rest of the sentence?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 7 Jun 2019 15:59:49 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "I made another pass, hopefully it's useful and not too much of a pain.\n\ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex cce1618fc1..be2ca3be48 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -4674,6 +4675,88 @@ EXPLAIN SELECT count(*) FROM measurement WHERE logdate >= DATE '2008-01-01';\n </itemizedlist>\n </para>\n </sect2>\n+ \n+ <sect2 id=\"ddl-partitioning-declarative-best-practices\">\n+ <title>Declarative Partitioning Best Practices</title>\n+\n+ <para>\n+ The choice of how to partition a table should be made carefully as the\n+ performance of query planning and execution can be negatively affected by\n+ poorly made design decisions.\n\nMaybe just \"poor design\"\n\n+ partitioned table. <literal>WHERE</literal> clause items that match and\n+ are compatible with the partition key can be used to prune away unneeded\n\nremove \"away\" ?\n\n+ requirements for the <literal>PRIMARY KEY</literal> or a\n+ <literal>UNIQUE</literal> constraint. Removal of unwanted data is also\n+ a factor to consider when planning your partitioning strategy as an entire\n+ partition can be removed fairly quickly. However, if data that you want\n\nCan we just say \"dropped\" ? On my first (re)reading, I briefly thought this\nwas now referring to \"pruning\" as \"removal\".\n\n+ to keep exists in that partition then that means having to resort to using\n+ <command>DELETE</command> instead of removing the partition.\n+ </para>\n+\n+ <para>\n+ Choosing the target number of partitions by which the table should be\n+ divided into is also a critical decision to make. Not having enough\n\nShould be: \".. target number .. into which .. should be divided ..\"\n\n+ partitions may mean that indexes remain too large and that data locality\n+ remains poor which could result in poor cache hit ratios. However,\n\nChange the 2nd remains to \"is\" and the second poor to \"low\" ?\n\n+ dividing the table into too many partitions can also cause issues.\n+ Too many partitions can mean slower query planning times and higher memory\n\ns/slower/longer/\n\n+ consumption during both query planning and execution. It's also important\n+ to consider what changes may occur in the future when choosing how to\n+ partition your table. For example, if you choose to have one partition\n\nRemove \"when choosing ...\"? Or say:\n\n|When choosing how to partition your table, it's also important to consider\n|what changes may occur in the future.\n\n+ per customer and you currently have a small number of large customers,\n+ what will the implications be if in several years you obtain a large\n+ number of small customers. In this case, it may be better to choose to\n+ partition by <literal>HASH</literal> and choose a reasonable number of\n+ partitions rather than trying to partition by <literal>LIST</literal> and\n+ hoping that the number of customers does not increase significantly over\n+ time.\n+ </para>\n\nIt's an unusual thing for which to hope :)\n\n+ <para>\n+ Sub-partitioning can be useful to further divide partitions that are\n+ expected to become larger than other partitions, although excessive\n+ sub-partitioning can easily lead to large numbers of partitions and can\n+ cause the problems mentioned in the preceding paragraph.\n+ </para>\n\ncause the SAME problems ?\n\n+ It is also important to consider the overhead of partitioning during\n+ query planning and execution. 
The query planner is generally able to\n+ handle partition hierarchies up a few thousand partitions fairly well,\n+ provided that typical queries prune all but a small number of partitions\n+ during query planning. Planning times become slower and memory\n\ns/slower/longer/\n\nHm, maybe say \"typical queries ALLOW PRUNNING ..\"\n\n+ consumption becomes higher when more partitions remain after the planner\n+ performs partition pruning. This is particularly true for the\n\nJust say: \"remain after planning\" ?\n\n+ <command>UPDATE</command> and <command>DELETE</command> commands. Also,\n+ even if most queries are able to prune a large number of partitions during\n+ query planning, it still may be undesirable to have a large number of\n\nmay still ?\n\n+ partitions as each partition requires metadata about the partition to be\n+ stored in each session that touches it. If each session touches a large\n\nstored for ?\n\n+ number of partitions over a period of time then the memory consumption for\n+ this may become significant.\n+ </para>\n\nRemove \"over a period of time\" ?\nAdd a comma?\n\nMaybe say:\n\n|If each session touches a large number of partitions, then the memory\n|overhead may become significant.\n\n+ <para>\n+ With data warehouse type workloads it can make sense to use a larger\n+ number of partitions than with an OLTP type workload. Generally, in data\n+ warehouses, query planning time is less of a concern as the majority of\n\nVAST majority? Or \"essentially all\"? Or \" .. query planning time is\ninsignificant compared to the time spent during query execution.\n\n+ processing time is spent during query execution. With either of these two\n+ types of workload it is important to make the right decisions early as\n\nearly COMMA\n\nJustin\n\n\n",
"msg_date": "Sat, 8 Jun 2019 01:38:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "Thanks for these suggestions.\n\nOn Fri, 7 Jun 2019 at 19:00, Amit Langote <amitlangote09@gmail.com> wrote:\n> Some rewording suggestions.\n>\n> 1.\n>\n> + ... Removal of unwanted data is also a factor to consider when\n> + planning your partitioning strategy as an entire partition can be removed\n> + fairly quickly. However, if data that you want to keep exists in that\n> + partition then that means having to resort to using\n> + <command>DELETE</command> instead of removing the partition.\n>\n> Not sure if the 2nd sentence is necessary or perhaps should be\n> rewritten in a way that helps to design to benefit from this.\n>\n> Maybe:\n>\n> ... Removal of unwanted data is also a factor to consider when\n> planning your partitioning strategy as an entire partition can be\n> removed fairly quickly, especially if the partition keys are chosen\n> such that all data that can be deleted together are grouped into\n> separate partitions.\n\nIt seems like a good idea to change this to have this mention the\nbenefits rather than the drawbacks. I've reworded it, but not using\nyour exact words as it seems the \"especially\" means that a partition\ncan be removed faster with properly chosen partition keys, which is\nnot the case.\n\nI also split this out into its own paragraph since it's talking about\nsomething quite different from the previous paragraph.\n\n> 2.\n>\n> + ... For example, if you choose to have one partition\n> + per customer and you currently have a small number of large customers,\n> + what will the implications be if in several years you obtain a large\n> + number of small customers.\n>\n> The sentence could be rewritten a bit. Maybe as:\n>\n> ... For example, choosing a design with one partition per customer,\n> because you currently have a small number of large customers, will not\n> scale well several years down the line when you might have a large\n> number of small customers.\n>\n> Btw, doesn't it suffice here to say \"large number of customers\"\n> instead of \"large number of small customers\"?\n\nI'm not really trying to imply to plan for business growth here, I'm\ntrying to angle it as \"what if your business changes\". I've reworded\nthis slightly and it now says \"what will the implications be if in\nseveral years you instead find yourself with a large number of small\ncustomers.\"\n\n> 3.\n>\n> + ... In this case, it may be better to choose to\n> + partition by <literal>RANGE</literal> and choose a reasonable number of\n> + partitions\n>\n> Maybe:\n>\n> ... and choose reasonable number of partitions, each containing the\n> data of a fixed number of customers.\n\nYeah, that seems better. I'll change that for the PG10 version only.\n\n> 4.\n>\n> + ... It also\n> + may be undesirable to have a large number of partitions as each partition\n> + requires metadata about the partition to be stored in each session that\n> + touches it. If each session touches a large number of partitions over a\n> + period of time then the memory consumption for this may become\n> + significant.\n>\n> It might be a good idea to reorder the sentences here to put the\n> problem first and the cause later. Maybe like this:\n>\n> Another reason to be concerned about having a large number of\n> partitions is that the server's memory consumption may grow\n> significantly over a period of time, especially if many sessions touch\n> large numbers of partitions. 
That's because each partition requires\n> its own metadata that must be loaded into the local memory of each\n> session that touches it.\n\nThat seems better. I've taken that text.\n\n> 5.\n>\n> + With data warehouse type workloads it can make sense to use a larger\n> + number of partitions than with an OLTP type workload.\n>\n> Is there a comma missing between \"With data warehouse type workloads\"\n> and the rest of the sentence?\n\nI've added one.\n\nPatches will follow once I've addressed Justin's review.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sun, 9 Jun 2019 08:29:17 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "Thanks for having another look.\n\nOn Sat, 8 Jun 2019 at 18:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> + <para>\n> + The choice of how to partition a table should be made carefully as the\n> + performance of query planning and execution can be negatively affected by\n> + poorly made design decisions.\n>\n> Maybe just \"poor design\"\n\nchanged\n\n> + partitioned table. <literal>WHERE</literal> clause items that match and\n> + are compatible with the partition key can be used to prune away unneeded\n>\n> remove \"away\" ?\n\nremoved\n\n> + requirements for the <literal>PRIMARY KEY</literal> or a\n> + <literal>UNIQUE</literal> constraint. Removal of unwanted data is also\n> + a factor to consider when planning your partitioning strategy as an entire\n> + partition can be removed fairly quickly. However, if data that you want\n>\n> Can we just say \"dropped\" ? On my first (re)reading, I briefly thought this\n> was now referring to \"pruning\" as \"removal\".\n\nI used removed because that could be done via DROP TABLE or by DETACH\nPARTITION. If I change it to \"dropped\" then it sounds like we might\nonly mean DROP TABLE. I've reworded to use \"detached\" instead.\n\n> + to keep exists in that partition then that means having to resort to using\n> + <command>DELETE</command> instead of removing the partition.\n> + </para>\n> +\n> + <para>\n> + Choosing the target number of partitions by which the table should be\n> + divided into is also a critical decision to make. Not having enough\n>\n> Should be: \".. target number .. into which .. should be divided ..\"\n\nI've changed \"by\" to \"into\". I think that's what you mean, otherwise,\nyou've lost me.\n\n> + partitions may mean that indexes remain too large and that data locality\n> + remains poor which could result in poor cache hit ratios. However,\n>\n> Change the 2nd remains to \"is\" and the second poor to \"low\" ?\n\nAn internet search on \"low cache hit ratio\" turns up about twice as\nmany results as \"poor cache hit ratio\", but both seem fine to me.\nHowever, since the search seems to show more for the former, I change\nit to that.\n\n> + dividing the table into too many partitions can also cause issues.\n> + Too many partitions can mean slower query planning times and higher memory\n>\n> s/slower/longer/\n\nchanged\n\n> + consumption during both query planning and execution. It's also important\n> + to consider what changes may occur in the future when choosing how to\n> + partition your table. For example, if you choose to have one partition\n>\n> Remove \"when choosing ...\"? Or say:\n\nI don't see how that would make sense.\n\n> |When choosing how to partition your table, it's also important to consider\n> |what changes may occur in the future.\n\nChanged to that.\n\n> + per customer and you currently have a small number of large customers,\n> + what will the implications be if in several years you obtain a large\n> + number of small customers. 
In this case, it may be better to choose to\n> + partition by <literal>HASH</literal> and choose a reasonable number of\n> + partitions rather than trying to partition by <literal>LIST</literal> and\n> + hoping that the number of customers does not increase significantly over\n> + time.\n> + </para>\n>\n> It's an unusual thing for which to hope :)\n\nI have reworded this slightly which may help with that.\n\n> + <para>\n> + Sub-partitioning can be useful to further divide partitions that are\n> + expected to become larger than other partitions, although excessive\n> + sub-partitioning can easily lead to large numbers of partitions and can\n> + cause the problems mentioned in the preceding paragraph.\n> + </para>\n>\n> cause the SAME problems ?\n\nAdded\n\n> + It is also important to consider the overhead of partitioning during\n> + query planning and execution. The query planner is generally able to\n> + handle partition hierarchies up a few thousand partitions fairly well,\n> + provided that typical queries prune all but a small number of partitions\n> + during query planning. Planning times become slower and memory\n>\n> s/slower/longer/\n\nChanged\n\n> Hm, maybe say \"typical queries ALLOW PRUNNING ..\"\n>\n> + consumption becomes higher when more partitions remain after the planner\n> + performs partition pruning. This is particularly true for the\n>\n> Just say: \"remain after planning\" ?\n\nI've changed this around, but not really how you've asked.\n\n> + <command>UPDATE</command> and <command>DELETE</command> commands. Also,\n> + even if most queries are able to prune a large number of partitions during\n> + query planning, it still may be undesirable to have a large number of\n>\n> may still ?\n\nThis has been rewritten per Amit's review.\n\n> + <para>\n> + With data warehouse type workloads it can make sense to use a larger\n> + number of partitions than with an OLTP type workload. Generally, in data\n> + warehouses, query planning time is less of a concern as the majority of\n>\n> VAST majority? Or \"essentially all\"? Or \" .. query planning time is\n> insignificant compared to the time spent during query execution.\n\nI don't see any benefit in raising the significance of that.\n\n> + processing time is spent during query execution. With either of these two\n> + types of workload it is important to make the right decisions early as\n>\n> early COMMA\n\nI'm not really sure what you mean here as I don't see any comma in\nthat text. I guess you want me to add one? But I'm confused as you\nseemed to ask me to remove a comma there in your previous review.\n\nYou wrote:\n>>+ these two types of workload, it is important to make the right decisions\n>>+ early as re-partitioning large quantities of data can be painstakingly\n\n> early COMMA ?\n\nCan you be more precise to the exact problem that you see with the\ntext? In the meantime, I've put the comma back where it was in the\noriginal patch.\n\nI've attached the updated patches.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 9 Jun 2019 13:15:09 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
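To make the detach-versus-drop distinction above concrete, a short sketch reusing the hypothetical measurements table from earlier:

    -- Detaching keeps the data around as an ordinary standalone table:
    ALTER TABLE measurements DETACH PARTITION measurements_p3;
    -- Dropping the (attached or now-detached) partition discards the data;
    -- either path is far cheaper than a bulk DELETE through the parent:
    DROP TABLE measurements_p3;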
{
"msg_contents": "On Sun, Jun 09, 2019 at 01:15:09PM +1200, David Rowley wrote:\n> Thanks for having another look.\n> \n> On Sat, 8 Jun 2019 at 18:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > + to keep exists in that partition then that means having to resort to using\n> > + <command>DELETE</command> instead of removing the partition.\n> > + </para>\n> > +\n> > + <para>\n> > + Choosing the target number of partitions by which the table should be\n> > + divided into is also a critical decision to make. Not having enough\n> >\n> > Should be: \".. target number .. into which .. should be divided ..\"\n> \n> I've changed \"by\" to \"into\". I think that's what you mean, otherwise,\n> you've lost me.\n\nI meant it should say \"into which it should be divided\" and not \"by which it\nshould be divided INTO\", which has too many prepositions. This is still an\nissue:\n\n+ Choosing the target number of partitions into which the table should be\n+ divided into is also a critical decision to make. Not having enough\n\n> > + partitions may mean that indexes remain too large and that data locality\n> > + remains poor which could result in poor cache hit ratios. However,\n> >\n> > Change the 2nd remains to \"is\" and the second poor to \"low\" ?\n\n> > + consumption during both query planning and execution. It's also important\n> > + to consider what changes may occur in the future when choosing how to\n> > + partition your table. For example, if you choose to have one partition\n> >\n> > Remove \"when choosing ...\"? Or say:\n> \n> I don't see how that would make sense.\n\nI suggested it because otherwise it can read as: \"in the future when choosing ...\".\n\n> > + per customer and you currently have a small number of large customers,\n> > + what will the implications be if in several years you obtain a large\n> > + number of small customers. In this case, it may be better to choose to\n> > + partition by <literal>HASH</literal> and choose a reasonable number of\n> > + partitions rather than trying to partition by <literal>LIST</literal> and\n> > + hoping that the number of customers does not increase significantly over\n> > + time.\n> > + </para>\n> >\n> > It's an unusual thing for which to hope :)\n> \n> I have reworded this slightly which may help with that.\n\nI didn't mean there was any issue with this, just that it's amusing to find\noneself in the unfortunate position of hoping that one's company doesn't end up\nwith many customers.\n\n> > + processing time is spent during query execution. With either of these two\n> > + types of workload it is important to make the right decisions early as\n> >\n> > early COMMA\n> \n> I'm not really sure what you mean here as I don't see any comma in\n> that text. I guess you want me to add one? But I'm confused as you\n> seemed to ask me to remove a comma there in your previous review.\n\nI meant to add one then and now, like:\n\n| these two types of workload, it is important to make the right decisions\n| early, as re-partitioning large quantities of data can be ...\n\nThanks,\nJustin\n\n\n",
"msg_date": "Sat, 8 Jun 2019 23:21:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Sun, 9 Jun 2019 at 16:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I meant it should say \"into which it should be divided\" and not \"by which it\n> should be divided INTO\", which has too many prepositions. This is still an\n> issue:\n\nIt now reads \"divided by\" instead of \"divided into\".\n\n> | these two types of workload, it is important to make the right decisions\n> | early, as re-partitioning large quantities of data can be ...\n\nI've added a comma after \"early\".\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 9 Jun 2019 17:07:39 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Sun, Jun 09, 2019 at 05:07:39PM +1200, David Rowley wrote:\n> On Sun, 9 Jun 2019 at 16:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I meant it should say \"into which it should be divided\" and not \"by which it\n> > should be divided INTO\", which has too many prepositions. This is still an\n> > issue:\n> \n> It now reads \"divided by\" instead of \"divided into\".\n\nSorry, but I think this is still an issue:\n\n> Choosing the target number of partitions into which the table should be\n> divided by is also a critical decision to make. Not having enough\n\nI think it should say:\n\n| Choosing the target number of partitions into which the table should be\n| divided is also a critical decision to make. Not having enough\n\nJustin\n\n\n",
"msg_date": "Sun, 9 Jun 2019 00:11:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Sun, 9 Jun 2019 at 17:11, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Sorry, but I think this is still an issue:\n>\n> > Choosing the target number of partitions into which the table should be\n> > divided by is also a critical decision to make. Not having enough\n>\n> I think it should say:\n>\n> | Choosing the target number of partitions into which the table should be\n> | divided is also a critical decision to make. Not having enough\n\nAlright. I guess I misunderstood you. Updated patches are attached.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sun, 9 Jun 2019 17:44:58 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "Hi,\n\nThanks for the updated patches.\n\nOn Sun, Jun 9, 2019 at 5:29 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> On Fri, 7 Jun 2019 at 19:00, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Maybe:\n> >\n> > ... Removal of unwanted data is also a factor to consider when\n> > planning your partitioning strategy as an entire partition can be\n> > removed fairly quickly, especially if the partition keys are chosen\n> > such that all data that can be deleted together are grouped into\n> > separate partitions.\n>\n> It seems like a good idea to change this to have this mention the\n> benefits rather than the drawbacks. I've reworded it, but not using\n> your exact words as it seems the \"especially\" means that a partition\n> can be removed faster with properly chosen partition keys, which is\n> not the case.\n>\n> I also split this out into its own paragraph since it's talking about\n> something quite different from the previous paragraph.\n\nDid you miss to split? In v4 patches, I still see this point\nmentioned in the same paragraph that it was in before:\n\n+ <para>\n+ One of the most critical design decisions will be the column or columns\n+ by which you partition your data. Often the best choice will be to\n+ partition by the column or set of columns which most commonly appear in\n+ <literal>WHERE</literal> clauses of queries being executed on the\n+ partitioned table. <literal>WHERE</literal> clause items that match and\n+ are compatible with the partition key can be used to prune unneeded\n+ partitions. Removal of unwanted data is also a factor to consider when\n+ planning your partitioning strategy. An entire partition can be detached\n+ fairly quickly, so it may be beneficial to design the partition strategy\n+ in such a way that all data to be removed at once is located in a single\n+ partition.\n+ </para>\n\n> > 2.\n> >\n> > + ... For example, if you choose to have one partition\n> > + per customer and you currently have a small number of large customers,\n> > + what will the implications be if in several years you obtain a large\n> > + number of small customers.\n> >\n> > The sentence could be rewritten a bit. Maybe as:\n> >\n> > ... For example, choosing a design with one partition per customer,\n> > because you currently have a small number of large customers, will not\n> > scale well several years down the line when you might have a large\n> > number of small customers.\n> >\n> > Btw, doesn't it suffice here to say \"large number of customers\"\n> > instead of \"large number of small customers\"?\n>\n> I'm not really trying to imply to plan for business growth here, I'm\n> trying to angle it as \"what if your business changes\".\n\nHmm, okay. I thought you were intending this as an example of how a\nparticular partitioning design may not *scale with time*.\n\n> I've reworded\n> this slightly and it now says \"what will the implications be if in\n> several years you instead find yourself with a large number of small\n> customers.\"\n\nI suggest \"consider the implications\" in place of \"what will the\nimplications be...\". Also a user may choose a particular design (one\npartition per customer) *because* of their business situation (small\nnumber of large customers), so I suggest linking the two clauses with\n\"because\". 
With these two changes, the whole sentence will read more\nconnected, imho:\n\nFor example, if you choose to have one partition per customer because\nyou currently have a small number of large customers, consider the\nimplications if in several years you instead find yourself with a\nlarge number of small customers.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 10 Jun 2019 17:11:02 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Mon, 10 Jun 2019 at 20:11, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Sun, Jun 9, 2019 at 5:29 AM David Rowley\n> > I also split this out into its own paragraph since it's talking about\n> > something quite different from the previous paragraph.\n>\n> Did you miss to split? In v4 patches, I still see this point\n> mentioned in the same paragraph that it was in before:\n\nNot quite. I just changed my mind again after reading it through.\nSince both paragraphs were talking about the number of partitions I\ndecided they should be the same paragraph after all.\n\n> > I've reworded\n> > this slightly and it now says \"what will the implications be if in\n> > several years you instead find yourself with a large number of small\n> > customers.\"\n>\n> I suggest \"consider the implications\" in place of \"what will the\n> implications be...\". Also a user may choose a particular design (one\n> partition per customer) *because* of their business situation (small\n> number of large customers), so I suggest linking the two clauses with\n> \"because\". With these two changes, the whole sentence will read more\n> connected, imho:\n\nThe disconnect there is on purpose. I don't really want to suggest\nthey chose to partition by customer because they have a small number\nof large customers. The choice to partition by customer could well\nhave come from \"customer_id = ...\" always being present in WHERE\nclauses and they may be fooled into thinking it's a good idea to\npartition by that because of that fact. I'm hoping the text there\npoints out that it might not always be a good choice.\n\nI have slightly reworded it to be a bit closer to your suggestion, but\nI maintained the disconnect.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Mon, 10 Jun 2019 21:10:24 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "part_doc_pg10_v5.patch :\n+ query planning and execution. The query planner is generally able to\n+ handle partition hierarchies up a few hundred partition. Planning times\n\n\"up TO a few hundred partition*S*\" ?\n\n\npart_doc_master_v5.patch:\n+ Choosing the target number of partitions into which the table should be\n+ divided by is also a critical decision to make.\n\n\"into which ... should be divided by\" seems like a copy-editing\nmistake. Did you mean to remove either the \"into which\" or the \"by\"?\nI think \"the target number of partitions THAT the table should be\ndivided into\" is simple and sensible; I'm not sure I trust the version\nwith \"into which\" instead of \"that\", and the role of \"by\" is not clear\nto me (\"divide by\" implies a divisor, but here we're talking about the\nresulting chunks and not the divisor).\n\n\nIn this phrase (all versions):\n+ That's because each partition requires its own metadata that must be\n+ loaded into the local memory of each session that touches it.\n\nI would replace \"requires its own metadata that must be loaded\" with\n\"requires its metadata to be loaded\".\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 09:44:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "Thanks for looking at this.\n\nOn Tue, 11 Jun 2019 at 01:44, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> part_doc_pg10_v5.patch :\n> + query planning and execution. The query planner is generally able to\n> + handle partition hierarchies up a few hundred partition. Planning times\n>\n> \"up TO a few hundred partition*S*\" ?\n\nOops. My backspace key must have removed too many chars when I removed\n\"quite well\" out of the PG10 version.\n\n> part_doc_master_v5.patch:\n> + Choosing the target number of partitions into which the table should be\n> + divided by is also a critical decision to make.\n>\n> \"into which ... should be divided by\" seems like a copy-editing\n> mistake.\n\nYes it is. It only existed in the master version. I'm not sure how it\nsnuck by in there.\n\n> Did you mean to remove either the \"into which\" or the \"by\"?\n\nI meant to remove \"by\", per advice from Justin.\n\n> I think \"the target number of partitions THAT the table should be\n> divided into\" is simple and sensible; I'm not sure I trust the version\n> with \"into which\" instead of \"that\", and the role of \"by\" is not clear\n> to me (\"divide by\" implies a divisor, but here we're talking about the\n> resulting chunks and not the divisor).\n\nThis is tricky. Justin liked it that way and since it took me a few\nrounds to get it the way he wanted, I'm quite tempted by the\nstatus-quo.\n\n> In this phrase (all versions):\n> + That's because each partition requires its own metadata that must be\n> + loaded into the local memory of each session that touches it.\n>\n> I would replace \"requires its own metadata that must be loaded\" with\n> \"requires its metadata to be loaded\".\n\nThat seems like a good improvement. Changed to that.\n\nv6 versions are attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 11 Jun 2019 09:45:16 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On 2019-Jun-09, Justin Pryzby wrote:\n\n> I think it should say:\n> \n> | Choosing the target number of partitions into which the table should be\n> | divided is also a critical decision to make. Not having enough\n\nI opined elsewhere in the thread that this phrase can be made into more\nstraightforward English:\n\n Choosing the target number of partitions THAT the table should be\n divided INTO is also a critical decision to make. Not having enough\n\nWhat do you think of that formulation?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 18:11:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 06:11:35PM -0400, Alvaro Herrera wrote:\n> On 2019-Jun-09, Justin Pryzby wrote:\n> \n> > I think it should say:\n> > \n> > | Choosing the target number of partitions into which the table should be\n> > | divided is also a critical decision to make. Not having enough\n> \n> I opined elsewhere in the thread that this phrase can be made into more\n> straightforward English:\n> \n> Choosing the target number of partitions THAT the table should be\n> divided INTO is also a critical decision to make. Not having enough\n> \n> What do you think of that formulation?\n\nIt originally said:\n| Choosing the number of partitions to divide the table into is also a\n\nSo this mostly changes it back.\n\nOne could also say:\n| Another critical decision is [the choice of?] the number of partitions\n| into which the table['s content?] should be divided...\n\nI'm okay with it if David is okay making the change :)\n\nThanks,\nJustin\n\n\n",
"msg_date": "Mon, 10 Jun 2019 18:15:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Tue, 11 Jun 2019 at 11:15, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Jun 10, 2019 at 06:11:35PM -0400, Alvaro Herrera wrote:\n> > On 2019-Jun-09, Justin Pryzby wrote:\n> >\n> > > I think it should say:\n> > >\n> > > | Choosing the target number of partitions into which the table should be\n> > > | divided is also a critical decision to make. Not having enough\n> >\n> > I opined elsewhere in the thread that this phrase can be made into more\n> > straightforward English:\n> >\n> > Choosing the target number of partitions THAT the table should be\n> > divided INTO is also a critical decision to make. Not having enough\n> >\n> > What do you think of that formulation?\n>\n> It originally said:\n> | Choosing the number of partitions to divide the table into is also a\n>\n> So this mostly changes it back.\n>\n> One could also say:\n> | Another critical decision is [the choice of?] the number of partitions\n> | into which the table['s content?] should be divided...\n>\n> I'm okay with it if David is okay making the change :)\n\nChanges attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 11 Jun 2019 13:30:05 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On 2019-Jun-11, David Rowley wrote:\n\n> Changes attached.\n\nUnreserved +1 to these patches.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 10 Jun 2019 22:43:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Tue, Jun 11, 2019 at 11:43 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-Jun-11, David Rowley wrote:\n>\n> > Changes attached.\n>\n> Unreserved +1 to these patches.\n\nThe latest version looks good to me too.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 11 Jun 2019 11:52:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Tue, 11 Jun 2019 at 14:53, Amit Langote <amitlangote09@gmail.com> wrote:\n> The latest version looks good to me too.\n\nPushed. Thank you all for the reviews.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 12 Jun 2019 08:12:19 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Wed, Jun 12, 2019 at 5:12 AM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n>\n> On Tue, 11 Jun 2019 at 14:53, Amit Langote <amitlangote09@gmail.com> wrote:\n> > The latest version looks good to me too.\n>\n> Pushed. Thank you all for the reviews.\n\nThanks.\n\nI noticed a typo:\n\n\"...able to handle partition hierarchies up a few thousand partitions\"\n\ns/up/up to/g\n\nI'm inclined to add one more word though, as:\n\n\"...able to handle partition hierarchies with up to a few thousand partitions\"\n\nor\n\n\"...able to handle partition hierarchies containing up to a few\nthousand partitions\"\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 12 Jun 2019 14:48:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we warn against using too many partitions?"
},
{
"msg_contents": "On Wed, 12 Jun 2019 at 17:49, Amit Langote <amitlangote09@gmail.com> wrote:\n> I noticed a typo:\n>\n> \"...able to handle partition hierarchies up a few thousand partitions\"\n>\n> s/up/up to/g\n>\n> I'm inclined to add one more word though, as:\n>\n> \"...able to handle partition hierarchies with up to a few thousand partitions\"\n>\n> or\n>\n> \"...able to handle partition hierarchies containing up to a few\n> thousand partitions\"\n\nThanks for noticing that. I've pushed a fix.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 10:36:44 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Should we warn against using too many partitions?"
}
] |
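A minimal SQL sketch of the design advice being worded in the thread above, with hypothetical table and partition names: choosing the partition key so that rows which are removed together land in the same partition turns bulk deletion into a cheap detach/drop.

```sql
-- Hypothetical time-partitioned table (names are illustrative only).
CREATE TABLE measurement (
    city_id  int  NOT NULL,
    logdate  date NOT NULL,
    peaktemp int
) PARTITION BY RANGE (logdate);

CREATE TABLE measurement_2019_05 PARTITION OF measurement
    FOR VALUES FROM ('2019-05-01') TO ('2019-06-01');
CREATE TABLE measurement_2019_06 PARTITION OF measurement
    FOR VALUES FROM ('2019-06-01') TO ('2019-07-01');

-- WHERE clauses on the partition key let the planner prune partitions:
EXPLAIN SELECT * FROM measurement WHERE logdate >= DATE '2019-06-01';

-- Removing a whole month of data is a quick metadata-level operation:
ALTER TABLE measurement DETACH PARTITION measurement_2019_05;
DROP TABLE measurement_2019_05;
```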
[
{
"msg_contents": "Hi,\nI am trying to use a create function in order to update some values in a\ntable (see below code).\nHowever, when I run the function, it never enters into the following loop\n*FOR r IN SELECT * FROM immatriculationemployeursucctemp2 where succursale\n= quote_literal(s.succursale) order by row_number*\n\nHowever, if I remove the condition *where succursale =\nquote_literal(s.succursale)* then it works\n\nI need to filter on every value of succursale\nIs there a way to achieve it without removing ?\nAny help will be appreciated. I'm struggling with it for a while now\n\nCREATE OR REPLACE FUNCTION create_new_emp_succ_numbers() RETURNS SETOF\nlist_succursale AS\n$BODY$\nDECLARE\n r immatriculationemployeursucctemp2%rowtype;\n s list_succursale%rowtype;\n seq_priv INTEGER := 1;\n\nBEGIN\n\n FOR s IN SELECT * FROM list_succursale where succursale\nin('010100062D1','010102492S1')\n\n LOOP\n\n\n FOR r IN SELECT * FROM immatriculationemployeursucctemp2 where\nsuccursale = quote_literal(s.succursale) order by row_number\n\n\n LOOP\n\n update immatriculationemployeursucctemp set no_employeur= '10' ||\nlpad(seq_priv::text,6,'0') || '0' || r.row_number-1 where employer_type=10\nand id=r.id;\n\n\n\n END LOOP;\n seq_priv := seq_priv + 1;\n RETURN NEXT s;\nEND LOOP;\n\n RETURN;\nEND\n$BODY$\nLANGUAGE 'plpgsql' ;\n\nSELECT * FROM create_new_emp_succ_numbers();\n\nHi,I am trying to use a create function in order to update some values in a table (see below code).However, when I run the function, it never enters into the following loopFOR r IN SELECT * FROM immatriculationemployeursucctemp2 where succursale = quote_literal(s.succursale) order by row_numberHowever, if I remove the condition \nwhere succursale = quote_literal(s.succursale)\n\n then it worksI need to filter on every value of succursaleIs there a way to achieve it without removing ?Any help will be appreciated. I'm struggling with it for a while now CREATE OR REPLACE FUNCTION create_new_emp_succ_numbers() RETURNS SETOF list_succursale AS$BODY$DECLARE r immatriculationemployeursucctemp2%rowtype; s list_succursale%rowtype; seq_priv INTEGER := 1; BEGIN FOR s IN SELECT * FROM list_succursale where succursale in('010100062D1','010102492S1') LOOP FOR r IN SELECT * FROM immatriculationemployeursucctemp2 where succursale = quote_literal(s.succursale) order by row_number LOOP update immatriculationemployeursucctemp set no_employeur= '10' || lpad(seq_priv::text,6,'0') || '0' || r.row_number-1 where employer_type=10 and id=r.id; END LOOP; seq_priv := seq_priv + 1; RETURN NEXT s;END LOOP; RETURN;END$BODY$LANGUAGE 'plpgsql' ;SELECT * FROM create_new_emp_succ_numbers();",
"msg_date": "Thu, 23 May 2019 09:49:52 +0000",
"msg_from": "Mohamed DIA <macdia2002@gmail.com>",
"msg_from_op": true,
"msg_subject": "Create function using quote_literal issues"
},
{
"msg_contents": "I found the solution by defining r as record and using\n FOR r in EXECUTE v_select\n\nThanks\n\nOn Thu, May 23, 2019 at 9:49 AM Mohamed DIA <macdia2002@gmail.com> wrote:\n\n> Hi,\n> I am trying to use a create function in order to update some values in a\n> table (see below code).\n> However, when I run the function, it never enters into the following loop\n> *FOR r IN SELECT * FROM immatriculationemployeursucctemp2 where\n> succursale = quote_literal(s.succursale) order by row_number*\n>\n> However, if I remove the condition *where succursale =\n> quote_literal(s.succursale)* then it works\n>\n> I need to filter on every value of succursale\n> Is there a way to achieve it without removing ?\n> Any help will be appreciated. I'm struggling with it for a while now\n>\n> CREATE OR REPLACE FUNCTION create_new_emp_succ_numbers() RETURNS SETOF\n> list_succursale AS\n> $BODY$\n> DECLARE\n> r immatriculationemployeursucctemp2%rowtype;\n> s list_succursale%rowtype;\n> seq_priv INTEGER := 1;\n>\n> BEGIN\n>\n> FOR s IN SELECT * FROM list_succursale where succursale\n> in('010100062D1','010102492S1')\n>\n> LOOP\n>\n>\n> FOR r IN SELECT * FROM immatriculationemployeursucctemp2 where\n> succursale = quote_literal(s.succursale) order by row_number\n>\n>\n> LOOP\n>\n> update immatriculationemployeursucctemp set no_employeur= '10' ||\n> lpad(seq_priv::text,6,'0') || '0' || r.row_number-1 where employer_type=10\n> and id=r.id;\n>\n>\n>\n> END LOOP;\n> seq_priv := seq_priv + 1;\n> RETURN NEXT s;\n> END LOOP;\n>\n> RETURN;\n> END\n> $BODY$\n> LANGUAGE 'plpgsql' ;\n>\n> SELECT * FROM create_new_emp_succ_numbers();\n>\n\nI found the solution by defining r as record and using FOR r in EXECUTE v_selectThanksOn Thu, May 23, 2019 at 9:49 AM Mohamed DIA <macdia2002@gmail.com> wrote:Hi,I am trying to use a create function in order to update some values in a table (see below code).However, when I run the function, it never enters into the following loopFOR r IN SELECT * FROM immatriculationemployeursucctemp2 where succursale = quote_literal(s.succursale) order by row_numberHowever, if I remove the condition \nwhere succursale = quote_literal(s.succursale)\n\n then it worksI need to filter on every value of succursaleIs there a way to achieve it without removing ?Any help will be appreciated. I'm struggling with it for a while now CREATE OR REPLACE FUNCTION create_new_emp_succ_numbers() RETURNS SETOF list_succursale AS$BODY$DECLARE r immatriculationemployeursucctemp2%rowtype; s list_succursale%rowtype; seq_priv INTEGER := 1; BEGIN FOR s IN SELECT * FROM list_succursale where succursale in('010100062D1','010102492S1') LOOP FOR r IN SELECT * FROM immatriculationemployeursucctemp2 where succursale = quote_literal(s.succursale) order by row_number LOOP update immatriculationemployeursucctemp set no_employeur= '10' || lpad(seq_priv::text,6,'0') || '0' || r.row_number-1 where employer_type=10 and id=r.id; END LOOP; seq_priv := seq_priv + 1; RETURN NEXT s;END LOOP; RETURN;END$BODY$LANGUAGE 'plpgsql' ;SELECT * FROM create_new_emp_succ_numbers();",
"msg_date": "Thu, 23 May 2019 10:09:52 +0000",
"msg_from": "Mohamed DIA <macdia2002@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Create function using quote_literal issues"
}
] |
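The behaviour reported in this thread follows from what quote_literal() returns: it wraps its argument in quote characters, so `succursale = quote_literal(s.succursale)` compares each value against a copy of itself with literal quotes added and can never match. quote_literal() is only needed when embedding a value into a dynamic SQL string. A sketch of the direct fix, reusing the table and column names from the messages above:

```sql
DO $$
DECLARE
    r record;
    s record;
BEGIN
    FOR s IN SELECT * FROM list_succursale
             WHERE succursale IN ('010100062D1', '010102492S1')
    LOOP
        -- Compare the values directly; quote_literal() would add literal
        -- quote characters to the value, so the equality test could never
        -- succeed.
        FOR r IN SELECT * FROM immatriculationemployeursucctemp2
                 WHERE succursale = s.succursale
                 ORDER BY row_number
        LOOP
            RAISE NOTICE 'matched id %', r.id;
        END LOOP;
    END LOOP;
END
$$;
```

If dynamic SQL is genuinely required, as in the EXECUTE workaround the poster found, format() with the %L placeholder is the usual way to splice a safely quoted literal into the query string.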
[
{
"msg_contents": "Hi hackers!\nI am a student participating in GSoC 2019. I am looking forward to working\nwith you all and learning from you.\nMy project would aim to provide the ability to de-TOAST a fully TOAST'd and\ncompressed field using an iterator.\nFor more details, please take a look at my proposal[0]. Any suggestions or\ncomments about my immature ideas would be much appreciated:)\n\nI've implemented the first step of the project, the segment pglz\ncompression provides the ability to get the subset of the raw data without\ndecompressing the entire field.\nAnd I've done some test[1] for the compressor. The test result is as\nfollows:\nNOTICE: Test summary:\nNOTICE: Payload 000000010000000000000001\nNOTICE: Decompressor name | Compression time (ns/bit) |\nDecompression time (ns/bit) | ratio\nNOTICE: pglz_decompress_hacked | 23.747444 |\n 0.578344 | 0.159809\nNOTICE: pglz_decompress_hacked8 | 23.764193 |\n 0.677800 | 0.159809\nNOTICE: pglz_decompress_hacked16 | 23.740351 |\n 0.704730 | 0.159809\nNOTICE: pglz_decompress_vanilla | 23.797917 |\n 1.227868 | 0.159809\nNOTICE: pglz_decompress_hacked_seg | 12.261808 |\n 0.625634 | 0.184952\n\nComment: Compression speed increased by nearly 100% with compression rate\ndropped by 15%\n\nNOTICE: Payload 000000010000000000000001 sliced by 2Kb\nNOTICE: pglz_decompress_hacked | 12.616956 |\n 0.621223 | 0.156953\nNOTICE: pglz_decompress_hacked8 | 12.583685 |\n 0.756741 | 0.156953\nNOTICE: pglz_decompress_hacked16 | 12.512636 |\n 0.774980 | 0.156953\nNOTICE: pglz_decompress_vanilla | 12.493062 |\n 1.262820 | 0.156953\nNOTICE: pglz_decompress_hacked_seg | 11.986554 |\n 0.622654 | 0.159590\nNOTICE: Payload 000000010000000000000001 sliced by 4Kb\nNOTICE: pglz_decompress_hacked | 15.514469 |\n 0.565565 | 0.154213\nNOTICE: pglz_decompress_hacked8 | 15.529144 |\n 0.699675 | 0.154213\nNOTICE: pglz_decompress_hacked16 | 15.514040 |\n 0.721145 | 0.154213\nNOTICE: pglz_decompress_vanilla | 15.558958 |\n 1.237237 | 0.154213\nNOTICE: pglz_decompress_hacked_seg | 14.650309 |\n 0.563228 | 0.153652\nNOTICE: Payload 000000010000000000000006\nNOTICE: Decompressor name | Compression time (ns/bit) |\nDecompression time (ns/bit) | ratio\nNOTICE: pglz_decompress_hacked | 8.610177 |\n 0.153577 | 0.052294\nNOTICE: pglz_decompress_hacked8 | 8.566785 |\n 0.168002 | 0.052294\nNOTICE: pglz_decompress_hacked16 | 8.643126 |\n 0.167537 | 0.052294\nNOTICE: pglz_decompress_vanilla | 8.574498 |\n 0.930738 | 0.052294\nNOTICE: pglz_decompress_hacked_seg | 7.394731 |\n 0.171924 | 0.056081\nNOTICE: Payload 000000010000000000000006 sliced by 2Kb\nNOTICE: pglz_decompress_hacked | 6.724060 |\n 0.295043 | 0.065541\nNOTICE: pglz_decompress_hacked8 | 6.623018 |\n 0.318527 | 0.065541\nNOTICE: pglz_decompress_hacked16 | 6.898034 |\n 0.318360 | 0.065541\nNOTICE: pglz_decompress_vanilla | 6.712711 |\n 1.045430 | 0.065541\nNOTICE: pglz_decompress_hacked_seg | 6.630743 |\n 0.302589 | 0.068471\nNOTICE: Payload 000000010000000000000006 sliced by 4Kb\nNOTICE: pglz_decompress_hacked | 6.624067 |\n 0.220942 | 0.058865\nNOTICE: pglz_decompress_hacked8 | 6.659424 |\n 0.240183 | 0.058865\nNOTICE: pglz_decompress_hacked16 | 6.763864 |\n 0.240564 | 0.058865\nNOTICE: pglz_decompress_vanilla | 6.743574 |\n 0.985348 | 0.058865\nNOTICE: pglz_decompress_hacked_seg | 6.613123 |\n 0.227582 | 0.060330\nNOTICE: Payload 000000010000000000000008\nNOTICE: Decompressor name | Compression time (ns/bit) |\nDecompression time (ns/bit) | ratio\nNOTICE: pglz_decompress_hacked | 52.425957 |\n 1.050544 | 0.498941\nNOTICE: 
pglz_decompress_hacked8 | 52.204561 |\n 1.261592 | 0.498941\nNOTICE: pglz_decompress_hacked16 | 52.328491 |\n 1.466751 | 0.498941\nNOTICE: pglz_decompress_vanilla | 52.465308 |\n 1.341271 | 0.498941\nNOTICE: pglz_decompress_hacked_seg | 31.896341 |\n 1.113260 | 0.600998\nNOTICE: Payload 000000010000000000000008 sliced by 2Kb\nNOTICE: pglz_decompress_hacked | 30.620611 |\n 0.768542 | 0.351941\nNOTICE: pglz_decompress_hacked8 | 30.557334 |\n 0.907421 | 0.351941\nNOTICE: pglz_decompress_hacked16 | 32.064903 |\n 1.208913 | 0.351941\nNOTICE: pglz_decompress_vanilla | 30.489886 |\n 1.014197 | 0.351941\nNOTICE: pglz_decompress_hacked_seg | 27.145243 |\n 0.774193 | 0.352868\nNOTICE: Payload 000000010000000000000008 sliced by 4Kb\nNOTICE: pglz_decompress_hacked | 36.567903 |\n 1.054633 | 0.514047\nNOTICE: pglz_decompress_hacked8 | 36.459124 |\n 1.267731 | 0.514047\nNOTICE: pglz_decompress_hacked16 | 36.791718 |\n 1.479650 | 0.514047\nNOTICE: pglz_decompress_vanilla | 36.241913 |\n 1.303136 | 0.514047\nNOTICE: pglz_decompress_hacked_seg | 31.526327 |\n 1.059926 | 0.526875\nNOTICE: Payload 16398\nNOTICE: Decompressor name | Compression time (ns/bit) |\nDecompression time (ns/bit) | ratio\nNOTICE: pglz_decompress_hacked | 9.508625 |\n 0.435190 | 0.071816\nNOTICE: pglz_decompress_hacked8 | 9.546987 |\n 0.473871 | 0.071816\nNOTICE: pglz_decompress_hacked16 | 9.534496 |\n 0.471662 | 0.071816\nNOTICE: pglz_decompress_vanilla | 9.559053 |\n 1.352561 | 0.071816\nNOTICE: pglz_decompress_hacked_seg | 8.479486 |\n 0.441536 | 0.073232\nNOTICE: Payload 16398 sliced by 2Kb\nNOTICE: pglz_decompress_hacked | 6.808167 |\n 0.326570 | 0.082775\nNOTICE: pglz_decompress_hacked8 | 6.790743 |\n 0.361720 | 0.082775\nNOTICE: pglz_decompress_hacked16 | 6.886097 |\n 0.364549 | 0.082775\nNOTICE: pglz_decompress_vanilla | 6.918429 |\n 1.191265 | 0.082775\nNOTICE: pglz_decompress_hacked_seg | 6.752811 |\n 0.340805 | 0.085705\nNOTICE: Payload 16398 sliced by 4Kb\nNOTICE: pglz_decompress_hacked | 7.244472 |\n 0.261872 | 0.076860\nNOTICE: pglz_decompress_hacked8 | 7.290275 |\n 0.295988 | 0.076860\nNOTICE: pglz_decompress_hacked16 | 7.340706 |\n 0.294683 | 0.076860\nNOTICE: pglz_decompress_vanilla | 7.429289 |\n 1.151645 | 0.076860\nNOTICE: pglz_decompress_hacked_seg | 7.054166 |\n 0.267896 | 0.078325\nNOTICE: Payload shakespeare.txt\nNOTICE: Decompressor name | Compression time (ns/bit) |\nDecompression time (ns/bit) | ratio\nNOTICE: pglz_decompress_hacked | 25.998753 |\n 1.345542 | 0.281363\nNOTICE: pglz_decompress_hacked8 | 26.121630 |\n 1.917667 | 0.281363\nNOTICE: pglz_decompress_hacked16 | 26.139312 |\n 2.101329 | 0.281363\nNOTICE: pglz_decompress_vanilla | 26.155571 |\n 2.082123 | 0.281363\nNOTICE: pglz_decompress_hacked_seg | 16.792089 |\n 1.951269 | 0.436558\n\nComment: In this case, the compression rate has dropped dramatically.\n\nNOTICE: Payload shakespeare.txt sliced by 2Kb\nNOTICE: pglz_decompress_hacked | 14.992793 |\n 1.923663 | 0.436270\nNOTICE: pglz_decompress_hacked8 | 14.982428 |\n 2.695319 | 0.436270\nNOTICE: pglz_decompress_hacked16 | 15.211803 |\n 2.846615 | 0.436270\nNOTICE: pglz_decompress_vanilla | 15.113214 |\n 2.580098 | 0.436270\nNOTICE: pglz_decompress_hacked_seg | 15.120852 |\n 1.922596 | 0.439199\nNOTICE: Payload shakespeare.txt sliced by 4Kb\nNOTICE: pglz_decompress_hacked | 18.083400 |\n 1.687598 | 0.366936\nNOTICE: pglz_decompress_hacked8 | 18.185038 |\n 2.395928 | 0.366936\nNOTICE: pglz_decompress_hacked16 | 18.096120 |\n 2.554812 | 0.366936\nNOTICE: pglz_decompress_vanilla | 18.435380 |\n 
          2.329129 | 0.366936\nNOTICE: pglz_decompress_hacked_seg | 18.103267 |\n 1.705517 | 0.368400\nNOTICE:\n\nDecompressor score (summ of all times):\nNOTICE: Decompressor pglz_decompress_hacked result 11.288848\nNOTICE: Decompressor pglz_decompress_hacked8 result 14.438165\nNOTICE: Decompressor pglz_decompress_hacked16 result 15.716280\nNOTICE: Decompressor pglz_decompress_vanilla result 21.034867\nNOTICE: Decompressor pglz_decompress_hacked_seg result 12.090609\nNOTICE:\n\ncompressor score (summ of all times):\nNOTICE: compressor pglz_compress_vanilla result 276.776671\nNOTICE: compressor pglz_compress_hacked_seg result 222.407850\n\nThere are some questions now:\n1. The compression algorithm is not compatible with the original\ncompression algorithm now.\n2. If the idea works, we need to test more data, what kind of data is more\nappropriate?\nAny comments are much appreciated.\n\nBest regards, Binguo Bao.\n\n[0]\nhttps://docs.google.com/document/d/1V4oXV5vGrGx24deBTKKM7bVdO3Cy-zfj-wQ4dkBUCl4/edit\n[1] https://github.com/djydewang/test_pglz\n\n",
"msg_date": "Thu, 23 May 2019 22:27:09 +0800",
"msg_from": "Binguo Bao <djydewang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pglz performance"
}
] |
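A small illustration, under the assumption that the demo table below gets TOASTed and compressed, of the access pattern this project aims to speed up: functions such as substr() request only a slice of a datum, so sliced or iterative decompression can stop after producing the needed prefix instead of inflating the whole value.

```sql
-- Hypothetical demo table holding a large, highly compressible text value.
CREATE TABLE toast_demo (id int, payload text);
INSERT INTO toast_demo
SELECT 1, string_agg(repeat('abcdefgh', 100), E'\n')
FROM generate_series(1, 1000);

-- substr() asks the TOAST layer for a slice; partial de-TOASTing means only
-- a prefix of the compressed datum has to be decompressed to answer this.
SELECT substr(payload, 1, 80) FROM toast_demo;
```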
[
{
"msg_contents": "Hackers,\n\nIn src/backend/snowball/libstemmer/utilities.c, 'create_s' uses\nmalloc (not palloc) to allocate memory, and on memory exhaustion\nreturns NULL rather than throwing an exception. In this same\nfile, 'replace_s' calls 'create_s' and if it gets back NULL, returns\nthe error code -1. Otherwise, it sets z->p to the allocated\nmemory.\n\nIn src/backend/snowball/libstemmer/api.c, 'SN_set_current' calls\n'replace_s' and returns whatever 'replace_s' returned, which in\nthe case of memory exhaustion will be -1.\n\nIn src/backend/snowball/dict_snowball.c, 'dsnowball_lexize'\ncalls 'SN_set_current' and ignores the return value, thereby\nfailing to notice the error, if any.\n\nI checked one of the stemmers, stem_ISO_8859_1_english.c,\nand it treats z->p as an array without checking whether it is\nNULL. This will crash the backend in the above error case.\n\nThere is something else weird here, though. The call to\n'SN_set_current' is wrapped in a memory context switch, along\nwith a call to the stemmer, as if the caller expects any allocated\nmemory to be palloc'd, which it is not, given the underlying code's\nuse of malloc and calloc.\n\nThere is a comment higher up in dict_snowball.c that seems to\nuse some handwaving about all this, or perhaps it is documenting\nsomething else entirely. In any event, I find the documentation\nabout dictCtx insufficient to explain why this memory handling\nis correct.\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 08:14:24 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Memory bug in dsnowball_lexize"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> In src/backend/snowball/libstemmer/utilities.c, 'create_s' uses\n> malloc (not palloc) to allocate memory, and on memory exhaustion\n> returns NULL rather than throwing an exception.\n\nActually not, see macros in src/include/snowball/header.h.\n\n> In src/backend/snowball/dict_snowball.c, 'dsnowball_lexize'\n> calls 'SN_set_current' and ignores the return value, thereby\n> failing to notice the error, if any.\n\nHm. This seems like possibly a bug, in that even if we cover the\nmalloc issue, there's no API guarantee that OOM is the only possible\nreason for reporting failure.\n\n> There is a comment higher up in dict_snowball.c that seems to\n> use some handwaving about all this, or perhaps it is documenting\n> something else entirely. In any event, I find the documentation\n> about dictCtx insufficient to explain why this memory handling\n> is correct.\n\nFair complaint --- do you want to propose some new wording that\nreferences what header.h does?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 11:46:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory bug in dsnowball_lexize"
},
{
"msg_contents": "On Thu, May 23, 2019 at 8:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mark Dilger <hornschnorter@gmail.com> writes:\n> > In src/backend/snowball/libstemmer/utilities.c, 'create_s' uses\n> > malloc (not palloc) to allocate memory, and on memory exhaustion\n> > returns NULL rather than throwing an exception.\n>\n> Actually not, see macros in src/include/snowball/header.h.\n\nYou are correct. Thanks for the pointer.\n\n> > In src/backend/snowball/dict_snowball.c, 'dsnowball_lexize'\n> > calls 'SN_set_current' and ignores the return value, thereby\n> > failing to notice the error, if any.\n>\n> Hm. This seems like possibly a bug, in that even if we cover the\n> malloc issue, there's no API guarantee that OOM is the only possible\n> reason for reporting failure.\n\nOk, that sounds fair. Since the memory is being palloc'd, I suppose\nit would be safe to just ereport when the return value is -1?\n\n> > There is a comment higher up in dict_snowball.c that seems to\n> > use some handwaving about all this, or perhaps it is documenting\n> > something else entirely. In any event, I find the documentation\n> > about dictCtx insufficient to explain why this memory handling\n> > is correct.\n>\n> Fair complaint --- do you want to propose some new wording that\n> references what header.h does?\n\nPerhaps something along these lines?\n\n /*\n- * snowball saves alloced memory between calls, so we should\nrun it in our\n- * private memory context. Note, init function is executed in long lived\n- * context, so we just remember CurrentMemoryContext\n+ * snowball saves alloced memory between calls, which we force to be\n+ * allocated using palloc and friends via preprocessing macros (see\n+ * snowball/header.h), so we should run snowball in our private memory\n+ * context. Note, init function is executed in long lived\ncontext, so we\n+ * just remember CurrentMemoryContext.\n */\n\n\n",
"msg_date": "Thu, 23 May 2019 09:02:01 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory bug in dsnowball_lexize"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On Thu, May 23, 2019 at 8:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Mark Dilger <hornschnorter@gmail.com> writes:\n>>> In src/backend/snowball/dict_snowball.c, 'dsnowball_lexize'\n>>> calls 'SN_set_current' and ignores the return value, thereby\n>>> failing to notice the error, if any.\n\n>> Hm. This seems like possibly a bug, in that even if we cover the\n>> malloc issue, there's no API guarantee that OOM is the only possible\n>> reason for reporting failure.\n\n> Ok, that sounds fair. Since the memory is being palloc'd, I suppose\n> it would be safe to just ereport when the return value is -1?\n\nYeah ... I'd just make it an elog really, since whatever it is\nwould presumably not be a user-facing error.\n\n>> Fair complaint --- do you want to propose some new wording that\n>> references what header.h does?\n\n> Perhaps something along these lines?\n\nSeems reasonable, please include in patch covering the other thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 12:06:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory bug in dsnowball_lexize"
}
] |
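The C path discussed above can be exercised from SQL: ts_lexize() on a snowball dictionary calls dsnowball_lexize(), which in turn makes the SN_set_current() call whose return value the thread proposes to check.

```sql
-- english_stem is a built-in snowball dictionary; each call below goes
-- through dsnowball_lexize() and therefore SN_set_current().
SELECT ts_lexize('english_stem', 'relations');  -- expected: {relat}
SELECT ts_lexize('english_stem', 'stemming');   -- expected: {stem}
```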
[
{
"msg_contents": "Now that beta is out, I wanted to do some crash-recovery testing where I\ninject PANIC-inducing faults and see if it recovers correctly. A long-lived\nPerl process keeps track of what it should find after the crash, and\nverifies that it finds it. You will probably be familiar with the general\ntheme from examples like the threads below. Would anyone like to nominate\nsome areas to focus on? I think the pluggable storage refactoring work\nwill be get inherently tested, so I'm not planning designing test\nspecifically for that (unless there is a non-core plugin I should test\nwith). Making the ctid be tie-breakers in btree index is also tested\ninherently (plus I think Peter tested that pretty thoroughly himself with\nsimilar methods). I've already tested declarative partitioning where the\ntuples do a lot of migrating, and tested prepared transactions. Any other\nsuggestions for changes that might be risky and should be specifically\ntargeted for testing?\n\n\n\nhttps://www.postgresql.org/message-id/CAMkU=1xEUuBphDwDmB1WjN4+td4kpnEniFaTBxnk1xzHCw8_OQ@mail.gmail.com\n\n\nhttps://www.postgresql.org/message-id/CAMkU=1xBP8cqdS5eK8APHL=X6RHMMM2vG5g+QamduuTsyCwv9g@mail.gmail.com\n\n\nCheers,\n\nJeff\n\nNow that beta is out, I wanted to do some crash-recovery testing where I inject PANIC-inducing faults and see if it recovers correctly. A long-lived Perl process keeps track of what it should find after the crash, and verifies that it finds it. You will probably be familiar with the general theme from examples like the threads below. Would anyone like to nominate some areas to focus on? I think the pluggable storage refactoring work will be get inherently tested, so I'm not planning designing test specifically for that (unless there is a non-core plugin I should test with). Making the ctid be tie-breakers in btree index is also tested inherently (plus I think Peter tested that pretty thoroughly himself with similar methods). I've already tested declarative partitioning where the tuples do a lot of migrating, and tested prepared transactions. Any other suggestions for changes that might be risky and should be specifically targeted for testing? https://www.postgresql.org/message-id/CAMkU=1xEUuBphDwDmB1WjN4+td4kpnEniFaTBxnk1xzHCw8_OQ@mail.gmail.com https://www.postgresql.org/message-id/CAMkU=1xBP8cqdS5eK8APHL=X6RHMMM2vG5g+QamduuTsyCwv9g@mail.gmail.com Cheers,Jeff",
"msg_date": "Thu, 23 May 2019 11:24:12 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "crash testing suggestions for 12 beta 1"
},
{
"msg_contents": "On Thu, May 23, 2019 at 8:24 AM Jeff Janes <jeff.janes@gmail.com> wrote:\n> Now that beta is out, I wanted to do some crash-recovery testing where I inject PANIC-inducing faults and see if it recovers correctly.\n\nThank you for doing this. It's important work.\n\n> Making the ctid be tie-breakers in btree index is also tested inherently (plus I think Peter tested that pretty thoroughly himself with similar methods).\n\nAs you may know, the B-Tree code has a tendency to soldier on when an\nindex is corrupt. \"Moving right\" tends to conceal problems beyond\nconcurrent page splits. I didn't do very much fault injection type\ntesting with the B-Tree enhancements, but I did lean on amcheck\nheavily during development. Note that a new, extremely thorough option\ncalled \"rootdescend\" verification was added following the v12 work:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c1afd175b5b2e5c44f6da34988342e00ecdfb518\n\nIt probably wouldn't add noticeable overhead to use this during your\ntesting, and maybe to combine it with the \"heapallindexed\" option,\nwhile using the bt_index_parent_check() variant -- that will detect\nalmost any imaginable index corruption. Admittedly, amcheck didn't\nfind any bugs in my code after the first couple of versions of the\npatch series, so this approach seems unlikely to find any problems\nnow. Even still, it wouldn't be very difficult to do this extra step.\nIt seems worthwhile to be thorough here, given that we depend on the\nB-Tree code so heavily.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 May 2019 08:55:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: crash testing suggestions for 12 beta 1"
},
{
"msg_contents": "On 2019-May-23, Jeff Janes wrote:\n\n> Now that beta is out, I wanted to do some crash-recovery testing where I\n> inject PANIC-inducing faults and see if it recovers correctly. A long-lived\n> Perl process keeps track of what it should find after the crash, and\n> verifies that it finds it. You will probably be familiar with the general\n> theme from examples like the threads below. Would anyone like to nominate\n> some areas to focus on?\n\nThanks for the offer! Your work has showed its value in previous cycles.\n\nREINDEX CONCURRENTLY would be one good area to focus on, I think, as\nwell as ALTER TABLE ATTACH PARTITION. Maybe also INCLUDE columns in\nGiST, and the stuff in commits 9155580fd, fe280694d and 7df159a62.\n\nThanks,\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 17:11:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: crash testing suggestions for 12 beta 1"
},
{
"msg_contents": "On Wed, Jun 5, 2019 at 2:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> REINDEX CONCURRENTLY would be one good area to focus on, I think, as\n> well as ALTER TABLE ATTACH PARTITION. Maybe also INCLUDE columns in\n> GiST, and the stuff in commits 9155580fd, fe280694d and 7df159a62.\n\nThose all seem like good things to target.\n\nForgive me for droning on about amcheck once more, but maybe it'll\nhelp: amcheck has the capability to detect at least two historic bugs\nin CREATE INDEX CONCURRENTLY that made it into stable releases. The\n\"heapallindexed\" verification option's bt_tuple_present_callback()\nfunction has a header comment that talks about this. In short, any\n\"unhandled\" broken hot chain (i.e. broken hot chains that are somehow\nnot correctly detected and handled) should be reported as corrupt by\namcheck with the \"heapallindexed\" check, provided the tuple is visible\nto verification's heap scan.\n\nThe CREATE INDEX CONCURRENTLY bug that Pavan found a couple of years\nback while testing the WARM patch is one example. A bug that was\nfallout from the DROP INDEX CONCURRENTLY work is another historic\nexample. Alvaro will recall that this same check had a role in the\n\"freeze the dead\" business.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 5 Jun 2019 14:32:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: crash testing suggestions for 12 beta 1"
},
{
"msg_contents": "On Wed, Jun 05, 2019 at 02:32:49PM -0700, Peter Geoghegan wrote:\n> Forgive me for droning on about amcheck once more, but maybe it'll\n> help: amcheck has the capability to detect at least two historic bugs\n> in CREATE INDEX CONCURRENTLY that made it into stable releases. The\n> \"heapallindexed\" verification option's bt_tuple_present_callback()\n> function has a header comment that talks about this. In short, any\n> \"unhandled\" broken hot chain (i.e. broken hot chains that are somehow\n> not correctly detected and handled) should be reported as corrupt by\n> amcheck with the \"heapallindexed\" check, provided the tuple is visible\n> to verification's heap scan.\n> \n> The CREATE INDEX CONCURRENTLY bug that Pavan found a couple of years\n> back while testing the WARM patch is one example. A bug that was\n> fallout from the DROP INDEX CONCURRENTLY work is another historic\n> example. Alvaro will recall that this same check had a role in the\n> \"freeze the dead\" business.\n\nREINDEX CONCURRENTLY is mostly a mapping of CREATE INDEX CONCURRENTLY\n+ relation swapping + DROP INDEX CONCURRENTLY separated by multiple\ntransactions. In my opinion, the swapping part which renames the\nindexes and switches the dependencies is the most interesting of the\nwhole set because that's completely new.\n\nAre you planning to make sanity checks using pg_catcheck or such?\n--\nMichael",
"msg_date": "Thu, 6 Jun 2019 16:31:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: crash testing suggestions for 12 beta 1"
}
] |
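The verification Peter recommends, written out as it can be run on a v12 beta; the index name is a placeholder for whichever index the crash testing exercises.

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Parent-level B-Tree verification with the "heapallindexed" option and the
-- "rootdescend" option added in v12; 'my_index' is a placeholder name.
SELECT bt_index_parent_check('my_index'::regclass,
                             heapallindexed => true,
                             rootdescend => true);
```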
[
{
"msg_contents": "Hackers,\n\nIn src/backend/storage/ipc/barrier.c, BarrierAttach\ngoes to the bother of storing the phase before\nreleasing the spinlock, and then returns the phase.\n\nIn nodeHash.c, ExecHashTableCreate ignores the\nphase returned by BarrierAttach, and then immediately\ncalls BarrierPhase to get the phase that it just ignored.\nI don't know that there is anything wrong with this, but\nif the phase can be retrieved after the spinlock is\nreleased, why hold the spinlock extra long in\nBarrierAttach?\n\nJust asking....\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 09:10:35 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Question about BarrierAttach spinlock"
},
{
"msg_contents": "On Fri, May 24, 2019 at 4:10 AM Mark Dilger <hornschnorter@gmail.com> wrote:\n> In src/backend/storage/ipc/barrier.c, BarrierAttach\n> goes to the bother of storing the phase before\n> releasing the spinlock, and then returns the phase.\n>\n> In nodeHash.c, ExecHashTableCreate ignores the\n> phase returned by BarrierAttach, and then immediately\n> calls BarrierPhase to get the phase that it just ignored.\n> I don't know that there is anything wrong with this, but\n> if the phase can be retrieved after the spinlock is\n> released, why hold the spinlock extra long in\n> BarrierAttach?\n>\n> Just asking....\n\nWell spotted. I think you're right, and we could release the spinlock\na nanosecond earlier. It must be safe to move that assignment, for\nthe reason explained in the comment of BarrierPhase(): after we\nrelease the spinlock, we are attached, and the phase cannot advance\nwithout us. I will contemplate moving that for v13 on principle.\n\nAs for why ExecHashTableCreate() calls BarrierAttach(build_barrier)\nand then immediately calls BarrierPhase(build_barrier), I suppose I\ncould remove the BarrierAttach() line and change the BarrierPhase()\ncall to BarrierAttach(), though I think that'd be slightly harder to\nfollow. I suppose I could introduce a variable phase.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 May 2019 10:43:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about BarrierAttach spinlock"
},
{
"msg_contents": "On Thu, May 23, 2019 at 3:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, May 24, 2019 at 4:10 AM Mark Dilger <hornschnorter@gmail.com> wrote:\n> > In src/backend/storage/ipc/barrier.c, BarrierAttach\n> > goes to the bother of storing the phase before\n> > releasing the spinlock, and then returns the phase.\n> >\n> > In nodeHash.c, ExecHashTableCreate ignores the\n> > phase returned by BarrierAttach, and then immediately\n> > calls BarrierPhase to get the phase that it just ignored.\n> > I don't know that there is anything wrong with this, but\n> > if the phase can be retrieved after the spinlock is\n> > released, why hold the spinlock extra long in\n> > BarrierAttach?\n> >\n> > Just asking....\n>\n> Well spotted. I think you're right, and we could release the spinlock\n> a nanosecond earlier. It must be safe to move that assignment, for\n> the reason explained in the comment of BarrierPhase(): after we\n> release the spinlock, we are attached, and the phase cannot advance\n> without us. I will contemplate moving that for v13 on principle.\n>\n> As for why ExecHashTableCreate() calls BarrierAttach(build_barrier)\n> and then immediately calls BarrierPhase(build_barrier), I suppose I\n> could remove the BarrierAttach() line and change the BarrierPhase()\n> call to BarrierAttach(), though I think that'd be slightly harder to\n> follow. I suppose I could introduce a variable phase.\n\nThanks for the explanation!\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 15:47:11 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about BarrierAttach spinlock"
}
] |
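For readers who want to reach the code path in question, a sketch under arbitrary assumptions about table size: ExecHashTableCreate() attaches to the build barrier only when a Parallel Hash join is chosen, so forcing that plan shape exercises the BarrierAttach()/BarrierPhase() sequence discussed above.

```sql
-- Hypothetical table, made large enough that the planner should choose a
-- Parallel Hash join, whose build barrier ExecHashTableCreate() attaches to.
CREATE TABLE barrier_demo AS
SELECT g AS id, md5(g::text) AS pad FROM generate_series(1, 1000000) g;
ANALYZE barrier_demo;

SET max_parallel_workers_per_gather = 2;
SET enable_parallel_hash = on;  -- the default since v11

EXPLAIN (COSTS OFF)
SELECT count(*) FROM barrier_demo a JOIN barrier_demo b USING (id);
```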
[
{
"msg_contents": "Hi,\n\nI noticed that v12 release notes is referencing the wrong GUC. It\nshould be recovery_target_timeline instead of recovery_target_time.\nPatch attached.\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Thu, 23 May 2019 14:26:13 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": true,
"msg_subject": "Fix link for v12"
},
{
"msg_contents": "On Thu, May 23, 2019 at 10:56 PM Euler Taveira <euler@timbira.com.br> wrote:\n>\n> Hi,\n>\n> I noticed that v12 release notes is referencing the wrong GUC. It\n> should be recovery_target_timeline instead of recovery_target_time.\n> Patch attached.\n>\n\nYour patch looks correct to me. I will commit it in some time unless\nsomeone does it before or sees any problem with me committing this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 May 2019 08:43:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix link for v12"
},
{
"msg_contents": "On Sat, May 25, 2019 at 8:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 23, 2019 at 10:56 PM Euler Taveira <euler@timbira.com.br> wrote:\n> >\n> > Hi,\n> >\n> > I noticed that v12 release notes is referencing the wrong GUC. It\n> > should be recovery_target_timeline instead of recovery_target_time.\n> > Patch attached.\n> >\n>\n> Your patch looks correct to me. I will commit it in some time unless\n> someone does it before or sees any problem with me committing this.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 May 2019 15:44:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix link for v12"
}
] |
[
{
"msg_contents": "Hackers,\n\nI have seen other lengthy discussions about fsync semantics, and if this\nquestion is being addressed there, this question might not be relevant.\n\nTwo calls to durable_unlink at log level DEBUG1 are ignoring the\nreturn value. Other calls at ERROR and FATAL are likewise ignoring\nthe return value, though those make perfect sense to me. There may\nbe a reason why logging a debug message inside durable_unlink and\ncontinuing along is safe, but it is not clear from the structure of the\ncode.\n\nIn InstallXLogFileSegment, durable_unlink(path, DEBUG1) is called\nwithout the return value being checked followed by a call to\ndurable_link_or_rename, and perhaps that second call works\nwhether the durable_unlink succeeded or failed, but the logic of that is\nnot at all clear.\n\nIn do_pg_stop_backup, durable_unlink(TABLESPACE_MAP, DEBUG1)\nis similarly called without the return value being checked.\n\nThis code appears to have been changed in\n1b02be21f271db6bd3cd43abb23fa596fcb6bac3.\n\nIs this code safe against fsync failures? If so, can I get an explanation\nthat I might put into a code comment patch?\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 10:46:02 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "fsync failure in durable_unlink ignored in xlog.c?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 10:46:02 -0700, Mark Dilger wrote:\n> I have seen other lengthy discussions about fsync semantics, and if this\n> question is being addressed there, this question might not be relevant.\n> \n> Two calls to durable_unlink at log level DEBUG1 are ignoring the\n> return value. Other calls at ERROR and FATAL are likewise ignoring\n> the return value, though those make perfect sense to me. There may\n> be a reason why logging a debug message inside durable_unlink and\n> continuing along is safe, but it is not clear from the structure of the\n> code.\n> \n> In InstallXLogFileSegment, durable_unlink(path, DEBUG1) is called\n> without the return value being checked followed by a call to\n> durable_link_or_rename, and perhaps that second call works\n> whether the durable_unlink succeeded or failed, but the logic of that is\n> not at all clear.\n> \n> In do_pg_stop_backup, durable_unlink(TABLESPACE_MAP, DEBUG1)\n> is similarly called without the return value being checked.\n> \n> This code appears to have been changed in\n> 1b02be21f271db6bd3cd43abb23fa596fcb6bac3.\n> \n> Is this code safe against fsync failures? If so, can I get an explanation\n> that I might put into a code comment patch?\n\nWhat's the danger you're thinking of here? The issue with ignoring fsync\nfailures is that it could be the one signal about data corruption we get\nfor a write()/fsync() that failed - i.e. that durability cannot be\nguaranteed. But we don't care about the file contents of those files.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 10:55:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fsync failure in durable_unlink ignored in xlog.c?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2019-05-23 10:46:02 -0700, Mark Dilger wrote:\n>> Is this code safe against fsync failures? If so, can I get an explanation\n>> that I might put into a code comment patch?\n\n> What's the danger you're thinking of here? The issue with ignoring fsync\n> failures is that it could be the one signal about data corruption we get\n> for a write()/fsync() that failed - i.e. that durability cannot be\n> guaranteed. But we don't care about the file contents of those files.\n\nHmm ... if we don't care, why are we issuing an fsync at all?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 14:06:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fsync failure in durable_unlink ignored in xlog.c?"
},
{
"msg_contents": "On Thu, May 23, 2019 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-23 10:46:02 -0700, Mark Dilger wrote:\n> >> Is this code safe against fsync failures? If so, can I get an explanation\n> >> that I might put into a code comment patch?\n>\n> > What's the danger you're thinking of here? The issue with ignoring fsync\n> > failures is that it could be the one signal about data corruption we get\n> > for a write()/fsync() that failed - i.e. that durability cannot be\n> > guaranteed. But we don't care about the file contents of those files.\n>\n> Hmm ... if we don't care, why are we issuing an fsync at all?\n\nTom's question is about as far as my logic went. It seemed the fsync\nmust be important, or the author of this code wouldn't have put such\nan expensive operation in that spot, and if so, then how can it be safe\nto ignore whether the fsync returned an error. Beyond that, I do not\nhave a specific danger in mind.\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 11:14:13 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fsync failure in durable_unlink ignored in xlog.c?"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 14:06:57 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-05-23 10:46:02 -0700, Mark Dilger wrote:\n> >> Is this code safe against fsync failures? If so, can I get an explanation\n> >> that I might put into a code comment patch?\n> \n> > What's the danger you're thinking of here? The issue with ignoring fsync\n> > failures is that it could be the one signal about data corruption we get\n> > for a write()/fsync() that failed - i.e. that durability cannot be\n> > guaranteed. But we don't care about the file contents of those files.\n> \n> Hmm ... if we don't care, why are we issuing an fsync at all?\n\nFair point. I think we do care in most of those cases, but we don't need\nto trigger a PANIC. We'd be in trouble if e.g. an older tablespace map\nfile were to \"revive\" later. Looking at the cases:\n\n- durable_unlink(TABLESPACE_MAP, DEBUG1) - we definitely care about a\n failure to unlink/remove, but *not* about ENOENT, because that's expected.\n\n- /* Force installation: get rid of any pre-existing segment file */\n durable_unlink(path, DEBUG1);\n\n same.\n\n- RemoveXlogFile():\n rc = durable_unlink(path, LOG);\n\n It's probably *tolerable* to fail here. Not sure why this is a\n durable_unlink(LOG) - doesn't make a ton of sense to me.\n\n- durable_unlink(BACKUP_LABEL_FILE, ERROR);\n\n This is a \"whaa, bad shit is happening\" kind of situation. But\n crashing probably would make it even worse, because we'd restart\n assuming we're restoring from a backup.\n\nISTM that durable_unlink() for the first two cases really needs a\nseparate 'missing_ok' type argument. And that the reason for using\nDEBUG1 here is solely an outcome of that not existing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 11:18:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: fsync failure in durable_unlink ignored in xlog.c?"
}
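For reference, a minimal sketch of the "missing_ok" variant of durable_unlink() that Andres floats above. This is illustrative only, not a committed API; it assumes fd.c's existing fsync_parent_path() helper and the usual ereport() conventions:

```c
/*
 * Hypothetical fd.c-style helper: like durable_unlink(), but with a
 * missing_ok flag so callers such as InstallXLogFileSegment() and
 * do_pg_stop_backup() no longer need to pass DEBUG1 just to tolerate a
 * file that was never there.  Returns 0 on success, -1 on failure
 * (having reported at the given elevel).
 */
static int
durable_unlink_ext(const char *fname, int elevel, bool missing_ok)
{
	if (unlink(fname) < 0)
	{
		if (missing_ok && errno == ENOENT)
			return 0;			/* expected absence is not an error */

		ereport(elevel,
				(errcode_for_file_access(),
				 errmsg("could not remove file \"%s\": %m", fname)));
		return -1;
	}

	/*
	 * To guarantee that the removal survives a crash, fsync the file's
	 * parent directory (fsync_parent_path() already exists in fd.c).
	 */
	if (fsync_parent_path(fname, elevel) != 0)
		return -1;

	return 0;
}
```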
] |
[
{
"msg_contents": "Hackers,\n\nI only see three invocations of ClosePipeStream in the sources.\nIn two of them, the return value is checked and an error is raised\nif it failed. In the third, the error (if any) is squashed.\n\nI don't know if a pipe stream over \"locale -a\" could ever fail to\nclose, but it seems sensible to log an error if it does.\n\nThoughts?\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 11:29:15 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "ClosePipeStream failure ignored in pg_import_system_collations"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> I only see three invocations of ClosePipeStream in the sources.\n> In two of them, the return value is checked and an error is raised\n> if it failed. In the third, the error (if any) is squashed.\n\n> I don't know if a pipe stream over \"locale -a\" could ever fail to\n> close, but it seems sensible to log an error if it does.\n\nThe concrete case where that's an issue, I think, is that \"locale -a\"\nfails, possibly after outputting a few locale names. The only report\nwe get about that is a failure indication from ClosePipeStream.\nAs things stand we just silently push on, creating no or a few collations.\nWith a check, we'd error out ... causing initdb to fail altogether.\n\nMaybe that's an overreaction; I'm not sure. Perhaps the right\nthing is just to issue a warning? But ignoring it completely\nseems bad.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 18:23:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ClosePipeStream failure ignored in pg_import_system_collations"
},
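For concreteness, a rough sketch of the warning approach discussed above, loosely modeled on pg_import_system_collations(); the loop body is elided and the exact message wording is made up:

```c
	FILE	   *locale_a_handle;
	char		localebuf[NAMEDATALEN];

	locale_a_handle = OpenPipeStream("locale -a", "r");
	if (locale_a_handle == NULL)
		ereport(ERROR,
				(errcode_for_file_access(),
				 errmsg("could not execute command \"%s\": %m", "locale -a")));

	while (fgets(localebuf, sizeof(localebuf), locale_a_handle))
	{
		/* ... validate each locale name and create a collation ... */
	}

	/*
	 * A nonzero result here most likely means "locale -a" itself failed,
	 * possibly after emitting only some locale names.  Warn rather than
	 * error, so a partial import does not abort initdb outright.
	 */
	if (ClosePipeStream(locale_a_handle) != 0)
		ereport(WARNING,
				(errmsg("command \"%s\" failed", "locale -a")));
```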
{
"msg_contents": "On Thu, May 23, 2019 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mark Dilger <hornschnorter@gmail.com> writes:\n> > I only see three invocations of ClosePipeStream in the sources.\n> > In two of them, the return value is checked and an error is raised\n> > if it failed. In the third, the error (if any) is squashed.\n>\n> > I don't know if a pipe stream over \"locale -a\" could ever fail to\n> > close, but it seems sensible to log an error if it does.\n>\n> The concrete case where that's an issue, I think, is that \"locale -a\"\n> fails, possibly after outputting a few locale names. The only report\n> we get about that is a failure indication from ClosePipeStream.\n> As things stand we just silently push on, creating no or a few collations.\n> With a check, we'd error out ... causing initdb to fail altogether.\n>\n> Maybe that's an overreaction; I'm not sure. Perhaps the right\n> thing is just to issue a warning? But ignoring it completely\n> seems bad.\n\nAnother option is to retry the \"locale -a\" call, perhaps after sleeping\na short while, but I have no idea how likely a second (or third...) call\nto \"locale -a\" is to succeed if the prior call failed, mostly because I\ndon't have a clear idea why it would fail the first time.\n\nI would prefer initdb to fail, and fail loudly, rather than warning and\nmoving on, but I can imagine production systems which are set up\nin a way where that would be painful. Perhaps somebody with such\na setup will respond?\n\nmark\n\n\n",
"msg_date": "Thu, 23 May 2019 15:36:38 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ClosePipeStream failure ignored in pg_import_system_collations"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> On Thu, May 23, 2019 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The concrete case where that's an issue, I think, is that \"locale -a\"\n>> fails, possibly after outputting a few locale names. The only report\n>> we get about that is a failure indication from ClosePipeStream.\n>> As things stand we just silently push on, creating no or a few collations.\n>> With a check, we'd error out ... causing initdb to fail altogether.\n>> Maybe that's an overreaction; I'm not sure. Perhaps the right\n>> thing is just to issue a warning? But ignoring it completely\n>> seems bad.\n\n> Another option is to retry the \"locale -a\" call, perhaps after sleeping\n> a short while, but I have no idea how likely a second (or third...) call\n> to \"locale -a\" is to succeed if the prior call failed, mostly because I\n> don't have a clear idea why it would fail the first time.\n\nI doubt that retrying would be of any value; in a resource-exhaustion\nsituation you might as well just redo the whole initdb. The main case\nI can think of where you'd get a hard failure is \"/usr/bin/locale not\ninstalled\". I have no idea whether there are any platforms where that\nwould be a likely situation. On my Linux machines it seems to be part of\nglibc-common, so there's basically 0 chance ... but I can imagine that\nother platforms with a more stripped-down mentality might allow it to\nnot be present.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 18:45:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ClosePipeStream failure ignored in pg_import_system_collations"
}
] |
[
{
"msg_contents": "Hi,\n\nRight now we don't indicate that a top-n sort is going to be used in\nEXPLAIN, just EXPLAIN ANALYZE. That's imo suboptimal, because one quite\nlegitimately might want to know that before actually executing (as it\nwill make a huge amount of difference in the actual resource intensity\nof the query).\n\npostgres[28165][1]=# EXPLAIN (VERBOSE) SELECT * FROM hashes ORDER BY count DESC LIMIT 10;\n┌───────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├───────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Limit (cost=12419057.53..12419058.70 rows=10 width=45) │\n│ Output: hash, count │\n│ -> Gather Merge (cost=12419057.53..66041805.65 rows=459591466 width=45) │\n│ Output: hash, count │\n│ Workers Planned: 2 │\n│ -> Sort (cost=12418057.51..12992546.84 rows=229795733 width=45) │\n│ Output: hash, count │\n│ Sort Key: hashes.count DESC │\n│ -> Parallel Seq Scan on public.hashes (cost=0.00..7452254.33 rows=229795733 width=45) │\n│ Output: hash, count │\n└───────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(10 rows)\n\npostgres[28165][1]=# EXPLAIN (VERBOSE, ANALYZE) SELECT * FROM hashes ORDER BY count DESC LIMIT 10;\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN │\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ Limit (cost=12419057.53..12419058.70 rows=10 width=45) (actual time=115204.278..115205.024 rows=10 loops=1) │\n│ Output: hash, count │\n│ -> Gather Merge (cost=12419057.53..66041805.65 rows=459591466 width=45) (actual time=115204.276..115205.020 rows=10 loops=1) │\n│ Output: hash, count │\n│ Workers Planned: 2 │\n│ Workers Launched: 2 │\n│ -> Sort (cost=12418057.51..12992546.84 rows=229795733 width=45) (actual time=115192.189..115192.189 rows=7 loops=3) │\n│ Output: hash, count │\n│ Sort Key: hashes.count DESC │\n│ Sort Method: top-N heapsort Memory: 25kB │\n│ Worker 0: Sort Method: top-N heapsort Memory: 25kB │\n│ Worker 1: Sort Method: top-N heapsort Memory: 25kB │\n│ Worker 0: actual time=115186.558..115186.559 rows=10 loops=1 │\n│ Worker 1: actual time=115186.540..115186.540 rows=10 loops=1 │\n│ -> Parallel Seq Scan on public.hashes (cost=0.00..7452254.33 rows=229795733 width=45) (actual time=0.080..90442.215 rows=183836589 loops=3) │\n│ Output: hash, count │\n│ Worker 0: actual time=0.111..90366.999 rows=183976442 loops=1 │\n│ Worker 1: actual time=0.107..90461.921 rows=184707680 loops=1 │\n│ Planning Time: 0.121 ms │\n│ Execution Time: 115205.053 ms │\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\n(20 rows)\n\nIt's also noticable that we preposterously assume that the sort actually\nwill return exactly the number of rows in the table, despite being a\ntop-n style sort. That seems bad for costing of the parallel query,\nbecause it think we'll assume that costs tups * parallel_tuple_cost?\n\nI'm also unclear as to why the Gather Merge ends up with twice as\nmany estimated rows as there are in the table.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 12:22:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Top-N sorts in EXPLAIN, row count estimates, and parallelism"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Right now we don't indicate that a top-n sort is going to be used in\n> EXPLAIN, just EXPLAIN ANALYZE.\n\nGiven the way that's implemented, I doubt that we can report it\nreliably in EXPLAIN.\n\n> It's also noticable that we preposterously assume that the sort actually\n> will return exactly the number of rows in the table, despite being a\n> top-n style sort.\n\nIn general, we report nodes below LIMIT with their execute-to-completion\ncost and rowcount estimates. Doing differently for a top-N sort would\nbe quite confusing, I should think.\n\n> That seems bad for costing of the parallel query,\n> because it think we'll assume that costs tups * parallel_tuple_cost?\n\nIf the parallel query stuff doesn't understand about LIMIT, that's\na bug independently of top-N sorts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 May 2019 18:31:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Top-N sorts in EXPLAIN, row count estimates, and parallelism"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-23 18:31:43 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It's also noticable that we preposterously assume that the sort actually\n> > will return exactly the number of rows in the table, despite being a\n> > top-n style sort.\n> \n> In general, we report nodes below LIMIT with their execute-to-completion\n> cost and rowcount estimates. Doing differently for a top-N sort would\n> be quite confusing, I should think.\n\nI'm not quite sure that's true. I mean, a top-N sort wouldn't actually\nnecessarily return all the input rows, even if run to completion. Isn't\nthat a somewhat fundamental difference?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 May 2019 15:36:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Top-N sorts in EXPLAIN, row count estimates, and parallelism"
},
{
"msg_contents": "On Thu, May 23, 2019 at 3:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Given the way that's implemented, I doubt that we can report it\n> reliably in EXPLAIN.\n\nDoes it have to be totally reliable?\n\ncost_sort() costs sorts as top-N heapsorts. While we cannot make an\niron-clad guarantee that it will work out that way from within\ntuplesort.c, that doesn't seem like it closes off the possibility of\nmore informative EXPLAIN output. For example, can't we at report that\nthe tuplesort will be \"bounded\" within EXPLAIN, indicating that we\nintend to attempt to sort using a top-N heap sort (i.e. we'll\ndefinitely do it that way if there is sufficient work_mem)?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 23 May 2019 15:43:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Top-N sorts in EXPLAIN, row count estimates, and parallelism"
},
{
"msg_contents": "On Fri, 24 May 2019 at 10:44, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, May 23, 2019 at 3:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Given the way that's implemented, I doubt that we can report it\n> > reliably in EXPLAIN.\n>\n> Does it have to be totally reliable?\n>\n> cost_sort() costs sorts as top-N heapsorts. While we cannot make an\n> iron-clad guarantee that it will work out that way from within\n> tuplesort.c, that doesn't seem like it closes off the possibility of\n> more informative EXPLAIN output. For example, can't we at report that\n> the tuplesort will be \"bounded\" within EXPLAIN, indicating that we\n> intend to attempt to sort using a top-N heap sort (i.e. we'll\n> definitely do it that way if there is sufficient work_mem)?\n\nI think this really needs more of a concrete proposal. Remember\nLIMIT/OFFSET don't need to be constants, they could be a Param or some\nreturn value from a subquery, so the bound might not be known until\nafter executor startup, to which EXPLAIN is not going to get to know\nabout that.\n\nPerhaps something to be tagged onto the Sort path in grouping_planner\nif preprocess_limit() managed to come up with a value. double does\nnot seem like the perfect choice for a bound to show in EXPLAIN and\nint64 could wrap for very high LIMIT + OFFSET values. Showing an\napproximate value in EXPLAIN seems like it might be a source of future\nbug reports. Perhaps if we did it, we could just set it to -1\n(unknown) if LIMIT + OFFSET happened to wrap an int64. Implementing\nthat would seem to require adding a new field for that in SortPath,\nSort and SortState, plus all the additional code for passing the value\nover. We'd have to then hope nobody used the field for anything\nimportant in the future.\n\nAfter that, what would we do with it in EXPLAIN? Always show \"Bound:\n<n>\", if it's not -1?\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 24 May 2019 14:47:56 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Top-N sorts in EXPLAIN, row count estimates, and parallelism"
},
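To make the proposal concrete, a purely hypothetical sketch of what David describes: a new "bound" field on the Sort plan node (no such field exists today), set from the estimated LIMIT + OFFSET in grouping_planner() and left at -1 when the bound is unknown at plan time, which explain.c could then print:

```c
/*
 * Hypothetical addition to explain.c.  "bound" is the assumed new int64
 * field on Sort; ExplainPropertyInteger() is the existing output helper.
 */
static void
show_sort_bound(Sort *sortplan, ExplainState *es)
{
	/* Only show "Bound: <n>" when the bound was known at plan time. */
	if (sortplan->bound >= 0)
		ExplainPropertyInteger("Bound", NULL, sortplan->bound, es);
}
```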
{
"msg_contents": "On Thu, May 23, 2019 at 7:48 PM David Rowley\n<david.rowley@2ndquadrant.com> wrote:\n> > cost_sort() costs sorts as top-N heapsorts. While we cannot make an\n> > iron-clad guarantee that it will work out that way from within\n> > tuplesort.c, that doesn't seem like it closes off the possibility of\n> > more informative EXPLAIN output. For example, can't we at report that\n> > the tuplesort will be \"bounded\" within EXPLAIN, indicating that we\n> > intend to attempt to sort using a top-N heap sort (i.e. we'll\n> > definitely do it that way if there is sufficient work_mem)?\n>\n> I think this really needs more of a concrete proposal. Remember\n> LIMIT/OFFSET don't need to be constants, they could be a Param or some\n> return value from a subquery, so the bound might not be known until\n> after executor startup, to which EXPLAIN is not going to get to know\n> about that.\n\nI was merely pointing out that it is clear when a sort *could* be a\ntop-n sort, which could be exposed by EXPLAIN without anyone feeling\nmisled.\n\n> After that, what would we do with it in EXPLAIN? Always show \"Bound:\n> <n>\", if it's not -1?\n\nI'm not sure.\n\nThe distinction between a top-n sort and any other sort is an\nimportant one (it's certainly more important than the distinction\nbetween an internal and external sort), so it's worth being flexible\nin order to expose more information in EXPLAIN output. I would be\nwilling to accept some kind of qualified or hedged description in the\nEXPLAIN output for a bounded sort node, even though that approach\ndoesn't seem desirable in general.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 5 Jun 2019 17:45:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Top-N sorts in EXPLAIN, row count estimates, and parallelism"
}
] |
[
{
"msg_contents": "It appears there is no mention of lack of support for CREATE INDEX\nCONCURRENTLY on partitioned index in the documents.\n\nAdded in the attached.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Fri, 24 May 2019 11:14:14 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "No mention of no CIC support for partitioned index in docs"
},
{
"msg_contents": "On 2019-May-24, David Rowley wrote:\n\n> It appears there is no mention of lack of support for CREATE INDEX\n> CONCURRENTLY on partitioned index in the documents.\n\nI'll leave this one for you to handle, thanks.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Jun 2019 16:10:28 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: No mention of no CIC support for partitioned index in docs"
},
{
"msg_contents": "On Wed, 5 Jun 2019 at 08:10, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-May-24, David Rowley wrote:\n>\n> > It appears there is no mention of lack of support for CREATE INDEX\n> > CONCURRENTLY on partitioned index in the documents.\n>\n> I'll leave this one for you to handle, thanks.\n\nThanks. I've just pushed something.\n\nI ended up deciding that we owe the user a bit more of an explanation\nof how they might work around the problem. Of course, a partitioned\ntable index build is likely to take much longer than an index build on\na normal table, since most likely a partitioned table is larger. So I\nwent on to explain how they might minimise the time where writers will\nbe blocked by creating indexes concurrently on each partition first.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 6 Jun 2019 12:41:03 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: No mention of no CIC support for partitioned index in docs"
}
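For reference, the workaround David describes can be spelled out as follows (table and index names here are invented; the pattern follows the CREATE INDEX documentation for partitioned tables):

```sql
-- 1. Create the index on the partitioned table only; it starts out
--    invalid because the partitions are not indexed yet.
CREATE INDEX measurements_ts_idx ON ONLY measurements (ts);

-- 2. Build each partition's index concurrently, so writers are blocked
--    only briefly per partition rather than for the whole build.
CREATE INDEX CONCURRENTLY measurements_p1_ts_idx ON measurements_p1 (ts);
CREATE INDEX CONCURRENTLY measurements_p2_ts_idx ON measurements_p2 (ts);

-- 3. Attach the partition indexes; once every partition is attached,
--    the parent index automatically becomes valid.
ALTER INDEX measurements_ts_idx ATTACH PARTITION measurements_p1_ts_idx;
ALTER INDEX measurements_ts_idx ATTACH PARTITION measurements_p2_ts_idx;
```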
] |
[
{
"msg_contents": "Hi,\n\nI compared two data structures realistically by time, after estimating big\nO. T-tree outperforms b-tree, which is commonly used, for a medium size\ntable. Lehmann and Carey showed the same, earlier.\n\nCan you improve indexing by this?\n\nUnderstandably\n\nSascha Kuhl",
"msg_date": "Fri, 24 May 2019 04:31:20 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Indexing - comparison of tree structures"
},
{
"msg_contents": "T-tree (and variants) are index types commonly associated with in-memory\ndatabase management systems and rarely, if-ever, used with on-disk\ndatabases. There has been a lot of research in regard to more modern cache\nconscious/oblivious b-trees that perform equally or better than t-tree.\nWhat’s the use-case?\n\nOn Fri, May 24, 2019 at 5:38 AM Sascha Kuhl <yogidabanli@gmail.com> wrote:\n\n> Hi,\n>\n> I compared two data structures realistically by time, after estimating big\n> O. T-tree outperforms b-tree, which is commonly used, for a medium size\n> table. Lehmann and Carey showed the same, earlier.\n>\n> Can you improve indexing by this?\n>\n> Understandably\n>\n> Sascha Kuhl\n>\n-- \nJonah H. Harris\n\nT-tree (and variants) are index types commonly associated with in-memory database management systems and rarely, if-ever, used with on-disk databases. There has been a lot of research in regard to more modern cache conscious/oblivious b-trees that perform equally or better than t-tree. What’s the use-case?On Fri, May 24, 2019 at 5:38 AM Sascha Kuhl <yogidabanli@gmail.com> wrote:Hi,I compared two data structures realistically by time, after estimating big O. T-tree outperforms b-tree, which is commonly used, for a medium size table. Lehmann and Carey showed the same, earlier.Can you improve indexing by this?UnderstandablySascha Kuhl\n-- Jonah H. Harris",
"msg_date": "Fri, 24 May 2019 20:15:38 -0400",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Indexing - comparison of tree structures"
},
{
"msg_contents": "Where I can I find research on trees and indexing related to postgresql?\n\nSascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 11:14:\n\n> Can you bring me to the research showing b-tree is equally performant? Is\n> postgres taking this research into account?\n>\n> Jonah H. Harris <jonah.harris@gmail.com> schrieb am Sa., 25. Mai 2019,\n> 02:15:\n>\n>> T-tree (and variants) are index types commonly associated with in-memory\n>> database management systems and rarely, if-ever, used with on-disk\n>> databases. There has been a lot of research in regard to more modern cache\n>> conscious/oblivious b-trees that perform equally or better than t-tree.\n>> What’s the use-case?\n>>\n>> On Fri, May 24, 2019 at 5:38 AM Sascha Kuhl <yogidabanli@gmail.com>\n>> wrote:\n>>\n>>> Hi,\n>>>\n>>> I compared two data structures realistically by time, after estimating\n>>> big O. T-tree outperforms b-tree, which is commonly used, for a medium size\n>>> table. Lehmann and Carey showed the same, earlier.\n>>>\n>>> Can you improve indexing by this?\n>>>\n>>> Understandably\n>>>\n>>> Sascha Kuhl\n>>>\n>> --\n>> Jonah H. Harris\n>>\n>>\n\nWhere I can I find research on trees and indexing related to postgresql?Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 11:14:Can you bring me to the research showing b-tree is equally performant? Is postgres taking this research into account?Jonah H. Harris <jonah.harris@gmail.com> schrieb am Sa., 25. Mai 2019, 02:15:T-tree (and variants) are index types commonly associated with in-memory database management systems and rarely, if-ever, used with on-disk databases. There has been a lot of research in regard to more modern cache conscious/oblivious b-trees that perform equally or better than t-tree. What’s the use-case?On Fri, May 24, 2019 at 5:38 AM Sascha Kuhl <yogidabanli@gmail.com> wrote:Hi,I compared two data structures realistically by time, after estimating big O. T-tree outperforms b-tree, which is commonly used, for a medium size table. Lehmann and Carey showed the same, earlier.Can you improve indexing by this?UnderstandablySascha Kuhl\n-- Jonah H. Harris",
"msg_date": "Mon, 27 May 2019 12:40:07 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Indexing - comparison of tree structures"
},
{
"msg_contents": "Dear moderator,\n\nCan you inform me after you (as a mailing list) have changed something\nrelated to my work. I like to keep track of my success.\n\nRegards\n\nSascha Kuhl\n\nSascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 16:07:\n\n> Would not\n>\n> Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 16:06:\n>\n>> To give you another fair hint: big O estimations would have revealed such\n>> a difference.\n>>\n>> Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 14:06:\n>>\n>>> I understand that changing is never easy\n>>>\n>>> Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 13:52:\n>>>\n>>>> You don't have to be rude: social communication is higher than looking\n>>>> and studying in a mailing list db. For me, at least;)\n>>>>\n>>>> Thanks for the direction and permission (I'm respectful with the work\n>>>> of others)\n>>>>\n>>>> Jonah H. Harris <jonah.harris@gmail.com> schrieb am Mo., 27. Mai 2019,\n>>>> 13:05:\n>>>>\n>>>>> On Mon, May 27, 2019 at 5:14 AM Sascha Kuhl <yogidabanli@gmail.com>\n>>>>> wrote:\n>>>>>\n>>>>>> Can you bring me to the research showing b-tree is equally\n>>>>>> performant? Is postgres taking this research into account?\n>>>>>>\n>>>>>\n>>>>> Not trying to be rude, but you've been asking rather general\n>>>>> questions; our mailing list is archived, searchable, and probably a better\n>>>>> use of everyone's time for you to consult prior to posting. Per your\n>>>>> question, to my knowledge, there is no active work on changing our primary\n>>>>> b-tree index structure, which is based on Lehman and Yao's b-link tree.\n>>>>> Given the maturity of our current implementation, I think it would be\n>>>>> rather difficult to improve upon it in terms of performance, especially\n>>>>> considering concurrency-related issues.\n>>>>>\n>>>>> --\n>>>>> Jonah H. Harris\n>>>>>\n>>>>>\n\nDear moderator,Can you inform me after you (as a mailing list) have changed something related to my work. I like to keep track of my success.RegardsSascha KuhlSascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 16:07:Would notSascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 16:06:To give you another fair hint: big O estimations would have revealed such a difference.Sascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 14:06:I understand that changing is never easySascha Kuhl <yogidabanli@gmail.com> schrieb am Mo., 27. Mai 2019, 13:52:You don't have to be rude: social communication is higher than looking and studying in a mailing list db. For me, at least;) Thanks for the direction and permission (I'm respectful with the work of others) Jonah H. Harris <jonah.harris@gmail.com> schrieb am Mo., 27. Mai 2019, 13:05:On Mon, May 27, 2019 at 5:14 AM Sascha Kuhl <yogidabanli@gmail.com> wrote:Can you bring me to the research showing b-tree is equally performant? Is postgres taking this research into account?Not trying to be rude, but you've been asking rather general questions; our mailing list is archived, searchable, and probably a better use of everyone's time for you to consult prior to posting. Per your question, to my knowledge, there is no active work on changing our primary b-tree index structure, which is based on Lehman and Yao's b-link tree. Given the maturity of our current implementation, I think it would be rather difficult to improve upon it in terms of performance, especially considering concurrency-related issues.-- Jonah H. Harris",
"msg_date": "Mon, 27 May 2019 16:34:48 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Indexing - comparison of tree structures"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-27 12:40:07 +0200, Sascha Kuhl wrote:\n> Where I can I find research on trees and indexing related to postgresql?\n\n1) Please respect the list style of properly quoting responses inline,\n and only responding to messages that are somewhat related to the\n previous content\n2) You ask a lot of question, without actually responding to responses\n3) Please do some of your own research, before asking\n questions. E.g. there's documentation about our btree implementation\n etc in our source tree.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 28 May 2019 11:37:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Indexing - comparison of tree structures"
},
{
"msg_contents": "On Tue, May 28, 2019 at 11:37:54AM -0700, Andres Freund wrote:\n> 1) Please respect the list style of properly quoting responses inline,\n> and only responding to messages that are somewhat related to the\n> previous content\n> 2) You ask a lot of question, without actually responding to responses\n> 3) Please do some of your own research, before asking\n> questions. E.g. there's documentation about our btree implementation\n> etc in our source tree.\n\nIn this case, you may find the various README in the code to be\nof interest. All index access methods are located in\nsrc/backend/access/, and nbtree/README includes documentation for\nbtree indexes.\n--\nMichael",
"msg_date": "Wed, 29 May 2019 13:51:31 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Indexing - comparison of tree structures"
}
] |
[
{
"msg_contents": "With a sample query such as\n\nSELECT x, avg(x)\nFROM (VALUES (1), (2), (3)) AS v (x);\n\nWe give the error message \"column \"v.x\" must appear in the GROUP BY\nclause or be used in an aggregate function\".\n\nThis is correct but incomplete. Attached is a trivial patch to also\nsuggest that the user might have been trying to use a window function.\n-- \nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support",
"msg_date": "Fri, 24 May 2019 08:17:12 +0200",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Aggregate error message"
},
{
"msg_contents": "On Fri, 24 May 2019 at 18:17, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>\n> With a sample query such as\n>\n> SELECT x, avg(x)\n> FROM (VALUES (1), (2), (3)) AS v (x);\n>\n> We give the error message \"column \"v.x\" must appear in the GROUP BY\n> clause or be used in an aggregate function\".\n>\n> This is correct but incomplete. Attached is a trivial patch to also\n> suggest that the user might have been trying to use a window function.\n\nI think you might have misthought this one. If there's an aggregate\nfunction in the SELECT or HAVING clause, then anything else in the\nSELECT clause is going to have to be either in the GROUP BY clause, be\nfunctionally dependent on the GROUP BY clause, or be in an aggregate\nfunction. Putting it into a window function won't help the situation.\n\npostgres=# select sum(x) over(),avg(x) FROM (VALUES (1), (2), (3)) AS v (x);\npsql: ERROR: column \"v.x\" must appear in the GROUP BY clause or be\nused in an aggregate function\nLINE 1: select sum(x) over(),avg(x) FROM (VALUES (1), (2), (3)) AS v...\n ^\n\nIf there's any change to make to the error message then it would be to\nadd the functional dependency part, but since we're pretty bad at\ndetecting that, I don't think we should.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 24 May 2019 19:20:35 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate error message"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> On Fri, 24 May 2019 at 18:17, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>> With a sample query such as\n>> SELECT x, avg(x)\n>> FROM (VALUES (1), (2), (3)) AS v (x);\n>> We give the error message \"column \"v.x\" must appear in the GROUP BY\n>> clause or be used in an aggregate function\".\n>> This is correct but incomplete. Attached is a trivial patch to also\n>> suggest that the user might have been trying to use a window function.\n\n> I think you might have misthought this one. If there's an aggregate\n> function in the SELECT or HAVING clause, then anything else in the\n> SELECT clause is going to have to be either in the GROUP BY clause, be\n> functionally dependent on the GROUP BY clause, or be in an aggregate\n> function. Putting it into a window function won't help the situation.\n\nYeah. Also, even if the problem really is that avg(x) should have had\nan OVER clause, the fact that the error cursor will not be pointing\nat avg(x) means that Vik's wording is still not that helpful.\n\nThis is a bit outside our usual error-writing practice, but I wonder\nif we could phrase it like \"since this query uses aggregation, column\n\"v.x\" must appear in the GROUP BY clause or be used in an aggregate\nfunction\". With that, perhaps the user would realize \"oh, I didn't\nmean to aggregate\" when faced with Vik's example. But this phrasing\ndoesn't cover the GROUP-BY-without-aggregate case, and I'm not sure\nhow to do that without making the message even longer and more unwieldy.\n\n> If there's any change to make to the error message then it would be to\n> add the functional dependency part, but since we're pretty bad at\n> detecting that, I don't think we should.\n\nYeah, that's another thing we're failing to cover in the message ...\nbut it seems unrelated to Vik's example.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 May 2019 09:40:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate error message"
}
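For concreteness, the two rewrites such a message might steer a user toward, using the query from the start of the thread:

```sql
-- If aggregation was intended, group by the bare column:
SELECT x, avg(x) FROM (VALUES (1), (2), (3)) AS v (x) GROUP BY x;

-- If no aggregation was intended, a window function keeps all rows:
SELECT x, avg(x) OVER () FROM (VALUES (1), (2), (3)) AS v (x);
```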
] |
[
{
"msg_contents": "Hello\n\nI execute following query to the partitioned table, but the plan is different from my assumption, so please tell me the reason.\n\npostgres=# explain select * from jta, (select a, max(b) from jtb where a = 1 group by a ) c1 where jta.a = c1.a;\n QUERY PLAN \n------------------------------------------------------------------------\n Hash Join (cost=38.66..589.52 rows=1402 width=12)\n Hash Cond: (jta0.a = jtb0.a)\n -> Append (cost=0.00..482.50 rows=25500 width=4)\n -> Seq Scan on jta0 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta1 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta2 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta3 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta4 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta5 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta6 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta7 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta8 (cost=0.00..35.50 rows=2550 width=4)\n -> Seq Scan on jta9 (cost=0.00..35.50 rows=2550 width=4)\n -> Hash (cost=38.53..38.53 rows=11 width=8)\n -> GroupAggregate (cost=0.00..38.42 rows=11 width=8)\n Group Key: jtb0.a\n -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11 width=8)\n Filter: (a = 1)\n(18 rows)\n\nI assume that subquery aggregate only pruned table and parent query joins pruned table and subquery results.\nHowever, parent query scan all partitions and join.\nIn my investigation, because is_simple_query() returns false if subquery contains GROUP BY, parent query does not prune.\nIs it possible to improve this?\nIf subquery has a WHERE clause only, parent query does not scan all partitions.\n\npostgres=# explain select * from jta, (select a from jtb where a = 1) c1 where jta.a = c1.a;\n QUERY PLAN \n------------------------------------------------------------------\n Nested Loop (cost=0.00..81.94 rows=143 width=8)\n -> Seq Scan on jta0 (cost=0.00..41.88 rows=13 width=4)\n Filter: (a = 1)\n -> Materialize (cost=0.00..38.30 rows=11 width=4)\n -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11 width=4)\n Filter: (a = 1)\n(6 rows)\n\nregards,\n\nSho Kato\n\n\n\n",
"msg_date": "Fri, 24 May 2019 07:44:18 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Why does not subquery pruning conditions inherit to parent query?"
},
{
"msg_contents": "On Fri, 24 May 2019 at 19:44, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\n> I execute following query to the partitioned table, but the plan is different from my assumption, so please tell me the reason.\n>\n> postgres=# explain select * from jta, (select a, max(b) from jtb where a = 1 group by a ) c1 where jta.a = c1.a;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=38.66..589.52 rows=1402 width=12)\n> Hash Cond: (jta0.a = jtb0.a)\n> -> Append (cost=0.00..482.50 rows=25500 width=4)\n> -> Seq Scan on jta0 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta1 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta2 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta3 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta4 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta5 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta6 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta7 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta8 (cost=0.00..35.50 rows=2550 width=4)\n> -> Seq Scan on jta9 (cost=0.00..35.50 rows=2550 width=4)\n> -> Hash (cost=38.53..38.53 rows=11 width=8)\n> -> GroupAggregate (cost=0.00..38.42 rows=11 width=8)\n> Group Key: jtb0.a\n> -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11 width=8)\n> Filter: (a = 1)\n> (18 rows)\n>\n> I assume that subquery aggregate only pruned table and parent query joins pruned table and subquery results.\n> However, parent query scan all partitions and join.\n> In my investigation, because is_simple_query() returns false if subquery contains GROUP BY, parent query does not prune.\n> Is it possible to improve this?\n\nThe planner can only push quals down into a subquery, it cannot pull\nquals from a subquery into the outer query.\n\nIf you write the query like:\n\nexplain select * from jta, (select a, max(b) from jtb group by a ) c1\nwhere jta.a = c1.a and c1.a = 1;\n\nyou should get the plan that you want.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 24 May 2019 20:09:46 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does not subquery pruning conditions inherit to parent query?"
},
{
"msg_contents": "Friday, May 24, 2019 5:10 PM, David Rowley wrote:\r\n> The planner can only push quals down into a subquery, it cannot pull quals\r\n> from a subquery into the outer query.\r\n> \r\n> If you write the query like:\r\n> \r\n> explain select * from jta, (select a, max(b) from jtb group by a ) c1\r\n> where jta.a = c1.a and c1.a = 1;\r\n> \r\n> you should get the plan that you want.\r\n\r\nThank you for your replay.\r\n\r\nYou are right. I should do that.\r\nHowever, following query looks like the subquery qual is pushed down into the outer query.\r\n\r\npostgres=# explain select * from jta, (select a from jtb where a = 1) c1 where jta.a = c1.a;\r\n QUERY PLAN \r\n------------------------------------------------------------------\r\n Nested Loop (cost=0.00..81.94 rows=143 width=8)\r\n -> Seq Scan on jta0 (cost=0.00..41.88 rows=13 width=4)\r\n Filter: (a = 1)\r\n -> Materialize (cost=0.00..38.30 rows=11 width=4)\r\n -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11 width=4)\r\n Filter: (a = 1)\r\n(6 rows)\r\n\r\nSo, I think I could improve this behavior.\r\nWhy such a difference occur?\r\n\r\nregards,\r\n\r\nSho Kato\r\n> -----Original Message-----\r\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> Sent: Friday, May 24, 2019 5:10 PM\r\n> To: Kato, Sho/加藤 翔 <kato-sho@jp.fujitsu.com>\r\n> Cc: pgsql-hackers@postgresql.org\r\n> Subject: Re: Why does not subquery pruning conditions inherit to parent\r\n> query?\r\n> \r\n> On Fri, 24 May 2019 at 19:44, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\r\n> > I execute following query to the partitioned table, but the plan is\r\n> different from my assumption, so please tell me the reason.\r\n> >\r\n> > postgres=# explain select * from jta, (select a, max(b) from jtb where\r\n> a = 1 group by a ) c1 where jta.a = c1.a;\r\n> > QUERY PLAN\r\n> >\r\n> --------------------------------------------------------------------\r\n> --\r\n> > -- Hash Join (cost=38.66..589.52 rows=1402 width=12)\r\n> > Hash Cond: (jta0.a = jtb0.a)\r\n> > -> Append (cost=0.00..482.50 rows=25500 width=4)\r\n> > -> Seq Scan on jta0 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta1 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta2 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta3 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta4 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta5 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta6 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta7 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta8 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Seq Scan on jta9 (cost=0.00..35.50 rows=2550 width=4)\r\n> > -> Hash (cost=38.53..38.53 rows=11 width=8)\r\n> > -> GroupAggregate (cost=0.00..38.42 rows=11 width=8)\r\n> > Group Key: jtb0.a\r\n> > -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11\r\n> width=8)\r\n> > Filter: (a = 1)\r\n> > (18 rows)\r\n> >\r\n> > I assume that subquery aggregate only pruned table and parent query\r\n> joins pruned table and subquery results.\r\n> > However, parent query scan all partitions and join.\r\n> > In my investigation, because is_simple_query() returns false if\r\n> subquery contains GROUP BY, parent query does not prune.\r\n> > Is it possible to improve this?\r\n> \r\n> The planner can only push quals down into a subquery, it cannot pull quals\r\n> from a subquery into the outer query.\r\n> \r\n> If you write the query like:\r\n> \r\n> explain select * from jta, (select a, max(b) from jtb group by 
a ) c1\r\n> where jta.a = c1.a and c1.a = 1;\r\n> \r\n> you should get the plan that you want.\r\n> \r\n> --\r\n> David Rowley http://www.2ndQuadrant.com/\r\n> PostgreSQL Development, 24x7 Support, Training & Services\r\n> \r\n> \r\n\r\n",
"msg_date": "Mon, 27 May 2019 05:26:54 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Why does not subquery pruning conditions inherit to parent\n query?"
},
{
"msg_contents": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com> writes:\n> Friday, May 24, 2019 5:10 PM, David Rowley wrote:\n>> The planner can only push quals down into a subquery, it cannot pull quals\n>> from a subquery into the outer query.\n\n> However, following query looks like the subquery qual is pushed down into the outer query.\n> postgres=# explain select * from jta, (select a from jtb where a = 1) c1 where jta.a = c1.a;\n> QUERY PLAN \n> ------------------------------------------------------------------\n> Nested Loop (cost=0.00..81.94 rows=143 width=8)\n> -> Seq Scan on jta0 (cost=0.00..41.88 rows=13 width=4)\n> Filter: (a = 1)\n> -> Materialize (cost=0.00..38.30 rows=11 width=4)\n> -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11 width=4)\n> Filter: (a = 1)\n\nNo, what is happening there is that the subquery gets inlined into the\nouter query. That can't happen in your previous example because of\nthe aggregation/GROUP BY --- but subqueries that are just scan/join\nqueries generally get merged into the parent.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 May 2019 06:55:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why does not subquery pruning conditions inherit to parent query?"
},
{
"msg_contents": "Monday, May 27, 2019 7:56 PM Tom Lane wrote:\n> No, what is happening there is that the subquery gets inlined into the\n> outer query. That can't happen in your previous example because of the\n> aggregation/GROUP BY --- but subqueries that are just scan/join queries\n> generally get merged into the parent.\n\nThank you for your replay and sorry for late response.\n\nOk, I understand.\nIs it possible to improve a subquery quals to pull up into outer query?\nOracle looks like do that.\n\nRegards, Kato Sho\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Monday, May 27, 2019 7:56 PM\n> To: Kato, Sho/加藤 翔 <kato-sho@jp.fujitsu.com>\n> Cc: 'David Rowley' <david.rowley@2ndquadrant.com>;\n> pgsql-hackers@postgresql.org\n> Subject: Re: Why does not subquery pruning conditions inherit to parent\n> query?\n> \n> \"Kato, Sho\" <kato-sho@jp.fujitsu.com> writes:\n> > Friday, May 24, 2019 5:10 PM, David Rowley wrote:\n> >> The planner can only push quals down into a subquery, it cannot pull\n> >> quals from a subquery into the outer query.\n> \n> > However, following query looks like the subquery qual is pushed down\n> into the outer query.\n> > postgres=# explain select * from jta, (select a from jtb where a = 1)\n> c1 where jta.a = c1.a;\n> > QUERY PLAN\n> > ------------------------------------------------------------------\n> > Nested Loop (cost=0.00..81.94 rows=143 width=8)\n> > -> Seq Scan on jta0 (cost=0.00..41.88 rows=13 width=4)\n> > Filter: (a = 1)\n> > -> Materialize (cost=0.00..38.30 rows=11 width=4)\n> > -> Seq Scan on jtb0 (cost=0.00..38.25 rows=11 width=4)\n> > Filter: (a = 1)\n> \n> No, what is happening there is that the subquery gets inlined into the\n> outer query. That can't happen in your previous example because of the\n> aggregation/GROUP BY --- but subqueries that are just scan/join queries\n> generally get merged into the parent.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n\n\n",
"msg_date": "Fri, 31 May 2019 07:18:04 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Why does not subquery pruning conditions inherit to parent\n query?"
},
{
"msg_contents": "On Fri, 31 May 2019 at 03:18, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\n> Is it possible to improve a subquery quals to pull up into outer query?\n\nSure, it's possible, but it would require writing code. When it can\nand cannot/should not be done would need to be determined.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 31 May 2019 08:32:44 -0400",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does not subquery pruning conditions inherit to parent query?"
},
{
"msg_contents": "On Friday, May 31, 2019 9:33 PM, David Rowley wrote:\r\n> On Fri, 31 May 2019 at 03:18, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\r\n> > Is it possible to improve a subquery quals to pull up into outer query?\r\n> \r\n> Sure, it's possible, but it would require writing code. When it can and\r\n> cannot/should not be done would need to be determined.\r\n\r\nIs there any harmful effect by pulling up a subquery quals into outer query?\r\n\r\nEven if this feature is not be needed, it will be a problem if user execute this query to a table partitioned into a lot.\r\nSo, I think it would be better to put together a query that partition pruning does not work on the wiki.\r\nThoughts?\r\n\r\nRegards,\r\nkato sho\r\n> -----Original Message-----\r\n> From: David Rowley [mailto:david.rowley@2ndquadrant.com]\r\n> Sent: Friday, May 31, 2019 9:33 PM\r\n> To: Kato, Sho/加藤 翔 <kato-sho@jp.fujitsu.com>\r\n> Cc: Tom Lane <tgl@sss.pgh.pa.us>; pgsql-hackers@postgresql.org\r\n> Subject: Re: Why does not subquery pruning conditions inherit to parent\r\n> query?\r\n> \r\n> On Fri, 31 May 2019 at 03:18, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\r\n> > Is it possible to improve a subquery quals to pull up into outer query?\r\n> \r\n> Sure, it's possible, but it would require writing code. When it can and\r\n> cannot/should not be done would need to be determined.\r\n> \r\n> --\r\n> David Rowley http://www.2ndQuadrant.com/\r\n> PostgreSQL Development, 24x7 Support, Training & Services\r\n> \r\n> \r\n\r\n",
"msg_date": "Thu, 6 Jun 2019 07:47:07 +0000",
"msg_from": "\"Kato, Sho\" <kato-sho@jp.fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Why does not subquery pruning conditions inherit to parent\n query?"
},
{
"msg_contents": "On Thu, 6 Jun 2019 at 19:47, Kato, Sho <kato-sho@jp.fujitsu.com> wrote:\n>\n> On Friday, May 31, 2019 9:33 PM, David Rowley wrote:\n> > Sure, it's possible, but it would require writing code. When it can and\n> > cannot/should not be done would need to be determined.\n>\n> Is there any harmful effect by pulling up a subquery quals into outer query?\n\nThere are certainly cases where it can't be done, for example, if the\nsubquery is LEFT or FULL joined to. There's probably no shortage of\nother cases too. Someone will need to do the analysis into cases where\nit can and can't be done. That's likely more work than writing code to\nmake it work.\n\n> Even if this feature is not be needed, it will be a problem if user execute this query to a table partitioned into a lot.\n> So, I think it would be better to put together a query that partition pruning does not work on the wiki.\n> Thoughts?\n\nIt's not really a restriction of partition pruning. Pruning done\nduring query planning can only use the base quals of the partitioned\nrelation. Run-time pruning goes only a little further and expands\nthat to allow parameters from other relations to be used too. The good\nthing is that you can easily determine what those quals are by looking\nat EXPLAIN. They're the ones that make it down to the scan level.\nThere's also a series of restrictions on top of that too, which are\nnot very well documented outside of the code.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Fri, 7 Jun 2019 07:55:54 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does not subquery pruning conditions inherit to parent query?"
}
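As an illustration of the outer-join restriction David mentions, reusing the thread's jta/jtb tables: here every jta row must be preserved, so the subquery's qual may not be pulled up to filter jta, and doing so blindly would change the query's results:

```sql
SELECT *
FROM jta
LEFT JOIN (SELECT a FROM jtb WHERE a = 1) c1 ON jta.a = c1.a;
-- jta rows with a <> 1 must still appear (with NULLs for c1),
-- so "a = 1" cannot simply become a qual on jta.
```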
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15819\nLogged by: KOIZUMI Satoru\nEmail address: koizumistr@minos.ocn.ne.jp\nPostgreSQL version: 11.3\nOperating system: (Any)\nDescription: \n\nIn example of random_zipfian, the explanation is \"which itself(2) is\nproduced (3/2)*2.5 = 2.76 times more frequently than 3\".\r\n\"(3/2)*2.5 = 2.76\" is wrong. The correct expression is \"(3/2)**2.5 = 2.76\".",
"msg_date": "Fri, 24 May 2019 14:01:46 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15819: wrong expression in document of pgbench"
},
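The corrected arithmetic is easy to verify from psql itself, since ^ is PostgreSQL's exponentiation operator:

```sql
SELECT (3.0/2.0)^2.5 AS ratio;
-- ratio ≈ 2.7557, which the documentation rounds to 2.76
```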
{
"msg_contents": "> In example of random_zipfian, the explanation is \"which itself(2) is\n> produced (3/2)*2.5 = 2.76 times more frequently than 3\".\n> \"(3/2)*2.5 = 2.76\" is wrong. The correct expression is \"(3/2)**2.5 = 2.76\".\n\nIndeed. Attached patch to fix this typo.\n\n-- \nFabien",
"msg_date": "Fri, 24 May 2019 16:33:43 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15819: wrong expression in document of pgbench"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> In example of random_zipfian, the explanation is \"which itself(2) is\n>> produced (3/2)*2.5 = 2.76 times more frequently than 3\".\n>> \"(3/2)*2.5 = 2.76\" is wrong. The correct expression is \"(3/2)**2.5 = 2.76\".\n\n> Indeed. Attached patch to fix this typo.\n\nIndeed. Pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 May 2019 11:16:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15819: wrong expression in document of pgbench"
}
] |
[
{
"msg_contents": "Hi,\n\nIs it possible to obtain money for a contribution I give hear. Or is\neverything expected to be free?\n\nRegards\n\nSascha Kuhl\n\nHi,Is it possible to obtain money for a contribution I give hear. Or is everything expected to be free?RegardsSascha Kuhl",
"msg_date": "Fri, 24 May 2019 17:07:15 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Contribute - money"
},
{
"msg_contents": "On Fri, May 24, 2019 at 05:07:15PM +0200, Sascha Kuhl wrote:\n> Hi,\n> \n> Is it possible to obtain money for a contribution I give hear. Or is\n> everything expected to be free?\n\nAs there is no sole or primary PostgreSQL company, there is no purse\nfrom which to disburse such payments.\n\nIf you're qualified, i.e. if you show the right level of engineering\nchops, you can make it (part of) your employment to work on\nPostgreSQL. You can do this at one of the many companies which see it\nas being in their interest to hire people for that.\n\nThese include, but are not limited to:\n\n- PostgreSQL consultancies\n- Cloud providers\n- Vendors of proprietary software based on or including PostgreSQL\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 27 May 2019 19:16:54 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Contribute - money"
}
] |
[
{
"msg_contents": "Hackers,\n\nThe return value of RegisterSnapshot is being ignored in a\nfew places in indexam.c and tableam.c, suggesting an\nintimate knowledge of the inner workings of the snapshot\nmanager from these two files. I don't think that is particularly\nwise, and I don't see a performance justification for the way\nit is presently coded. There are no live bugs caused by this\nthat I can see, but I would still like it cleaned up.\n\nInside index_beginscan_parallel:\n\n snapshot = RestoreSnapshot(pscan->ps_snapshot_data);\n RegisterSnapshot(snapshot);\n scan = index_beginscan_internal(indexrel, nkeys, norderbys, snapshot,\n pscan, true);\n\nIt happens to be true in the current implementation of the\nsnapshot manager that restored snapshots will have their\n'copied' field set to true, and that the RegisterSnapshot\nfunction will in that case return the same snapshot that\nit was handed, so the snapshot handed to index_beginscan_internal\nturns out to be the right one. But if RegisterSnapshot were\nchanged to return a different copy of the snapshot, this code\nwould break.\n\nThere is a similar level of knowledge in table_beginscan_parallel,\nwhich for brevity I won't quote here.\n\nThe code in table_scan_update_snapshot appears even more\nbrittle to me. The only function in the core code base that\ncalls table_scan_update_snapshot is ExecBitmapHeapInitializeWorker,\nand it does so right after restoring the snapshot that it hands\nto table_scan_update_snapshot, so the fact that\ntable_scan_update_snapshot then ignores the return value\nof RegisterSnapshot on that snapshot happens to be ok. If\nsome other code were changed to call this function, it is not\nclear that it would work out so well.\n\nI propose that attached patch.\n\nmark",
"msg_date": "Fri, 24 May 2019 11:53:17 -0700",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": true,
"msg_subject": "Nitpick about assumptions in indexam and tableam"
}
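The defensive pattern the patch presumably installs (sketch only, mirroring index_beginscan_parallel() in indexam.c): always use RegisterSnapshot()'s return value instead of assuming it hands back the same pointer:

```c
	/* Restore and register the parallel scan's snapshot. */
	snapshot = RestoreSnapshot(pscan->ps_snapshot_data);

	/*
	 * RegisterSnapshot() happens to return its argument today because the
	 * restored snapshot is already marked 'copied', but don't rely on that.
	 */
	snapshot = RegisterSnapshot(snapshot);

	scan = index_beginscan_internal(indexrel, nkeys, norderbys, snapshot,
									pscan, true);
```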
] |
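For readers skimming the thread above: the fix Mark sketches amounts to keeping the snapshot that RegisterSnapshot() hands back rather than assuming it returns its argument unchanged. Below is a minimal C sketch of that pattern for the index_beginscan_parallel case quoted in the message; it reuses the function and variable names from the quoted snippet and is illustrative, not the literal attached patch.

```
/*
 * Sketch of the calling pattern proposed above (context: inside
 * index_beginscan_parallel, where pscan, indexrel, nkeys and norderbys
 * already exist).  Keeping the snapshot RegisterSnapshot() returns
 * removes the assumption that it always returns its argument.
 */
Snapshot        snapshot;
IndexScanDesc   scan;

snapshot = RestoreSnapshot(pscan->ps_snapshot_data);
snapshot = RegisterSnapshot(snapshot);  /* use the returned snapshot */
scan = index_beginscan_internal(indexrel, nkeys, norderbys, snapshot,
                                pscan, true);
```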
[
{
"msg_contents": "The documentation for generated columns should probably say whether\nyou can create indexes on them.\n\n\n",
"msg_date": "Fri, 24 May 2019 20:56:08 +0200",
"msg_from": "Florian Weimer <fw@deneb.enyo.de>",
"msg_from_op": true,
"msg_subject": "Generated columns and indexes"
}
] |
[
{
"msg_contents": "Hi,\n\n11.3 included some change to partition table planning. Namely\ncommit 925f46f (\"Fix handling of targetlist SRFs when scan/join relation is\nknown empty.\") seems to redo all paths for partitioned tables\nin apply_scanjoin_target_to_paths. It clears the paths in:\n\n```\n if (rel_is_partitioned)\n rel->pathlist = NIL\n```\n\nThen the code rebuild the paths. However, the rebuilt append path never\ngets the\nset_rel_pathlist_hook called. Thus, the work that hook did previously gets\nthrown away and the rebuilt append path can never be influenced by this\nhook. Is this intended behavior? Am I missing something?\n\nThanks,\nMat\nTimescaleDB\n\nHi,11.3 included some change to partition table planning. Namely commit 925f46f (\"Fix handling of targetlist SRFs when scan/join relation is known empty.\") seems to redo all paths for partitioned tables in apply_scanjoin_target_to_paths. It clears the paths in:``` if (rel_is_partitioned) rel->pathlist = NIL ```Then the code rebuild the paths. However, the rebuilt append path never gets theset_rel_pathlist_hook called. Thus, the work that hook did previously gets thrown away and the rebuilt append path can never be influenced by this hook. Is this intended behavior? Am I missing something?Thanks,MatTimescaleDB",
"msg_date": "Fri, 24 May 2019 17:05:34 -0400",
"msg_from": "Mat Arye <mat@timescale.com>",
"msg_from_op": true,
"msg_subject": "Question about some changes in 11.3"
},
{
"msg_contents": "On Fri, May 24, 2019 at 5:05 PM Mat Arye <mat@timescale.com> wrote:\n\n> Hi,\n>\n> 11.3 included some change to partition table planning. Namely\n> commit 925f46f (\"Fix handling of targetlist SRFs when scan/join relation is\n> known empty.\") seems to redo all paths for partitioned tables\n> in apply_scanjoin_target_to_paths. It clears the paths in:\n>\n> ```\n> if (rel_is_partitioned)\n> rel->pathlist = NIL\n> ```\n>\n> Then the code rebuild the paths. However, the rebuilt append path never\n> gets the\n> set_rel_pathlist_hook called. Thus, the work that hook did previously gets\n> thrown away and the rebuilt append path can never be influenced by this\n> hook. Is this intended behavior? Am I missing something?\n>\n> Thanks,\n> Mat\n> TimescaleDB\n>\n\nI've attached a small patch to address this discrepancy for when the\nset_rel_pathlist_hook is called so that's it is called for actual paths\nused for partitioned rels. Please let me know if I am misunderstanding how\nthis should be handled.",
"msg_date": "Tue, 28 May 2019 11:52:51 -0400",
"msg_from": "Mat Arye <mat@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about some changes in 11.3"
},
{
"msg_contents": "Hi Mat,\n\nOn 2019/05/25 6:05, Mat Arye wrote:\n> Hi,\n> \n> 11.3 included some change to partition table planning. Namely\n> commit 925f46f (\"Fix handling of targetlist SRFs when scan/join relation is\n> known empty.\") seems to redo all paths for partitioned tables\n> in apply_scanjoin_target_to_paths. It clears the paths in:\n> \n> ```\n> if (rel_is_partitioned)\n> rel->pathlist = NIL\n> ```\n> \n> Then the code rebuild the paths. However, the rebuilt append path never\n> gets the\n> set_rel_pathlist_hook called. Thus, the work that hook did previously gets\n> thrown away and the rebuilt append path can never be influenced by this\n> hook.\n\nBy dropping the old paths like done here, the core code is simply\nforgetting that set_rel_pathlist_hook may have editorialized over them,\nwhich seems like an oversight of that commit.\n\nYour proposal to call set_rel_pathlist_hook() after\nadd_paths_to_append_rel() to rebuild the Append paths sounds fine to me.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 29 May 2019 10:58:45 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Question about some changes in 11.3"
},
{
"msg_contents": "Mat Arye <mat@timescale.com> writes:\n> On Fri, May 24, 2019 at 5:05 PM Mat Arye <mat@timescale.com> wrote:\n>> 11.3 included some change to partition table planning. Namely\n>> commit 925f46f (\"Fix handling of targetlist SRFs when scan/join relation is\n>> known empty.\") seems to redo all paths for partitioned tables\n>> in apply_scanjoin_target_to_paths. It clears the paths in:\n>> \n>> ```\n>> if (rel_is_partitioned)\n>> rel->pathlist = NIL\n>> ```\n>> \n>> Then the code rebuild the paths. However, the rebuilt append path never\n>> gets the\n>> set_rel_pathlist_hook called. Thus, the work that hook did previously gets\n>> thrown away and the rebuilt append path can never be influenced by this\n>> hook. Is this intended behavior? Am I missing something?\n\nHm. I'd say this was already broken by the invention of\napply_scanjoin_target_to_paths; perhaps 11-before-11.3 managed to\nstill work for you, but it's not hard to envision applications of\nset_rel_pathlist_hook for which it would not have worked. The contract\nfor set_rel_pathlist_hook is supposed to be that it gets to editorialize\non all normal (non-Gather) paths created by the core code, and that's\nno longer the case now that apply_scanjoin_target_to_paths can add more.\n\n> I've attached a small patch to address this discrepancy for when the\n> set_rel_pathlist_hook is called so that's it is called for actual paths\n> used for partitioned rels. Please let me know if I am misunderstanding how\n> this should be handled.\n\nI'm not very happy with this patch either, as it makes the situation\neven more confused, not less so. The best-case scenario is that the\nset_rel_pathlist_hook runs twice and does useless work; the worst case\nis that it gets confused completely by being called twice for the same\nrel. I think we need to maintain the invariant that that hook is\ncalled exactly once per baserel.\n\nI wonder whether we could fix matters by postponing the\nset_rel_pathlist_hook call till later in the same cases where\nwe postpone generate_gather_paths, ie, it's the only baserel.\n\nThat would make its name pretty misleading, though. Maybe we\nshould leave it alone and invent a separate hook to be called\nby/after apply_scanjoin_target_to_paths? Although I don't\nknow if it'd be useful to add a new hook to v11 at this point.\nExtensions would have a hard time knowing if they could use it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 16:07:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about some changes in 11.3"
},
{
"msg_contents": "Thanks for taking a look at this Tom.\n\nOn Mon, Jun 3, 2019 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Mat Arye <mat@timescale.com> writes:\n> > On Fri, May 24, 2019 at 5:05 PM Mat Arye <mat@timescale.com> wrote:\n> >> 11.3 included some change to partition table planning. Namely\n> >> commit 925f46f (\"Fix handling of targetlist SRFs when scan/join\n> relation is\n> >> known empty.\") seems to redo all paths for partitioned tables\n> >> in apply_scanjoin_target_to_paths. It clears the paths in:\n> >>\n> >> ```\n> >> if (rel_is_partitioned)\n> >> rel->pathlist = NIL\n> >> ```\n> >>\n> >> Then the code rebuild the paths. However, the rebuilt append path never\n> >> gets the\n> >> set_rel_pathlist_hook called. Thus, the work that hook did previously\n> gets\n> >> thrown away and the rebuilt append path can never be influenced by this\n> >> hook. Is this intended behavior? Am I missing something?\n>\n> Hm. I'd say this was already broken by the invention of\n> apply_scanjoin_target_to_paths; perhaps 11-before-11.3 managed to\n> still work for you, but it's not hard to envision applications of\n> set_rel_pathlist_hook for which it would not have worked. The contract\n> for set_rel_pathlist_hook is supposed to be that it gets to editorialize\n> on all normal (non-Gather) paths created by the core code, and that's\n> no longer the case now that apply_scanjoin_target_to_paths can add more.\n>\n\nYeah it worked for our cases because (I guess) out paths turned out to be\nlower cost,\nbut I see your point.\n\n\n>\n> > I've attached a small patch to address this discrepancy for when the\n> > set_rel_pathlist_hook is called so that's it is called for actual paths\n> > used for partitioned rels. Please let me know if I am misunderstanding\n> how\n> > this should be handled.\n>\n> I'm not very happy with this patch either, as it makes the situation\n> even more confused, not less so. The best-case scenario is that the\n> set_rel_pathlist_hook runs twice and does useless work; the worst case\n> is that it gets confused completely by being called twice for the same\n> rel. I think we need to maintain the invariant that that hook is\n> called exactly once per baserel.\n>\n\nYeah getting called once per baserel is a nice invariant to have.\n\n\n> I wonder whether we could fix matters by postponing the\n> set_rel_pathlist_hook call till later in the same cases where\n> we postpone generate_gather_paths, ie, it's the only baserel.\n>\n> That would make its name pretty misleading, though.\n\n\nHow would simply delaying the hook make the name misleading? I am also\nwondering if\nusing the condition `rel->reloptkind == RELOPT_BASEREL &&\nbms_membership(root->all_baserels) != BMS_SINGLETON` is sufficient.\nIs it really guaranteed that `apply_scanjoin_target_to_paths` will not be\ncalled in other cases?\n\n\n> Maybe we\n> should leave it alone and invent a separate hook to be called\n> by/after apply_scanjoin_target_to_paths? Although I don't\n> know if it'd be useful to add a new hook to v11 at this point.\n> Extensions would have a hard time knowing if they could use it.\n>\n\nI think for us, either approach would work. We just need a place to\nadd/modify\nsome paths. 
FWIW, I think delaying the hook is easier to deal with on our\nend if it could work\nsince we don't have to deal with two different code paths but either is\nworkable.\n\nCertainly if we go with the new hook approach I think it should be added to\nv11 as well.\nThat way extensions that need the functionality can hook into it and deal\nwith patch level\ndifferences instead of having no way at all to get at this functionality.\n\nI am more than happy to work on a new patch once we settle on an approach.\n\n\n\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 5 Jun 2019 15:20:40 -0400",
"msg_from": "Mat Arye <mat@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Question about some changes in 11.3"
},
{
"msg_contents": "Mat Arye <mat@timescale.com> writes:\n> On Mon, Jun 3, 2019 at 4:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hm. I'd say this was already broken by the invention of\n>> apply_scanjoin_target_to_paths; perhaps 11-before-11.3 managed to\n>> still work for you, but it's not hard to envision applications of\n>> set_rel_pathlist_hook for which it would not have worked. The contract\n>> for set_rel_pathlist_hook is supposed to be that it gets to editorialize\n>> on all normal (non-Gather) paths created by the core code, and that's\n>> no longer the case now that apply_scanjoin_target_to_paths can add more.\n>> ...\n>> I wonder whether we could fix matters by postponing the\n>> set_rel_pathlist_hook call till later in the same cases where\n>> we postpone generate_gather_paths, ie, it's the only baserel.\n\n> Is it really guaranteed that `apply_scanjoin_target_to_paths` will not be\n> called in other cases?\n\nWell, apply_scanjoin_target_to_paths is called in *all* cases. It only\nthrows away the original paths for partitioned rels, though.\n\nI spent some more time looking at this, and I am afraid that my idea\nof postponing set_rel_pathlist_hook into apply_scanjoin_target_to_paths\nisn't going to work: there is not anyplace in that function where we\ncould call the hook without the API being noticeably different from\nwhat it is at the current call site. In particular, if we try to call\nit down near the end so that it still has the property of being able\nto remove any core-generated path, then there's a *big* difference for\nqueries involving SRFs: we've already plastered ProjectSetPath(s) atop\nthe original paths, and any user of the hook would have to know to do\nlikewise for freshly-generated paths. That would certainly break\nexisting hook users.\n\nI'm inclined to think that the safest course is to leave\nset_rel_pathlist_hook as it stands, and invent a new hook that is called\nin apply_scanjoin_target_to_paths just before the generate_gather_paths\ncall. (Or, perhaps, just after that --- but the precedent of\nset_rel_pathlist_hook suggests that \"before\" is more useful.)\nFor your use-case you'd have to get into both hooks, and they'd both have\nto know that if they're dealing with a partitioned baserel that is the\nonly baserel in the query, the new hook is where to generate paths\nrather than the old hook. Maybe it'd be worth having the core code\nexport some simple test function for that, rather than having the details\nof those semantics be wired into various extensions.\n\nI think it'd be all right to put a patch done that way into the v11\nbranch. It would not make anything any worse for code that uses\nset_rel_pathlist_hook and is OK with the v11 behavior. Code that\nneeds to use the new hook would fail to load into 11-before-11.whatever,\nbut that's probably better than loading and then doing the wrong thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 12 Jun 2019 18:14:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question about some changes in 11.3"
}
] |
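As background for the hook this thread debates: a minimal C sketch of how an extension typically installs set_rel_pathlist_hook, the call site whose timing is under discussion. The save-and-chain registration idiom is standard PostgreSQL extension practice; the hook body here is a placeholder, not TimescaleDB's actual code.

```
#include "postgres.h"
#include "fmgr.h"
#include "optimizer/paths.h"

PG_MODULE_MAGIC;

static set_rel_pathlist_hook_type prev_set_rel_pathlist_hook = NULL;

/*
 * Called once per baserel after the core code builds its paths.  As the
 * thread notes, from 11.3 a partitioned rel's pathlist may later be thrown
 * away and rebuilt in apply_scanjoin_target_to_paths, after this hook ran.
 */
static void
my_set_rel_pathlist(PlannerInfo *root, RelOptInfo *rel,
                    Index rti, RangeTblEntry *rte)
{
    if (prev_set_rel_pathlist_hook)
        prev_set_rel_pathlist_hook(root, rel, rti, rte);

    /* ... inspect or add paths in rel->pathlist here ... */
}

void
_PG_init(void)
{
    prev_set_rel_pathlist_hook = set_rel_pathlist_hook;
    set_rel_pathlist_hook = my_set_rel_pathlist;
}
```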
[
{
"msg_contents": "CREATE TABLE circles (c circle, EXCLUDE USING gist (c WITH &&));\n\nREINDEX TABLE CONCURRENTLY circles;\nWARNING: cannot reindex exclusion constraint index \"public.circles_c_excl\"\nconcurrently, skipping\nNOTICE: table \"circles\" has no indexes\nREINDEX\n\nThe message \"table has no indexes\" is confusing, as warning above it states\ntable has index, just was skipped by reindex.\n\nSo, currently for any reason (exclusion or invalid index) reindex table\nconcurrently skips reindex, it reports the table has no index. Looking at\nthe behavior of non-concurrent reindex, it emits the NOTICE only if table\nreally has no indexes (since it has no skip cases).\n\nWe need to see what really wish to communicate here, table has no indexes\nor just that reindex was *not* performed or keep it simple and completely\navoid emitting anything. If we skip any indexes we anyways emit WARNING, so\nthat should be sufficient and nothing more needs to be conveyed.\n\nIn-case we wish to communicate no reindex was performed, what do we wish to\nnotify for empty tables?\n\nSeems might be just emit the NOTICE \"table xxx has no index\", if really no\nindex for concurrent and non-concurrent case, make it consistent, less\nconfusing and leave it there. Attaching the patch to just do that. Thoughts?",
"msg_date": "Fri, 24 May 2019 17:06:25 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Sat, 25 May 2019 at 12:06, Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> Seems might be just emit the NOTICE \"table xxx has no index\", if really no index for concurrent and non-concurrent case, make it consistent, less confusing and leave it there. Attaching the patch to just do that. Thoughts?\n\nWould it not be better just to change the error message for the\nconcurrent case so that it reads: \"table \\\"%s\\\" has no indexes that\ncan be concurrently reindexed\"\n\nOtherwise, what you have now is still confusing for partitioned tables:\n\npostgres=# create table listp (a int primary key) partition by list(a);\nCREATE TABLE\npostgres=# REINDEX TABLE CONCURRENTLY listp;\npsql: WARNING: REINDEX of partitioned tables is not yet implemented,\nskipping \"listp\"\npsql: NOTICE: table \"listp\" has no indexes\nREINDEX\n\nAlso, I think people probably will care more about the fact that\nnothing was done for that table rather than if the table happens to\nhave no indexes. For the non-concurrently case, that just happened to\nbe the same thing.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sat, 25 May 2019 14:42:59 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Sat, May 25, 2019 at 02:42:59PM +1200, David Rowley wrote:\n> Also, I think people probably will care more about the fact that\n> nothing was done for that table rather than if the table happens to\n> have no indexes. For the non-concurrently case, that just happened to\n> be the same thing.\n\nThis is equally confusing for plain REINDEX as well, no? Taking your\nprevious example:\n=# REINDEX TABLE listp;\nWARNING: 0A000: REINDEX of partitioned tables is not yet implemented,\nskipping \"listp\"\nLOCATION: reindex_relation, index.c:3513\nNOTICE: 00000: table \"listp\" has no indexes\nLOCATION: ReindexTable, indexcmds.c:2452\nREINDEX\n\nIn this case the relation has partitioned indexes, not indexes, so\nthat's actually correct. Still it seems to me that some users could\nget confused by the current wording.\n\nFor invalid indexes you would get that:\n=# create table aa (a int);\nCREATE TABLE\n=# insert into aa values (1),(1);\nINSERT 0 2\n=# create unique index concurrently aai on aa(a);\nERROR: 23505: could not create unique index \"aai\"\nDETAIL: Key (a)=(1) is duplicated.\nSCHEMA NAME: public\nTABLE NAME: aa\nCONSTRAINT NAME: aai\nLOCATION: comparetup_index_btree, tuplesort.c:405\n=# reindex table concurrently aa;\nWARNING: 0A000: cannot reindex invalid index \"public.aai\"\nconcurrently, skipping\nLOCATION: ReindexRelationConcurrently, indexcmds.c:2772\nNOTICE: 00000: table \"aa\" has no indexes\nLOCATION: ReindexTable, indexcmds.c:2452\nREINDEX\n\nAs you mention for reindex_relation() no indexes <=> nothing to do,\nstill let's not rely on that. Instead of making the error message\nspecific to concurrent operations, I would suggest to change it to\n\"table foo has no indexes to reindex\". What do you think about the\nattached?\n--\nMichael",
"msg_date": "Mon, 27 May 2019 10:43:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Sun, May 26, 2019 at 6:43 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> As you mention for reindex_relation() no indexes <=> nothing to do,\n> still let's not rely on that. Instead of making the error message\n> specific to concurrent operations, I would suggest to change it to\n> \"table foo has no indexes to reindex\". What do you think about the\n> attached?\n>\n\nI think we will need to separate out the NOTICE message for concurrent and\nregular case.\n\nFor example this doesn't sound correct\nWARNING: cannot reindex exclusion constraint index \"public.circles_c_excl\"\nconcurrently, skipping\nNOTICE: table \"circles\" has no indexes to reindex\n\nAs no indexes can't be reindexed *concurrently* but there are still indexes\nwhich can be reindexed, invalid indexes I think fall in same category.\n\nOn Sun, May 26, 2019 at 6:43 PM Michael Paquier <michael@paquier.xyz> wrote:\nAs you mention for reindex_relation() no indexes <=> nothing to do,\nstill let's not rely on that. Instead of making the error message\nspecific to concurrent operations, I would suggest to change it to\n\"table foo has no indexes to reindex\". What do you think about the\nattached?I think we will need to separate out the NOTICE message for concurrent and regular case.For example this doesn't sound correctWARNING: cannot reindex exclusion constraint index \"public.circles_c_excl\" concurrently, skippingNOTICE: table \"circles\" has no indexes to reindexAs no indexes can't be reindexed *concurrently* but there are still indexes which can be reindexed, invalid indexes I think fall in same category.",
"msg_date": "Mon, 27 May 2019 22:23:19 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Tue, 28 May 2019 at 01:23, Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> I think we will need to separate out the NOTICE message for concurrent and regular case.\n>\n> For example this doesn't sound correct\n> WARNING: cannot reindex exclusion constraint index \"public.circles_c_excl\" concurrently, skipping\n> NOTICE: table \"circles\" has no indexes to reindex\n>\n> As no indexes can't be reindexed *concurrently* but there are still indexes which can be reindexed, invalid indexes I think fall in same category.\n\nSwap \"can't\" for \"can\" and, yeah. I think it would be good to make the\nerror messages differ for these two cases. This would serve as a hint\nto the user that they might have better luck trying without the\n\"concurrently\" option.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 28 May 2019 15:04:46 -0400",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Tue, May 28, 2019 at 12:05 PM David Rowley <david.rowley@2ndquadrant.com>\nwrote:\n\n> On Tue, 28 May 2019 at 01:23, Ashwin Agrawal <aagrawal@pivotal.io> wrote:\n> > I think we will need to separate out the NOTICE message for concurrent\n> and regular case.\n> >\n> > For example this doesn't sound correct\n> > WARNING: cannot reindex exclusion constraint index\n> \"public.circles_c_excl\" concurrently, skipping\n> > NOTICE: table \"circles\" has no indexes to reindex\n> >\n> > As no indexes can't be reindexed *concurrently* but there are still\n> indexes which can be reindexed, invalid indexes I think fall in same\n> category.\n>\n> Swap \"can't\" for \"can\" and, yeah. I think it would be good to make the\n> error messages differ for these two cases. This would serve as a hint\n> to the user that they might have better luck trying without the\n> \"concurrently\" option.\n>\n\nPlease check if the attached patch addresses and satisfies all the points\ndiscussed so far in this thread.\n\nWas thinking of adding explicit errhint for concurrent case NOTICE to\nconvey, either the table has no indexes or can only be reindexed without\nCONCURRENTLY. But thought may be its obvious but feel free to add if would\nbe helpful.",
"msg_date": "Mon, 3 Jun 2019 16:53:48 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 04:53:48PM -0700, Ashwin Agrawal wrote:\n> Please check if the attached patch addresses and satisfies all the points\n> discussed so far in this thread.\n\nIt looks to be so, please see below for some comments.\n\n> + {\n> result = ReindexRelationConcurrently(heapOid, options);\n> +\n> + if (!result)\n> + ereport(NOTICE,\n> + (errmsg(\"table \\\"%s\\\" has no indexes that can be concurrently reindexed\",\n> + relation->relname)));\n\n\"concurrently\" should be at the end of this string. I have had the\nexact same argument with Tom for 508300e.\n\n> @@ -2630,7 +2638,6 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind,\n> foreach(l, relids)\n> {\n> Oid relid = lfirst_oid(l);\n> - bool result;\n> \n> StartTransactionCommand();\n> /* functions in indexes may want a snapshot set */\n> @@ -2638,11 +2645,12 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind,\n> \n> if (concurrent)\n> {\n> - result = ReindexRelationConcurrently(relid, options);\n> + ReindexRelationConcurrently(relid, options);\n> /* ReindexRelationConcurrently() does the verbose output */\n\nIndeed this variable is not used. So we could just get rid of it\ncompletely.\n\n> + bool result;\n> result = reindex_relation(relid,\n> REINDEX_REL_PROCESS_TOAST |\n> REINDEX_REL_CHECK_CONSTRAINTS,\n> @@ -2656,7 +2664,6 @@ ReindexMultipleTables(const char *objectName, ReindexObjectType objectKind,\n> \n> PopActiveSnapshot();\n> }\n\nThe table has been considered for reindexing even if nothing has been\nreindexed, so perhaps we'd want to keep this part as-is? We have the\nsame level of reporting for a couple of releases for this part.\n\n> -\n> CommitTransactionCommand();\n\nUseless noise diff.\n--\nMichael",
"msg_date": "Tue, 4 Jun 2019 10:27:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 6:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Jun 03, 2019 at 04:53:48PM -0700, Ashwin Agrawal wrote:\n> > Please check if the attached patch addresses and satisfies all the points\n> > discussed so far in this thread.\n>\n> It looks to be so, please see below for some comments.\n>\n> > + {\n> > result = ReindexRelationConcurrently(heapOid, options);\n> > +\n> > + if (!result)\n> > + ereport(NOTICE,\n> > + (errmsg(\"table \\\"%s\\\" has no indexes that can be\n> concurrently reindexed\",\n> > + relation->relname)));\n>\n> \"concurrently\" should be at the end of this string. I have had the\n> exact same argument with Tom for 508300e.\n>\n\nSure modified the same, find attached.\n\n\n> > @@ -2630,7 +2638,6 @@ ReindexMultipleTables(const char *objectName,\n> ReindexObjectType objectKind,\n> > foreach(l, relids)\n> > {\n> > Oid relid = lfirst_oid(l);\n> > - bool result;\n> >\n> > StartTransactionCommand();\n> > /* functions in indexes may want a snapshot set */\n> > @@ -2638,11 +2645,12 @@ ReindexMultipleTables(const char *objectName,\n> ReindexObjectType objectKind,\n> >\n> > if (concurrent)\n> > {\n> > - result = ReindexRelationConcurrently(relid, options);\n> > + ReindexRelationConcurrently(relid, options);\n> > /* ReindexRelationConcurrently() does the verbose output */\n>\n> Indeed this variable is not used. So we could just get rid of it\n> completely.\n>\n\nThe variable is used in else scope hence I moved it there. But yes its\nremoved completely for this scope.\n\n> + bool result;\n> > result = reindex_relation(relid,\n> > REINDEX_REL_PROCESS_TOAST |\n> > REINDEX_REL_CHECK_CONSTRAINTS,\n> > @@ -2656,7 +2664,6 @@ ReindexMultipleTables(const char *objectName,\n> ReindexObjectType objectKind,\n> >\n> > PopActiveSnapshot();\n> > }\n>\n> The table has been considered for reindexing even if nothing has been\n> reindexed, so perhaps we'd want to keep this part as-is? We have the\n> same level of reporting for a couple of releases for this part.\n>\n\nI don't understand the review comment. I functionally didn't change\nanything in that part of code, just have result variable confined to that\nscope of code.\n\n\n> > -\n> > CommitTransactionCommand();\n>\n> Useless noise diff.\n>\n\nOkay, removed it.",
"msg_date": "Tue, 4 Jun 2019 11:26:44 -0700",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Tue, Jun 04, 2019 at 11:26:44AM -0700, Ashwin Agrawal wrote:\n> The variable is used in else scope hence I moved it there. But yes its\n> removed completely for this scope.\n\nThanks for updating the patch. It does its job by having one separate\nmessage for the concurrent and the non-concurrent cases as discussed.\nDavid, what do you think? Perhaps you would like to commit it\nyourself?\n--\nMichael",
"msg_date": "Wed, 5 Jun 2019 15:11:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
},
{
"msg_contents": "On Wed, 5 Jun 2019 at 18:11, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jun 04, 2019 at 11:26:44AM -0700, Ashwin Agrawal wrote:\n> > The variable is used in else scope hence I moved it there. But yes its\n> > removed completely for this scope.\n>\n> Thanks for updating the patch. It does its job by having one separate\n> message for the concurrent and the non-concurrent cases as discussed.\n> David, what do you think? Perhaps you would like to commit it\n> yourself?\n\nThanks. I've just pushed this with some additional comment changes.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 21:08:46 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Confusing error message for REINDEX TABLE CONCURRENTLY"
}
] |
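To make the outcome of this thread concrete in code terms: a condensed C sketch of the shape the fix takes in ReindexTable(), paraphrased from the patch hunks quoted above, with "concurrently" moved to the end of the string per Michael's review. One NOTICE covers the case where a concurrent reindex skipped everything; the other covers a table that plainly has no indexes. Variable names follow the quoted hunks; this is not the verbatim committed diff.

```
/* Sketch only: paraphrased from the hunks quoted in this thread. */
if (concurrent)
{
    result = ReindexRelationConcurrently(heapOid, options);

    if (!result)
        ereport(NOTICE,
                (errmsg("table \"%s\" has no indexes that can be reindexed concurrently",
                        relation->relname)));
}
else
{
    result = reindex_relation(heapOid,
                              REINDEX_REL_PROCESS_TOAST |
                              REINDEX_REL_CHECK_CONSTRAINTS,
                              options);

    if (!result)
        ereport(NOTICE,
                (errmsg("table \"%s\" has no indexes to reindex",
                        relation->relname)));
}
```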
[
{
"msg_contents": "Hello PostgreSQL Mentors,\r\n\r\nHello, my name is Sharon Clark and I’m a technical writer interested in the Google Season of Docs (GSoD) project for PostgreSQL. My interest stems from working with developers and moving toward software documentation. The GSoD project is a great opportunity for me to work with mentors in your open source organization, learn about open source software, and collaborate with the open source community.\r\nI understand you want the candidate to create a community resource for absolute beginners and update the tutorial so it’s user friendly.\r\nActionItem #1. Install PostgreSQL, especially since I am a new user, and observe any issues\r\nActionItem #2. Review existing resources for new users\r\nA brief snapshot of my experience includes the following experience:\r\n\r\n· Creating technical documentation for end users (e.g., user guides, manuals, reference docs, tutorials, training materials, job aids, workflows, etc.)\r\n\r\n· Testing websites and reporting critical bugs and code fixes\r\n\r\n· Designing interactive forms for submitting data online\r\n\r\n· Conducting usability on website interfaces\r\nI plan to submit a proposal for both the PostgreSQL Introductory Resources and Tutorial projects, but I’m open to learning technologies for ANY other projects listed. Please feel free to contact me with any questions.\r\n\r\nBest regards,\r\nSharon Clark\r\nSclb3@hotmail.com<mailto:Sclb3@hotmail.com>\r\n\r\n\n\n\n\n\n\nHello PostgreSQL Mentors,\n\n\r\nHello, my name is Sharon Clark and I’m a technical writer interested in the Google Season of Docs (GSoD) project for PostgreSQL. My interest stems from working with developers and moving toward software documentation. The GSoD project is a great opportunity\r\n for me to work with mentors in your open source organization, learn about open source software, and collaborate with the open source community.\r\n\n\nI understand you want the candidate to create a community resource for absolute beginners and update the tutorial so it’s user friendly.\n\n\n\nActionItem #1. Install PostgreSQL, especially since I am a new user, and observe any issues\n\n\nActionItem #2. Review existing resources for new users\n\n\n\n\n\n\nA brief snapshot of my experience includes the following experience:\n\n\n\n\r\n· Creating technical documentation for end users (e.g., user guides, manuals, reference docs, tutorials, training materials, job aids, workflows, etc.)\n\n\n\r\n· Testing websites and reporting critical bugs and code fixes\n\n\n\r\n· Designing interactive forms for submitting data online\n\n\n\r\n· Conducting usability on website interfaces\n\n\n\nI plan to submit a proposal for both the PostgreSQL Introductory Resources and Tutorial projects, but I’m open to learning technologies for ANY other projects listed. Please feel free to contact me with any questions.\r\n\n \nBest regards,\nSharon Clark\nSclb3@hotmail.com",
"msg_date": "Sat, 25 May 2019 00:31:48 +0000",
"msg_from": "sharon clark <sclb3@hotmail.com>",
"msg_from_op": true,
"msg_subject": "GSoD Introductory Resources and Tutorial Projects"
},
{
"msg_contents": "Greetings,\n\n* sharon clark (sclb3@hotmail.com) wrote:\n> I plan to submit a proposal for both the PostgreSQL Introductory Resources and Tutorial projects, but I’m open to learning technologies for ANY other projects listed. Please feel free to contact me with any questions.\n\nThanks for reaching out. As part of the proposal, you'll want to\ninclude a detailed description (a great deal more than what you included\nin this initial email) of the technical writing project. I encourage\nyou to reach out to pgsql-docs list with your specific ideas and\nsuggestions around the topics you're interested, so we can discuss them\nand hopefully help you put together a good project proposal.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 26 May 2019 10:33:25 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: GSoD Introductory Resources and Tutorial Projects"
},
{
"msg_contents": "Hello Stephen,\n\nThank you for the information. I'll be in touch.\n\nBest wishes,\n\nSharon\n\nGet Outlook for Android<https://aka.ms/ghei36>\n\n________________________________\nFrom: Stephen Frost <sfrost@snowman.net>\nSent: Sunday, May 26, 2019 7:33:25 AM\nTo: sharon clark\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: GSoD Introductory Resources and Tutorial Projects\n\nGreetings,\n\n* sharon clark (sclb3@hotmail.com) wrote:\n> I plan to submit a proposal for both the PostgreSQL Introductory Resources and Tutorial projects, but I’m open to learning technologies for ANY other projects listed. Please feel free to contact me with any questions.\n\nThanks for reaching out. As part of the proposal, you'll want to\ninclude a detailed description (a great deal more than what you included\nin this initial email) of the technical writing project. I encourage\nyou to reach out to pgsql-docs list with your specific ideas and\nsuggestions around the topics you're interested, so we can discuss them\nand hopefully help you put together a good project proposal.\n\nThanks!\n\nStephen\n\n\n\n\n\n\n\n\n\n\nHello Stephen,\n\n\n\nThank you for the information. I'll be in touch.\n\n\n\nBest wishes,\n\n\n\nSharon\n\n\n\n\nGet Outlook for Android\n\n\n\nFrom: Stephen Frost <sfrost@snowman.net>\nSent: Sunday, May 26, 2019 7:33:25 AM\nTo: sharon clark\nCc: pgsql-hackers@lists.postgresql.org\nSubject: Re: GSoD Introductory Resources and Tutorial Projects\n \n\n\n\nGreetings,\n\n* sharon clark (sclb3@hotmail.com) wrote:\n> I plan to submit a proposal for both the PostgreSQL Introductory Resources and Tutorial projects, but I’m open to learning technologies for ANY other projects listed. Please feel free to contact me with any questions.\n\nThanks for reaching out. As part of the proposal, you'll want to\ninclude a detailed description (a great deal more than what you included\nin this initial email) of the technical writing project. I encourage\nyou to reach out to pgsql-docs list with your specific ideas and\nsuggestions around the topics you're interested, so we can discuss them\nand hopefully help you put together a good project proposal.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 26 May 2019 14:50:00 +0000",
"msg_from": "sharon clark <sclb3@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GSoD Introductory Resources and Tutorial Projects"
}
] |
[
{
"msg_contents": "Here's a small patch series aimed to both clean up a few misuses of\nstring functions and also to optimise a few things along the way.\n\n0001: Converts various call that use appendPQExpBuffer() that really\nshould use appendPQExrBufferStr(). If there's no formatting then\nusing the former function is a waste of effort.\n\n0002: Similar to 0001 but replaces various appendStringInfo calls with\nappendStringInfoString calls.\n\n0003: Adds a new function named appendStringInfoStringInfo() which\nappends one StringInfo onto another. Various places did this using\nappendStringInfoString(), but that required a needless strlen() call.\nThe length is already known and stored in the StringInfo's len field.\nNot sure if this is the best name for this function, but can't think\nof a better one right now.\n\n0004: inlines appendStringInfoString so that any callers that pass in\na string constant (most of them) can have the strlen() call optimised\nout.\n\nI don't have any benchmarks to show workloads that this improves,\nLikely the chances that it'll slow anything down are pretty remote.\n\nI'll park this here until July.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Sat, 25 May 2019 19:53:35 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Cleaning up and speeding up string functions"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n> Here's a small patch series aimed to both clean up a few misuses of\n> string functions and also to optimise a few things along the way.\n\n> 0001: Converts various call that use appendPQExpBuffer() that really\n> should use appendPQExrBufferStr(). If there's no formatting then\n> using the former function is a waste of effort.\n\n> 0002: Similar to 0001 but replaces various appendStringInfo calls with\n> appendStringInfoString calls.\n\nAgreed on these; we've applied such transformations before.\n\n> 0003: Adds a new function named appendStringInfoStringInfo() which\n> appends one StringInfo onto another. Various places did this using\n> appendStringInfoString(), but that required a needless strlen() call.\n\nI can't get excited about this one unless you can point to places\nwhere the savings is meaningful. Otherwise it's just adding mental\nburden.\n\n> 0004: inlines appendStringInfoString so that any callers that pass in\n> a string constant (most of them) can have the strlen() call optimised\n> out.\n\nHere the cost is code space rather than programmer-visible complexity,\nbut I still doubt that it's worth it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 May 2019 12:50:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "On Sun, 26 May 2019 at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > 0003: Adds a new function named appendStringInfoStringInfo() which\n> > appends one StringInfo onto another. Various places did this using\n> > appendStringInfoString(), but that required a needless strlen() call.\n>\n> I can't get excited about this one unless you can point to places\n> where the savings is meaningful. Otherwise it's just adding mental\n> burden.\n\nThe original idea was just to use appendBinaryStringInfo and make use\nof the StringInfo's len field. Peter mentioned he'd rather seen a\nwrapper function here [1].\n\n> > 0004: inlines appendStringInfoString so that any callers that pass in\n> > a string constant (most of them) can have the strlen() call optimised\n> > out.\n>\n> Here the cost is code space rather than programmer-visible complexity,\n> but I still doubt that it's worth it.\n\nI see on today's master the postgres binary did grow from 8633960\nbytes to 8642504 on my machine using GCC 8.3, so you might be right.\npg_receivewal grew from 96376 to 96424 bytes.\n\n[1] https://www.postgresql.org/message-id/5567B7F5.7050705%40gmx.net\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Sun, 26 May 2019 11:00:41 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "On 2019-May-26, David Rowley wrote:\n\n> On Sun, 26 May 2019 at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Here the cost is code space rather than programmer-visible complexity,\n> > but I still doubt that it's worth it.\n> \n> I see on today's master the postgres binary did grow from 8633960\n> bytes to 8642504 on my machine using GCC 8.3, so you might be right.\n> pg_receivewal grew from 96376 to 96424 bytes.\n\nI suppose one place that could be affected visibly is JSON object\nconstruction (json.c, jsonfuncs.c) that could potentially deal with\nmillions of stringinfo manipulations, but most of those calls don't\nactually use appendStringInfoString with constant values, so it's\nprobably not worth bothering with.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 16:54:24 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "On Thu, 6 Jun 2019 at 08:54, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-May-26, David Rowley wrote:\n>\n> > On Sun, 26 May 2019 at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > > Here the cost is code space rather than programmer-visible complexity,\n> > > but I still doubt that it's worth it.\n> >\n> > I see on today's master the postgres binary did grow from 8633960\n> > bytes to 8642504 on my machine using GCC 8.3, so you might be right.\n> > pg_receivewal grew from 96376 to 96424 bytes.\n>\n> I suppose one place that could be affected visibly is JSON object\n> construction (json.c, jsonfuncs.c) that could potentially deal with\n> millions of stringinfo manipulations, but most of those calls don't\n> actually use appendStringInfoString with constant values, so it's\n> probably not worth bothering with.\n\nWe could probably get the best of both worlds by using a macro and\n__builtin_constant_p() to detect if the string is a const, but I won't\nbe pushing for that unless I find something to make it worthwhile.\n\nFor patch 0004, I think it's likely worth revising so instead of\nadding a new function, make use of appendBinaryStringInfo() and pass\nin the known length. Likely mostly for the xml.c calls.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Thu, 13 Jun 2019 17:24:29 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "On Sun, 26 May 2019 at 04:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n> > Here's a small patch series aimed to both clean up a few misuses of\n> > string functions and also to optimise a few things along the way.\n>\n> > 0001: Converts various call that use appendPQExpBuffer() that really\n> > should use appendPQExrBufferStr(). If there's no formatting then\n> > using the former function is a waste of effort.\n>\n> > 0002: Similar to 0001 but replaces various appendStringInfo calls with\n> > appendStringInfoString calls.\n>\n> Agreed on these; we've applied such transformations before.\n\nI've pushed 0001 and 0002.\n\nInstead of having 0004, how about the attached?\n\nMost of the calls won't improve much performance-wise since they're so\ncheap anyway, but there is xmlconcat(), I imagine that should see some\nspeedup.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Thu, 4 Jul 2019 13:51:06 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "On Thu, 4 Jul 2019 at 13:51, David Rowley <david.rowley@2ndquadrant.com> wrote:\n> Instead of having 0004, how about the attached?\n>\n> Most of the calls won't improve much performance-wise since they're so\n> cheap anyway, but there is xmlconcat(), I imagine that should see some\n> speedup.\n\nI've pushed this after having found a couple more places where the\nlength is known.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n",
"msg_date": "Tue, 23 Jul 2019 00:16:44 +1200",
"msg_from": "David Rowley <david.rowley@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "David Rowley <david.rowley@2ndquadrant.com> writes:\n\n> On Thu, 4 Jul 2019 at 13:51, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>> Instead of having 0004, how about the attached?\n>>\n>> Most of the calls won't improve much performance-wise since they're so\n>> cheap anyway, but there is xmlconcat(), I imagine that should see some\n>> speedup.\n>\n> I've pushed this after having found a couple more places where the\n> length is known.\n\nI noticed a lot of these are appending one StringInfo onto another;\nwould it make sense to introduce a helper funciton\nappendStringInfoStringInfo(StringInfo str, StringInfo str2) to avoid the\n`str.data, str2.len` repetition?\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n",
"msg_date": "Mon, 22 Jul 2019 14:32:45 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> David Rowley <david.rowley@2ndquadrant.com> writes:\n>\n>> On Thu, 4 Jul 2019 at 13:51, David Rowley <david.rowley@2ndquadrant.com> wrote:\n>>> Instead of having 0004, how about the attached?\n>>>\n>>> Most of the calls won't improve much performance-wise since they're so\n>>> cheap anyway, but there is xmlconcat(), I imagine that should see some\n>>> speedup.\n>>\n>> I've pushed this after having found a couple more places where the\n>> length is known.\n>\n> I noticed a lot of these are appending one StringInfo onto another;\n> would it make sense to introduce a helper funciton\n> appendStringInfoStringInfo(StringInfo str, StringInfo str2) to avoid the\n> `str.data, str2.len` repetition?\n\nA bit of grepping only turned up 18 uses, but I was bored and whipped up\nthe attached anyway, in case we decide it's worth it.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law",
"msg_date": "Mon, 22 Jul 2019 16:37:18 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "On 2019-Jul-22, Dagfinn Ilmari Manns�ker wrote:\n\n> ilmari@ilmari.org (Dagfinn Ilmari Manns�ker) writes:\n> \n> > I noticed a lot of these are appending one StringInfo onto another;\n> > would it make sense to introduce a helper funciton\n> > appendStringInfoStringInfo(StringInfo str, StringInfo str2) to avoid the\n> > `str.data, str2.len` repetition?\n> \n> A bit of grepping only turned up 18 uses, but I was bored and whipped up\n> the attached anyway, in case we decide it's worth it.\n\nDavid had already submitted the same thing upthread, and it was rejected\non the grounds that it increases the code space.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 22 Jul 2019 11:41:05 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up and speeding up string functions"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> On 2019-Jul-22, Dagfinn Ilmari Mannsåker wrote:\n>\n>> ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n>> \n>> > I noticed a lot of these are appending one StringInfo onto another;\n>> > would it make sense to introduce a helper funciton\n>> > appendStringInfoStringInfo(StringInfo str, StringInfo str2) to avoid the\n>> > `str.data, str2.len` repetition?\n>> \n>> A bit of grepping only turned up 18 uses, but I was bored and whipped up\n>> the attached anyway, in case we decide it's worth it.\n>\n> David had already submitted the same thing upthread, and it was rejected\n> on the grounds that it increases the code space.\n\nOops, sorry, I missed that. Never mind then.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n",
"msg_date": "Mon, 22 Jul 2019 17:00:43 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: Cleaning up and speeding up string functions"
}
] |
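For reference, the helper debated (and ultimately dropped on code-space grounds) in this thread is only a few lines. Here is a sketch of what such a wrapper would look like, using appendBinaryStringInfo() so the StringInfo's already-known len field is reused instead of being recomputed with strlen(); this mirrors the idea described in the thread rather than either submitted patch verbatim.

```
#include "lib/stringinfo.h"

/*
 * Append the contents of str2 onto str, reusing str2's stored length
 * instead of recomputing it, as appendStringInfoString(str, str2->data)
 * would via strlen().
 */
static inline void
appendStringInfoStringInfo(StringInfo str, StringInfo str2)
{
    appendBinaryStringInfo(str, str2->data, str2->len);
}
```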
[
{
"msg_contents": "Hello hackers,\n\nI've done another round of cross-checking the master branch for new\nunique identifiers/words. As my previous attempt to fix things was not\nnoticed, now I'm focusing on distinct typos.\n1. authenticaion (user-visible string)\n2. becuase\n3. checkinunique\n4. cheep\n5. comparion (user-visible)\n6. comparision\n7. compatiblity\n8. continuescanthat\n9. current_locked_pid (user-visible)\n10. essentally\n11. exptected\n12. funcation\n13. guarantess\n14. HEAP_HASOID\n15. Interfact\n16. minimalslotslot (similar to heapslot)\n17. modifcations\n18. multiplcation\n19. optimised\n20. pased\n21. perfer\n22. relvant\n23. represnting\n24. ski p\n25. unexcpected (user-visible string)\n\nI still hope such fixes are useful and will be accepted.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 25 May 2019 13:26:10 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typos for v12"
},
{
"msg_contents": "On Sat, May 25, 2019 at 3:56 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> Hello hackers,\n>\n> I've done another round of cross-checking the master branch for new\n> unique identifiers/words. As my previous attempt to fix things was not\n> noticed, now I'm focusing on distinct typos.\n> 1. authenticaion (user-visible string)\n> 2. becuase\n> 3. checkinunique\n> 4. cheep\n..\n>\n> I still hope such fixes are useful and will be accepted.\n>\n\nI think it is good to fix these. I haven't verified all but I can\nreview them. Isn't it better to fix them as one patch instead of\nmultiple patches?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 May 2019 16:12:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos for v12"
},
{
"msg_contents": "Hello Amit,\n\n25.05.2019 13:42, Amit Kapila wrote:\n> I think it is good to fix these. I haven't verified all but I can\n> review them. Isn't it better to fix them as one patch instead of\n> multiple patches?\n\nIf a single patch is more convenient, then here it is.\nI thought that separate patches would be more handy in case of any doubts.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 25 May 2019 13:53:11 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos for v12"
},
{
"msg_contents": "On Sat, May 25, 2019 at 4:23 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> Hello Amit,\n>\n> 25.05.2019 13:42, Amit Kapila wrote:\n> > I think it is good to fix these. I haven't verified all but I can\n> > review them. Isn't it better to fix them as one patch instead of\n> > multiple patches?\n>\n> If a single patch is more convenient, then here it is.\n> I thought that separate patches would be more handy in case of any doubts.\n>\n\nI have taken one pass over it and all fixes seem to be correct and got\nintroduced in v12. I will re-verify them once again and then commit\nyour patch if I don't found any problem. In the meantime, if anyone\nelse wants to look at it, that would be great.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 May 2019 19:07:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos for v12"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I have taken one pass over it and all fixes seem to be correct and got\n> introduced in v12. I will re-verify them once again and then commit\n> your patch if I don't found any problem. In the meantime, if anyone\n> else wants to look at it, that would be great.\n\nFWIW, I'd counsel against applying the changes in imath.h/.c, as that\nis not our code, and unnecessary variations from upstream will just\nmake it harder to track upstream. The rest of this looks fine.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 May 2019 11:06:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos for v12"
},
{
"msg_contents": "On Sat, May 25, 2019 at 8:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > I have taken one pass over it and all fixes seem to be correct and got\n> > introduced in v12. I will re-verify them once again and then commit\n> > your patch if I don't found any problem. In the meantime, if anyone\n> > else wants to look at it, that would be great.\n>\n> FWIW, I'd counsel against applying the changes in imath.h/.c, as that\n> is not our code, and unnecessary variations from upstream will just\n> make it harder to track upstream.\n>\n\nThis occurred to me as well while reviewing, but I thought typo fixes\nshould be fine. Anyway, I have excluded those before pushing. So, if\nwe want to fix these, then maybe one has to first get this fixed in\nupstream first and then take from there.\n\n> The rest of this looks fine.\n>\n\nThanks, pushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 26 May 2019 19:19:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos for v12"
},
{
"msg_contents": "26.05.2019 16:49, Amit Kapila wrote:\n> This occurred to me as well while reviewing, but I thought typo fixes\n> should be fine. Anyway, I have excluded those before pushing. So, if\n> we want to fix these, then maybe one has to first get this fixed in\n> upstream first and then take from there.\n>\n>> The rest of this looks fine.\n>>\n> Thanks, pushed.\nThank you Amit!\nI've filed a Pull Request in the imath project:\nhttps://github.com/creachadair/imath/pull/39\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 26 May 2019 18:43:41 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typos for v12"
},
{
"msg_contents": "On Sun, May 26, 2019 at 06:43:41PM +0300, Alexander Lakhin wrote:\n> 26.05.2019 16:49, Amit Kapila wrote:\n> > This occurred to me as well while reviewing, but I thought typo fixes\n> > should be fine. Anyway, I have excluded those before pushing. So, if\n> > we want to fix these, then maybe one has to first get this fixed in\n> > upstream first and then take from there.\n> >\n> >> The rest of this looks fine.\n> >>\n> > Thanks, pushed.\n> Thank you Amit!\n> I've filed a Pull Request in the imath project:\n> https://github.com/creachadair/imath/pull/39\n\nI noticed that it's gone from upstream. I also noticed that upstream\ndid a release in January since the previous pull. Is it worth trying\nto merge those in as they arrive?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 26 May 2019 22:21:49 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix typos for v12"
}
] |
[
{
"msg_contents": "How do I get rid of this slot ?\n\nselect pg_drop_replication_slot('mysub');\nERROR: replication slot \"mysub\" is active for PID 13065\ntest_database=# select * from pg_subscription;\n subdbid | subname | subowner | subenabled | subconninfo | subslotname |\nsubsynccommit | subpublications\n---------+---------+----------+------------+-------------+-------------+---------------+-----------------\n(0 rows)\n\ntest_database=# select * from pg_publication;\n pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete |\npubtruncate\n---------+----------+--------------+-----------+-----------+-----------+-------------\n(0 rows)\n\nDave Cramer\n\nHow do I get rid of this slot ?select pg_drop_replication_slot('mysub');ERROR: replication slot \"mysub\" is active for PID 13065test_database=# select * from pg_subscription; subdbid | subname | subowner | subenabled | subconninfo | subslotname | subsynccommit | subpublications---------+---------+----------+------------+-------------+-------------+---------------+-----------------(0 rows)test_database=# select * from pg_publication; pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate---------+----------+--------------+-----------+-----------+-----------+-------------(0 rows)Dave Cramer",
"msg_date": "Sat, 25 May 2019 09:35:34 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "This seems like very unfriendly behaviour"
},
{
"msg_contents": "On Sat, 25 May 2019 at 08:35, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> How do I get rid of this slot ?\n>\n> select pg_drop_replication_slot('mysub');\n> ERROR: replication slot \"mysub\" is active for PID 13065\n> test_database=# select * from pg_subscription;\n> subdbid | subname | subowner | subenabled | subconninfo | subslotname | subsynccommit | subpublications\n> ---------+---------+----------+------------+-------------+-------------+---------------+-----------------\n> (0 rows)\n>\n> test_database=# select * from pg_publication;\n> pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate\n> ---------+----------+--------------+-----------+-----------+-----------+-------------\n> (0 rows)\n>\n\nCan you check \"select * from pg_stat_replication\"?\n\nalso, what pid is being reported in pg_replication_slot for this slot?\ndo you see a process in pg_stat_activity for that pid? in the os?\n\n-- \nJaime Casanova www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 26 May 2019 00:40:38 -0500",
"msg_from": "Jaime Casanova <jaime.casanova@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: This seems like very unfriendly behaviour"
},
{
"msg_contents": "On Sun, 26 May 2019 at 01:40, Jaime Casanova <jaime.casanova@2ndquadrant.com>\nwrote:\n\n> On Sat, 25 May 2019 at 08:35, Dave Cramer <davecramer@gmail.com> wrote:\n> >\n> > How do I get rid of this slot ?\n> >\n> > select pg_drop_replication_slot('mysub');\n> > ERROR: replication slot \"mysub\" is active for PID 13065\n> > test_database=# select * from pg_subscription;\n> > subdbid | subname | subowner | subenabled | subconninfo | subslotname |\n> subsynccommit | subpublications\n> >\n> ---------+---------+----------+------------+-------------+-------------+---------------+-----------------\n> > (0 rows)\n> >\n> > test_database=# select * from pg_publication;\n> > pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete |\n> pubtruncate\n> >\n> ---------+----------+--------------+-----------+-----------+-----------+-------------\n> > (0 rows)\n> >\n>\n> Can you check \"select * from pg_stat_replication\"?\n>\n> also, what pid is being reported in pg_replication_slot for this slot?\n> do you see a process in pg_stat_activity for that pid? in the os?\n>\n\nWell it turned out it was on receiver. I did get rid of it, but still not a\nfriendly message.\n\nThanks\n\nDave Cramer\n\nOn Sun, 26 May 2019 at 01:40, Jaime Casanova <jaime.casanova@2ndquadrant.com> wrote:On Sat, 25 May 2019 at 08:35, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> How do I get rid of this slot ?\n>\n> select pg_drop_replication_slot('mysub');\n> ERROR: replication slot \"mysub\" is active for PID 13065\n> test_database=# select * from pg_subscription;\n> subdbid | subname | subowner | subenabled | subconninfo | subslotname | subsynccommit | subpublications\n> ---------+---------+----------+------------+-------------+-------------+---------------+-----------------\n> (0 rows)\n>\n> test_database=# select * from pg_publication;\n> pubname | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate\n> ---------+----------+--------------+-----------+-----------+-----------+-------------\n> (0 rows)\n>\n\nCan you check \"select * from pg_stat_replication\"?\n\nalso, what pid is being reported in pg_replication_slot for this slot?\ndo you see a process in pg_stat_activity for that pid? in the os?Well it turned out it was on receiver. I did get rid of it, but still not a friendly message.Thanks Dave Cramer",
"msg_date": "Sun, 26 May 2019 09:49:49 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: This seems like very unfriendly behaviour"
},
{
"msg_contents": "Hi,\n\nOn May 26, 2019 9:49:49 AM EDT, Dave Cramer <davecramer@gmail.com> wrote:\n>On Sun, 26 May 2019 at 01:40, Jaime Casanova\n><jaime.casanova@2ndquadrant.com>\n>wrote:\n>\n>> On Sat, 25 May 2019 at 08:35, Dave Cramer <davecramer@gmail.com>\n>wrote:\n>> >\n>> > How do I get rid of this slot ?\n>> >\n>> > select pg_drop_replication_slot('mysub');\n>> > ERROR: replication slot \"mysub\" is active for PID 13065\n>> > test_database=# select * from pg_subscription;\n>> > subdbid | subname | subowner | subenabled | subconninfo |\n>subslotname |\n>> subsynccommit | subpublications\n>> >\n>>\n>---------+---------+----------+------------+-------------+-------------+---------------+-----------------\n>> > (0 rows)\n>> >\n>> > test_database=# select * from pg_publication;\n>> > pubname | pubowner | puballtables | pubinsert | pubupdate |\n>pubdelete |\n>> pubtruncate\n>> >\n>>\n>---------+----------+--------------+-----------+-----------+-----------+-------------\n>> > (0 rows)\n>> >\n>>\n>> Can you check \"select * from pg_stat_replication\"?\n>>\n>> also, what pid is being reported in pg_replication_slot for this\n>slot?\n>> do you see a process in pg_stat_activity for that pid? in the os?\n>>\n>\n>Well it turned out it was on receiver. I did get rid of it, but still\n>not a\n>friendly message.\n\nWhat behavior would you like? It's similar to how we behave with dropping databases, roles etc.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sun, 26 May 2019 11:23:46 -0400",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: This seems like very unfriendly behaviour"
}
] |
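An aside on the error discussed in the thread above: the "active for PID" message comes from the backend's slot-acquisition path. The sketch below is a rough, hypothetical paraphrase of that check, with names modeled on src/backend/replication/slot.c but not copied from it; it only illustrates why a slot that is in use cannot be dropped.

	/*
	 * Rough, hypothetical paraphrase of the check behind
	 *   ERROR: replication slot "mysub" is active for PID 13065
	 * A slot records the PID of the process using it; acquiring the slot
	 * (which dropping requires) fails while another backend or walsender
	 * holds it.
	 */
	static void
	example_acquire_slot(ReplicationSlot *slot, const char *name)
	{
		pid_t		active_pid;

		SpinLockAcquire(&slot->mutex);
		active_pid = slot->active_pid;	/* 0 means the slot is free */
		if (active_pid == 0)
			slot->active_pid = MyProcPid;	/* claim it for this backend */
		SpinLockRelease(&slot->mutex);

		if (active_pid != 0 && active_pid != MyProcPid)
			ereport(ERROR,
					(errcode(ERRCODE_OBJECT_IN_USE),
					 errmsg("replication slot \"%s\" is active for PID %d",
							name, (int) active_pid)));
	}

In the case above, active_pid presumably belonged to the apply worker on the receiving side, which is why the drop only succeeded once that worker was out of the way.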
[
{
"msg_contents": "Hello hackers,\n\nPlease also consider fixing the following inconsistencies found in new\nv12 code:\n\n1. AT_AddOids - remove (orphaned after 578b2297)\n2. BeingModified ->TM_BeingModified (for consistency)\n3. copy_relation_data -> remove (orphaned after d25f5191)\n4. endblock -> endblk (an internal inconsistency)\n5. ExecContextForcesOids - not changed, but may be should be removed\n(orphaned after 578b2297)\n6. ExecGetResultSlot - remove (not used since introduction in 1a0586de)\n7. existedConstraints & partConstraint -> provenConstraint &\ntestConstraint (sync with implementation)\n8. heap_parallelscan_initialize -> remove the sentence (changed in c2fe139c)\n9. heap_rescan_set_params - remove (orphaned after c2fe139c)\n10. HeapTupleSatisfiesSnapshot -> HeapTupleSatisfiesVisibility (an\ninternal inconsistency)\n11. interpretOidsOption - remove (orphaned after 578b2297)\n12. item_itemno -> iter_itemno (an internal inconsistency)\n13. iterset_is_member -> intset_is_member (an internal inconsistency)\n14. latestRemovedxids -> latestRemovedXids (an inconsistent case)\n15. newrode -> newrnode (an internal inconsistency)\n16. NextSampletuple -> NextSampleTuple (an inconsistent case)\n17. oid_typioparam - remove? (orphaned after 578b2297)\n18. recoveryTargetIsLatest - remove (orphaned after 2dedf4d9)\n19. register_unlink -> register_unlink_segment (an internal inconsistency)\n20. RelationGetOidIndex ? just to remove the paragraph (orphaned after\n578b2297)\n21. slot_getsomeattr -> checked in slot_getsomeattrs ? (an internal\ninconsistency and questionable grammar)\n22. spekToken -> specToken (an internal inconsistency)\n23. SSLdone -> secure_done (sync with implementation)\n24. stats_relation & keep_buf - remove (orphaned after 9a8ee1dc & 5db6df0c0)\n25. SyncRequstHandler -> SyncRequestHandler (a typo)\n26. table_needs_toast_table -> table_relation_needs_toast_table (an\ninternal inconsistency)\n27. XactTopTransactionId -> XactTopFullTransactionId (an internal\ninconsistency)\n\nThe separate patches for all the defects (except 5) are attached. In\ncase a single patch is preferable, I can produce it too.\n\nBest regards,\nAlexander",
"msg_date": "Sat, 25 May 2019 23:50:09 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix inconsistencies for v12"
},
{
"msg_contents": "On Sun, May 26, 2019 at 2:20 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> Hello hackers,\n>\n> Please also consider fixing the following inconsistencies found in new\n> v12 code:\n>\n> 1. AT_AddOids - remove (orphaned after 578b2297)\n> 2. BeingModified ->TM_BeingModified (for consistency)\n>\n\n/*\n- * A tuple is locked if HTSU returns BeingModified.\n+ * A tuple is locked if HTSU returns TM_BeingModified.\n */\n if (htsu == TM_BeingModified)\n\nI think the existing comment is quite clear. I mean we can change it\nif we want, but I don't see the dire need to do it.\n\n\n> 3. copy_relation_data -> remove (orphaned after d25f5191)\n\n- * reason is the same as in tablecmds.c's copy_relation_data(): we're\n- * writing data that's not in shared buffers, and so a CHECKPOINT\n- * occurring during the rewriteheap operation won't have fsync'd data we\n- * wrote before the checkpoint.\n+ * reason is that we're writing data that's not in shared buffers, and\n+ * so a CHECKPOINT occurring during the rewriteheap operation won't\n+ * have fsync'd data we wrote before the checkpoint.\n\nIt seems to me that the same thing is moved to storage.c's\nRelationCopyStorage() in the commit mentioned by you. So, isn't it\nbetter to change the comment accordingly rather than entirely removing\nthe reference to a similar comment in another place?\n\n> 4. endblock -> endblk (an internal inconsistency)\n> 5. ExecContextForcesOids - not changed, but may be should be removed\n> (orphaned after 578b2297)\n\nYes, we should remove the use of ExecContextForcesOids. We are using\nes_result_relation_info at other places as well, so I think we can\nchange the comment to indicate the same.\n\n>\n> The separate patches for all the defects (except 5) are attached. In\n> case a single patch is preferable, I can produce it too.\n>\n\nIt is okay, we can review the individual patches and then combine them\nlater. I couldn't get a chance to review all of them yet. Thanks\nfor working on this.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 May 2019 02:15:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Sun, May 26, 2019 at 2:20 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> 5. ExecContextForcesOids - not changed, but may be should be removed\n>> (orphaned after 578b2297)\n\n> Yes, we should remove the use of ExecContextForcesOids.\n\nUnless grep is failing me, ExecContextForcesOids is in fact gone.\nAll that's left is one obsolete mention in a comment, which should\ncertainly be cleaned up.\n\nHowever, the full context of the mention is\n\n /*\n * call ExecInitNode on each of the plans to be executed and save the\n * results into the array \"mt_plans\". This is also a convenient place to\n * verify that the proposed target relations are valid and open their\n * indexes for insertion of new index entries. Note we *must* set\n * estate->es_result_relation_info correctly while we initialize each\n * sub-plan; ExecContextForcesOids depends on that!\n */\n\nwhich makes one wonder if the code to twiddle\nestate->es_result_relation_info during subplan init is dead code. If so\nwe probably ought to remove it, as it's surely confusing. If it's not\ndead, then this comment ought to be updated to explain the surviving\nreason(s), not simply deleted.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 May 2019 18:29:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On Mon, May 27, 2019 at 3:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Sun, May 26, 2019 at 2:20 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> >> 5. ExecContextForcesOids - not changed, but may be should be removed\n> >> (orphaned after 578b2297)\n>\n> > Yes, we should remove the use of ExecContextForcesOids.\n>\n> Unless grep is failing me, ExecContextForcesOids is in fact gone.\n> All that's left is one obsolete mention in a comment, which should\n> certainly be cleaned up.\n>\n\nThat's right and I was talking about that usage. Initially, I thought\nwe need to change the comment, but on again looking at the code, I\nthink we can remove that comment and the related code, but I am not\ncompletely sure. If we read the comment atop ExecContextForcesOids\n[1] before it was removed, it seems to indicate that the\ninitialization of es_result_relation_info for each subplan is for its\nusage in ExecContextForcesOids. I have run the regression tests with\nthe attached patch (which removes changing es_result_relation_info in\nExecInitModifyTable) and all the tests passed. Do you have any\nthoughts on this matter?\n\n\n[1]\n/*\n ..\n * We assume that if we are generating tuples for INSERT or UPDATE,\n * estate->es_result_relation_info is already set up to describe the target\n * relation. Note that in an UPDATE that spans an inheritance tree, some of\n * the target relations may have OIDs and some not. We have to make the\n * decisions on a per-relation basis as we initialize each of the subplans of\n * the ModifyTable node, so ModifyTable has to set es_result_relation_info\n * while initializing each subplan.\n..\n*/\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 28 May 2019 04:35:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "28.05.2019 2:05, Amit Kapila wrote:\n> On Mon, May 27, 2019 at 3:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Amit Kapila <amit.kapila16@gmail.com> writes:\n>>> On Sun, May 26, 2019 at 2:20 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>>>> 5. ExecContextForcesOids - not changed, but may be should be removed\n>>>> (orphaned after 578b2297)\n>>> Yes, we should remove the use of ExecContextForcesOids.\n>> Unless grep is failing me, ExecContextForcesOids is in fact gone.\n>> All that's left is one obsolete mention in a comment, which should\n>> certainly be cleaned up.\n>>\n> That's right and I was talking about that usage. Initially, I thought\n> we need to change the comment, but on again looking at the code, I\n> think we can remove that comment and the related code, but I am not\n> completely sure. If we read the comment atop ExecContextForcesOids\n> [1] before it was removed, it seems to indicate that the\n> initialization of es_result_relation_info for each subplan is for its\n> usage in ExecContextForcesOids. I have run the regression tests with\n> the attached patch (which removes changing es_result_relation_info in\n> ExecInitModifyTable) and all the tests passed. Do you have any\n> thoughts on this matter?\n>\n>\n> [1]\n> /*\n> ..\n> * We assume that if we are generating tuples for INSERT or UPDATE,\n> * estate->es_result_relation_info is already set up to describe the target\n> * relation. Note that in an UPDATE that spans an inheritance tree, some of\n> * the target relations may have OIDs and some not. We have to make the\n> * decisions on a per-relation basis as we initialize each of the subplans of\n> * the ModifyTable node, so ModifyTable has to set es_result_relation_info\n> * while initializing each subplan.\n> ..\n> */\nI got a coredump with `make installcheck-world` (on postgres_fdw test):\nCore was generated by `postgres: law contrib_regression [local]\nUPDATE '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00007ff1410ece98 in postgresBeginDirectModify\n(node=0x560a563fab30, eflags=0) at postgres_fdw.c:2363\n2363 rtindex =\nestate->es_result_relation_info->ri_RangeTableIndex;\n(gdb) bt\n#0 0x00007ff1410ece98 in postgresBeginDirectModify\n(node=0x560a563fab30, eflags=0) at postgres_fdw.c:2363\n#1 0x0000560a55979e62 in ExecInitForeignScan\n(node=node@entry=0x560a56254dc0, estate=estate@entry=0x560a563f9ae8,\n eflags=eflags@entry=0) at nodeForeignscan.c:227\n#2 0x0000560a5594e123 in ExecInitNode (node=node@entry=0x560a56254dc0,\nestate=estate@entry=0x560a563f9ae8,\n eflags=eflags@entry=0) at execProcnode.c:277\n...\nSo It seems that this is not a dead code.\n\nThis comment initially appeared with c7a165ad in\nnodeAppend.c:ExecInitAppend as following:\n /*\n * call ExecInitNode on each of the plans to be executed and\nsave the\n * results into the array \"initialized\". 
Note we *must* set\n * estate->es_result_relation_info correctly while we initialize\neach\n * sub-plan; ExecAssignResultTypeFromTL depends on that!\n */\n for (i = appendstate->as_firstplan; i <=\nappendstate->as_lastplan; i++)\n {\n appendstate->as_whichplan = i;\n exec_append_initialize_next(node);\n\n initNode = (Plan *) nth(i, appendplans);\n initialized[i] = ExecInitNode(initNode, estate, (Plan *)\nnode);\n }\n\n /*\n * initialize tuple type\n */\n ExecAssignResultTypeFromTL((Plan *) node, &appendstate->cstate);\n appendstate->cstate.cs_ProjInfo = NULL;\n\nand in ExecAssignResultTypeFromTL we see:\n * This is pretty grotty: we need to ensure that result tuples have\n * space for an OID iff they are going to be stored into a relation\n * that has OIDs. We assume that estate->es_result_relation_info\n * is already set up to describe the target relation.\n\nSo the initial comment stated that before calling\nExecAssignResultTypeFromTL we need to have correct\nes_result_relation_infos (but we don't set them in that code).\n\nLater in commit a376a467 we have the ExecContextForcesOids call inside\nExecAssignResultTypeFromTL appeared:\nvoid\nExecAssignResultTypeFromTL(PlanState *planstate)\n{\n bool hasoid;\n TupleDesc tupDesc;\n\n if (ExecContextForcesOids(planstate, &hasoid))\n {\n /* context forces OID choice; hasoid is now set correctly */\n }\nAnd the comment was changed to:\n Note we *must* set\n * estate->es_result_relation_info correctly while we initialize\neach\n * sub-plan; ExecContextForcesOids depends on that!\n\nalthough the code still calls ExecAssignResultTypeFromTL:\n for (i = appendstate->as_firstplan; i <=\nappendstate->as_lastplan; i++)\n {\n appendstate->as_whichplan = i;\n exec_append_initialize_next(appendstate);\n\n initNode = (Plan *) nth(i, node->appendplans);\n appendplanstates[i] = ExecInitNode(initNode, estate);\n }\n\n /*\n * initialize tuple type\n */\n ExecAssignResultTypeFromTL(&appendstate->ps);\n\nLater, in 8a5849b7 the comment moves out of nodeAppend.c:ExecInitAppend\ninto nodeModifyTable.c: ExecInitModifyTable (where we see it now):\n /*\n * call ExecInitNode on each of the plans to be executed and\nsave the\n * results into the array \"mt_plans\". Note we *must* set\n * estate->es_result_relation_info correctly while we initialize\neach\n * sub-plan; ExecContextForcesOids depends on that!\n */\n estate->es_result_relation_info = estate->es_result_relations;\n i = 0;\n foreach(l, node->plans)\n {\n subplan = (Plan *) lfirst(l);\n mtstate->mt_plans[i] = ExecInitNode(subplan, estate,\neflags);\n estate->es_result_relation_info++;\n i++;\n }\n estate->es_result_relation_info = NULL;\n\nThis code actually sets es_result_relation_info, but\nExecAssignResultTypeFromTL not called there anymore. So it seems that\nthis comment at least diverged from the initial author's intent.\nWith this in mind, I am inclined to just remove it.\n\n(On a side note, I agree with your remarks regarding 2 and 3; please\nlook at a better patch for 3 attached.)\n\nBest regards,\nAlexander",
"msg_date": "Tue, 28 May 2019 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On 2019/05/28 14:00, Alexander Lakhin wrote:\n> 28.05.2019 2:05, Amit Kapila wrote:\n>> ... If we read the comment atop ExecContextForcesOids\n>> [1] before it was removed, it seems to indicate that the\n>> initialization of es_result_relation_info for each subplan is for its\n>> usage in ExecContextForcesOids. I have run the regression tests with\n>> the attached patch (which removes changing es_result_relation_info in\n>> ExecInitModifyTable) and all the tests passed. Do you have any\n>> thoughts on this matter?\n>\n> I got a coredump with `make installcheck-world` (on postgres_fdw test):\n> Core was generated by `postgres: law contrib_regression [local]\n> UPDATE '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 0x00007ff1410ece98 in postgresBeginDirectModify\n> (node=0x560a563fab30, eflags=0) at postgres_fdw.c:2363\n> 2363 rtindex =\n> estate->es_result_relation_info->ri_RangeTableIndex;\n> (gdb) bt\n> #0 0x00007ff1410ece98 in postgresBeginDirectModify\n> (node=0x560a563fab30, eflags=0) at postgres_fdw.c:2363\n> #1 0x0000560a55979e62 in ExecInitForeignScan\n> (node=node@entry=0x560a56254dc0, estate=estate@entry=0x560a563f9ae8,\n> eflags=eflags@entry=0) at nodeForeignscan.c:227\n> #2 0x0000560a5594e123 in ExecInitNode (node=node@entry=0x560a56254dc0,\n> estate=estate@entry=0x560a563f9ae8,\n> eflags=eflags@entry=0) at execProcnode.c:277\n> ...\n> So It seems that this is not a dead code.\n\n> ... So it seems that\n> this comment at least diverged from the initial author's intent.\n> With this in mind, I am inclined to just remove it.\n\nSeeing that the crash occurs due to postgres_fdw relying on\nes_result_relation_info being set when initializing a \"direct\nmodification\" plan on foreign tables managed by it, we could change the\ncomment to say that instead. Note that allowing \"direct modification\" of\nforeign tables is a core feature, so there's no postgres_fdw-specific\nbehavior here; there may be other FDWs that support \"direct modification\"\nplans and so likewise rely on es_result_relation_info being set.\n\nHow about:\n\ndiff --git a/src/backend/executor/nodeModifyTable.c\nb/src/backend/executor/nodeModifyTable.c\nindex a3c0e91543..95545c9472 100644\n--- a/src/backend/executor/nodeModifyTable.c\n+++ b/src/backend/executor/nodeModifyTable.c\n@@ -2316,7 +2316,7 @@ ExecInitModifyTable(ModifyTable *node, EState\n*estate, int eflags)\n * verify that the proposed target relations are valid and open their\n * indexes for insertion of new index entries. Note we *must* set\n * estate->es_result_relation_info correctly while we initialize each\n- * sub-plan; ExecContextForcesOids depends on that!\n+ * sub-plan; FDWs may depend on that.\n */\n saved_resultRelInfo = estate->es_result_relation_info;\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 28 May 2019 15:59:28 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On Tue, May 28, 2019 at 12:29 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/05/28 14:00, Alexander Lakhin wrote:\n> > 28.05.2019 2:05, Amit Kapila wrote:\n> >> ... If we read the comment atop ExecContextForcesOids\n> >> [1] before it was removed, it seems to indicate that the\n> >> initialization of es_result_relation_info for each subplan is for its\n> >> usage in ExecContextForcesOids. I have run the regression tests with\n> >> the attached patch (which removes changing es_result_relation_info in\n> >> ExecInitModifyTable) and all the tests passed. Do you have any\n> >> thoughts on this matter?\n> >\n> > I got a coredump with `make installcheck-world` (on postgres_fdw test):\n> > Core was generated by `postgres: law contrib_regression [local]\n> > UPDATE '.\n> > Program terminated with signal SIGSEGV, Segmentation fault.\n> > #0 0x00007ff1410ece98 in postgresBeginDirectModify\n> > (node=0x560a563fab30, eflags=0) at postgres_fdw.c:2363\n> > 2363 rtindex =\n> > estate->es_result_relation_info->ri_RangeTableIndex;\n> > (gdb) bt\n> > #0 0x00007ff1410ece98 in postgresBeginDirectModify\n> > (node=0x560a563fab30, eflags=0) at postgres_fdw.c:2363\n> > #1 0x0000560a55979e62 in ExecInitForeignScan\n> > (node=node@entry=0x560a56254dc0, estate=estate@entry=0x560a563f9ae8,\n> > eflags=eflags@entry=0) at nodeForeignscan.c:227\n> > #2 0x0000560a5594e123 in ExecInitNode (node=node@entry=0x560a56254dc0,\n> > estate=estate@entry=0x560a563f9ae8,\n> > eflags=eflags@entry=0) at execProcnode.c:277\n> > ...\n> > So It seems that this is not a dead code.\n>\n> > ... So it seems that\n> > this comment at least diverged from the initial author's intent.\n> > With this in mind, I am inclined to just remove it.\n>\n> Seeing that the crash occurs due to postgres_fdw relying on\n> es_result_relation_info being set when initializing a \"direct\n> modification\" plan on foreign tables managed by it, we could change the\n> comment to say that instead. Note that allowing \"direct modification\" of\n> foreign tables is a core feature, so there's no postgres_fdw-specific\n> behavior here; there may be other FDWs that support \"direct modification\"\n> plans and so likewise rely on es_result_relation_info being set.\n>\n\n\nCan we ensure some way that only FDW's rely on it not any other part\nof the code?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 May 2019 16:56:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On 2019/05/28 20:26, Amit Kapila wrote:\n> On Tue, May 28, 2019 at 12:29 PM Amit Langote wrote:\n>> Seeing that the crash occurs due to postgres_fdw relying on\n>> es_result_relation_info being set when initializing a \"direct\n>> modification\" plan on foreign tables managed by it, we could change the\n>> comment to say that instead. Note that allowing \"direct modification\" of\n>> foreign tables is a core feature, so there's no postgres_fdw-specific\n>> behavior here; there may be other FDWs that support \"direct modification\"\n>> plans and so likewise rely on es_result_relation_info being set.\n> \n> \n> Can we ensure some way that only FDW's rely on it not any other part\n> of the code?\n\nHmm, I can't think of any way of doing than other than manual inspection.\nWe are sure that no piece of core code relies on it in the ExecInitNode()\ncode path. Apparently FDWs may, as we've found out here. Now that I've\nlooked around, maybe other loadable modules may too, by way of (only?)\nCustom nodes. I don't see any other way to hook into ExecInitNode(), so\nmaybe that's it.\n\nSo, maybe reword a bit as:\n\ndiff --git a/src/backend/executor/nodeModifyTable.c\nb/src/backend/executor/nodeModifyTable.c\nindex a3c0e91543..95545c9472 100644\n--- a/src/backend/executor/nodeModifyTable.c\n+++ b/src/backend/executor/nodeModifyTable.c\n@@ -2316,7 +2316,7 @@ ExecInitModifyTable(ModifyTable *node, EState\n*estate, int eflags)\n * verify that the proposed target relations are valid and open their\n * indexes for insertion of new index entries. Note we *must* set\n * estate->es_result_relation_info correctly while we initialize each\n- * sub-plan; ExecContextForcesOids depends on that!\n+ * sub-plan; external modules such as FDWs may depend on that.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Wed, 29 May 2019 09:42:24 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
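To make the dependency in the reworded comment concrete, here is a hypothetical sketch of the pattern, modeled on the postgresBeginDirectModify() frame in the backtrace earlier in the thread; the names are illustrative, not the actual postgres_fdw source.

	/*
	 * Hypothetical sketch of an FDW's direct-modify begin hook.  It runs
	 * inside ExecInitNode() for a ForeignScan, and it learns which result
	 * relation it is modifying through estate->es_result_relation_info --
	 * hence the segfault when ExecInitModifyTable() stops setting that
	 * field before initializing its sub-plans.
	 */
	static void
	exampleBeginDirectModify(ForeignScanState *node, int eflags)
	{
		EState	   *estate = node->ss.ps.state;
		Index		rtindex;

		(void) eflags;			/* unused in this sketch */

		/* A NULL pointer here reproduces the crash reported above */
		rtindex = estate->es_result_relation_info->ri_RangeTableIndex;

		/*
		 * ... look up the foreign table for rtindex, build the remote
		 * UPDATE/DELETE statement, open the connection, and so on ...
		 */
		(void) rtindex;
	}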
{
"msg_contents": "On Tue, May 28, 2019 at 10:30 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> 28.05.2019 2:05, Amit Kapila wrote:\n> > On Mon, May 27, 2019 at 3:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Amit Kapila <amit.kapila16@gmail.com> writes:\n> >>> On Sun, May 26, 2019 at 2:20 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> >>>> 5. ExecContextForcesOids - not changed, but may be should be removed\n> >>>> (orphaned after 578b2297)\n> >>> Yes, we should remove the use of ExecContextForcesOids.\n> >> Unless grep is failing me, ExecContextForcesOids is in fact gone.\n> >> All that's left is one obsolete mention in a comment, which should\n> >> certainly be cleaned up.\n> >>\n..\n> > */\n> I got a coredump with `make installcheck-world` (on postgres_fdw test):\n>\n\nThanks for noticing this. I have run the tests in parallel mode with\nsomething like make -s check-world -j4 PROVE_FLAGS='-j4'. It didn't\nstop at failure, so I missed to notice it. However, now looking\ncarefully (by redirecting the output to a log file), I could see this.\n\n>\n> (On a side note, I agree with your remarks regarding 2 and 3; please\n> look at a better patch for 3 attached.)\n>\n\nThe new patch looks good to me. However, instead of committing just\nthis one alone, I will review others as well and see which all can be\ncombined and pushed together.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 May 2019 15:12:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On Wed, May 29, 2019 at 6:12 AM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n>\n> On 2019/05/28 20:26, Amit Kapila wrote:\n> > On Tue, May 28, 2019 at 12:29 PM Amit Langote wrote:\n> >> Seeing that the crash occurs due to postgres_fdw relying on\n> >> es_result_relation_info being set when initializing a \"direct\n> >> modification\" plan on foreign tables managed by it, we could change the\n> >> comment to say that instead. Note that allowing \"direct modification\" of\n> >> foreign tables is a core feature, so there's no postgres_fdw-specific\n> >> behavior here; there may be other FDWs that support \"direct modification\"\n> >> plans and so likewise rely on es_result_relation_info being set.\n> >\n> >\n> > Can we ensure some way that only FDW's rely on it not any other part\n> > of the code?\n>\n> Hmm, I can't think of any way of doing than other than manual inspection.\n> We are sure that no piece of core code relies on it in the ExecInitNode()\n> code path. Apparently FDWs may, as we've found out here. Now that I've\n> looked around, maybe other loadable modules may too, by way of (only?)\n> Custom nodes. I don't see any other way to hook into ExecInitNode(), so\n> maybe that's it.\n>\n> So, maybe reword a bit as:\n>\n> diff --git a/src/backend/executor/nodeModifyTable.c\n> b/src/backend/executor/nodeModifyTable.c\n> index a3c0e91543..95545c9472 100644\n> --- a/src/backend/executor/nodeModifyTable.c\n> +++ b/src/backend/executor/nodeModifyTable.c\n> @@ -2316,7 +2316,7 @@ ExecInitModifyTable(ModifyTable *node, EState\n> *estate, int eflags)\n> * verify that the proposed target relations are valid and open their\n> * indexes for insertion of new index entries. Note we *must* set\n> * estate->es_result_relation_info correctly while we initialize each\n> - * sub-plan; ExecContextForcesOids depends on that!\n> + * sub-plan; external modules such as FDWs may depend on that.\n>\n\nI think it will be better to include postgres_fdw in the comment in\nsome way so that if someone wants a concrete example, there is\nsomething to refer to.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 May 2019 15:21:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On 2019/05/30 18:51, Amit Kapila wrote:\n> On Wed, May 29, 2019 at 6:12 AM Amit Langote wrote:\n>> On 2019/05/28 20:26, Amit Kapila wrote:\n>>> Can we ensure some way that only FDW's rely on it not any other part\n>>> of the code?\n>>\n>> Hmm, I can't think of any way of doing than other than manual inspection.\n>> We are sure that no piece of core code relies on it in the ExecInitNode()\n>> code path. Apparently FDWs may, as we've found out here. Now that I've\n>> looked around, maybe other loadable modules may too, by way of (only?)\n>> Custom nodes. I don't see any other way to hook into ExecInitNode(), so\n>> maybe that's it.\n>>\n>> So, maybe reword a bit as:\n>>\n>> diff --git a/src/backend/executor/nodeModifyTable.c\n>> b/src/backend/executor/nodeModifyTable.c\n>> index a3c0e91543..95545c9472 100644\n>> --- a/src/backend/executor/nodeModifyTable.c\n>> +++ b/src/backend/executor/nodeModifyTable.c\n>> @@ -2316,7 +2316,7 @@ ExecInitModifyTable(ModifyTable *node, EState\n>> *estate, int eflags)\n>> * verify that the proposed target relations are valid and open their\n>> * indexes for insertion of new index entries. Note we *must* set\n>> * estate->es_result_relation_info correctly while we initialize each\n>> - * sub-plan; ExecContextForcesOids depends on that!\n>> + * sub-plan; external modules such as FDWs may depend on that.\n>>\n> \n> I think it will be better to include postgres_fdw in the comment in\n> some way so that if someone wants a concrete example, there is\n> something to refer to.\n\nMaybe a good idea, but this will be the first time to mention postgres_fdw\nin the core source code. If you think that's OK, how about the attached?\n\nThanks,\nAmit",
"msg_date": "Fri, 31 May 2019 09:57:44 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/05/30 18:51, Amit Kapila wrote:\n>> I think it will be better to include postgres_fdw in the comment in\n>> some way so that if someone wants a concrete example, there is\n>> something to refer to.\n\n> Maybe a good idea, but this will be the first time to mention postgres_fdw\n> in the core source code. If you think that's OK, how about the attached?\n\nThis wording seems fine to me.\n\nNow that we've beat that item into the ground ... there were a bunch of\nother tweaks suggested in Alexander's initial email. Amit (K), were you\ngoing to review/commit those?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 13:26:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 10:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> > On 2019/05/30 18:51, Amit Kapila wrote:\n> >> I think it will be better to include postgres_fdw in the comment in\n> >> some way so that if someone wants a concrete example, there is\n> >> something to refer to.\n>\n> > Maybe a good idea, but this will be the first time to mention postgres_fdw\n> > in the core source code. If you think that's OK, how about the attached?\n>\n> This wording seems fine to me.\n>\n> Now that we've beat that item into the ground ... there were a bunch of\n> other tweaks suggested in Alexander's initial email. Amit (K), were you\n> going to review/commit those?\n>\n\nYes, I am already reviewing those. I will post my comments today.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Jun 2019 05:56:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "Hi Andres,\n\nI have added you here as some of these are related to commits done by\nyou. So need your opinion on the same. I have mentioned where your\nfeedback will be helpful.\n\nOn Sun, May 26, 2019 at 2:20 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> 6. ExecGetResultSlot - remove (not used since introduction in 1a0586de)\n>\n\nYeah, I also think this is not required. Andres, this API is not\ndefined. Is it intended for some purpose?\n\n> 8. heap_parallelscan_initialize -> remove the sentence (changed in c2fe139c)\n>\n\nThe same check has been moved to table_block_parallelscan_initialize.\nSo, I think instead of removing the sentence you need to change the\nfunction name in the comment.\n\n> 10. HeapTupleSatisfiesSnapshot -> HeapTupleSatisfiesVisibility (an\n> internal inconsistency)\n>\n\n * This is an interface to HeapTupleSatisfiesVacuum that's callable via\n- * HeapTupleSatisfiesSnapshot, so it can be used through a Snapshot.\n+ * HeapTupleSatisfiesVisibility, so it can be used through a Snapshot.\n\nI think now we don't need to write the second half of the comment (\"so\nit can be used through a Snapshot\"). It makes more sense with\nprevious style API.\n\nAnother related point:\n* HeapTupleSatisfiesNonVacuumable\n *\n * True if tuple might be visible to some\ntransaction; false if it's\n * surely dead to everyone, ie, vacuumable.\n *\n * See SNAPSHOT_TOAST's definition for the intended behaviour.\n\nHere, I think instead of SNAPSHOT_TOAST, we should mention\nSNAPSHOT_NON_VACUUMABLE.\n\nAndres, do you have any comments on the proposed changes?\n\n> 14. latestRemovedxids -> latestRemovedXids (an inconsistent case)\n\n* Conjecture: if hitemid is dead then it had xids before the xids\n * marked on LP_NORMAL items. So we just ignore this item and move\n * onto the next, for the purposes of calculating\n- * latestRemovedxids.\n+ * latestRemovedXids.\n\nI think it should be latestRemovedXid.\n\n> 16. NextSampletuple -> NextSampleTuple (an inconsistent case)\n\nI think this doesn't make much difference, but we can fix it so that\nNextSampleTuple's occurrence can be found during grep.\n\n> 20. RelationGetOidIndex ? just to remove the paragraph (orphaned after\n> 578b2297)\n\n- * This is exported separately because there are cases where we want to use\n- * an index that will not be recognized by RelationGetOidIndex: TOAST tables\n- * have indexes that are usable, but have multiple columns and are on\n- * ordinary columns rather than a true OID column. This code will work\n- * anyway, so long as the OID is the index's first column. The caller must\n- * pass in the actual heap attnum of the OID column, however.\n- *\n\nInstead of removing the entire paragraph, how about changing it like\n\"This also handles the special cases where TOAST tables have indexes\nthat are usable, but have multiple columns and are on ordinary columns\nrather than a true OID column. This code will work anyway, so long as\nthe OID is the index's first column. The caller must\npass in the actual heap attnum of the OID column, however.\"\n\nAndres, any suggestions?\n\n> 27. 
XactTopTransactionId -> XactTopFullTransactionId (an internal\n> inconsistency)\n>\n\n- * XactTopTransactionId stores the XID of our toplevel transaction, which\n+ * XactTopFullTransactionId stores the XID of our toplevel transaction, which\n * will be the same as TopTransactionState.transactionId in an ordinary\n\nI think in the above sentence, now we need to use\nTopTransactionState.fullTransactionId.\n\nNote that I agree with your changes for the points where I have not\nresponded anything.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Jun 2019 07:36:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On Tue, Jun 4, 2019 at 7:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Hi Andres,\n>\n> I have added you here as some of these are related to commits done by\n> you. So need your opinion on the same. I have mentioned where your\n> feedback will be helpful.\n>\n> > 10. HeapTupleSatisfiesSnapshot -> HeapTupleSatisfiesVisibility (an\n> > internal inconsistency)\n> >\n>\n> * This is an interface to HeapTupleSatisfiesVacuum that's callable via\n> - * HeapTupleSatisfiesSnapshot, so it can be used through a Snapshot.\n> + * HeapTupleSatisfiesVisibility, so it can be used through a Snapshot.\n>\n> I think now we don't need to write the second half of the comment (\"so\n> it can be used through a Snapshot\"). It makes more sense with\n> previous style API.\n>\n> Another related point:\n> * HeapTupleSatisfiesNonVacuumable\n> *\n> * True if tuple might be visible to some\n> transaction; false if it's\n> * surely dead to everyone, ie, vacuumable.\n> *\n> * See SNAPSHOT_TOAST's definition for the intended behaviour.\n>\n> Here, I think instead of SNAPSHOT_TOAST, we should mention\n> SNAPSHOT_NON_VACUUMABLE.\n>\n> Andres, do you have any comments on the proposed changes?\n>\n>\n> > 20. RelationGetOidIndex ? just to remove the paragraph (orphaned after\n> > 578b2297)\n>\n> - * This is exported separately because there are cases where we want to use\n> - * an index that will not be recognized by RelationGetOidIndex: TOAST tables\n> - * have indexes that are usable, but have multiple columns and are on\n> - * ordinary columns rather than a true OID column. This code will work\n> - * anyway, so long as the OID is the index's first column. The caller must\n> - * pass in the actual heap attnum of the OID column, however.\n> - *\n>\n> Instead of removing the entire paragraph, how about changing it like\n> \"This also handles the special cases where TOAST tables have indexes\n> that are usable, but have multiple columns and are on ordinary columns\n> rather than a true OID column. This code will work anyway, so long as\n> the OID is the index's first column. The caller must\n> pass in the actual heap attnum of the OID column, however.\"\n>\n> Andres, any suggestions?\n>\n\nLeaving the changes related to the above two points, I have combined\nall the changes and fixed the things as per my review in the attached\npatch. Alexander, see if you can verify once the combined patch. I\nam planning to commit the attached by tomorrow and then we can deal\nwith the remaining two. However, in the meantime, if Andres shared\nhis views on the above two points, then we can include the changes\ncorresponding to them as well.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 7 Jun 2019 10:00:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "Hello Amit,\n07.06.2019 7:30, Amit Kapila wrote:\n> Leaving the changes related to the above two points, I have combined\n> all the changes and fixed the things as per my review in the attached\n> patch. Alexander, see if you can verify once the combined patch. I\n> am planning to commit the attached by tomorrow and then we can deal\n> with the remaining two. However, in the meantime, if Andres shared\n> his views on the above two points, then we can include the changes\n> corresponding to them as well.\nAmit, I agree with all of your changes. All I could is to move a dot:\n.. (see contrib/postgres_fdw/postgres_fdw.c: postgresBeginDirectModify()\nas one example).\n\nBest regards,\nAlexander\n\n\n\n",
"msg_date": "Fri, 7 Jun 2019 15:20:37 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix inconsistencies for v12"
},
{
"msg_contents": "On Mon, Jun 3, 2019 at 10:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> > On 2019/05/30 18:51, Amit Kapila wrote:\n> >> I think it will be better to include postgres_fdw in the comment in\n> >> some way so that if someone wants a concrete example, there is\n> >> something to refer to.\n>\n> > Maybe a good idea, but this will be the first time to mention postgres_fdw\n> > in the core source code. If you think that's OK, how about the attached?\n>\n> This wording seems fine to me.\n>\n> Now that we've beat that item into the ground ... there were a bunch of\n> other tweaks suggested in Alexander's initial email. Amit (K), were you\n> going to review/commit those?\n>\n\nPushed most of the changes except for two (point no. 10 and point no.\n20) about which it is better if someone else can also comment. I have\nprovided suggestions about those in my review email [1]. See, if you\nhave any comments on those.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1J9_gdV22dRg-KaH_tnA1bXOUgLWCoJQikmPVyRbMHboA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 9 Jun 2019 10:48:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix inconsistencies for v12"
}
] |
[
{
"msg_contents": "Was this just forgotten?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 26 May 2019 08:12:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "> Subject: Why does pg_checksums -r not have a long option?\n> \n> Was this just forgotten?\n\nProbably? Attached a patch.\n\n-- \nFabien.",
"msg_date": "Sun, 26 May 2019 08:35:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Sun, May 26, 2019 at 08:35:30AM +0200, Fabien COELHO wrote:\n> Probably? Attached a patch.\n\nNo objections with adding a long option for that stuff. But I do have\nan objection with the naming because we have another tool able to work\non relfilenodes:\n$ oid2name --help | grep FILE\n -f, --filenode=FILENODE show info for table with given file node\n\nIn this case, long options are new as of 1aaf532 which is recent, but\n-f is around for a much longer time.\n\nPerhaps we should use the same mapping for consistency?\npg_verify_checksums has been using -r for whatever reason, but as we\ndo a renaming of the binary for v12 we could just fix that\ninconsistency as well. Hence I would suggest that for the option\ndescription:\n\"-f, --filenode=FILENODE\"\n\nI would also switch to the long option name in the tests at the same\ntime, this makes the perl scripts easier to follow.\n--\nMichael",
"msg_date": "Mon, 27 May 2019 10:52:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "Hello Michael-san,\n\n> No objections with adding a long option for that stuff. But I do have\n> an objection with the naming because we have another tool able to work\n> on relfilenodes:\n> $ oid2name --help | grep FILE\n> -f, --filenode=FILENODE show info for table with given file node\n>\n> In this case, long options are new as of 1aaf532 which is recent, but\n> -f is around for a much longer time.\n>\n> Perhaps we should use the same mapping for consistency?\n>\n> pg_verify_checksums has been using -r for whatever reason, but as we\n> do a renaming of the binary for v12 we could just fix that\n> inconsistency as well. Hence I would suggest that for the option\n> description:\n> \"-f, --filenode=FILENODE\"\n\nFine with me, I was not particularly happy with \"relfilenode\", but just \npicked it up for consistency with -r.\n\n> I would also switch to the long option name in the tests at the same\n> time, this makes the perl scripts easier to follow.\n\nOk. Attached.\n\nI've used both -f & --filenode in the test to check that the renaming was \nok. I have reordered the options in the documentation so that they appear \nin alphabetical order, as for some reason --progress was out of it.\n\n-- \nFabien.",
"msg_date": "Mon, 27 May 2019 08:32:21 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
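The mechanical part of the rename is small: one entry in the getopt_long() table plus the matching case label. The stand-alone demo below sketches that mapping; the variable names (such as only_filenode) are hypothetical and the validation is simplified, so this is not the actual pg_checksums source.

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <getopt.h>

	/* Minimal demo: give the short option -f a long alias --filenode. */
	int
	main(int argc, char **argv)
	{
		static struct option long_options[] = {
			{"filenode", required_argument, NULL, 'f'},
			{NULL, 0, NULL, 0}
		};
		char	   *only_filenode = NULL;	/* hypothetical name */
		int			c;

		while ((c = getopt_long(argc, argv, "f:", long_options, NULL)) != -1)
		{
			switch (c)
			{
				case 'f':
					/* filenodes are numeric, so reject anything else */
					if (atoi(optarg) == 0)
					{
						fprintf(stderr, "invalid filenode specification, must be numeric: %s\n",
								optarg);
						exit(1);
					}
					only_filenode = strdup(optarg);
					break;
				default:
					fprintf(stderr, "Try --help for more information.\n");
					exit(1);
			}
		}

		if (only_filenode)
			printf("would verify checksums only for filenode %s\n", only_filenode);
		else
			printf("would verify checksums for all relations\n");
		return 0;
	}

Both "-f 1234" and "--filenode=1234" reach the same 'f' case, which is why the test in the patch exercises each spelling once.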
{
"msg_contents": "> On 27 May 2019, at 03:52, Michael Paquier <michael@paquier.xyz> wrote:\n\n> pg_verify_checksums has been using -r for whatever reason, but as we\n> do a renaming of the binary for v12 we could just fix that\n> inconsistency as well.\n\nThe original patch used -o in pg_verify_checksums, the discussion of which\nstarted in the below mail:\n\nhttps://postgr.es/m/20180228194242.qbjasdtwm2yj5rqg%40alvherre.pgsql\n\nSince -f was already used for “force check”, -r ended up being used. Now that\nthere no longer is a -f flag in pg_checksums, it can be renamed.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 27 May 2019 09:22:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Mon, May 27, 2019 at 09:22:42AM +0200, Daniel Gustafsson wrote:\n> The original patch used -o in pg_verify_checksums, the discussion of which\n> started in the below mail:\n> \n> https://postgr.es/m/20180228194242.qbjasdtwm2yj5rqg%40alvherre.pgsql\n> \n> Since -f was already used for “force check”, -r ended up being used. Now that\n> there no longer is a -f flag in pg_checksums, it can be renamed.\n\nInteresting point. Thanks for sharing.\n--\nMichael",
"msg_date": "Mon, 27 May 2019 17:05:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "Hi,\n\nOn Mon, May 27, 2019 at 09:22:42AM +0200, Daniel Gustafsson wrote:\n> > On 27 May 2019, at 03:52, Michael Paquier <michael@paquier.xyz> wrote:\n> > pg_verify_checksums has been using -r for whatever reason, but as we\n> > do a renaming of the binary for v12 we could just fix that\n> > inconsistency as well.\n> \n> The original patch used -o in pg_verify_checksums, the discussion of which\n> started in the below mail:\n> \n> https://postgr.es/m/20180228194242.qbjasdtwm2yj5rqg%40alvherre.pgsql\n> \n> Since -f was already used for “force check”, -r ended up being used. Now that\n> there no longer is a -f flag in pg_checksums, it can be renamed.\n\nBefore we switch to -f out of consistency with oid2name, we should\nconsider Magnus' argument from\nCABUevEzoeXaxbcYmMZsNF1aqdCwovys7-ChqCuGRY5+nsQZFew@mail.gmail.com IMO:\n\n|I have no problem with changing it to -r. -f seems a bit wrong to me,\n|as it might read as a file. And in the future we might want to implement\n|the ability to take full filename (with path), in which case it would\n|make sense to use -f for that.\n\n\nCheers,\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Mon, 27 May 2019 10:17:43 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Mon, May 27, 2019 at 08:32:21AM +0200, Fabien COELHO wrote:\n> I've used both -f & --filenode in the test to check that the renaming was\n> ok. I have reordered the options in the documentation so that they appear in\n> alphabetical order, as for some reason --progress was out of it.\n\nNo objection to clean up this at the same time.\n\n+ <varlistentry>\n+ <term><option>-f <replaceable>filenode</replaceable></option></term>\n+ <term><option>--filenode=<replaceable>filenode</replaceable></option></term>\n+ <listitem>\n+ <para>\n+ Only validate checksums in the relation with specified relation file node.\n+ </para>\nTwo nits. I would just have been careful about the number of\ncharacters in the line within the <para> markup. And we use\nextensively \"filenode\" in the docs. So the description would become\nas follows:\nOnly validate checksums in the relation with filenode\n<replaceable>filenode</replaceable>.\n\n+ [ 'pg_checksums', '--enable', '-filenode', '1234', '-D', $pgdata ],\nThis fails, but not for the reason it is written for.\n\nIt looks strange to not order --filenode alphabetically in --help.\n\nWith all these issues cleaned up, I got the attached. Does that look\nfine? (I ran pgperltidy and pgindent on top of it.)\n--\nMichael",
"msg_date": "Mon, 27 May 2019 17:33:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "\nBonjour Michael,\n\n> + <varlistentry>\n> + <term><option>-f <replaceable>filenode</replaceable></option></term>\n> + <term><option>--filenode=<replaceable>filenode</replaceable></option></term>\n> + <listitem>\n> + <para>\n> + Only validate checksums in the relation with specified relation file node.\n> + </para>\n> Two nits. I would just have been careful about the number of\n> characters in the line within the <para> markup. And we use\n> extensively \"filenode\" in the docs.\n\nOk.\n\n> + [ 'pg_checksums', '--enable', '-filenode', '1234', '-D', $pgdata ],\n> This fails, but not for the reason it is written for.\n\nIndeed. command_fails() is a little too simplistic, it should really check \nthat the error message is there.\n\n> It looks strange to not order --filenode alphabetically in --help.\n\nForgot, it stayed at the r position for no good reason.\n\n> With all these issues cleaned up, I got the attached. Does that look\n> fine? (I ran pgperltidy and pgindent on top of it.)\n\nWorks for me. Doc build is ok as well.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 27 May 2019 16:22:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Mon, May 27, 2019 at 10:17:43AM +0200, Michael Banck wrote:\n> Before we switch to -f out of consistency with oid2name, we should\n> consider Magnus' argument from\n> CABUevEzoeXaxbcYmMZsNF1aqdCwovys7-ChqCuGRY5+nsQZFew@mail.gmail.com IMO:\n> \n> |I have no problem with changing it to -r. -f seems a bit wrong to me,\n> |as it might read as a file. And in the future we might want to implement\n> |the ability to take full filename (with path), in which case it would\n> |make sense to use -f for that.\n\nYou could also use a long option for that without a one-letter option,\nlike --file-path or such, so reserving a one-letter option for a\nfuture, hypothetical use is not really a stopper in my opinion. In\nconsequence, I think that that it is fine to just use -f/--filenode.\nAny objections or better suggestions from other folks here?\n--\nMichael",
"msg_date": "Tue, 28 May 2019 11:56:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": ">> |I have no problem with changing it to -r. -f seems a bit wrong to me,\n>> |as it might read as a file. And in the future we might want to implement\n>> |the ability to take full filename (with path), in which case it would\n>> |make sense to use -f for that.\n>\n> You could also use a long option for that without a one-letter option, \n> like --file-path or such, so reserving a one-letter option for a future, \n> hypothetical use is not really a stopper in my opinion. In consequence, \n> I think that that it is fine to just use -f/--filenode.\n\nYep. Also, the -f option could be overloaded by guessing whether is \nassociated argument is a number or a path…\n\n> Any objections or better suggestions from other folks here?\n\n-- \nFabien.",
"msg_date": "Tue, 28 May 2019 09:59:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Mon, May 27, 2019 at 04:22:37PM +0200, Fabien COELHO wrote:\n> Works for me. Doc build is ok as well.\n\nThanks, committed.\n--\nMichael",
"msg_date": "Thu, 30 May 2019 17:55:05 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On 2019-05-28 04:56, Michael Paquier wrote:\n> You could also use a long option for that without a one-letter option,\n> like --file-path or such, so reserving a one-letter option for a\n> future, hypothetical use is not really a stopper in my opinion. In\n> consequence, I think that that it is fine to just use -f/--filenode.\n> Any objections or better suggestions from other folks here?\n\nI think -r/--relfilenode was actually a good suggestion. Because it\ndoesn't actually check a *file* but potentially several files (forks,\nsegments). The -f naming makes it sound like it operates on a specific\nfile.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 5 Jun 2019 22:31:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Wed, Jun 05, 2019 at 10:31:54PM +0200, Peter Eisentraut wrote:\n> I think -r/--relfilenode was actually a good suggestion. Because it\n> doesn't actually check a *file* but potentially several files (forks,\n> segments). The -f naming makes it sound like it operates on a specific\n> file.\n\nHmm. I still tend to prefer the -f/--filenode interface as that's\nmore consistent with what we have in the documentation, where\nrelfilenode gets only used when referring to the pg_class attribute.\nYou have a point about the fork types and extra segments, but I am not\nsure that --relfilenode defines that in a better way than --filenode.\n--\nMichael",
"msg_date": "Thu, 6 Jun 2019 18:01:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
},
{
"msg_contents": "On Thu, Jun 06, 2019 at 06:01:21PM +0900, Michael Paquier wrote:\n>On Wed, Jun 05, 2019 at 10:31:54PM +0200, Peter Eisentraut wrote:\n>> I think -r/--relfilenode was actually a good suggestion. Because it\n>> doesn't actually check a *file* but potentially several files (forks,\n>> segments). The -f naming makes it sound like it operates on a specific\n>> file.\n>\n>Hmm. I still tend to prefer the -f/--filenode interface as that's\n>more consistent with what we have in the documentation, where\n>relfilenode gets only used when referring to the pg_class attribute.\n>You have a point about the fork types and extra segments, but I am not\n>sure that --relfilenode defines that in a better way than --filenode.\n>--\n\nI agree. The \"rel\" prefix is there mostly because the other pg_class\nattributes have it too (reltablespace, reltuples, ...) and we use\n\"filenode\" elsewhere. For example we have pg_relation_filenode() function,\noperating with exactly this piece of information.\n\nSo +1 to keep the \"-f/--filenode\" options.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n",
"msg_date": "Sun, 9 Jun 2019 13:02:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Why does pg_checksums -r not have a long option?"
}
] |
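A short aside for readers following along: the value that the eventually committed `-f/--filenode` option takes can be looked up with the `pg_relation_filenode()` function Tomas mentions above. A minimal sketch, assuming an ordinary table named `t1` (the table name is invented for illustration):

```sql
-- Look up the filenode of a relation; this identifies the on-disk
-- file(s) that pg_checksums --filenode would verify.
SELECT pg_relation_filenode('t1');

-- The raw pg_class column holds the same value for ordinary tables,
-- but reads as 0 for mapped relations, which is why the function
-- form above is the safer choice.
SELECT relname, relfilenode FROM pg_class WHERE relname = 't1';
```

The returned number is what one would then pass on the command line, e.g. `pg_checksums --filenode=16384 -D $PGDATA` (the 16384 is a made-up example value), against a cleanly shut down cluster.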
[
{
"msg_contents": "In the v12 beta1 release note:\n\nE.1.3.9. Server Applications\n\n Allow vacuumdb to select tables for vacuum based on...\n\nWhy is vacuumdb listed in \"Server Applications\"? It's in \"PostgreSQL\nClient Applications\" section in our manual.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 26 May 2019 21:53:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "vacuumdb as server application in v12 release note"
},
{
"msg_contents": "On Sun, May 26, 2019 at 09:53:41PM +0900, Tatsuo Ishii wrote:\n> In the v12 beta1 release note:\n> \n> E.1.3.9. Server Applications\n> \n> Allow vacuumdb to select tables for vacuum based on...\n> \n> Why is vacuumdb listed in \"Server Applications\"? It's in \"PostgreSQL\n> Client Applications\" section in our manual.\n\nSorry, fixed, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 13 Jun 2019 22:08:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuumdb as server application in v12 release note"
}
] |
[
{
"msg_contents": "Hi,\n\nI noticed the estimated rows of the base relations during the join\nsearching is *very* different from the estimations in the final plan.\n\nJoin search (rows of the initial_rels):\nRELOPTINFO (ct): rows=1 width=4\nRELOPTINFO (it): rows=1 width=4\nRELOPTINFO (mc): rows=17567 width=32\nRELOPTINFO (mi_idx): rows=1380035 width=8\nRELOPTINFO (t): rows=2528356 width=25\n\nThe final plan:\nSeq Scan on company_type ct\n (cost=0.00..1.05 rows=1 width=4)\nSeq Scan on info_type it\n (cost=0.00..2.41 rows=1 width=4)\nParallel Seq Scan on movie_companies mc\n (cost=0.00..37814.90 rows=7320 width=32)\nParallel Seq Scan on movie_info_idx mi_idx\n (cost=0.00..13685.15 rows=575015 width=8)\nIndex Scan using title_pkey on title t\n (cost=0.43..0.58 rows=1 width=25)\n\nBy looking at the joinrel->rows, I would expect relation t to have\nthe largest size, however, this is not true at all. I wonder what's\ncausing this observation, and how to get estimations close to the\nfinal plan?\n\nThank you,\nDonald Dong\n\n\n",
"msg_date": "Sun, 26 May 2019 10:00:18 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Different row estimations on base rels"
},
{
"msg_contents": "On Sun, May 26, 2019 at 1:00 PM Donald Dong <xdong@csumb.edu> wrote:\n> I noticed the estimated rows of the base relations during the join\n> searching is *very* different from the estimations in the final plan.\n>\n> Join search (rows of the initial_rels):\n> RELOPTINFO (ct): rows=1 width=4\n> RELOPTINFO (it): rows=1 width=4\n> RELOPTINFO (mc): rows=17567 width=32\n> RELOPTINFO (mi_idx): rows=1380035 width=8\n> RELOPTINFO (t): rows=2528356 width=25\n>\n> The final plan:\n> Seq Scan on company_type ct\n> (cost=0.00..1.05 rows=1 width=4)\n> Seq Scan on info_type it\n> (cost=0.00..2.41 rows=1 width=4)\n> Parallel Seq Scan on movie_companies mc\n> (cost=0.00..37814.90 rows=7320 width=32)\n> Parallel Seq Scan on movie_info_idx mi_idx\n> (cost=0.00..13685.15 rows=575015 width=8)\n> Index Scan using title_pkey on title t\n> (cost=0.43..0.58 rows=1 width=25)\n>\n> By looking at the joinrel->rows, I would expect relation t to have\n> the largest size, however, this is not true at all. I wonder what's\n> causing this observation, and how to get estimations close to the\n> final plan?\n\nWell, it's all there in the code. I believe the issue is that the\nfinal estimates are based on the number of rows that will be returned\nfrom the relation, which is often less, and occasionally more, than\nthe total of the rows in the relation. The reason it's often less is\nbecause there might be a WHERE clause or similar which rules out some\nof the rows. The reason it might be more is because a nested loop\ncould return the same rows multiple times.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 16:36:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Different row estimations on base rels"
},
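To make the distinction concrete, here is a hedged illustration (table and data invented) of how the planner's row estimate for a scan reflects the rows expected to *survive* the quals, not the total rows in the relation:

```sql
-- Build a table where exactly one tenth of the rows match the filter.
CREATE TABLE est_demo AS
  SELECT g AS id, g % 10 AS bucket
  FROM generate_series(1, 100000) g;
ANALYZE est_demo;

-- With no qual, the estimate is close to the full row count:
--   Seq Scan on est_demo  (cost=... rows=100000 ...)
EXPLAIN SELECT * FROM est_demo;

-- With a qual, the estimate shrinks to the expected survivors,
-- roughly rows=10000 given statistics on ten distinct bucket values:
EXPLAIN SELECT * FROM est_demo WHERE bucket = 0;
```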
{
"msg_contents": "On May 29, 2019, at 1:36 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> Well, it's all there in the code. I believe the issue is that the\n> final estimates are based on the number of rows that will be returned\n> from the relation, which is often less, and occasionally more, than\n> the total of the rows in the relation. The reason it's often less is\n> because there might be a WHERE clause or similar which rules out some\n> of the rows. The reason it might be more is because a nested loop\n> could return the same rows multiple times.\n\nYes, indeed. I was confused, and I guess I could've thought about it\nabout more before posting here. Thank you for answering this\nquestion!\n\nRegards,\nDonald Dong\n\n\n",
"msg_date": "Wed, 29 May 2019 13:43:56 -0700",
"msg_from": "Donald Dong <xdong@csumb.edu>",
"msg_from_op": true,
"msg_subject": "Re: Different row estimations on base rels"
}
] |
[
{
"msg_contents": "We've had numerous bug reports complaining about the fact that ALTER TABLE\ngenerates subsidiary commands that get executed unconditionally, even\nif they should be discarded due to an IF NOT EXISTS or other condition;\nsee e.g. #14827, #15180, #15670, #15710. In [1] I speculated about\nfixing this by having ALTER TABLE maintain an array of flags that record\nthe results of initial tests for column existence, and then letting it\nconditionalize execution of subcommands on those flags. I started to\nfool around with that concept today, and quickly realized that my\noriginal thought of just adding execute-if-this-flag-is-true markers to\nAlterTableCmd subcommands was insufficient. Most of the problems are with\nindependent commands that execute before or after the main AlterTable,\nand would not have any easy connection to state maintained by AlterTable.\n\nThe way to fix this, I think, is to provide an AlterTableCmd subcommand\ntype that just wraps an arbitrary utility statement, and then we can\nconditionalize execution of such subcommands using the flag mechanism.\nSo instead of generating independent \"before\" and \"after\" statements,\ntransformAlterTableStmt would just produce a single AlterTable with\neverything in its list of subcommands --- but we'd still use the generic\nProcessUtility infrastructure to execute subcommands that correspond\nto existing standalone statements.\n\nLooking into parse_utilcmd.c with an eye to making it do that, I almost\nimmediately ran across bugs we hadn't even known were there in ALTER TABLE\nADD/DROP GENERATED. These have got a different but arguably-related\nflavor of bug: they are making decisions inside transformAlterTableStmt\nthat might be wrong by the time we get to execution. Thus for example\n\nregression=# create table t1 (f1 int);\nCREATE TABLE\nregression=# alter table t1 add column f2 int not null,\nalter column f2 add generated always as identity;\nALTER TABLE\nregression=# insert into t1 values(0);\nERROR: no owned sequence found\n\nThis happens because transformAlterTableStmt thinks it can generate\nthe sequence creation commands for the AT_AddIdentity subcommand,\nand also figures it's okay to just ignore the case where the column\ndoesn't exist. So we create the column but then we don't make the\nsequence. There are similar bugs in AT_SetIdentity processing, and\nI rather suspect that it's also unsafe for AT_AlterColumnType to be\nlooking at the column's attidentity state --- though I couldn't\ndemonstrate a bug in that path, because of the fact that \nAT_AlterColumnType executes in a pass earlier than anything that\ncould change attidentity.\n\nThis can't be fixed just by conditionalizing execution of subcommands,\nbecause we need to know the target column's type in order to set up the\nsequence correctly. So what has to happen to fix these things is to\nmove the decisions, and the creation of the subcommand parsetrees,\ninto ALTER TABLE execution.\n\nThat requires pretty much the same support for recursively calling\nProcessUtility() from AlterTable() that we'd need for the subcommand\nwrapper idea. 
So I went ahead and tackled it as a separate project,\nand attached is the result.\n\nI'm not quite sure if I'm satisfied with the approach shown here.\nI made a struct containing the ProcessUtility parameters that need\nto be passed down through the recursion, originally with the idea\nthat this struct might be completely opaque outside utility.c.\nHowever, there's a good deal of redundancy in that approach ---\nthe relid and stmt parameters of AlterTable() are really redundant\nwith stuff in the struct. So now I'm wondering if it would be better\nto merge all that stuff and just have the struct as AlterTable's sole\nargument. I'm also not very sure whether AlterTableInternal() ought\nto be modified so that it uses or at least creates a valid struct;\nit doesn't *need* to do so today, but maybe someday it will.\n\nAnd the whole thing has a faint air of grottiness about it too.\nThis makes the minimum changes to what we've got now, but I can't\nhelp thinking it'd look different if we'd designed from scratch.\nThe interactions with event triggers seem particularly ad-hoc.\nIt's also ugly that CreateTable's recursion is handled differently\nfrom AlterTable's.\n\nAnybody have thoughts about a different way to approach it?\n\n\t\t\tregards, tom lane\n\n[1] https://postgr.es/m/7824.1525200461@sss.pgh.pa.us",
"msg_date": "Sun, 26 May 2019 18:23:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "On Sun, May 26, 2019 at 6:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Anybody have thoughts about a different way to approach it?\n\nI mean, in an ideal world, I think we'd never call back out to\nProcessUtility() from within AlterTable(). That seems like a pretty\nclear layering violation. I assume the reason we've never tried to do\nbetter is a lack of round tuits and/or sufficient motivation.\n\nIn terms of what we'd do instead, I suppose we'd try to move as much\nas possible inside the ALTER TABLE framework proper and have\neverything call into that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 16:50:49 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, May 26, 2019 at 6:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Anybody have thoughts about a different way to approach it?\n\n> I mean, in an ideal world, I think we'd never call back out to\n> ProcessUtility() from within AlterTable(). That seems like a pretty\n> clear layering violation. I assume the reason we've never tried to do\n> better is a lack of round tuits and/or sufficient motivation.\n\n> In terms of what we'd do instead, I suppose we'd try to move as much\n> as possible inside the ALTER TABLE framework proper and have\n> everything call into that.\n\nHm ... I'm not exactly clear on why that would be a superior solution.\nIt would imply that standalone CREATE INDEX etc would call into the\nALTER TABLE framework --- how is that not equally a layering violation?\n\nAlso, recursive ProcessUtility cases exist independently of this issue,\nin particular in CreateSchemaCommand. My worry about my patch upthread\nis not really that it introduces another one, but that it doesn't do\nanything towards providing a uniform framework/notation for all these\ncases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 May 2019 17:52:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "On Wed, May 29, 2019 at 5:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm ... I'm not exactly clear on why that would be a superior solution.\n> It would imply that standalone CREATE INDEX etc would call into the\n> ALTER TABLE framework --- how is that not equally a layering violation?\n\nWell, the framework could be renamed to something more general, I\nsuppose, but I don't see a *layering* concern.\n\n From my point of view, the DDL code doesn't do a great job separating\nparsing/parse analysis from optimization/execution. The ALTER TABLE\nstuff is actually pretty good in this regard. But when you build\nsomething that is basically a parse tree and pass it to some other\nfunction that thinks that parse tree may well be coming straight from\nthe user, you are not doing a good job distinguishing between a\nstatement and an action which that statement may caused to be\nperformed.\n\n> Also, recursive ProcessUtility cases exist independently of this issue,\n> in particular in CreateSchemaCommand. My worry about my patch upthread\n> is not really that it introduces another one, but that it doesn't do\n> anything towards providing a uniform framework/notation for all these\n> cases.\n\nI'm not really sure I understand this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 18:02:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> From my point of view, the DDL code doesn't do a great job separating\n> parsing/parse analysis from optimization/execution. The ALTER TABLE\n> stuff is actually pretty good in this regard.\n\nMeh. I think a pretty fair characterization of the bug(s) I'm trying to\nfix is \"we separated parse analysis from execution when we should not\nhave, because it leads to parse analysis being done against the wrong\ndatabase state\". So I'm *very* suspicious of any argument that we should\ntry to separate them more, let alone that doing so will somehow fix this\nset of bugs.\n\n>> Also, recursive ProcessUtility cases exist independently of this issue,\n>> in particular in CreateSchemaCommand. My worry about my patch upthread\n>> is not really that it introduces another one, but that it doesn't do\n>> anything towards providing a uniform framework/notation for all these\n>> cases.\n\n> I'm not really sure I understand this.\n\nWell, I tried to wrap what are currently a random set of ProcessUtility\narguments into one struct to reduce the notational burden. But as things\nare set up, that's specific to the ALTER TABLE case. I'm feeling like it\nshould not be, but I'm not very sure where to draw the line between\narguments that should be folded into the struct and ones that shouldn't.\n\nNote that I think there are live bugs in here that are directly traceable\nto not having tried to fold those arguments before. Of the four existing\nrecursive ProcessUtility calls with context = PROCESS_UTILITY_SUBCOMMAND,\ntwo pass down the outer call's \"ParamListInfo params\", and two don't ---\nhow is it not a bug that they don't all behave alike? And none of the\nfour pass down the outer call's QueryEnvironment, which seems like even\nmore of a bug. So it feels like we ought to have a uniform approach\nto what gets passed down during recursion, and enforce it by passing\nall such values in a struct rather than as independent arguments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 May 2019 18:17:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "I applied the 'alter-table-with-recursive-process-utility-calls-wip.patch'\r\non the master(e788e849addd56007a0e75f3b5514f294a0f3bca). And \r\nwhen I test the cases, I find it works well on 'alter table t1 add column\r\nf2 int not null, alter column f2 add generated always as identity' case, \r\nbut it doesn't work on #14827, #15180, #15670, #15710.\r\n\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\nHere is the test result with #14827 failed\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\npostgres=# create table t10 (f1 int);\r\nCREATE TABLE\r\npostgres=# alter table t10 add column f2 int not null,\r\npostgres-# alter column f2 add generated always as identity;\r\nALTER TABLE\r\npostgres=# \r\npostgres=# insert into t10 values(0);\r\nINSERT 0 1\r\npostgres=# create table test_serial ( teststring varchar(5));\r\nCREATE TABLE\r\npostgres=# alter table test_serial add column if not exists uid BIGSERIAL;\r\nALTER TABLE\r\npostgres=# alter table test_serial add column if not exists uid BIGSERIAL;\r\npsql: NOTICE: column \"uid\" of relation \"test_serial\" already exists, skipping\r\nALTER TABLE\r\npostgres=# \r\npostgres=# \\d\r\n List of relations\r\n Schema | Name | Type | Owner \r\n--------+----------------------+----------+--------------\r\n public | t10 | table | lichuancheng\r\n public | t10_f2_seq | sequence | lichuancheng\r\n public | test_serial | table | lichuancheng\r\n public | test_serial_uid_seq | sequence | lichuancheng\r\n public | test_serial_uid_seq1 | sequence | lichuancheng\r\n(5 rows)\r\n\r\npostgres=#\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\nSo it's wrong with a 'test_serial_uid_seq1' sequence to appear.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 13 Jun 2019 08:33:19 +0000",
"msg_from": "movead li <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "movead li <movead.li@highgo.ca> writes:\n> I applied the 'alter-table-with-recursive-process-utility-calls-wip.patch'\n> on the master(e788e849addd56007a0e75f3b5514f294a0f3bca). And \n> when I test the cases, I find it works well on 'alter table t1 add column\n> f2 int not null, alter column f2 add generated always as identity' case, \n> but it doesn't work on #14827, #15180, #15670, #15710.\n\nThis review seems not very on-point, because I made no claim to have fixed\nany of those bugs. The issue at the moment is how to structure the code\nto allow ALTER TABLE to call other utility statements --- or, if we aren't\ngoing to do that as Robert seems not to want to, what exactly we're going\nto do instead.\n\nThe patch at hand does fix some ALTER TABLE ... IDENTITY bugs, because\nfixing those doesn't require any conditional execution of utility\nstatements. But we'll need infrastructure for such conditional execution\nto fix the original bugs. I don't see much point in working on that part\nuntil we have some agreement about how to handle what this patch is\nalready doing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Jul 2019 22:00:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "On Tue, Jul 2, 2019 at 2:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> movead li <movead.li@highgo.ca> writes:\n> > I applied the 'alter-table-with-recursive-process-utility-calls-wip.patch'\n> > on the master(e788e849addd56007a0e75f3b5514f294a0f3bca). And\n> > when I test the cases, I find it works well on 'alter table t1 add column\n> > f2 int not null, alter column f2 add generated always as identity' case,\n> > but it doesn't work on #14827, #15180, #15670, #15710.\n>\n> This review seems not very on-point, because I made no claim to have fixed\n> any of those bugs. The issue at the moment is how to structure the code\n> to allow ALTER TABLE to call other utility statements --- or, if we aren't\n> going to do that as Robert seems not to want to, what exactly we're going\n> to do instead.\n>\n> The patch at hand does fix some ALTER TABLE ... IDENTITY bugs, because\n> fixing those doesn't require any conditional execution of utility\n> statements. But we'll need infrastructure for such conditional execution\n> to fix the original bugs. I don't see much point in working on that part\n> until we have some agreement about how to handle what this patch is\n> already doing.\n\nWith my CF manager hat: I've moved this to the next CF so we can\nclose this one soon, but since it's really a bug report it might be\ngood to get more eyeballs on the problem sooner than September.\n\nWith my hacker hat: Hmm. I haven't looked at the patch, but not\npassing down the QueryEnvironment when recursing is probably my fault,\nand folding all such things into a new mechanism that would avoid such\nbugs in the future sounds like a reasonable approach, if potentially\ncomplicated to back-patch. I'm hoping to come back and look at this\nproperly in a while.\n\n-- \nThomas Munro\nhttps://enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Aug 2019 17:44:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "> This review seems not very on-point, because I made no claim to have fixed\r\n> any of those bugs. The issue at the moment is how to structure the code\r\n\r\nI am sorry for that and I have another question now. I researched the related \r\ncode and find something as below:\r\nCode:\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\ncase AT_AddIdentity:\r\n{\r\n...\r\nattnum = get_attnum(relid, cmd->name);\r\n/*\r\n * if attribute not found, something will error about it\r\n * later\r\n */\r\nif (attnum != InvalidAttrNumber)\r\n generateSerialExtraStmts(&cxt, newdef,\r\n get_atttype(relid, attnum),def->options, true,\r\n NULL, NULL);\r\n...\r\n}\r\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nTest case1:\r\n################################################\r\ncreate table t10 (f1 int);\r\nalter table t10 add column f2 int not null,\r\nalter column f2 add generated always as identity;\r\n################################################\r\nI find that the value of 'attnum' is 0 because now we do not have the 'f2'\r\ncolumn when I run the Test case1, so it can not generate a sequence\r\n(because it can not run the generateSerialExtraStmts function).\r\nYou can see the code annotation that 'something will error about it later',\r\nso I thank it may be an error report instead of executing successfully.\r\n\r\nTest case2:\r\n################################################\r\ncreate table t11 (f1 int);\r\nalter table t11 add column f2 int,\r\nalter column f2 type int8;\r\n################################################ \r\nCode about 'alter column type' have the same code annotation, and\r\nif you run the Test case2, then you can get an error report. I use Test case2\r\nto prove that it may be an error report instead of executing successfully. \r\n\r\n--\r\nMovead.Li\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Mon, 19 Aug 2019 10:57:03 +0000",
"msg_from": "movead li <movead.li@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "On 2019-Aug-01, Thomas Munro wrote:\n\n> With my hacker hat: Hmm. I haven't looked at the patch, but not\n> passing down the QueryEnvironment when recursing is probably my fault,\n> and folding all such things into a new mechanism that would avoid such\n> bugs in the future sounds like a reasonable approach, if potentially\n> complicated to back-patch. I'm hoping to come back and look at this\n> properly in a while.\n\nThomas: Any further input on this? If I understand you correctly,\nyou're not saying that there's anything wrong with Tom's patch, just\nthat you would like to do some further hacking afterwards.\n\nTom: CFbot says this patch doesn't apply anymore. Could you please\nrebase? Also: There's further input from Movead; his proposed test\ncases might be useful to add.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Sep 2019 12:04:57 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Tom: CFbot says this patch doesn't apply anymore. Could you please\n> rebase?\n\nRobert doesn't like the whole approach [1], so I'm not seeing much\npoint in rebasing the current patch. The idea I'd been thinking\nabout instead was to invent a new AlterTableType enum value for\neach type of utility command that we can currently generate as a\nresult of parse analysis of ALTER TABLE, then emit those currently\nseparate commands as AlterTableCmds with \"def\" pointing to the\nrelevant utility-command parsetree, and then add code to ALTER\nTABLE to call the appropriate execution functions directly rather\nthan via ProcessUtility. (This will add significantly more code\nthan what I had, and I'm not convinced it's better, just different.)\n\nI haven't gotten to that yet, and now that the CF has started I'm\nnot sure if I'll have time for it this month. Maybe we should just\nmark the CF entry as RWF for now, or push it out to the next fest.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoa3FzZvWriJmqquvAbf8GxrC9YM9umBb18j5M69iuq9bg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 03 Sep 2019 12:21:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "[ starting to think about this issue again ]\n\nI wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I mean, in an ideal world, I think we'd never call back out to\n>> ProcessUtility() from within AlterTable(). That seems like a pretty\n>> clear layering violation. I assume the reason we've never tried to do\n>> better is a lack of round tuits and/or sufficient motivation.\n\n> ...\n> Also, recursive ProcessUtility cases exist independently of this issue,\n> in particular in CreateSchemaCommand. My worry about my patch upthread\n> is not really that it introduces another one, but that it doesn't do\n> anything towards providing a uniform framework/notation for all these\n> cases.\n\nActually ... looking closer at this, the cases I'm concerned about\n*already* do recursive ProcessUtility calls. Look at utility.c around\nline 1137. The case of interest here is when transformAlterTableStmt\nreturns any subcommands that are not AlterTableStmts. As the code\nstands, ProcessUtility merrily recurses to itself to handle them.\nWhat I was proposing to do was have the recursion happen from inside\nAlterTable(); maybe that's less clean, but surely not by much.\n\nThe thing I think you are actually worried about is the interaction\nwith event triggers, which is already a pretty horrid mess in this\ncode today. I don't really follow the comment here about\n\"ordering of queued commands\". It looks like that comment dates to\nAlvaro's commit b488c580a ... can either of you elucidate that?\n\nAnyway, with the benefit of more time to let this thing percolate\nin my hindbrain, I am thinking that the fundamental error we've made\nis to do transformAlterTableStmt in advance of execution *at all*.\nThe idea I now have is to scrap that, and instead apply the\nparse_utilcmd.c transformations individually to each AlterTable\nsubcommand when it reaches execution in \"phase 2\" of AlterTable().\nIn that way, the bugs associated with interference between different\nAlterTable subcommands touching the same column are removed because\nthe column's catalog state is up-to-date when we do the parse\ntransformations. We can probably also get rid of the problems with\nIF NOT EXISTS, because that check would be made in advance of applying\nparse transformations for a particular subcommand, and thus its\nside-effects would not happen when IF NOT EXISTS fires. I've not\nworked this out in any detail, and there might still be a few ALTER\nbugs this framework doesn't fix --- but I think my original idea\nof \"flags\" controlling AlterTable execution probably isn't needed\nif we go this way.\n\nNow, if we move things around like that, it will have some effects\non what event triggers see --- certainly the order of operations\nat least. But do we feel a need to retain the same sort of\n\"encapsulation\" that is currently happening due to the aforesaid\nlogic in utility.c? I don't fully understand what that's for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Oct 2019 14:18:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "On 2019-Oct-29, Tom Lane wrote:\n\n> The thing I think you are actually worried about is the interaction\n> with event triggers, which is already a pretty horrid mess in this\n> code today. I don't really follow the comment here about\n> \"ordering of queued commands\". It looks like that comment dates to\n> Alvaro's commit b488c580a ... can either of you elucidate that?\n\nThe point of that comment is that if you enqueue the commands as they\nare returned by pg_event_trigger_ddl_commands() (say by writing them to\na table) they must be emitted in an order that allows them to be\nre-executed in a remote server that duplicates this one, and the final\nstate should be \"the same\".\n\n> Now, if we move things around like that, it will have some effects\n> on what event triggers see --- certainly the order of operations\n> at least. But do we feel a need to retain the same sort of\n> \"encapsulation\" that is currently happening due to the aforesaid\n> logic in utility.c? I don't fully understand what that's for.\n\nSadly, the DDL replay logic is not being used for anything at present,\nso I don't have a good test case to ensure that a proposed change is\ngood in this regard. I've been approached by a couple people interested\nin finishing the DDL conversion thing, but no takers so far. I know\nthere's people using code based on the src/test/modules/test_ddl_deparse\nmodule, but not for replicating a server's state to a different server, as\nfar as I know.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 29 Oct 2019 22:10:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Oct-29, Tom Lane wrote:\n>> The thing I think you are actually worried about is the interaction\n>> with event triggers, which is already a pretty horrid mess in this\n>> code today. I don't really follow the comment here about\n>> \"ordering of queued commands\". It looks like that comment dates to\n>> Alvaro's commit b488c580a ... can either of you elucidate that?\n\n> The point of that comment is that if you enqueue the commands as they\n> are returned by pg_event_trigger_ddl_commands() (say by writing them to\n> a table) they must be emitted in an order that allows them to be\n> re-executed in a remote server that duplicates this one, and the final\n> state should be \"the same\".\n\nHm. I don't think I understand what is the use-case behind all this.\nIf \"ALTER TABLE tab DO SOMETHING\" generates some subcommands to do what\nit's supposed to do, and then an event trigger is interested in replaying\nthat ALTER, how is it supposed to avoid having the subcommands happen\ntwice? That is, it seems like we'd be better off to suppress the\ngenerated subcommands from the event stream, because they'd just get\ngenerated again anyway from execution of the primary command. Or, if\nthere's something that is interested in knowing that those subcommands\nhappened, that's fine, but they'd better be marked somehow as informative\nrather than something you want to explicitly replay. (And if they are\njust informative, why is the ordering so critical?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Oct 2019 01:44:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "I wrote:\n> Anyway, with the benefit of more time to let this thing percolate\n> in my hindbrain, I am thinking that the fundamental error we've made\n> is to do transformAlterTableStmt in advance of execution *at all*.\n> The idea I now have is to scrap that, and instead apply the\n> parse_utilcmd.c transformations individually to each AlterTable\n> subcommand when it reaches execution in \"phase 2\" of AlterTable().\n\nAttached is a patch that does things that way. This appears to fix\nall of the previously reported order-of-operations bugs in ALTER\nTABLE, although there's still some squirrely-ness around identity\ncolumns.\n\nMy original thought of postponing all parse analysis into the\nexecution phase turned out to be not quite right. We still want\nto analyze ALTER COLUMN TYPE subcommands before we start doing\nanything. The reason why is that any USING expressions in those\nsubcommands should all be parsed against the table's starting\nrowtype, since those expressions will all be evaluated against\nthat state during a single rewrite pass in phase 3. Fortunately\n(but not coincidentally, I think) the execution-passes design is\n\"DROP, then ALTER COLUMN TYPE, then everything else\", so that this\nis okay.\n\nI had to do some other finagling to get it to work, notably breaking\ndown some of the passes a bit more. This allows us to have a rule\nthat any new subcommands deduced during mid-execution parse analysis\nsteps will be executed in a strictly later pass. It might've been\npossible to allow it to be \"same pass\", but I thought that would\nbe putting an undesirable amount of reliance on the semantics of\nappending to a list that some other function is busy scanning.\n\nWhat I did about the API issues we were arguing about before was\njust to move the logic ProcessUtilitySlow had for handling\nnon-AlterTableStmts generated by ALTER TABLE parse analysis into\na new function that tablecmds.c calls. This doesn't really resolve\nany of the questions I had about event trigger processing, but\nI think it at least doesn't make anything worse. (The event\ntrigger, logical decoding, and sepgsql tests all pass without\nany changes.) It's tempting to consider providing a similar\nAPI for CREATE SCHEMA to use, but I didn't do so here.\n\nThe squirrely-ness around identity is that while this now works:\n\nregression=# CREATE TABLE itest8 (f1 int);\nCREATE TABLE\nregression=# ALTER TABLE itest8\nregression-# ADD COLUMN f2 int NOT NULL,\nregression-# ALTER COLUMN f2 ADD GENERATED ALWAYS AS IDENTITY;\nALTER TABLE\n\nit doesn't work if there's rows in the table:\n\nregression=# CREATE TABLE itest8 (f1 int);\nCREATE TABLE\nregression=# insert into itest8 default values;\nINSERT 0 1\nregression=# ALTER TABLE itest8\n ADD COLUMN f2 int NOT NULL,\n ALTER COLUMN f2 ADD GENERATED ALWAYS AS IDENTITY;\nERROR: column \"f2\" contains null values\n\nThe same would be true if you tried to do the ALTER as two separate\noperations (because the ADD ... NOT NULL, without a default, will\nnaturally fail on a nonempty table). So I don't feel *too* awful\nabout that. But it'd be better if this worked. It'll require\nsome refactoring of where the dependency link from an identity\ncolumn to its sequence gets set up. This patch seems large enough\nas-is, and it covers all the cases we've gotten field complaints\nabout, so I'm content to leave the residual identity issues for later.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 01 Nov 2019 18:26:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "I wrote:\n> [ fix-alter-table-order-of-operations-1.patch ]\n\nThe cfbot noticed that this failed to apply over a recent commit,\nso here's v2. No substantive changes.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 13 Dec 2019 15:02:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "I wrote:\n>> [ fix-alter-table-order-of-operations-1.patch ]\n> The cfbot noticed that this failed to apply over a recent commit,\n> so here's v2. No substantive changes.\n\nAnother rebase required :-(. Still no code changes from v1, but this\ntime I remembered to add a couple more test cases that I'd been\nmeaning to put in, mostly based on bug reports from Manuel Rigger.\n\nI'd kind of like to get this cleared out of my queue soon.\nDoes anyone intend to review it further?\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Dec 2019 12:47:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "I wrote:\n> [ fix-alter-table-order-of-operations-3.patch ]\n\nRebased again, fixing a minor conflict with f595117e2.\n\n> I'd kind of like to get this cleared out of my queue soon.\n> Does anyone intend to review it further?\n\nIf I don't hear objections pretty darn quick, I'm going to\ngo ahead and push this.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 14 Jan 2020 17:27:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Hello\n\nThank you!\n\nI am clearly not a good reviewer for such changes... But for a note: I read the v4 patch and have no useful comments. Good new tests, reasonable code changes to fix multiple bug reports.\n\nThe patch is proposed only for the master branch, right?\n\nregards, Sergei\n\n\n",
"msg_date": "Wed, 15 Jan 2020 19:15:44 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Sergei Kornilov <sk@zsrv.org> writes:\n> I am clearly not a good reviewer for such changes... But for a note: I read the v4 patch and have no useful comments. Good new tests, reasonable code changes to fix multiple bug reports.\n\nThanks for looking!\n\n> The patch is proposed only for the master branch, right?\n\nYes, it seems far too risky for the back branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jan 2020 11:32:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "On 2020-Jan-14, Tom Lane wrote:\n\n> I wrote:\n> > [ fix-alter-table-order-of-operations-3.patch ]\n> \n> Rebased again, fixing a minor conflict with f595117e2.\n> \n> > I'd kind of like to get this cleared out of my queue soon.\n> > Does anyone intend to review it further?\n> \n> If I don't hear objections pretty darn quick, I'm going to\n> go ahead and push this.\n\nI didn't review in detail, but it seems good to me. I especially liked\ngetting rid of the ProcessedConstraint code, and the additional test\ncases.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 14:11:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> I didn't review in detail, but it seems good to me. I especially liked\n> getting rid of the ProcessedConstraint code, and the additional test\n> cases.\n\nThanks for looking!\n\nYeah, all those test cases expose situations where we misbehave\ntoday :-(. I wish this were small enough to be back-patchable,\nbut it's not feasible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jan 2020 13:12:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
},
{
"msg_contents": "I wrote:\n> The squirrely-ness around identity is that while this now works:\n\n> regression=# CREATE TABLE itest8 (f1 int);\n> CREATE TABLE\n> regression=# ALTER TABLE itest8\n> regression-# ADD COLUMN f2 int NOT NULL,\n> regression-# ALTER COLUMN f2 ADD GENERATED ALWAYS AS IDENTITY;\n> ALTER TABLE\n\n> it doesn't work if there's rows in the table:\n\n> regression=# CREATE TABLE itest8 (f1 int);\n> CREATE TABLE\n> regression=# insert into itest8 default values;\n> INSERT 0 1\n> regression=# ALTER TABLE itest8\n> ADD COLUMN f2 int NOT NULL,\n> ALTER COLUMN f2 ADD GENERATED ALWAYS AS IDENTITY;\n> ERROR: column \"f2\" contains null values\n\n> The same would be true if you tried to do the ALTER as two separate\n> operations (because the ADD ... NOT NULL, without a default, will\n> naturally fail on a nonempty table). So I don't feel *too* awful\n> about that. But it'd be better if this worked.\n\nAfter further poking at that, I've concluded that maybe this is not\na bug but operating as designed. Adding the GENERATED property in a\nseparate step is arguably equivalent to setting a plain default in\na separate step, and look at how we handle that:\n\nregression=# create table t1(x int);\nCREATE TABLE\nregression=# insert into t1 values(1);\nINSERT 0 1\nregression=# alter table t1 add column y int default 11,\n alter column y set default 12;\nALTER TABLE\nregression=# table t1;\n x | y \n---+----\n 1 | 11\n(1 row)\n\nThis is documented, rather opaquely perhaps, for the SET DEFAULT\ncase:\n\nSET/DROP DEFAULT\n\n These forms set or remove the default value for a column. Default\n values only apply in subsequent INSERT or UPDATE commands; they do not\n cause rows already in the table to change.\n\nSo the design principle here seems to be that we fill the column\nusing whatever is specified *in the ADD COLUMN subcommand*, and\nany screwing-about in other subcommands just affects what the\nbehavior will be in subsequent INSERT commands. That's a little\nweird but it has potential use-cases. If we attempt to apply the\n\"new\" default immediately then this syntax devolves to having the\nsame effects as a simple ADD-COLUMN-with-default. There's not\na lot of reason to write the longer form if that's what you wanted.\n\nSo I'm now inclined to think that the code is all right. We could\nimprove the documentation, perhaps, with an explicit example.\nAlso, the man page's entry for SET GENERATED says nothing of this,\nbut it likely ought to say the same thing as SET DEFAULT.\n\nAlso, we don't really have any test cases proving it works that way.\nSimple tests, such as the one above, are not too trustworthy because\nthe attmissingval optimization tends to hide what's really happening.\n(I found this out the hard way while messing with a patch to change\nthe behavior --- which I now think we shouldn't do, anyhow.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jan 2020 19:46:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rearranging ALTER TABLE to avoid multi-operations bugs"
}
] |
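As a postscript to Tom's remark that the attmissingval optimization "tends to hide what's really happening", here is a hedged sketch of how one can peek behind it; the catalog columns shown exist in PostgreSQL 11 and later, and the table mirrors the t1 example from the message above:

```sql
CREATE TABLE t1 (x int);
INSERT INTO t1 VALUES (1);
ALTER TABLE t1 ADD COLUMN y int DEFAULT 11,
               ALTER COLUMN y SET DEFAULT 12;

-- No table rewrite happened: the pre-existing row reads y = 11
-- because 11 was stored once in the catalog as the "missing" value,
-- while 12 only became the default for future inserts.
SELECT attname, atthasmissing, attmissingval
FROM pg_attribute
WHERE attrelid = 't1'::regclass AND attname = 'y';
--  attname | atthasmissing | attmissingval
-- ---------+---------------+---------------
--  y       | t             | {11}
```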
[
{
"msg_contents": "Hi,\n\nI noticed returning a modified record in a row-level BEFORE UPDATE trigger\non postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n\nBelow are scenarios similar to postgres_fdw test to reproduce the issue.\n\npostgres=# CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS (dbname 'postgres',port '5432');\npostgres=# CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\npostgres=# create table loc1 (f1 serial, f2 text);\npostgres=# create foreign table rem1 (f1 serial, f2 text)\npostgres-# server loopback options(table_name 'loc1');\n\npostgres=# CREATE FUNCTION trig_row_before_insupdate() RETURNS TRIGGER AS $$\npostgres$# BEGIN\npostgres$# NEW.f2 := NEW.f2 || ' triggered !';\npostgres$# RETURN NEW;\npostgres$# END\npostgres$# $$ language plpgsql;\n\npostgres=# CREATE TRIGGER trig_row_before_insupd BEFORE INSERT OR UPDATE ON rem1\npostgres-# FOR EACH ROW EXECUTE PROCEDURE trig_row_before_insupdate();\n\n-- insert trigger is OK\npostgres=# INSERT INTO rem1 values(1, 'insert');\npostgres=# SELECT * FROM rem1;\n f1 | f2\n----+--------------------\n 1 | insert triggered !\n(1 row)\n\n-- update trigger is OK if we update f2\npostgres=# UPDATE rem1 set f2 = 'update';\npostgres=# SELECT * FROM rem1;\n f1 | f2\n----+--------------------\n 1 | update triggered !\n\n\nWithout attached patch:\n\npostgres=# UPDATE rem1 set f1 = 10;\npostgres=# SELECT * FROM rem1;\n f1 | f2\n----+--------------------\n 10 | update triggered !\n(1 row)\n\nf2 should be updated by trigger, but not.\nThis is because current fdw code adds only columns to RemoteSQL that were\nexplicitly targets of the UPDATE as follows.\n\npostgres=# EXPLAIN (verbose, costs off)\nUPDATE rem1 set f1 = 10;\n QUERY PLAN\n---------------------------------------------------------------------\n Update on public.rem1\n Remote SQL: UPDATE public.loc1 SET f1 = $2 WHERE ctid = $1 <--- not set f2\n -> Foreign Scan on public.rem1\n Output: 10, f2, ctid, rem1.*\n Remote SQL: SELECT f1, f2, ctid FROM public.loc1 FOR UPDATE\n(5 rows)\n\nWith attached patch, f2 is updated by a trigger and \"f2 = $3\" is added to remote SQL\nas follows.\n\npostgres=# UPDATE rem1 set f1 = 10;\npostgres=# select * from rem1;\n f1 | f2\n----+--------------------------------\n 10 | update triggered ! triggered !\n(1 row)\n\npostgres=# EXPLAIN (verbose, costs off)\npostgres-# UPDATE rem1 set f1 = 10;\n QUERY PLAN\n-----------------------------------------------------------------------\n Update on public.rem1\n Remote SQL: UPDATE public.loc1 SET f1 = $2, f2 = $3 WHERE ctid = $1\n -> Foreign Scan on public.rem1\n Output: 10, f2, ctid, rem1.*\n Remote SQL: SELECT f1, f2, ctid FROM public.loc1 FOR UPDATE\n(5 rows)\n\nMy patch adds all columns to a target list of remote update query\nas in INSERT case if a before update trigger exists.\n\nI tried to add only columns modified in trigger to the target list of\na remote update query, but I cannot find simple way to do that because\nupdate query is built during planning phase at postgresPlanForeignModify\nwhile it is difficult to decide which columns are modified by a trigger\nuntil query execution.\n\nRegards,\n\n-- \nShohei Mochizuki\nTOSHIBA CORPORATION",
"msg_date": "Mon, 27 May 2019 10:52:02 +0900",
"msg_from": "Shohei Mochizuki <shohei.mochizuki@toshiba.co.jp>",
"msg_from_op": true,
"msg_subject": "BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Mochizuki-san,\n\nOn 2019/05/27 10:52, Shohei Mochizuki wrote:\n> Hi,\n> \n> I noticed returning a modified record in a row-level BEFORE UPDATE trigger\n> on postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n>\n> Without attached patch:\n> \n> postgres=# UPDATE rem1 set f1 = 10;\n> postgres=# SELECT * FROM rem1;\n> f1 | f2\n> ----+--------------------\n> 10 | update triggered !\n> (1 row)\n> \n> f2 should be updated by trigger, but not.\n\nIndeed. That seems like a bug to me.\n\n> This is because current fdw code adds only columns to RemoteSQL that were\n> explicitly targets of the UPDATE as follows.\n\nYeah. So, the trigger execution correctly modifies the existing tuple\nfetched from the remote server, but those changes are then essentially\ndiscarded by postgres_fdw, that is, postgresExecForeignModify().\n\n> With attached patch, f2 is updated by a trigger and \"f2 = $3\" is added to\n> remote SQL\n> as follows.\n> \n> postgres=# UPDATE rem1 set f1 = 10;\n> postgres=# select * from rem1;\n> f1 | f2\n> ----+--------------------------------\n> 10 | update triggered ! triggered !\n> (1 row)\n> \n> postgres=# EXPLAIN (verbose, costs off)\n> postgres-# UPDATE rem1 set f1 = 10;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Update on public.rem1\n> Remote SQL: UPDATE public.loc1 SET f1 = $2, f2 = $3 WHERE ctid = $1\n> -> Foreign Scan on public.rem1\n> Output: 10, f2, ctid, rem1.*\n> Remote SQL: SELECT f1, f2, ctid FROM public.loc1 FOR UPDATE\n> (5 rows)\n> \n> My patch adds all columns to a target list of remote update query\n> as in INSERT case if a before update trigger exists.\n\nThanks for the patch. It seems to fix the problem as far as I can see.\n\n> I tried to add only columns modified in trigger to the target list of\n> a remote update query, but I cannot find simple way to do that because\n> update query is built during planning phase at postgresPlanForeignModify\n> while it is difficult to decide which columns are modified by a trigger\n> until query execution.\n\nI think that the approach in your patch may be fine, but others may disagree.\n\nWe don't require row triggers' definition to declare which columns of the\ninput row it intends to modify. Without that information, the planner\ncan't determine the exact set of changed columns to transmit to the remote\nserver. So it's too early, for example, for PlanForeignModify() to\nconstruct an optimal update query which transmits only the columns that\nare changed, including those that may be modified by triggers. If the FDW\nhad delayed the construction of the exact update query to\nExecForeignUpdate(), we could build a more optimal update query, because\nby then we will know *all* columns that have changed, including those that\nare changed by BEFORE UPDATE row triggers if any. Maybe other FDWs beside\npostgres_fdw do that already, so it's possible to rejigger postgres_fdw to\ndo that too. But considering that such rejiggering is only necessary for\nefficiency, I'm not sure if others will agree to pursuing it, especially\nif it requires too much code change. Also, in the worst case, we'll end\nup generating new query for every row being changed, because the trigger\nmay change different columns for different rows based on some condition.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Mon, 27 May 2019 17:04:33 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> On 2019/05/27 10:52, Shohei Mochizuki wrote:\n>> I noticed returning a modified record in a row-level BEFORE UPDATE trigger\n>> on postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n>> This is because current fdw code adds only columns to RemoteSQL that were\n>> explicitly targets of the UPDATE as follows.\n\n> Yeah. So, the trigger execution correctly modifies the existing tuple\n> fetched from the remote server, but those changes are then essentially\n> discarded by postgres_fdw, that is, postgresExecForeignModify().\n\n> ... Also, in the worst case, we'll end\n> up generating new query for every row being changed, because the trigger\n> may change different columns for different rows based on some condition.\n\nPerhaps, if the table has relevant BEFORE triggers, we should just abandon\nour attempts to optimize away fetching/storing all columns? It seems like\nanother potential hazard here is a trigger needing to read a column that\nis not mentioned in the SQL query.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 May 2019 09:02:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "On 2019/05/27 22:02, Tom Lane wrote:\n> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>> On 2019/05/27 10:52, Shohei Mochizuki wrote:\n>>> I noticed returning a modified record in a row-level BEFORE UPDATE trigger\n>>> on postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n>>> This is because current fdw code adds only columns to RemoteSQL that were\n>>> explicitly targets of the UPDATE as follows.\n> \n>> Yeah. So, the trigger execution correctly modifies the existing tuple\n>> fetched from the remote server, but those changes are then essentially\n>> discarded by postgres_fdw, that is, postgresExecForeignModify().\n> \n>> ... Also, in the worst case, we'll end\n>> up generating new query for every row being changed, because the trigger\n>> may change different columns for different rows based on some condition.\n> \n> Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n> our attempts to optimize away fetching/storing all columns? It seems like\n> another potential hazard here is a trigger needing to read a column that\n> is not mentioned in the SQL query.\n\nThe fetching side is fine, because rewriteTargetListUD() adds a\nwhole-row-var to the target list when the UPDATE / DELETE target is a\nforeign table *and* there is a row trigger on the table. postgres_fdw\nsees that and constructs the query to fetch all columns.\n\nSo, the only problem here is the optimizing away of storing all columns,\nwhich the Mochizuki-san's patch seems enough to fix.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 28 May 2019 12:54:34 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
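For reference, the gist of the fix under discussion is a small change inside postgresPlanForeignModify(): when the target relation has a BEFORE ROW UPDATE trigger, ship all non-dropped columns in the remote UPDATE, just as is already done for INSERT. The following is a sketch paraphrasing the patch (using the function's local variables operation, rel, and targetAttrs), not the committed text verbatim:

/* In postgresPlanForeignModify(), when building targetAttrs: */
if (operation == CMD_INSERT ||
    (operation == CMD_UPDATE &&
     rel->trigdesc &&
     rel->trigdesc->trig_update_before_row))
{
    /*
     * Transmit all columns: a BEFORE ROW UPDATE trigger might change
     * values for columns that are not explicit targets of the UPDATE.
     */
    TupleDesc   tupdesc = RelationGetDescr(rel);
    int         attnum;

    for (attnum = 1; attnum <= tupdesc->natts; attnum++)
    {
        Form_pg_attribute attr = TupleDescAttr(tupdesc, attnum - 1);

        if (!attr->attisdropped)
            targetAttrs = lappend_int(targetAttrs, attnum);
    }
}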
{
"msg_contents": "On 2019/05/28 12:54, Amit Langote wrote:\n> On 2019/05/27 22:02, Tom Lane wrote:\n>> Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n>>> On 2019/05/27 10:52, Shohei Mochizuki wrote:\n>>>> I noticed returning a modified record in a row-level BEFORE UPDATE trigger\n>>>> on postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n>>>> This is because current fdw code adds only columns to RemoteSQL that were\n>>>> explicitly targets of the UPDATE as follows.\n>>\n>>> Yeah. So, the trigger execution correctly modifies the existing tuple\n>>> fetched from the remote server, but those changes are then essentially\n>>> discarded by postgres_fdw, that is, postgresExecForeignModify().\n>>\n>>> ... Also, in the worst case, we'll end\n>>> up generating new query for every row being changed, because the trigger\n>>> may change different columns for different rows based on some condition.\n>>\n>> Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n>> our attempts to optimize away fetching/storing all columns? It seems like\n>> another potential hazard here is a trigger needing to read a column that\n>> is not mentioned in the SQL query.\n> \n> The fetching side is fine, because rewriteTargetListUD() adds a\n> whole-row-var to the target list when the UPDATE / DELETE target is a\n> foreign table *and* there is a row trigger on the table. postgres_fdw\n> sees that and constructs the query to fetch all columns.\n> \n> So, the only problem here is the optimizing away of storing all columns,\n> which the Mochizuki-san's patch seems enough to fix.\n\nAmit-san, Tom,\nThanks for the comments.\n\nI checked other scenario. If a foreign table has AFTER trigger, remote update\nquery must return all columns and these cases are added at deparseReturningList\nand covered by following existing test cases.\n\nEXPLAIN (verbose, costs off)\nUPDATE rem1 set f2 = ''; -- can't be pushed down\n QUERY PLAN\n-------------------------------------------------------------------------------\n Update on public.rem1\n Remote SQL: UPDATE public.loc1 SET f2 = $2 WHERE ctid = $1 RETURNING f1, f2\n -> Foreign Scan on public.rem1\n Output: f1, ''::text, ctid, rem1.*\n Remote SQL: SELECT f1, f2, ctid FROM public.loc1 FOR UPDATE\n(5 rows)\n\n\nRegards,\n\n-- \nShohei Mochizuki\nTOSHIBA CORPORATION\n\n\n",
"msg_date": "Tue, 28 May 2019 13:10:45 +0900",
"msg_from": "Shohei Mochizuki <shohei.mochizuki@toshiba.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Mochizuki-san,\n\nOn 2019/05/28 13:10, Shohei Mochizuki wrote:\n> On 2019/05/28 12:54, Amit Langote wrote:\n>> On 2019/05/27 22:02, Tom Lane wrote:\n>>> Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n>>> our attempts to optimize away fetching/storing all columns? It seems like\n>>> another potential hazard here is a trigger needing to read a column that\n>>> is not mentioned in the SQL query.\n>>\n>> The fetching side is fine, because rewriteTargetListUD() adds a\n>> whole-row-var to the target list when the UPDATE / DELETE target is a\n>> foreign table *and* there is a row trigger on the table. postgres_fdw\n>> sees that and constructs the query to fetch all columns.\n>>\n>> So, the only problem here is the optimizing away of storing all columns,\n>> which the Mochizuki-san's patch seems enough to fix.\n> \n> Amit-san, Tom,\n> Thanks for the comments.\n> \n> I checked other scenario. If a foreign table has AFTER trigger, remote update\n> query must return all columns and these cases are added at\n> deparseReturningList\n> and covered by following existing test cases.\n> \n> EXPLAIN (verbose, costs off)\n> UPDATE rem1 set f2 = ''; -- can't be pushed down\n> QUERY PLAN\n> -------------------------------------------------------------------------------\n> \n> Update on public.rem1\n> Remote SQL: UPDATE public.loc1 SET f2 = $2 WHERE ctid = $1 RETURNING\n> f1, f2\n> -> Foreign Scan on public.rem1\n> Output: f1, ''::text, ctid, rem1.*\n> Remote SQL: SELECT f1, f2, ctid FROM public.loc1 FOR UPDATE\n> (5 rows)\n\nAh, I had missed the AFTER triggers case, which seems to be working fine\nas you've shown here.\n\nThanks,\nAmit\n\n\n\n",
"msg_date": "Tue, 28 May 2019 13:23:49 +0900",
"msg_from": "Amit Langote <Langote_Amit_f8@lab.ntt.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Hi,\n\nOn Tue, May 28, 2019 at 12:54 PM Amit Langote\n<Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> On 2019/05/27 22:02, Tom Lane wrote:\n> > Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> >> On 2019/05/27 10:52, Shohei Mochizuki wrote:\n> >>> I noticed returning a modified record in a row-level BEFORE UPDATE trigger\n> >>> on postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n> >>> This is because current fdw code adds only columns to RemoteSQL that were\n> >>> explicitly targets of the UPDATE as follows.\n> >\n> >> Yeah. So, the trigger execution correctly modifies the existing tuple\n> >> fetched from the remote server, but those changes are then essentially\n> >> discarded by postgres_fdw, that is, postgresExecForeignModify().\n\n> > Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n> > our attempts to optimize away fetching/storing all columns? It seems like\n> > another potential hazard here is a trigger needing to read a column that\n> > is not mentioned in the SQL query.\n\n> So, the only problem here is the optimizing away of storing all columns,\n> which the Mochizuki-san's patch seems enough to fix.\n\nWill look into the patch after returning from PGCon, unless somebody wants to.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 28 May 2019 15:40:48 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "On Tue, May 28, 2019 at 3:40 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 28, 2019 at 12:54 PM Amit Langote\n> <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > On 2019/05/27 22:02, Tom Lane wrote:\n> > > Amit Langote <Langote_Amit_f8@lab.ntt.co.jp> writes:\n> > >> On 2019/05/27 10:52, Shohei Mochizuki wrote:\n> > >>> I noticed returning a modified record in a row-level BEFORE UPDATE trigger\n> > >>> on postgres_fdw foreign tables do not work. Attached patch fixes this issue.\n> > >>> This is because current fdw code adds only columns to RemoteSQL that were\n> > >>> explicitly targets of the UPDATE as follows.\n> > >\n> > >> Yeah. So, the trigger execution correctly modifies the existing tuple\n> > >> fetched from the remote server, but those changes are then essentially\n> > >> discarded by postgres_fdw, that is, postgresExecForeignModify().\n>\n> > > Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n> > > our attempts to optimize away fetching/storing all columns? It seems like\n> > > another potential hazard here is a trigger needing to read a column that\n> > > is not mentioned in the SQL query.\n>\n> > So, the only problem here is the optimizing away of storing all columns,\n> > which the Mochizuki-san's patch seems enough to fix.\n\nYeah, I think so too, because in UPDATE, we fetch all columns from the\nremote (even if the target table doesn't have relevant triggers).\n\n> Will look into the patch after returning from PGCon, unless somebody wants to.\n\nI'll look into the patch more closely tomorrow. Sorry for the delay.\nAs I said in another email today, I felt a bit under the weather last\nweek.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 10 Jun 2019 21:04:05 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Fujita-san,\n\nThanks for the comments.\n\nOn Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Tue, May 28, 2019 at 12:54 PM Amit Langote\n> > <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > > On 2019/05/27 22:02, Tom Lane wrote:\n> > > > Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n> > > > our attempts to optimize away fetching/storing all columns? It seems like\n> > > > another potential hazard here is a trigger needing to read a column that\n> > > > is not mentioned in the SQL query.\n> >\n> > > So, the only problem here is the optimizing away of storing all columns,\n> > > which the Mochizuki-san's patch seems enough to fix.\n>\n> Yeah, I think so too, because in UPDATE, we fetch all columns from the\n> remote (even if the target table doesn't have relevant triggers).\n\nHmm, your parenthetical remark contradicts my observation. I can see\nthat not all columns are fetched if there are no triggers present.\n\ncreate extension postgres_fdw ;\ncreate server loopback foreign data wrapper postgres_fdw ;\ncreate user mapping for current_user server loopback;\ncreate table loc1 (a int, b int);\ncreate foreign table rem1 (a int, b int generated always as (a+1)\nstored) server loopback options (table_name 'loc1');\n\nexplain verbose update rem1 set a = 1;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────\n Update on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n -> Foreign Scan on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n Output: 1, b, ctid\n Remote SQL: SELECT b, ctid FROM public.loc1 FOR UPDATE\n(5 rows)\n\nwhereas, all columns are fetched if a trigger is defined:\n\ncreate or replace function trigfunc() returns trigger as $$ begin\nraise notice '%', new; return new; end; $$ language plpgsql;\ncreate trigger rem1_trig before insert or update on rem1 for each row\nexecute function trigfunc();\n\nexplain verbose update rem1 set a = 1;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────\n Update on public.rem1 (cost=100.00..147.23 rows=1241 width=46)\n Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n -> Foreign Scan on public.rem1 (cost=100.00..147.23 rows=1241 width=46)\n Output: 1, b, ctid, rem1.*\n Remote SQL: SELECT a, b, ctid FROM public.loc1 FOR UPDATE\n(5 rows)\n\nAm I missing something?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 11 Jun 2019 10:29:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "I forgot to send this by \"Reply ALL\".\n\nOn Tue, Jun 11, 2019 at 10:51 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n>\n> Amit-san,\n>\n> On Tue, Jun 11, 2019 at 10:30 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > On Tue, May 28, 2019 at 12:54 PM Amit Langote\n> > > > <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > > > > On 2019/05/27 22:02, Tom Lane wrote:\n> > > > > > Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n> > > > > > our attempts to optimize away fetching/storing all columns? It seems like\n> > > > > > another potential hazard here is a trigger needing to read a column that\n> > > > > > is not mentioned in the SQL query.\n> > > >\n> > > > > So, the only problem here is the optimizing away of storing all columns,\n> > > > > which the Mochizuki-san's patch seems enough to fix.\n> > >\n> > > Yeah, I think so too, because in UPDATE, we fetch all columns from the\n> > > remote (even if the target table doesn't have relevant triggers).\n> >\n> > Hmm, your parenthetical remark contradicts my observation. I can see\n> > that not all columns are fetched if there are no triggers present.\n> >\n> > create extension postgres_fdw ;\n> > create server loopback foreign data wrapper postgres_fdw ;\n> > create user mapping for current_user server loopback;\n> > create table loc1 (a int, b int);\n> > create foreign table rem1 (a int, b int generated always as (a+1)\n> > stored) server loopback options (table_name 'loc1');\n> >\n> > explain verbose update rem1 set a = 1;\n> > QUERY PLAN\n> > ─────────────────────────────────────────────────────────────────────────────\n> > Update on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n> > Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n> > -> Foreign Scan on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n> > Output: 1, b, ctid\n> > Remote SQL: SELECT b, ctid FROM public.loc1 FOR UPDATE\n> > (5 rows)\n>\n> Sorry, my explanation was not good; I should have said that in UPDATE,\n> we fetch columns not mentioned in the SQL query as well (even if the\n> target table doesn't have relevant triggers), so there would be no\n> hazard Tom mentioned above, IIUC.\n>\n> Best regards,\n> Etsuro Fujita\n\n\n",
"msg_date": "Tue, 11 Jun 2019 11:01:05 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "> On Tue, Jun 11, 2019 at 10:51 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Tue, Jun 11, 2019 at 10:30 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > > On Tue, May 28, 2019 at 12:54 PM Amit Langote\n> > > > > <Langote_Amit_f8@lab.ntt.co.jp> wrote:\n> > > > > > On 2019/05/27 22:02, Tom Lane wrote:\n> > > > > > > Perhaps, if the table has relevant BEFORE triggers, we should just abandon\n> > > > > > > our attempts to optimize away fetching/storing all columns? It seems like\n> > > > > > > another potential hazard here is a trigger needing to read a column that\n> > > > > > > is not mentioned in the SQL query.\n> > > > >\n> > > > > > So, the only problem here is the optimizing away of storing all columns,\n> > > > > > which the Mochizuki-san's patch seems enough to fix.\n> > > >\n> > > > Yeah, I think so too, because in UPDATE, we fetch all columns from the\n> > > > remote (even if the target table doesn't have relevant triggers).\n> > >\n> > > Hmm, your parenthetical remark contradicts my observation. I can see\n> > > that not all columns are fetched if there are no triggers present.\n\n[ ... ]\n\n> > Sorry, my explanation was not good; I should have said that in UPDATE,\n> > we fetch columns not mentioned in the SQL query as well (even if the\n> > target table doesn't have relevant triggers), so there would be no\n> > hazard Tom mentioned above, IIUC.\n\nSorry but I still don't understand. Sure, *some* columns of the table\nnot present in the UPDATE statement are fetched, but the column(s)\nbeing assigned to are not fetched.\n\n-- before creating a trigger\nexplain verbose update rem1 set a = 1;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────\n Update on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n -> Foreign Scan on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n Output: 1, b, ctid\n Remote SQL: SELECT b, ctid FROM public.loc1 FOR UPDATE\n\nIn this case, column 'a' is not present in the rows that are fetched\nto be updated, because it's only assigned to and not referenced\nanywhere (such as in WHERE clauses). Which is understandable, because\nfetching it would be pointless.\n\nIf there is a trigger present though, the trigger may want to\nreference 'a' in the OLD rows, so it's fetched along with any other\ncolumns that are present in the table, because they may be referenced\ntoo.\n\n-- after creating a trigger\nexplain verbose update rem1 set a = 1;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────\n Update on public.rem1 (cost=100.00..147.23 rows=1241 width=46)\n Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n -> Foreign Scan on public.rem1 (cost=100.00..147.23 rows=1241 width=46)\n Output: 1, b, ctid, rem1.*\n Remote SQL: SELECT a, b, ctid FROM public.loc1 FOR UPDATE\n(5 rows)\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 11 Jun 2019 13:31:13 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Amit-san,\n\nOn Tue, Jun 11, 2019 at 1:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Jun 11, 2019 at 10:51 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > Sorry, my explanation was not good; I should have said that in UPDATE,\n> > > we fetch columns not mentioned in the SQL query as well (even if the\n> > > target table doesn't have relevant triggers), so there would be no\n> > > hazard Tom mentioned above, IIUC.\n>\n> Sorry but I still don't understand. Sure, *some* columns of the table\n> not present in the UPDATE statement are fetched, but the column(s)\n> being assigned to are not fetched.\n>\n> -- before creating a trigger\n> explain verbose update rem1 set a = 1;\n> QUERY PLAN\n> ─────────────────────────────────────────────────────────────────────────────\n> Update on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n> Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n> -> Foreign Scan on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n> Output: 1, b, ctid\n> Remote SQL: SELECT b, ctid FROM public.loc1 FOR UPDATE\n>\n> In this case, column 'a' is not present in the rows that are fetched\n> to be updated, because it's only assigned to and not referenced\n> anywhere (such as in WHERE clauses). Which is understandable, because\n> fetching it would be pointless.\n\nRight, but what I'm saying here is what you call \"some columns\". For\nUPDATE, the planner adds any unassigned columns to the targetlist (see\nexpand_targetlist()), so the reltarget for the target relation would\ninclude such columns, leading to fetching them from the remote in\npostgres_fdw even if the target table doesn't have relevant triggers.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 11 Jun 2019 18:09:20 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Fujita-san,\n\nOn Tue, Jun 11, 2019 at 6:09 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, Jun 11, 2019 at 1:31 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Tue, Jun 11, 2019 at 10:51 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > Sorry, my explanation was not good; I should have said that in UPDATE,\n> > > > we fetch columns not mentioned in the SQL query as well (even if the\n> > > > target table doesn't have relevant triggers), so there would be no\n> > > > hazard Tom mentioned above, IIUC.\n> >\n> > Sorry but I still don't understand. Sure, *some* columns of the table\n> > not present in the UPDATE statement are fetched, but the column(s)\n> > being assigned to are not fetched.\n> >\n> > -- before creating a trigger\n> > explain verbose update rem1 set a = 1;\n> > QUERY PLAN\n> > ─────────────────────────────────────────────────────────────────────────────\n> > Update on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n> > Remote SQL: UPDATE public.loc1 SET a = $2, b = $3 WHERE ctid = $1\n> > -> Foreign Scan on public.rem1 (cost=100.00..182.27 rows=2409 width=14)\n> > Output: 1, b, ctid\n> > Remote SQL: SELECT b, ctid FROM public.loc1 FOR UPDATE\n> >\n> > In this case, column 'a' is not present in the rows that are fetched\n> > to be updated, because it's only assigned to and not referenced\n> > anywhere (such as in WHERE clauses). Which is understandable, because\n> > fetching it would be pointless.\n>\n> Right, but what I'm saying here is what you call \"some columns\". For\n> UPDATE, the planner adds any unassigned columns to the targetlist (see\n> expand_targetlist()), so the reltarget for the target relation would\n> include such columns, leading to fetching them from the remote in\n> postgres_fdw even if the target table doesn't have relevant triggers.\n\nThanks for clarifying again. I now understand that you didn't mean\n*all* columns.\n\nIt's just that I was interpreting your words in the context of Tom's\nconcern, so I thought you are implying that *all* columns are always\nfetched, irrespective of whether triggers are present (Tom's concern)\nor not. Reading Tom's email again, he didn't say *all* columns, but\nmaybe meant so, because that's what's needed for triggers to work.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 11 Jun 2019 18:37:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I'll look into the patch more closely tomorrow.\n\nI did that, but couldn't find any issue about the patch. Here is an\nupdated version of the patch. Changes are:\n\n* Reworded the comments a bit in postgresPlanFoereignModify the\noriginal patch modified\n* Added the commit message\n\nDoes that make sense? I think this is an oversight in commit\n7cbe57c34, so I'll back-patch all the way back to 9.4, if there are no\nobjections.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 12 Jun 2019 15:13:56 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Fujita-san,\n\nOn Wed, Jun 12, 2019 at 3:14 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> I did that, but couldn't find any issue about the patch. Here is an\n> updated version of the patch.\n\nThanks for the updating the patch.\n\n> Changes are:\n>\n> * Reworded the comments a bit in postgresPlanFoereignModify the\n> original patch modified\n\n+ * statement, and for UPDATE if BEFORE ROW UPDATE triggers since those\n+ * triggers might change values for non-target columns, in which case we\n\nFirst line seems to be missing a word or two. Maybe:\n\n+ * statement, and for UPDATE if there are BEFORE ROW UPDATE triggers,\n+ * since those triggers might change values for non-target columns, in\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 12 Jun 2019 15:33:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Amit-san,\n\nOn Wed, Jun 12, 2019 at 3:33 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jun 12, 2019 at 3:14 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > * Reworded the comments a bit in postgresPlanFoereignModify the\n> > original patch modified\n>\n> + * statement, and for UPDATE if BEFORE ROW UPDATE triggers since those\n> + * triggers might change values for non-target columns, in which case we\n>\n> First line seems to be missing a word or two. Maybe:\n>\n> + * statement, and for UPDATE if there are BEFORE ROW UPDATE triggers,\n> + * since those triggers might change values for non-target columns, in\n\nActually, I omitted such words to shorten the comment, but I think\nthis improves the readability, so I'll update the comment that way.\n\nThanks for the review!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 12 Jun 2019 16:30:26 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Fujita-san,\r\n> On Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com>\r\n> wrote:\r\n> > I'll look into the patch more closely tomorrow.\r\n> \r\n> I did that, but couldn't find any issue about the patch. Here is an updated\r\n> version of the patch. Changes are:\r\n> \r\n> * Reworded the comments a bit in postgresPlanFoereignModify the original\r\n> patch modified\r\n> * Added the commit message\r\n\r\nThanks for the update.\r\n\r\nI think your wording is more understandable than my original patch.\r\n\r\nRegards,\r\n\r\n-- \r\nShohei Mochizuki\r\nTOSHIBA CORPORATION\r\n",
"msg_date": "Wed, 12 Jun 2019 09:08:50 +0000",
"msg_from": "<shohei.mochizuki@toshiba.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Mochizuki-san,\n\nOn Wed, Jun 12, 2019 at 6:08 PM <shohei.mochizuki@toshiba.co.jp> wrote:\n> > On Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita <etsuro.fujita@gmail.com>\n> > wrote:\n> > > I'll look into the patch more closely tomorrow.\n> >\n> > I did that, but couldn't find any issue about the patch. Here is an updated\n> > version of the patch. Changes are:\n> >\n> > * Reworded the comments a bit in postgresPlanFoereignModify the original\n> > patch modified\n> > * Added the commit message\n\n> I think your wording is more understandable than my original patch.\n\nGreat! I've pushed the patch after updating the comment as proposed\nby Amit-san yesterday, and adding a regression test case checking\nEXPLAIN because otherwise we wouldn't have any EXPLAIN results in\nv9.4.\n\nThanks for the report and fix!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 13 Jun 2019 18:22:41 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BEFORE UPDATE trigger on postgres_fdw table not work"
},
{
"msg_contents": "Fujita-san,\r\n\r\n> From: Etsuro Fujita [mailto:etsuro.fujita@gmail.com]\r\n> \r\n> Mochizuki-san,\r\n> \r\n> On Wed, Jun 12, 2019 at 6:08 PM <shohei.mochizuki@toshiba.co.jp> wrote:\r\n> > > On Mon, Jun 10, 2019 at 9:04 PM Etsuro Fujita\r\n> > > <etsuro.fujita@gmail.com>\r\n> > > wrote:\r\n> > > > I'll look into the patch more closely tomorrow.\r\n> > >\r\n> > > I did that, but couldn't find any issue about the patch. Here is an\r\n> > > updated version of the patch. Changes are:\r\n> > >\r\n> > > * Reworded the comments a bit in postgresPlanFoereignModify the\r\n> > > original patch modified\r\n> > > * Added the commit message\r\n> \r\n> > I think your wording is more understandable than my original patch.\r\n> \r\n> Great! I've pushed the patch after updating the comment as proposed by\r\n> Amit-san yesterday, and adding a regression test case checking EXPLAIN\r\n> because otherwise we wouldn't have any EXPLAIN results in v9.4.\r\n> \r\n> Thanks for the report and fix!\r\n\r\nThanks for the commit!\r\n\r\n--\r\nShohei Mochizuki\r\n",
"msg_date": "Thu, 13 Jun 2019 11:16:13 +0000",
"msg_from": "<shohei.mochizuki@toshiba.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: BEFORE UPDATE trigger on postgres_fdw table not work"
}
] |
[
{
"msg_contents": "I got some idea from the README under storage/lmgr and read some code of\nLockAcquireExtended , but I still have some questions now.\n\nLWLockAcquire(&MyProc->backendLock, LW_EXCLUSIVE);\nif (FastPathStrongRelationLocks->count[fasthashcode] != 0)\n acquired = false;\nelse\n acquired = FastPathGrantRelationLock(locktag->locktag_field2,\nlockmode);\n\n1. In the README, it says: \"A key point of this algorithm is that it\nmust be possible to verify the\nabsence of possibly conflicting locks without fighting over a shared LWLock\nor\nspinlock. Otherwise, this effort would simply move the contention\nbottleneck\nfrom one place to another.\"\n\nbut in the code, there is LWLockAcquire in the above code. Actually I\ncan't think out how can we proceed without a lock.\n\n2. Why does the MyProc->backendLock work? it is MyProc not a global\nlock.\n\n3. for the line, acquired =\nFastPathGrantRelationLock(locktag->locktag_field2,\nlockmode); I think it should be able to replaced with \"acquired =\ntrue\" (but obviously I'm wrong) . I read \"FastPathGrantRelationLock\" but\ncan't understand it.\n\n\nAny hint will be helpful. thanks!\n\nI got some idea from the README under storage/lmgr and read some code of LockAcquireExtended , but I still have some questions now. LWLockAcquire(&MyProc->backendLock, LW_EXCLUSIVE);\t\tif (FastPathStrongRelationLocks->count[fasthashcode] != 0) acquired = false;\t\telse acquired = FastPathGrantRelationLock(locktag->locktag_field2,\t\t\t\t\t\t\t\t\t\t\t\t lockmode);1. In the README, it says: \"A key point of this algorithm is that it must be possible to verify theabsence of possibly conflicting locks without fighting over a shared LWLock orspinlock. Otherwise, this effort would simply move the contention bottleneckfrom one place to another.\"but in the code, there is LWLockAcquire in the above code. Actually I can't think out how can we proceed without a lock. 2. Why does the MyProc->backendLock work? it is MyProc not a global lock.3. for the line, acquired = FastPathGrantRelationLock(locktag->locktag_field2,lockmode); I think it should be able to replaced with \"acquired = true\" (but obviously I'm wrong) . I read \"FastPathGrantRelationLock\" but can't understand it. Any hint will be helpful. thanks!",
"msg_date": "Mon, 27 May 2019 14:01:34 +0800",
"msg_from": "Alex <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "some questions about fast-path-lock"
},
{
"msg_contents": "On Mon, May 27, 2019 at 2:01 AM Alex <zhihui.fan1213@gmail.com> wrote:\n> I got some idea from the README under storage/lmgr and read some code of LockAcquireExtended , but I still have some questions now.\n>\n> LWLockAcquire(&MyProc->backendLock, LW_EXCLUSIVE);\n> if (FastPathStrongRelationLocks->count[fasthashcode] != 0)\n> acquired = false;\n> else\n> acquired = FastPathGrantRelationLock(locktag->locktag_field2,\n> lockmode);\n>\n> 1. In the README, it says: \"A key point of this algorithm is that it must be possible to verify the\n> absence of possibly conflicting locks without fighting over a shared LWLock or\n> spinlock. Otherwise, this effort would simply move the contention bottleneck\n> from one place to another.\"\n>\n> but in the code, there is LWLockAcquire in the above code. Actually I can't think out how can we proceed without a lock.\n\nThe per-backend lock is not heavily contended, because under normal\ncircumstances it is only accessed by a single backend. If there is a\npotential lock conflict that must be analyzed then another backend may\nacquire it and that might lead to a little bit of contention, but it\nhappens quite rarely -- so the overall contention is still much less\nthan if everyone is fighting over the lock manager partition locks.\n\n> 2. Why does the MyProc->backendLock work? it is MyProc not a global lock.\n\nIt's still an LWLock. Putting it inside of MyProc doesn't make it\nmagically stop working. MyProc is in shared memory, not backend-local\nmemory, if that's what you are confused about.\n\n> 3. for the line, acquired = FastPathGrantRelationLock(locktag->locktag_field2,\n> lockmode); I think it should be able to replaced with \"acquired = true\" (but obviously I'm wrong) . I read \"FastPathGrantRelationLock\" but can't understand it.\n\nIt can't say 'acquired = true' because each backend can only acquire a\nmaximum of 16 relation locks via the fast-path mechanism. If a\nprocess acquires more than 16 relation locks, at least some of them\nwill have to be acquired without benefit of the fast-path. This value\ncould be changed by changing the value of the constant\nFP_LOCK_SLOTS_PER_BACKEND, but since we scan the array linearly,\nmaking it too big will lead to other problems. I don't quite\nunderstand what about FastPathGrantRelationLock you don't understand -\nit's a pretty straightforwardly-coded search for either (a) an\nexisting fastpath slot for the specified relid or failing that (b) an\nunused fastpath slot.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 17:29:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: some questions about fast-path-lock"
}
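To complement Robert's description, here is a simplified sketch of that linear search, modeled on FastPathGrantRelationLock() in src/backend/storage/lmgr/lock.c; the FAST_PATH_* macros (which pack per-slot lock-mode bits into MyProc's fast-path state) are taken as given, and bookkeeping details are elided:

static bool
FastPathGrantRelationLock(Oid relid, LOCKMODE lockmode)
{
    uint32      f;
    uint32      unused_slot = FP_LOCK_SLOTS_PER_BACKEND;   /* 16 */

    /* Scan for an existing entry for this relid, remembering a free slot. */
    for (f = 0; f < FP_LOCK_SLOTS_PER_BACKEND; f++)
    {
        if (FAST_PATH_GET_BITS(MyProc, f) == 0)
            unused_slot = f;                    /* slot is empty */
        else if (MyProc->fpRelId[f] == relid)
        {
            /* Already have a fast-path entry for this rel: add the mode. */
            FAST_PATH_SET_LOCKMODE(MyProc, f, lockmode);
            return true;
        }
    }

    /* No existing entry: claim an empty slot, if we found one. */
    if (unused_slot < FP_LOCK_SLOTS_PER_BACKEND)
    {
        MyProc->fpRelId[unused_slot] = relid;
        FAST_PATH_SET_LOCKMODE(MyProc, unused_slot, lockmode);
        return true;
    }

    /* All slots in use: the caller falls back to the main lock table. */
    return false;
}

So "acquired = true" would be wrong exactly when all 16 slots are already occupied, which is the case Robert describes.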
] |
[
{
"msg_contents": "Hello,\n\nWhen investigating behavior of \"DISCARD ALL\", I found that order of\nsteps of equivalent sequence in documentation is not updated with\nchanges in code.\n\nPlease find attached patch to fix documentation.\n\nBest Regards,\nJan Chochol",
"msg_date": "Mon, 27 May 2019 09:37:53 +0200",
"msg_from": "Jan Chochol <jan.chochol@gooddata.com>",
"msg_from_op": true,
"msg_subject": "Fix order of steps in DISCARD ALL documentation"
},
{
"msg_contents": "On 2019-May-27, Jan Chochol wrote:\n\n> Hello,\n> \n> When investigating behavior of \"DISCARD ALL\", I found that order of\n> steps of equivalent sequence in documentation is not updated with\n> changes in code.\n\nPushed. I noticed that DISCARD TEMP and DISCARD SEQUENCES appeared in\nthe opposite order, too.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 12:24:45 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix order of steps in DISCARD ALL documentation"
},
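For context, the corrected equivalent sequence for DISCARD ALL reads roughly as follows (quoted from memory of the v12-era DISCARD reference page, so worth re-checking against the committed documentation):

CLOSE ALL;
SET SESSION AUTHORIZATION DEFAULT;
RESET ALL;
DEALLOCATE ALL;
UNLISTEN *;
SELECT pg_advisory_unlock_all();
DISCARD PLANS;
DISCARD TEMP;
DISCARD SEQUENCES;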
{
"msg_contents": "Great, thanks!\n\nOn Tue, Jun 11, 2019 at 6:24 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2019-May-27, Jan Chochol wrote:\n>\n> > Hello,\n> >\n> > When investigating behavior of \"DISCARD ALL\", I found that order of\n> > steps of equivalent sequence in documentation is not updated with\n> > changes in code.\n>\n> Pushed. I noticed that DISCARD TEMP and DISCARD SEQUENCES appeared in\n> the opposite order, too.\n>\n> --\n> Álvaro Herrera https://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 11 Jun 2019 20:47:05 +0200",
"msg_from": "Jan Chochol <jan.chochol@gooddata.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix order of steps in DISCARD ALL documentation"
}
] |
[
{
"msg_contents": "Hi, hackers.\n\nThere is the following problem with Postgres at Windows: files of \ndropped relation can be blocked for arbitrary long amount of time.\nSuch behavior is caused by two factors:\n1. Windows doesn't allow deletion of opened file.\n2. Postgres backend caches opened descriptors and this cache is not \nupdated if backend is idle.\n\nSo the problem can be reproduced quite easily: create some table in once \nclient, then drop it in another client and try to do something with \nrelation files.\nSegments of dropped relation are visible but any attempt to copy this \nfile is rejected.\nAnd this state persists until you perform some command in first client.\n\nI wonder if we are going to address this windows specific issue?\nIt will cause problems with file backup utilities which are not able to \ncopy this file.\nAnd situation when backend can be idle for long amount of time are not \nso rare.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 27 May 2019 12:26:58 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Pinned files at Windows"
},
{
"msg_contents": "\n\nOn 27.05.2019 12:26, Konstantin Knizhnik wrote:\n> Hi, hackers.\n>\n> There is the following problem with Postgres at Windows: files of \n> dropped relation can be blocked for arbitrary long amount of time.\n> Such behavior is caused by two factors:\n> 1. Windows doesn't allow deletion of opened file.\n> 2. Postgres backend caches opened descriptors and this cache is not \n> updated if backend is idle.\n>\n> So the problem can be reproduced quite easily: create some table in \n> once client, then drop it in another client and try to do something \n> with relation files.\n> Segments of dropped relation are visible but any attempt to copy this \n> file is rejected.\n> And this state persists until you perform some command in first client.\n>\n> I wonder if we are going to address this windows specific issue?\n> It will cause problems with file backup utilities which are not able \n> to copy this file.\n> And situation when backend can be idle for long amount of time are not \n> so rare.\n>\n\nI have investigated the problem more and looks like the source of the \nproblem is in pgwin32_safestat function:\n\nint\npgwin32_safestat(const char *path, struct stat *buf)\n{\n int r;\n WIN32_FILE_ATTRIBUTE_DATA attr;\n\n r = stat(path, buf);\n if (r < 0)\n {\n if (GetLastError() == ERROR_DELETE_PENDING)\n {\n /*\n * File has been deleted, but is not gone from the \nfilesystem yet.\n * This can happen when some process with FILE_SHARE_DELETE \nhas it\n * open and it will be fully removed once that handle is \nclosed.\n * Meanwhile, we can't open it, so indicate that the file just\n * doesn't exist.\n */\n errno = ENOENT;\n return -1;\n }\n\n return r;\n }\n\n if (!GetFileAttributesEx(path, GetFileExInfoStandard, &attr))\n {\n _dosmaperr(GetLastError());\n return -1;\n }\n\n /*\n * XXX no support for large files here, but we don't do that in \ngeneral on\n * Win32 yet.\n */\n buf->st_size = attr.nFileSizeLow;\n\n return 0;\n}\n\nPostgres is opening file with FILE_SHARE_DELETE flag which makes it \npossible to unlink opened file.\nBut unlike Unixes, the file is not actually deleted. You can see it \nusing \"dir\" command.\nAnd stat() function also doesn't return error in this case:\n\nhttps://stackoverflow.com/questions/27270374/deletefile-or-unlink-calls-succeed-but-doesnt-remove-file\n\nSo first check in pgwin32_safestat (r < 0) is not working at all: \nstat() returns 0, but subsequent call of GetFileAttributesEx\nreturns 5 (ERROR_ACCESS_DENIED).\nIt seems to me that pgwin32_safestat function should be rewritten in \nthis way:\n\nint\npgwin32_safestat(const char *path, struct stat *buf)\n{\n int r;\n WIN32_FILE_ATTRIBUTE_DATA attr;\n\n r = stat(path, buf);\n if (r < 0)\n return r;\n\n if (!GetFileAttributesEx(path, GetFileExInfoStandard, &attr))\n {\n errno = ENOENT;\n return -1;\n }\n\n /*\n * XXX no support for large files here, but we don't do that in \ngeneral on\n * Win32 yet.\n */\n buf->st_size = attr.nFileSizeLow;\n\n return 0;\n}\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Mon, 27 May 2019 17:52:13 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Pinned files at Windows"
},
{
"msg_contents": "On Mon, May 27, 2019 at 05:52:13PM +0300, Konstantin Knizhnik wrote:\n> Postgres is opening file with FILE_SHARE_DELETE flag which makes it\n> possible to unlink opened file.\n> But unlike Unixes, the file is not actually deleted. You can see it using\n> \"dir\" command.\n> And stat() function also doesn't return error in this case:\n> \n> https://stackoverflow.com/questions/27270374/deletefile-or-unlink-calls-succeed-but-doesnt-remove-file\n> \n> So first check in pgwin32_safestat (r < 0) is not working at all: stat()\n> returns 0, but subsequent call of GetFileAttributesEx\n> returns 5 (ERROR_ACCESS_DENIED).\n\nSo you would basically hijack the result of GetFileAttributesEx() so\nas any errors returned by this function complain with ENOENT for\neverything seen. Why would that be a sane idea? What if say a\npermission or another error is legit, but instead ENOENT is returned\nas you propose, then the caller would be confused by an incorrect\nstatus.\n\nAs you mention, what we did as of 9951741 may not be completely right,\nand the reason why it was done this way comes from here:\nhttps://www.postgresql.org/message-id/20160712083220.1426.58667@wrigleys.postgresql.org\n\nCould we instead come up with a reliable way to detect if a file is in\na deletion pending state? Mapping blindly EACCES to ENOENT is not a\nsolution I think we can rely on (perhaps we could check only after\nERROR_ACCESS_DENIED using GetLastError() and map back to ENOENT in\nthis case still this can be triggered if a virus scanner holds the\nfile for read, no?). stat() returning 0 for a file pending for\ndeletion which will go away physically once the handles still keeping\nthe file around are closed is not something I would have imagined is\nsane, but that's what we need to deal with... Windows has a long\nhistory of keeping things compatible, sometimes in their own weird\nway, and it seems that we have one here so I cannot imagine that this\nbehavior is going to change.\n\nLooking around, I have found out about NtCreateFile() which could be\nable to report a proper pending deletion status, still that's only\navailable in kernel mode. Perhaps others have ideas?\n--\nMichael",
"msg_date": "Wed, 29 May 2019 15:20:10 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Pinned files at Windows"
},
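One possible user-mode answer to the question above, offered here as a speculative sketch rather than something proposed in the thread: GetFileInformationByHandleEx() exposes a DeletePending flag in FILE_STANDARD_INFO, provided the file can still be opened with FILE_READ_ATTRIBUTES while the delete is pending, which is an assumption that would need testing:

#include <windows.h>
#include <stdbool.h>

/*
 * Speculative sketch: report whether "path" is in delete-pending state.
 * Assumes a FILE_READ_ATTRIBUTES handle can still be opened on such a
 * file, which may not hold in every case.
 */
static bool
file_delete_pending(const char *path)
{
    HANDLE      h;
    FILE_STANDARD_INFO info;
    bool        pending = false;

    h = CreateFileA(path, FILE_READ_ATTRIBUTES,
                    FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                    NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return false;       /* cannot tell; caller sees the original error */

    if (GetFileInformationByHandleEx(h, FileStandardInfo, &info, sizeof(info)))
        pending = (info.DeletePending != FALSE);

    CloseHandle(h);
    return pending;
}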
{
"msg_contents": "\n\nOn 29.05.2019 22:20, Michael Paquier wrote:\n> On Mon, May 27, 2019 at 05:52:13PM +0300, Konstantin Knizhnik wrote:\n>> Postgres is opening file with FILE_SHARE_DELETE� flag which makes it\n>> possible to unlink opened file.\n>> But unlike Unixes, the file is not actually deleted. You can see it using\n>> \"dir\" command.\n>> And stat() function also doesn't return error in this case:\n>>\n>> https://stackoverflow.com/questions/27270374/deletefile-or-unlink-calls-succeed-but-doesnt-remove-file\n>>\n>> So first check in� pgwin32_safestat (r < 0) is not working at all: stat()\n>> returns 0, but subsequent call of GetFileAttributesEx\n>> returns 5 (ERROR_ACCESS_DENIED).\n> So you would basically hijack the result of GetFileAttributesEx() so\n> as any errors returned by this function complain with ENOENT for\n> everything seen. Why would that be a sane idea? What if say a\n> permission or another error is legit, but instead ENOENT is returned\n> as you propose, then the caller would be confused by an incorrect\n> status.\n\nIf access to the file is prohibited by lack of permissions, then stat() \nshould fail with error\nand this error is returned by� pgwin32_safestat function.\n\nIf call of stat() is succeed, then my assumption is that the only reason \nof GetFileAttributesEx\nfailure is that file is deleted and returning ENOENT error code in this \ncase is correct behavior.\n\n>\n> As you mention, what we did as of 9951741 may not be completely right,\n> and the reason why it was done this way comes from here:\n> https://www.postgresql.org/message-id/20160712083220.1426.58667@wrigleys.postgresql.org\n\nYes, this is the same reason, but handling STATUS_DELETE_PENDING is not \ncorrect.\n>\n> Could we instead come up with a reliable way to detect if a file is in\n> a deletion pending state? Mapping blindly EACCES to ENOENT is not a\n> solution I think we can rely on (perhaps we could check only after\n> ERROR_ACCESS_DENIED using GetLastError() and map back to ENOENT in\n> this case still this can be triggered if a virus scanner holds the\n> file for read, no?). stat() returning 0 for a file pending for\n> deletion which will go away physically once the handles still keeping\n> the file around are closed is not something I would have imagined is\n> sane, but that's what we need to deal with... Windows has a long\n> history of keeping things compatible, sometimes in their own weird\n> way, and it seems that we have one here so I cannot imagine that this\n> behavior is going to change.\n>\n> Looking around, I have found out about NtCreateFile() which could be\n> able to report a proper pending deletion status, still that's only\n> available in kernel mode. Perhaps others have ideas?\n\nSorry, I do not know better solution.\nI have written small test reproducing the problem which proves that\nif file is opened with FILE_SHARE_DELETE flag, then\nit is possible to delete it using unlink() - no error is returned and \ncall stat() for it - also succeed.\nBy any attempt to open this file for reading/writing or performing \nGetFileAttributesEx\nare failed with� ERROR_ACCESS_DENIED (not with ERROR_DELETE_PENDING \nwhich is hidden by Win32 API).\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Thu, 30 May 2019 10:25:17 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Pinned files at Windows"
},
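A sketch of the kind of test described above, reconstructed from the description rather than copied from the actual program (Win32, MSVC; the printed error code should be 5, ERROR_ACCESS_DENIED, per the behavior reported in this thread):

#include <windows.h>
#include <sys/stat.h>
#include <stdio.h>
#include <io.h>

int
main(void)
{
    const char *path = "pinned.tmp";
    struct stat st;
    WIN32_FILE_ATTRIBUTE_DATA attr;
    HANDLE      h;

    /* Open the file with FILE_SHARE_DELETE, as PostgreSQL does. */
    h = CreateFileA(path, GENERIC_READ | GENERIC_WRITE,
                    FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                    NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* unlink() succeeds, but the file only enters delete-pending state. */
    printf("unlink: %d\n", _unlink(path));

    /* stat() still succeeds on the delete-pending file... */
    printf("stat: %d\n", stat(path, &st));

    /* ...while GetFileAttributesEx() fails with ERROR_ACCESS_DENIED. */
    if (!GetFileAttributesExA(path, GetFileExInfoStandard, &attr))
        printf("GetFileAttributesEx failed, GetLastError = %lu\n",
               (unsigned long) GetLastError());

    CloseHandle(h);     /* the file is physically removed only here */
    return 0;
}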
{
"msg_contents": "On Thu, May 30, 2019 at 3:25 AM Konstantin Knizhnik\n<k.knizhnik@postgrespro.ru> wrote:\n> If call of stat() is succeed, then my assumption is that the only reason\n> of GetFileAttributesEx\n> failure is that file is deleted and returning ENOENT error code in this\n> case is correct behavior.\n\nIn my experience, the assumption \"the only possible cause of an error\nduring X is Y\" turns out to be wrong nearly 100% of the time. Our job\nis to report the errors the OS gives us, not guess what they mean.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jun 2019 15:15:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Pinned files at Windows"
},
{
"msg_contents": "\n\nOn 03.06.2019 22:15, Robert Haas wrote:\n> On Thu, May 30, 2019 at 3:25 AM Konstantin Knizhnik\n> <k.knizhnik@postgrespro.ru> wrote:\n>> If call of stat() is succeed, then my assumption is that the only reason\n>> of GetFileAttributesEx\n>> failure is that file is deleted and returning ENOENT error code in this\n>> case is correct behavior.\n> In my experience, the assumption \"the only possible cause of an error\n> during X is Y\" turns out to be wrong nearly 100% of the time. Our job\n> is to report the errors the OS gives us, not guess what they mean.\n>\nThis is what we are try to do now:\n\n r = stat(path, buf);\n if (r < 0)\n {\n if (GetLastError() == ERROR_DELETE_PENDING)\n {\n /*\n * File has been deleted, but is not gone from the \nfilesystem yet.\n * This can happen when some process with FILE_SHARE_DELETE \nhas it\n * open and it will be fully removed once that handle is \nclosed.\n * Meanwhile, we can't open it, so indicate that the file just\n * doesn't exist.\n */\n errno = ENOENT;\n return -1;\n }\n\n return r;\n }\n\n\nbut without success because ERROR_DELETE_PENDING is never returned by Win32.\nAnd moreover, stat() doesn't ever return error in this case.\n\n\n",
"msg_date": "Mon, 3 Jun 2019 23:37:30 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Pinned files at Windows"
},
{
"msg_contents": "On Mon, Jun 03, 2019 at 11:37:30PM +0300, Konstantin Knizhnik wrote:\n> but without success because ERROR_DELETE_PENDING is never returned by Win32.\n> And moreover, stat() doesn't ever return error in this case.\n\nCould it be possible to find a reliable way to detect that?\nCloberring errno with an incorrect value is not something we can rely\non, and I am ready to buy that GetFileAttributesEx() can also return\nEACCES for some legit cases, like a file it has no access to. What\nif for example something is done on a file between the stat() call and\nthe GetFileAttributesEx() call in pgwin32_safestat() so as EACCES is\na legit error?\n--\nMichael",
"msg_date": "Tue, 4 Jun 2019 09:18:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Pinned files at Windows"
},
{
"msg_contents": "\n\nOn 04.06.2019 3:18, Michael Paquier wrote:\n> On Mon, Jun 03, 2019 at 11:37:30PM +0300, Konstantin Knizhnik wrote:\n>> but without success because ERROR_DELETE_PENDING is never returned by Win32.\n>> And moreover, stat() doesn't ever return error in this case.\n> Could it be possible to find a reliable way to detect that?\n> Cloberring errno with an incorrect value is not something we can rely\n> on, and I am ready to buy that GetFileAttributesEx() can also return\n> EACCES for some legit cases, like a file it has no access to. What\n> if for example something is done on a file between the stat() call and\n> the GetFileAttributesEx() call in pgwin32_safestat() so as EACCES is\n> a legit error?\n\nSorry, I am not a Windows expert so I do not know how if it is possible \nto detect that ERROR_ACCESS_DENIED� returned by GetFileAttributesEx is \nactually caused by pending delete.\nThe situation when file permissions were changed between call of stat() \nand GetFileAttributesEx() is certainly possible but... do your really \nseriously consider probability of this event\nand is there something critical if we return ENOENT instead of EACCES in \nthis case?\n\nActually original problem seems to be caused by the assumption that \nstat() is not correctly setting st_size at Windows:\n/*\n �* The stat() function in win32 is not guaranteed to update the st_size\n �* field when run. So we define our own version that uses the Win32 API\n �* to update this field.\n �*/\n\nI tried to google information about such behavior but didn't find any \nother references except Postgres sources.\nI wonder if such problem really takes place (at least with more or less \nrecent versions of Windows)?\nAnd how critical it can be that we get cached value of file size?\nIf we access file without locking, then it is not correct to say about \nthe \"actual\" file size, isn't it? File can be truncated or appended few \nmilliseconds later after this call.\nIf there are some places in Postgres code which rely on the fact that \nstat() returns the \"latest\" file size value (actual for the moment of \nstat() call), then it can be a sign of possible race condition.\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n",
"msg_date": "Tue, 4 Jun 2019 11:43:55 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Pinned files at Windows"
}
] |
[
{
"msg_contents": "Hi,\n\nI am getting this below error - after performing pg_rewind when i try to \nstart new slave ( which earlier was my master) against PGv12 Beta1.\n\"\ncp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n2019-05-27 18:55:47.387 IST [25500] LOG: entering standby mode\ncp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\ncp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n\"\n\nSteps to reproduce -\n=============\n0)mkdir /tmp/archive_dir1\n1)Master Setup -> ./initdb -D master , add these parameters in \npostgresql.conf file -\n\"\nwal_level = hot_standby\nwal_log_hints = on\nmax_wal_senders = 2\nwal_keep_segments = 64\nhot_standby = on\narchive_mode=on\narchive_command='cp %p /tmp//archive_dir1/%f'\nport=5432\n\"\nStart the server (./pg_ctl -D master start)\nConnect to psql terminal - create table/ insert few rows\n\n2)Slave Setup -> ./pg_basebackup -PR -X stream -c fast -h 127.0.0.1 -U \ncentos -p 5432 -D slave\n\nadd these parameters in postgresql.conf file -\n\"\nprimary_conninfo = 'user=centos host=127.0.0.1 port=5432'\npromote_trigger_file = '/tmp/s1.txt'\nrestore_command='cp %p /tmp/archive_dir1/%f'\nport=5433\n\"\nStart Slave (./pg_ctl -D slave start)\n\n3)Touch trigger file (touch /tmp/s1.txt) -> - standby.signal is gone \nfrom standby directory and now able to insert rows on standby server.\n4)stop master ( ./pg_ctl -D master stop)\n5)Perform pg_rewind\n[centos@mail-arts bin]$ ./pg_rewind -D master/ \n--source-server=\"host=localhost port=5433 user=centos password=edb \ndbname=postgres\"\npg_rewind: servers diverged at WAL location 0/3003538 on timeline 1\npg_rewind: rewinding from last common checkpoint at 0/2000060 on timeline 1\n\npg_rewind: Done!\n\n6)Create standby.signal file on master directory ( touch standby.signal)\n\n7)Modify old master/postgresql.conf file -\nprimary_conninfo = 'user=centos host=127.0.0.1 port=5433'\npromote_trigger_file = '/tmp/s1.txt'\nrestore_command='cp %p /tmp/archive_dir1/%f'\nport=5432\n\n8)Try to start the new slave/old master -\n\n[centos@mail-arts bin]$ ./pg_ctl -D m1/ start\nwaiting for server to start....2019-05-27 18:55:47.237 IST [25499] LOG: \nstarting PostgreSQL 12beta1 on x86_64-pc-linux-gnu, compiled by gcc \n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit\n2019-05-27 18:55:47.237 IST [25499] LOG: listening on IPv6 address \n\"::1\", port 5432\n2019-05-27 18:55:47.237 IST [25499] LOG: listening on IPv4 address \n\"127.0.0.1\", port 5432\n2019-05-27 18:55:47.239 IST [25499] LOG: listening on Unix socket \n\"/tmp/.s.PGSQL.5432\"\n2019-05-27 18:55:47.259 IST [25500] LOG: database system was \ninterrupted while in recovery at log time 2019-05-27 18:53:45 IST\n2019-05-27 18:55:47.259 IST [25500] HINT: If this has occurred more \nthan once some data might be corrupted and you might need to choose an \nearlier recovery target.\ncp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n2019-05-27 18:55:47.387 IST [25500] LOG: entering standby mode\ncp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\ncp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\ncp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n2019-05-27 18:55:47.402 IST [25500] LOG: redo starts at 0/2000028\ncp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n2019-05-27 18:55:47.410 IST [25500] LOG: invalid record length at \n0/301E740: wanted 24, got 0\n2019-05-27 18:55:47.413 IST [25509] FATAL: the database system is \nstarting up\n2019-05-27 
18:55:47.413 IST [25508] FATAL: could not connect to the \nprimary server: FATAL: the database system is starting up\ncp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\ncp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n2019-05-27 18:55:47.424 IST [25513] FATAL: the database system is \nstarting up\n2019-05-27 18:55:47.425 IST [25512] FATAL: could not connect to the \nprimary server: FATAL: the database system is starting up\ncp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n.....cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n\nIs there anything i need to change/add to make it work ?\n\nThanks.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Mon, 27 May 2019 19:27:54 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?[pg=5frewind]_cp:_cannot_stat_=e2=80=98pg=5fwal/RECOVERYH?=\n =?UTF-8?Q?ISTORY=e2=80=99:_No_such_file_or_directory?="
},
{
"msg_contents": "Hi,\n\nIs anyone able to reproduce this one ?\nAny pointer to solve this would be helpful.\n\nregards,\n\nOn 05/27/2019 07:27 PM, tushar wrote:\n> Hi,\n>\n> I am getting this below error - after performing pg_rewind when i try \n> to start new slave ( which earlier was my master) against PGv12 Beta1.\n> \"\n> cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> 2019-05-27 18:55:47.387 IST [25500] LOG: entering standby mode\n> cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n> \"\n>\n> Steps to reproduce -\n> =============\n> 0)mkdir /tmp/archive_dir1\n> 1)Master Setup -> ./initdb -D master , add these parameters in \n> postgresql.conf file -\n> \"\n> wal_level = hot_standby\n> wal_log_hints = on\n> max_wal_senders = 2\n> wal_keep_segments = 64\n> hot_standby = on\n> archive_mode=on\n> archive_command='cp %p /tmp//archive_dir1/%f'\n> port=5432\n> \"\n> Start the server (./pg_ctl -D master start)\n> Connect to psql terminal - create table/ insert few rows\n>\n> 2)Slave Setup -> ./pg_basebackup -PR -X stream -c fast -h 127.0.0.1 \n> -U centos -p 5432 -D slave\n>\n> add these parameters in postgresql.conf file -\n> \"\n> primary_conninfo = 'user=centos host=127.0.0.1 port=5432'\n> promote_trigger_file = '/tmp/s1.txt'\n> restore_command='cp %p /tmp/archive_dir1/%f'\n> port=5433\n> \"\n> Start Slave (./pg_ctl -D slave start)\n>\n> 3)Touch trigger file (touch /tmp/s1.txt) -> - standby.signal is gone \n> from standby directory and now able to insert rows on standby server.\n> 4)stop master ( ./pg_ctl -D master stop)\n> 5)Perform pg_rewind\n> [centos@mail-arts bin]$ ./pg_rewind -D master/ \n> --source-server=\"host=localhost port=5433 user=centos password=edb \n> dbname=postgres\"\n> pg_rewind: servers diverged at WAL location 0/3003538 on timeline 1\n> pg_rewind: rewinding from last common checkpoint at 0/2000060 on \n> timeline 1\n>\n> pg_rewind: Done!\n>\n> 6)Create standby.signal file on master directory ( touch standby.signal)\n>\n> 7)Modify old master/postgresql.conf file -\n> primary_conninfo = 'user=centos host=127.0.0.1 port=5433'\n> promote_trigger_file = '/tmp/s1.txt'\n> restore_command='cp %p /tmp/archive_dir1/%f'\n> port=5432\n>\n> 8)Try to start the new slave/old master -\n>\n> [centos@mail-arts bin]$ ./pg_ctl -D m1/ start\n> waiting for server to start....2019-05-27 18:55:47.237 IST [25499] \n> LOG: starting PostgreSQL 12beta1 on x86_64-pc-linux-gnu, compiled by \n> gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit\n> 2019-05-27 18:55:47.237 IST [25499] LOG: listening on IPv6 address \n> \"::1\", port 5432\n> 2019-05-27 18:55:47.237 IST [25499] LOG: listening on IPv4 address \n> \"127.0.0.1\", port 5432\n> 2019-05-27 18:55:47.239 IST [25499] LOG: listening on Unix socket \n> \"/tmp/.s.PGSQL.5432\"\n> 2019-05-27 18:55:47.259 IST [25500] LOG: database system was \n> interrupted while in recovery at log time 2019-05-27 18:53:45 IST\n> 2019-05-27 18:55:47.259 IST [25500] HINT: If this has occurred more \n> than once some data might be corrupted and you might need to choose an \n> earlier recovery target.\n> cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> 2019-05-27 18:55:47.387 IST [25500] LOG: entering standby mode\n> cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n> cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n> 2019-05-27 
18:55:47.402 IST [25500] LOG: redo starts at 0/2000028\n> cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n> 2019-05-27 18:55:47.410 IST [25500] LOG: invalid record length at \n> 0/301E740: wanted 24, got 0\n> 2019-05-27 18:55:47.413 IST [25509] FATAL: the database system is \n> starting up\n> 2019-05-27 18:55:47.413 IST [25508] FATAL: could not connect to the \n> primary server: FATAL: the database system is starting up\n> cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n> 2019-05-27 18:55:47.424 IST [25513] FATAL: the database system is \n> starting up\n> 2019-05-27 18:55:47.425 IST [25512] FATAL: could not connect to the \n> primary server: FATAL: the database system is starting up\n> cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> .....cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n>\n> Is there anything i need to change/add to make it work ?\n>\n> Thanks.\n>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 5 Jun 2019 11:54:49 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:_[pg=5frewind]_cp:_cannot_stat_=e2=80=98pg=5fwal/RECOV?=\n =?UTF-8?Q?ERYHISTORY=e2=80=99:_No_such_file_or_directory?="
},
{
"msg_contents": "Hello,\n\nOn Wed, Jun 5, 2019 at 11:55 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\nI can see two different problems in this setup.\n\n> > 2)Slave Setup -> ./pg_basebackup -PR -X stream -c fast -h 127.0.0.1\n> > -U centos -p 5432 -D slave\n> > restore_command='cp %p /tmp/archive_dir1/%f'\n> > \"\n> > 7)Modify old master/postgresql.conf file -\n> > restore_command='cp %p /tmp/archive_dir1/%f'\nWhen we define a restore command, we tell the server to copy a file a\nWAL file from the archive. So, it should be\nrestore_command='cp tmp/archive_dir1/%f %p'\n\nThis is the reason you're getting this following error.\n> > cp: cannot stat ‘pg_wal/RECOVERYHISTORY’: No such file or directory\n> > cp: cannot stat ‘pg_wal/RECOVERYXLOG’: No such file or directory\n\n\n> > 2019-05-27 18:55:47.424 IST [25513] FATAL: the database system is\n> > starting up\n> > 2019-05-27 18:55:47.425 IST [25512] FATAL: could not connect to the\n> > primary server: FATAL: the database system is starting up\nThis case looks interesting.\n\n1. Master is running on port 5432.\n2. A standby is created using basebackup with -R option. So, the\npg_basebackup appends the primary connection settings to\npostgresql.auto.conf so that the streaming replication can use the\nsame settings later on.\ncat postgresql.auto.conf -> primary_conninfo = 'port=5432'\n3. The standby is started in port 5433.\n4. Standby is promoted and old master is stopped.\n5. Using pg_rewind, the old master is synchronized with the promoted\nstandby. As part of the process, it has copied the\npostgresql.auto.conf of promoted standby in the old master.\n6. Now, the old master is configured as a standby but the\npostgresql.auto.conf still contains the following settings:\ncat postgresql.auto.conf -> primary_conninfo = 'port=5432'\nSo, the old master tries to connect to the server on port 5432 and\nfinds itself which is still in recovery.\n\nThis can surely be fixed from the script. While configuring the old\nmaster as a standby server, clear/modify the settings in\npostgresql.auto.conf. But, it contradicts with the comment in the file\nwhich forbids the user from editing the file.\n\nAny thoughts?\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Jun 2019 16:37:52 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5Bpg=5Frewind=5D_cp=3A_cannot_stat_=E2=80=98pg=5Fwal=2FRECOVERYHI?=\n\t=?UTF-8?Q?STORY=E2=80=99=3A_No_such_file_or_directory?="
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 7:08 AM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> This can surely be fixed from the script. While configuring the old\n> master as a standby server, clear/modify the settings in\n> postgresql.auto.conf. But, it contradicts with the comment in the file\n> which forbids the user from editing the file.\n\nThe user isn't really forbidden from editing the file. They can do so\nsafely when the server is down.\n\nThis whole thing looks like a nonissue to me. If you set it up wrong,\nit won't work. So don't do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 10 Jun 2019 09:49:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5Bpg=5Frewind=5D_cp=3A_cannot_stat_=E2=80=98pg=5Fwal=2FRECOVERYHI?=\n\t=?UTF-8?Q?STORY=E2=80=99=3A_No_such_file_or_directory?="
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 7:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jun 10, 2019 at 7:08 AM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> > This can surely be fixed from the script. While configuring the old\n> > master as a standby server, clear/modify the settings in\n> > postgresql.auto.conf. But, it contradicts with the comment in the file\n> > which forbids the user from editing the file.\n>\n> The user isn't really forbidden from editing the file. They can do so\n> safely when the server is down.\n>\n> This whole thing looks like a nonissue to me. If you set it up wrong,\n> it won't work. So don't do that.\n>\nYeah. Sounds fair.\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 10 Jun 2019 19:56:12 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5Bpg=5Frewind=5D_cp=3A_cannot_stat_=E2=80=98pg=5Fwal=2FRECOVERYHI?=\n\t=?UTF-8?Q?STORY=E2=80=99=3A_No_such_file_or_directory?="
},
{
"msg_contents": "On Mon, Jun 10, 2019 at 07:56:12PM +0530, Kuntal Ghosh wrote:\n> On Mon, Jun 10, 2019 at 7:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> This whole thing looks like a nonissue to me. If you set it up wrong,\n>> it won't work. So don't do that.\n\n+1.\n--\nMichael",
"msg_date": "Tue, 11 Jun 2019 14:45:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [pg_rewind] cp: cannot =?utf-8?Q?stat_?=\n =?utf-8?B?4oCYcGdfd2FsL1JFQ09WRVJZSElTVE9SWeKAmQ==?= =?utf-8?Q?=3A?= No such\n file or directory"
},
{
"msg_contents": "On 06/10/2019 04:37 PM, Kuntal Ghosh wrote:\n> When we define a restore command, we tell the server to copy a file a\n> WAL file from the archive. So, it should be\n> restore_command='cp tmp/archive_dir1/%f %p'\n>\n> This is the reason you're getting this following error.\n\nOhh. Mea Culpa.\n\nThanks for pointing out.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Wed, 12 Jun 2019 18:41:44 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re:_[pg=5frewind]_cp:_cannot_stat_=e2=80=98pg=5fwal/RECOV?=\n =?UTF-8?Q?ERYHISTORY=e2=80=99:_No_such_file_or_directory?="
}
] |
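The thread's two fixes, pulled together: archive_command runs on the primary and copies the just-filled segment %p into the archive under its name %f, while restore_command runs on the recovering server and copies the requested file %f out of the archive into the path %p — source first, destination second, the opposite order of archive_command. A minimal sketch of the pair, assuming the /tmp/archive_dir1 layout used in the thread:

    # postgresql.conf on the primary
    archive_mode = on
    archive_command = 'cp %p /tmp/archive_dir1/%f'   # %p = path of WAL file to archive, %f = its file name

    # postgresql.conf on the standby / rewound old master
    restore_command = 'cp /tmp/archive_dir1/%f %p'   # copy FROM the archive INTO the destination %p

Because pg_rewind also copies the promoted standby's postgresql.auto.conf, the rewound server's primary_conninfo must be re-pointed at the new primary (port 5433 in this setup) while the server is down, e.g. primary_conninfo = 'user=centos host=127.0.0.1 port=5433'.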
[
{
"msg_contents": "Dear moderator,\n\nAre there teams behind the names or does everybody write with their\npersonal name?\n\nSascha kuhl\n(personal name)\n\nDear moderator,Are there teams behind the names or does everybody write with their personal name?Sascha kuhl(personal name)",
"msg_date": "Mon, 27 May 2019 16:42:16 +0200",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": true,
"msg_subject": "Names"
},
{
"msg_contents": "On Tue, May 28, 2019 at 12:35 AM Sascha Kuhl <yogidabanli@gmail.com> wrote:\n> Are there teams behind the names or does everybody write with their personal name?\n\nI think if you spend some time reading the mailing list, you'll be\nable to figure out the answer to this question and many others you\nmight have. People are generally pretty clear in their messages\nwhether or not they collaborated with others on the work which they\nare presenting.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 May 2019 17:23:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Names"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 15821\nLogged by: Christian Hofstaedtler\nEmail address: ch+pg@zeha.at\nPostgreSQL version: 11.3\nOperating system: Debian stretch amd64\nDescription: \n\nWe have enabled auto_explain and see errors on PostgreSQL 11.3 when\r\nSELECTing from a user defined function. No such crashes have been\r\nobserved on 10.7.\r\n\r\nMaybe relevant config settings:\r\n log_min_duration_statement = 0\r\n auto_explain.log_min_duration = 150ms\r\n auto_explain.log_analyze = on\r\n shared_preload_libraries = 'pg_stat_statements'\r\n pg_stat_statements.track = all\r\n pg_stat_statements.max = 10000\r\n\r\nI can trigger the error on our database using:\r\n\r\n CREATE TABLE reprotable(id serial, val int);\r\n INSERT INTO reprotable(val) SELECT * FROM generate_series(1, 10000000);\r\n CREATE OR REPLACE FUNCTION public.crashrepro4() RETURNS TABLE(foo integer,\nfoo2 integer) LANGUAGE sql AS $function$\r\n SELECT c.id, v.val FROM reprotable c JOIN reprotable v ON v.id = c.id\r\n $function$;\r\n \r\n LOAD 'auto_explain';\r\n SET max_parallel_workers_per_gather TO 8;\r\n SELECT * FROM crashrepro4();\r\n\r\nSometimes this works on the first few tries; running\r\n EXPLAIN ANALYZE SELECT * FROM crashrepro4();\r\nappears to help with the reproduction.\r\n\r\nOutput:\r\nERROR: could not find key 3 in shm TOC at 0x7f45a0334000\r\nCONTEXT: parallel worker\r\nSQL function \"crashrepro4\" statement 1\r\n\r\nBacktrace:\r\n\r\nProgram received signal SIGUSR1, User defined signal 1.\r\n0x00007f459dc94bc6 in posix_fallocate64 () from\n/lib/x86_64-linux-gnu/libc.so.6\r\n(gdb) bt full\r\n#0 0x00007f459dc94bc6 in posix_fallocate64 () from\n/lib/x86_64-linux-gnu/libc.so.6\r\nNo symbol table info available.\r\n#1 0x00005636d9cc675b in dsm_impl_posix_resize (size=134483968, fd=7) at\n./build/../src/backend/storage/ipc/dsm_impl.c:441\r\n rc = <optimized out>\r\n#2 dsm_impl_posix (impl_private=0x8041000, elevel=20,\nmapped_size=0x5636dc084348, mapped_address=0x5636dc084340,\nrequest_size=134483968, handle=<optimized out>, op=DSM_OP_CREATE) at\n./build/../src/backend/storage/ipc/dsm_impl.c:326\r\n flags = <optimized out>\r\n fd = 7\r\n name =\n\"/PostgreSQL.1895625775\\000\\000X\\274ʝE\\177\\000\\000@\\354\\322\\357\\377\\177\\000\\000\\020\\337\\003\\334\\066V\\000\\000@\\000\\000\\000\\000\\000\\000\\000\\000\\200\\000\\000\\000\\000\\000\"\r\n address = <optimized out>\r\n#3 dsm_impl_op (op=op@entry=DSM_OP_CREATE, handle=<optimized out>,\nrequest_size=request_size@entry=134483968,\nimpl_private=impl_private@entry=0x5636dc084338,\nmapped_address=mapped_address@entry=0x5636dc084340, \r\n mapped_size=mapped_size@entry=0x5636dc084348, elevel=20) at\n./build/../src/backend/storage/ipc/dsm_impl.c:177\r\n __func__ = \"dsm_impl_op\"\r\n#4 0x00005636d9cc7877 in dsm_create (size=size@entry=134483968,\nflags=flags@entry=0) at ./build/../src/backend/storage/ipc/dsm.c:474\r\n seg = 0x5636dc084318\r\n i = <optimized out>\r\n nitems = <optimized out>\r\n __func__ = \"dsm_create\"\r\n#5 0x00005636d9e2b317 in make_new_segment (area=area@entry=0x5636dc17fd08,\nrequested_pages=requested_pages@entry=32768) at\n./build/../src/backend/utils/mmgr/dsa.c:2155\r\n new_index = 2\r\n metadata_bytes = 266240\r\n total_size = 134483968\r\n total_pages = <optimized out>\r\n usable_pages = 32768\r\n segment_map = <optimized out>\r\n segment = <optimized out>\r\n#6 0x00005636d9e2cbce in dsa_allocate_extended (area=0x5636dc17fd08,\nsize=size@entry=134217728, flags=flags@entry=0) 
at\n./build/../src/backend/utils/mmgr/dsa.c:712\r\n npages = 32768\r\n first_page = 139937017269144\r\n pool = 0x7f45a02a25f0\r\n size_class = <optimized out>\r\n start_pointer = <optimized out>\r\n segment_map = <optimized out>\r\n __func__ = \"dsa_allocate_extended\"\r\n#7 0x00005636d9bb3e95 in ExecParallelHashTableAlloc\n(hashtable=hashtable@entry=0x5636dc173a70, batchno=batchno@entry=0) at\n./build/../src/backend/executor/nodeHash.c:3047\r\n batch = 0x7f417ccc0000\r\n buckets = <optimized out>\r\n nbuckets = 16777216\r\n i = <optimized out>\r\n#8 0x00005636d9bb42e3 in ExecHashTableCreate\n(state=state@entry=0x5636dc146280, hashOperators=<optimized out>,\nkeepNulls=<optimized out>) at\n./build/../src/backend/executor/nodeHash.c:615\r\n pstate = <optimized out>\r\n build_barrier = 0x7f45a02a1450\r\n node = 0x5636dc140568\r\n hashtable = 0x5636dc173a70\r\n outerNode = <optimized out>\r\n space_allowed = 64424509440\r\n nbuckets = 16777216\r\n nbatch = 1\r\n rows = <optimized out>\r\n num_skew_mcvs = 10391049\r\n log2_nbuckets = <optimized out>\r\n i = 0\r\n ho = <optimized out>\r\n __func__ = \"ExecHashTableCreate\"\r\n#9 0x00005636d9bb7085 in ExecHashJoinImpl (parallel=true, pstate=<optimized\nout>) at ./build/../src/backend/executor/nodeHashjoin.c:279\r\n outerNode = <optimized out>\r\n hashNode = <optimized out>\r\n econtext = <optimized out>\r\n node = <optimized out>\r\n joinqual = <optimized out>\r\n otherqual = <optimized out>\r\n hashtable = 0x0\r\n hashvalue = 32767\r\n batchno = -641936805\r\n parallel_state = <optimized out>\r\n#10 ExecParallelHashJoin (pstate=<optimized out>) at\n./build/../src/backend/executor/nodeHashjoin.c:581\r\nNo locals.\r\n#11 0x00005636d9bb2194 in ExecProcNode (node=0x5636dc145b60) at\n./build/../src/include/executor/executor.h:247\r\nNo locals.\r\n#12 gather_getnext (gatherstate=0x5636dc145970) at\n./build/../src/backend/executor/nodeGather.c:276\r\n estate = 0x5636dc145730\r\n outerPlan = 0x5636dc145b60\r\n fslot = 0x5636dc0743a8\r\n#13 ExecGather (pstate=0x5636dc145970) at\n./build/../src/backend/executor/nodeGather.c:207\r\n node = 0x5636dc145970\r\n econtext = 0x5636dc145a80\r\n#14 0x00005636d9b9b4c3 in ExecProcNode (node=0x5636dc145970) at\n./build/../src/include/executor/executor.h:247\r\nNo locals.\r\n#15 ExecutePlan (execute_once=<optimized out>, dest=0x5636dc141648,\ndirection=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\noperation=CMD_SELECT, use_parallel_mode=<optimized out>,\nplanstate=0x5636dc145970, \r\n estate=0x5636dc145730) at\n./build/../src/backend/executor/execMain.c:1723\r\n slot = <optimized out>\r\n current_tuple_count = 0\r\n#16 standard_ExecutorRun (queryDesc=0x5636dc141698, direction=<optimized\nout>, count=0, execute_once=<optimized out>) at\n./build/../src/backend/executor/execMain.c:364\r\n estate = 0x5636dc145730\r\n operation = CMD_SELECT\r\n dest = 0x5636dc141648\r\n sendTuples = <optimized out>\r\n __func__ = \"standard_ExecutorRun\"\r\n#17 0x00007f4597d08e15 in pgss_ExecutorRun (queryDesc=0x5636dc141698,\ndirection=ForwardScanDirection, count=0, execute_once=<optimized out>) at\n./build/../contrib/pg_stat_statements/pg_stat_statements.c:892\r\n save_exception_stack = 0x7fffefd2f0a0\r\n save_context_stack = 0x7fffefd2f1d0\r\n local_sigjmp_buf = {{__jmpbuf = {94793620402592, -81779228135919821,\n1, 0, 94793620523992, 94793620526040, -81779228572127437,\n-24340724960596173}, __mask_was_saved = 0, __saved_mask = {__val =\n{94793619686312, \r\n 140737216966656, 94793581381244, 
94793619686312,\n94793620543280, 140737216966688, 94793581092645, 94793620543856,\n94793620543280, 140737216966736, 94793581143132, 94793620522984, 16, 0,\n94793620543280, \r\n 140737216966784}}}}\r\n#18 0x00007f417cdbf4dd in explain_ExecutorRun (queryDesc=0x5636dc141698,\ndirection=ForwardScanDirection, count=0, execute_once=<optimized out>) at\n./build/../contrib/auto_explain/auto_explain.c:268\r\n save_exception_stack = 0x7fffefd2f8a0\r\n save_context_stack = 0x7fffefd2f1d0\r\n local_sigjmp_buf = {{__jmpbuf = {94793620402592, -81779228104462541,\n1, 0, 94793620523992, 94793620526040, -81779228138016973,\n-26933449381777613}, __mask_was_saved = 0, __saved_mask = {__val =\n{94793581107904, 94793620526744, \r\n 94793588700640, 140737216966960, 139936876507429,\n140737216967008, 94793620526744, 1, 0, 140737216967024, 139919244391853, 1,\n0, 94793620523992, 94793620402592, 1}}}}\r\n#19 0x00005636d9ba8f6a in postquel_getnext (es=0x5636dc140bd8,\nes=0x5636dc140bd8, fcache=0x5636dc1231a0, fcache=0x5636dc1231a0) at\n./build/../src/backend/executor/functions.c:867\r\n count = 0\r\n#20 fmgr_sql (fcinfo=0x7fffefd2f340) at\n./build/../src/backend/executor/functions.c:1164\r\n fcache = 0x5636dc1231a0\r\n sqlerrcontext = {previous = 0x0, callback = 0x5636d9ba77a0\n<sql_exec_error_callback>, arg = 0x5636dc06a230}\r\n randomAccess = false\r\n lazyEvalOK = <optimized out>\r\n is_first = <optimized out>\r\n pushed_snapshot = true\r\n es = 0x5636dc140bd8\r\n slot = <optimized out>\r\n result = <optimized out>\r\n eslist = <optimized out>\r\n eslc = 0x5636dc1413d8\r\n __func__ = \"fmgr_sql\"\r\n#21 0x00005636d9ba4bac in ExecMakeTableFunctionResult\n(setexpr=0x5636dc06a210, econtext=0x5636dc06a0e0, argContext=<optimized\nout>, expectedDesc=0x5636dc06ac68, randomAccess=false) at\n./build/../src/backend/executor/execSRF.c:231\r\n result = 94793580413993\r\n tupstore = 0x0\r\n tupdesc = 0x0\r\n funcrettype = 2249\r\n returnsTuple = <optimized out>\r\n returnsSet = true\r\n fcinfo = {flinfo = 0x5636dc06a230, context = 0x0, resultinfo =\n0x7fffefd2f300, fncollation = 0, isnull = false, nargs = 0, arg =\n{94793619488832, 140733193388046, 0, 94793580418128, 140737216967520, 6,\n3432, 140733193388035, \r\n 94793585267779, 1, 59588104878912, 94793585267784, 32,\n139936976061625, 94793584837154, 1, 140737216968608, 3, 140737216967832,\n206158430224, 140737216969520, 94793620609964, 94793620609964,\n139936977225618, \r\n 139936979540000, 8, 0, 139936976486110, 3688503315210960912,\n8187, 0, 94793620609936, 94793620618144, 139936979544832, 0, 160, 142,\n139936976269588, 140737216967904, 94793583475689, 140737216968048,\n94793620609952, \r\n 94793620609936, 252098579, 140737216968464, 139937017242608,\n139937017242584, 139937017242608, 139937017242608, 94793619743656,\n140737216967952, 94793583769478, 139937017242608, 7074228167464498944,\n94793620438296, \r\n 94793581565312, 140737216968032, 94793581372751,\n140737216968032, 94793620438296, 140737216968464, 140737216968464,\n94793584972684, 0, 140737216968144, 94793581565681, 140737216968080,\n7074228167464498944, 94793620438216, \r\n 94793581565312, 140737216968144, 94793581372751, 1,\n7074228167464498944, 140737216968464, 94793620438216, 140737216968464,\n140737216968464, 140737216968256, 94793581565681, 4294967295,\n139937017247848, 139937017247824, \r\n 139937017247848, 139937017247848, 94793619743656,\n140737216968240, 7074228167464498944, 139937017247848, 94793620438376,\n94793581565312, 140737216968464, 140737216968320, 94793581372827,\n140737216968304, 
94793620438408, \r\n 140737216968464, 94793620436824, 0, 0}, argnull = {240, 246,\n210, 239, 255, 127, false, false, 241, 150, 193, 217, 54, 86, false, false,\n16, 247, 210, 239, 255, 127, false, false, 169, 156, 193, 217, 54, 86,\nfalse, false, \r\n 240, 246, 210, 239, 255, 127, false, false, 233, 187, 222, 217,\n54, 86, false, false, 24, false, false, false, false, false, false, false,\n175, 189, 160, 217, 54, 86, false, false, 16, 247, 210, 239, 255, 127,\nfalse, \r\n false, 233, 187, 222, 217, 54, 86, false, false, 23, false,\nfalse, false, false, false, false, false, 88, 171, 6, 220, 54, 86, false,\nfalse, 80, 56, 52, 160}}\r\n fcusage = {fs = 0x5636dc1278c0, save_f_total_time = {tv_sec = 0,\ntv_nsec = 0}, save_total = {tv_sec = 0, tv_nsec = 0}, f_start = {tv_sec =\n469579, tv_nsec = 359852825}}\r\n rsinfo = {type = T_ReturnSetInfo, econtext = 0x5636dc06a0e0,\nexpectedDesc = 0x5636dc06ac68, allowedModes = 11, returnMode =\nSFRM_ValuePerCall, isDone = ExprSingleResult, setResult = 0x0, setDesc =\n0x0}\r\n tmptup = {t_len = 0, t_self = {ip_blkid = {bi_hi = 0, bi_lo = 0},\nip_posid = 2}, t_tableOid = 0, t_data = 0x2}\r\n callerContext = 0x5636dc069cb0\r\n first_time = true\r\n __func__ = \"ExecMakeTableFunctionResult\"\r\n#22 0x00005636d9bb1760 in FunctionNext (node=node@entry=0x5636dc069fd0) at\n./build/../src/backend/executor/nodeFunctionscan.c:94\r\n tstore = 0x0\r\n estate = <optimized out>\r\n direction = ForwardScanDirection\r\n scanslot = 0x5636dc06ae88\r\n alldone = <optimized out>\r\n oldpos = <optimized out>\r\n funcno = <optimized out>\r\n att = <optimized out>\r\n#23 0x00005636d9ba40f9 in ExecScanFetch (recheckMtd=0x5636d9bb14a0\n<FunctionRecheck>, accessMtd=0x5636d9bb14d0 <FunctionNext>,\nnode=0x5636dc069fd0) at ./build/../src/backend/executor/execScan.c:95\r\n estate = 0x5636dc069dc0\r\n#24 ExecScan (node=0x5636dc069fd0, accessMtd=0x5636d9bb14d0 <FunctionNext>,\nrecheckMtd=0x5636d9bb14a0 <FunctionRecheck>) at\n./build/../src/backend/executor/execScan.c:145\r\n econtext = <optimized out>\r\n qual = 0x0\r\n projInfo = 0x0\r\n#25 0x00005636d9ba25c9 in ExecProcNodeInstr (node=0x5636dc069fd0) at\n./build/../src/backend/executor/execProcnode.c:461\r\n result = <optimized out>\r\n#26 0x00005636d9b9b4c3 in ExecProcNode (node=0x5636dc069fd0) at\n./build/../src/include/executor/executor.h:247\r\nNo locals.\r\n#27 ExecutePlan (execute_once=<optimized out>, dest=0x5636dc12ccb8,\ndirection=<optimized out>, numberTuples=0, sendTuples=<optimized out>,\noperation=CMD_SELECT, use_parallel_mode=<optimized out>,\nplanstate=0x5636dc069fd0, \r\n estate=0x5636dc069dc0) at\n./build/../src/backend/executor/execMain.c:1723\r\n slot = <optimized out>\r\n current_tuple_count = 0\r\n#28 standard_ExecutorRun (queryDesc=0x5636dc068590, direction=<optimized\nout>, count=0, execute_once=<optimized out>) at\n./build/../src/backend/executor/execMain.c:364\r\n estate = 0x5636dc069dc0\r\n operation = CMD_SELECT\r\n dest = 0x5636dc12ccb8\r\n sendTuples = <optimized out>\r\n __func__ = \"standard_ExecutorRun\"\r\n#29 0x00007f4597d08e15 in pgss_ExecutorRun (queryDesc=0x5636dc068590,\ndirection=ForwardScanDirection, count=0, execute_once=<optimized out>) at\n./build/../contrib/pg_stat_statements/pg_stat_statements.c:892\r\n save_exception_stack = 0x7fffefd2f9b0\r\n save_context_stack = 0x0\r\n local_sigjmp_buf = {{__jmpbuf = {94793620008672, -81779227836027085,\n0, 94793619637648, 0, 1, -81779227869581517, -24340724960596173},\n__mask_was_saved = 0, __saved_mask = {__val = {94793583733915,\n140737216969008, \r\n 
7074228167464498944, 140737216969008, 94793619643840,\n94793620438296, 94793619644864, 2, 94793619644368, 140737216969168,\n94793581140788, 7987184768, 94793620441608, 94793619643840, 336, 1}}}}\r\n#30 0x00007f417cdbf4dd in explain_ExecutorRun (queryDesc=0x5636dc068590,\ndirection=ForwardScanDirection, count=0, execute_once=<optimized out>) at\n./build/../contrib/auto_explain/auto_explain.c:268\r\n save_exception_stack = 0x7fffefd2fb50\r\n save_context_stack = 0x0\r\n local_sigjmp_buf = {{__jmpbuf = {94793620008672, -81779227930398925,\n0, 94793619637648, 0, 1, -81779227829735629, -26933449381777613},\n__mask_was_saved = 0, __saved_mask = {__val = {94793620441784,\n140737216969344, 336, 1, \r\n 94793620155408, 24, 94793619637648, 0, 1, 140737216969312,\n94793583755684, 94793620008672, 94793620155728, 140737216969344,\n94793583827107, 94793620008672}}}}\r\n#31 0x00005636d9cee59b in PortalRunSelect\n(portal=portal@entry=0x5636dc0c2ee0, forward=forward@entry=true, count=0,\ncount@entry=9223372036854775807, dest=dest@entry=0x5636dc12ccb8) at\n./build/../src/backend/tcop/pquery.c:932\r\n queryDesc = 0x5636dc068590\r\n direction = <optimized out>\r\n nprocessed = <optimized out>\r\n __func__ = \"PortalRunSelect\"\r\n#32 0x00005636d9cefb20 in PortalRun (portal=portal@entry=0x5636dc0c2ee0,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true, dest=dest@entry=0x5636dc12ccb8, \r\n altdest=altdest@entry=0x5636dc12ccb8, completionTag=0x7fffefd2fca0 \"\")\nat ./build/../src/backend/tcop/pquery.c:773\r\n save_exception_stack = 0x7fffefd2fec0\r\n save_context_stack = 0x0\r\n local_sigjmp_buf = {{__jmpbuf = {1, -81779228012187853,\n94793619488704, 94793619488752, 94793620008672, 2, -81779227917816013,\n-5958515187191648461}, __mask_was_saved = 0, __saved_mask = {__val =\n{94793619463952, 0, \r\n 2817148525, 94793620020232, 94793619637376, 4, 112,\n94793619488704, 94793619484688, 94793584837140, 2, 140737216969760,\n94793583757029, 140737216969792, 2, 94793619488704}}}}\r\n result = <optimized out>\r\n nprocessed = <optimized out>\r\n saveTopTransactionResourceOwner = 0x5636dc0823a8\r\n saveTopTransactionContext = 0x5636dc0e6c10\r\n saveActivePortal = 0x0\r\n saveResourceOwner = 0x5636dc0823a8\r\n savePortalContext = 0x0\r\n saveMemoryContext = 0x5636dc0e6c10\r\n __func__ = \"PortalRun\"\r\n#33 0x00005636d9ceb7b9 in exec_simple_query (query_string=0x5636dc043120\n\"SELECT * FROM crashrepro4();\") at\n./build/../src/backend/tcop/postgres.c:1145\r\n parsetree = 0x5636dc043fc0\r\n portal = 0x5636dc0c2ee0\r\n snapshot_set = <optimized out>\r\n commandTag = <optimized out>\r\n completionTag =\n\"\\000\\000\\000\\000\\002\\000\\000\\000\\340\\374\\322\\357\\377\\177\\000\\000-\\r\\324\\357\\377\\177\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\360\\374\\322\\357\\002\\000\\000\\000Q\\000\\000\\000\\000\\000\\000\\000\n1\\004\\334\\066V\\000\\000`\\376\\322\\357\\377\\177\\000\"\r\n querytree_list = <optimized out>\r\n plantree_list = <optimized out>\r\n receiver = 0x5636dc12ccb8\r\n format = 0\r\n dest = DestRemote\r\n parsetree_list = 0x5636dc044010\r\n parsetree_item = 0x5636dc043ff0\r\n save_log_statement_stats = false\r\n was_logged = false\r\n use_implicit_block = false\r\n msec_str =\n\"\\000\\000\\000\\000\\002\\000\\000\\000\\340\\374\\322\\357\\377\\177\\000\\000-\\r\\324\\357\\377\\177\\000\\000\\000\\000\\000\\000\\000\\000\\000\"\r\n __func__ = \"exec_simple_query\"\r\n#34 0x00005636d9ced623 in PostgresMain (argc=<optimized 
out>,\nargv=argv@entry=0x5636dc070128, dbname=<optimized out>, username=<optimized\nout>) at ./build/../src/backend/tcop/postgres.c:4182\r\n query_string = 0x5636dc043120 \"SELECT * FROM crashrepro4();\"\r\n input_message = {data = 0x5636dc043120 \"SELECT * FROM\ncrashrepro4();\", len = 29, maxlen = 1024, cursor = 29}\r\n local_sigjmp_buf = {{__jmpbuf = {140737216970368,\n-81779225453662413, 1, 94793619668968, 94793619668968, 94793619632848,\n-81779227978633421, -5958515189572833485}, __mask_was_saved = 1,\n__saved_mask = {__val = {0, \r\n 140737216970544, 140737216970540, 140737216970640,\n8589934592, 94793588696280, 94793585032905, 140737216970864,\n140737216972320, 140737216971248, 94793619668968, 94793619632848,\n139936976780376, 5, 206158430256, \r\n 140737216970848}}}}\r\n send_ready_for_query = false\r\n disable_idle_in_transaction_timeout = false\r\n __func__ = \"PostgresMain\"\r\n#35 0x00005636d99fb097 in BackendRun (port=0x5636dc0672d0) at\n./build/../src/backend/postmaster/postmaster.c:4358\r\n ac = 1\r\n secs = 612288553\r\n usecs = 224860\r\n i = 1\r\n av = 0x5636dc070128\r\n maxac = <optimized out>\r\n#36 BackendStartup (port=0x5636dc0672d0) at\n./build/../src/backend/postmaster/postmaster.c:4030\r\n bn = <optimized out>\r\n pid = <optimized out>\r\n#37 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1707\r\n rmask = {fds_bits = {128, 0 <repeats 15 times>}}\r\n selres = <optimized out>\r\n now = <optimized out>\r\n readmask = {fds_bits = {200, 0 <repeats 15 times>}}\r\n last_lockfile_recheck_time = 1558973336\r\n last_touch_time = 1558973276\r\n __func__ = \"ServerLoop\"\r\n#38 0x00005636d9c78221 in PostmasterMain (argc=7, argv=0x5636dc03de10) at\n./build/../src/backend/postmaster/postmaster.c:1380\r\n opt = <optimized out>\r\n status = <optimized out>\r\n userDoption = <optimized out>\r\n listen_addr_saved = true\r\n i = <optimized out>\r\n output_config_variable = <optimized out>\r\n __func__ = \"PostmasterMain\"\r\n#39 0x00005636d99fc594 in main (argc=7, argv=0x5636dc03de10) at\n./build/../src/backend/main/main.c:228\r\nNo locals.\r\n(gdb)",
"msg_date": "Mon, 27 May 2019 16:11:37 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #15821: Parallel Workers with functions and auto_explain: ERROR:\n could not find key 3 in shm TOC"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> We have enabled auto_explain and see errors on PostgreSQL 11.3 when\n> SELECTing from a user defined function. No such crashes have been\n> observed on 10.7.\n\nI think that you didn't give a complete dump of relevant settings,\nbut after some fooling around I was able to reproduce this error,\nand the cause is this: auto_explain hasn't a single clue about\nparallel query.\n\n1. In the parent process, we have a parallelizable hash join being\nexecuted in a statement inside a function. Since\nauto_explain.log_nested_statements is not enabled, auto_explain\ndoes not deem that it should trace the statement, so the query\nstarts up with estate->es_instrument = 0, and therefore\nExecHashInitializeDSM chooses not to create any shared\nSharedHashInfo area.\n\n2. In the worker processes, auto_explain manages to grab execution\ncontrol when ParallelQueryMain calls ExecutorStart, thanks to being\nin ExecutorStart_hook. Having no clue what's going on, it decides\nthat this is a new top-level query that it should trace, and it\nsets some bits in queryDesc->instrument_options.\n\n3. When the workers get to ExecHashInitializeWorker, they see that\ninstrumentation is active so they try to look up the SharedHashInfo.\nKaboom.\n\nI'm inclined to think that explain_ExecutorStart should simply\nkeep its hands off of everything when in a parallel worker;\nif instrumentation is required, that'll be indicated by options\npassed down from the parent process. It looks like this could\nconveniently be merged with the rate-sampling logic by forcing\ncurrent_query_sampled to false when IsParallelWorker().\n\nLikely this should be back-patched all the way to 9.6. I'm\nnot sure how we managed to avoid noticing it before now,\nbut there are probably ways to cause visible trouble in\nany release that has any parallel query support.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Jun 2019 15:21:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #15821: Parallel Workers with functions and auto_explain:\n ERROR: could not find key 3 in shm TOC"
}
] |
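A rough C sketch of the fix Tom describes — explain_ExecutorStart bowing out in parallel workers by folding an IsParallelWorker() test into the existing rate-sampling decision. It follows the shape of contrib/auto_explain's hook of that era, but it is a sketch of the described approach, not the committed patch:

    #include "access/parallel.h"    /* IsParallelWorker() */

    static void
    explain_ExecutorStart(QueryDesc *queryDesc, int eflags)
    {
        /*
         * Decide sampling once per top-level statement.  A parallel worker
         * must never decide this for itself: if the leader wants
         * instrumentation, that is already carried in the flags passed down
         * through the DSM, and enabling it unilaterally here makes nodes
         * like Parallel Hash look up shared state that was never created
         * ("could not find key 3 in shm TOC").
         */
        if (nesting_level == 0)
        {
            if (auto_explain_log_min_duration >= 0 && !IsParallelWorker())
                current_query_sampled = (random() < auto_explain_sample_rate *
                                         ((double) MAX_RANDOM_VALUE + 1));
            else
                current_query_sampled = false;
        }

        /* ...the existing instrumentation setup runs only when sampled... */

        if (prev_ExecutorStart)
            prev_ExecutorStart(queryDesc, eflags);
        else
            standard_ExecutorStart(queryDesc, eflags);
    }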
[
{
"msg_contents": "I propose to add a column \"command\" to pg_stat_progress_create_index.\nThe sibling view pg_stat_progress_cluster already contains such a\ncolumn. This can help distinguish which command is running and thus\nwhich phases to expect. It seems reasonable to keep these views\nconsistent, too. (They are both new in PG12.) Patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 27 May 2019 14:18:12 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Add command column to pg_stat_progress_create_index"
},
{
"msg_contents": "Hi,\n\nOn 2019-05-27 14:18:12 -0400, Peter Eisentraut wrote:\n> I propose to add a column \"command\" to pg_stat_progress_create_index.\n> The sibling view pg_stat_progress_cluster already contains such a\n> column. This can help distinguish which command is running and thus\n> which phases to expect. It seems reasonable to keep these views\n> consistent, too. (They are both new in PG12.) Patch attached.\n\nSeems like we should do that for v12 then?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 May 2019 11:20:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add command column to pg_stat_progress_create_index"
},
{
"msg_contents": "On 2019-May-27, Peter Eisentraut wrote:\n\n> I propose to add a column \"command\" to pg_stat_progress_create_index.\n> The sibling view pg_stat_progress_cluster already contains such a\n> column. This can help distinguish which command is running and thus\n> which phases to expect. It seems reasonable to keep these views\n> consistent, too. (They are both new in PG12.) Patch attached.\n\n+1.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 27 May 2019 15:51:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Add command column to pg_stat_progress_create_index"
},
{
"msg_contents": "On Mon, May 27, 2019 at 4:51 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n>\n> On 2019-May-27, Peter Eisentraut wrote:\n>\n> > I propose to add a column \"command\" to pg_stat_progress_create_index.\n> > The sibling view pg_stat_progress_cluster already contains such a\n> > column. This can help distinguish which command is running and thus\n> > which phases to expect. It seems reasonable to keep these views\n> > consistent, too. (They are both new in PG12.) Patch attached.\n>\n> +1.\n>\n\n+1\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\nOn Mon, May 27, 2019 at 4:51 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:>> On 2019-May-27, Peter Eisentraut wrote:>> > I propose to add a column \"command\" to pg_stat_progress_create_index.> > The sibling view pg_stat_progress_cluster already contains such a> > column. This can help distinguish which command is running and thus> > which phases to expect. It seems reasonable to keep these views> > consistent, too. (They are both new in PG12.) Patch attached.>> +1.>+1-- Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/ PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Mon, 27 May 2019 17:12:09 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add command column to pg_stat_progress_create_index"
},
{
"msg_contents": "On Mon, May 27, 2019 at 11:20:28AM -0700, Andres Freund wrote:\n> On 2019-05-27 14:18:12 -0400, Peter Eisentraut wrote:\n>> I propose to add a column \"command\" to pg_stat_progress_create_index.\n>> The sibling view pg_stat_progress_cluster already contains such a\n>> column. This can help distinguish which command is running and thus\n>> which phases to expect. It seems reasonable to keep these views\n>> consistent, too. (They are both new in PG12.) Patch attached.\n> \n> Seems like we should do that for v12 then?\n\n+1.\n--\nMichael",
"msg_date": "Tue, 28 May 2019 06:40:00 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add command column to pg_stat_progress_create_index"
},
{
"msg_contents": "On Mon, May 27, 2019 at 11:20:28AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2019-05-27 14:18:12 -0400, Peter Eisentraut wrote:\n> > I propose to add a column \"command\" to pg_stat_progress_create_index.\n> > The sibling view pg_stat_progress_cluster already contains such a\n> > column. This can help distinguish which command is running and thus\n> > which phases to expect. It seems reasonable to keep these views\n> > consistent, too. (They are both new in PG12.) Patch attached.\n> \n> Seems like we should do that for v12 then?\n\n+1\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 28 May 2019 14:40:27 +0200",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Add command column to pg_stat_progress_create_index"
},
{
"msg_contents": "On 2019-05-27 20:18, Peter Eisentraut wrote:\n> I propose to add a column \"command\" to pg_stat_progress_create_index.\n> The sibling view pg_stat_progress_cluster already contains such a\n> column. This can help distinguish which command is running and thus\n> which phases to expect. It seems reasonable to keep these views\n> consistent, too. (They are both new in PG12.) Patch attached.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 4 Jun 2019 09:35:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Add command column to pg_stat_progress_create_index"
}
] |
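With the column committed, both progress views can be monitored the same way, and the new field distinguishes a plain CREATE INDEX from CREATE INDEX CONCURRENTLY (or the REINDEX variants) without guessing from the phase names. A small sketch against the PG12 catalogs, run from a second session while an index build is underway:

    -- 'command' is the new column; blocks_done/blocks_total track the table scan
    SELECT p.pid,
           p.command,        -- e.g. 'CREATE INDEX CONCURRENTLY'
           p.phase,
           p.blocks_done,
           p.blocks_total,
           a.query
    FROM pg_stat_progress_create_index AS p
    JOIN pg_stat_activity AS a USING (pid);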
[
{
"msg_contents": "Please see attached the patch that corrects the file-level SQL comment that\nindicates which submodule of pgcrypto is being tested.\n\nBest regards,\n-- \nGurjeet Singh http://gurjeet.singh.im/",
"msg_date": "Mon, 27 May 2019 19:25:37 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Fix comment in pgcrypto tests"
},
{
"msg_contents": "On Mon, May 27, 2019 at 07:25:37PM -0700, Gurjeet Singh wrote:\n> Please see attached the patch that corrects the file-level SQL comment that\n> indicates which submodule of pgcrypto is being tested.\n\nThanks, committed. There was a second one in pgp-decrypt.sql.\n--\nMichael",
"msg_date": "Tue, 28 May 2019 06:38:14 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix comment in pgcrypto tests"
},
{
"msg_contents": "Thanks!\n\nI have changed the patch status as follows in commitfest [1]\nReviewer: Michael Paquier\nCommitter: Michael Paquier\nStatus: committed\n\n[1]: https://commitfest.postgresql.org/23/2132/\n\nBest regards,\n\nOn Tue, May 28, 2019 at 3:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, May 27, 2019 at 07:25:37PM -0700, Gurjeet Singh wrote:\n> > Please see attached the patch that corrects the file-level SQL comment\n> that\n> > indicates which submodule of pgcrypto is being tested.\n>\n> Thanks, committed. There was a second one in pgp-decrypt.sql.\n> --\n> Michael\n>\n\n\n-- \nGurjeet Singh http://gurjeet.singh.im/\n\nThanks!I have changed the patch status as follows in commitfest [1]Reviewer: Michael PaquierCommitter: Michael PaquierStatus: committed[1]: https://commitfest.postgresql.org/23/2132/Best regards,On Tue, May 28, 2019 at 3:38 AM Michael Paquier <michael@paquier.xyz> wrote:On Mon, May 27, 2019 at 07:25:37PM -0700, Gurjeet Singh wrote:\n> Please see attached the patch that corrects the file-level SQL comment that\n> indicates which submodule of pgcrypto is being tested.\n\nThanks, committed. There was a second one in pgp-decrypt.sql.\n--\nMichael\n-- Gurjeet Singh http://gurjeet.singh.im/",
"msg_date": "Tue, 28 May 2019 12:39:21 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Fix comment in pgcrypto tests"
}
] |
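For reference, the comments in question sit at the top of the regression scripts under contrib/pgcrypto/sql/ (pgp-decrypt.sql among them), and the suite can be exercised against an installed build; a sketch, assuming a configured source tree and a running server:

    # from the top of the PostgreSQL source tree
    make -C contrib/pgcrypto install        # build and install the extension
    make -C contrib/pgcrypto installcheck   # runs the sql/*.sql tests, incl. pgp-decrypt.sql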